
ClamDrinker

@[email protected]


ClamDrinker ,

“You know you don’t need to bring a dead horse every time you want catering right, Jim?”

ClamDrinker ,

If you’re here because of the AI headline, this is important to read.

We’re looking at how we can use local, on-device AI models – i.e., more private – to enhance your browsing experience further. One feature we’re starting with next quarter is AI-generated alt-text for images inserted into PDFs, which makes it more accessible to visually impaired users and people with learning disabilities.

They are implementing AI the way it should be done. Don’t let all the shitty companies blind you to the fact that what we call AI has positive sides.

ClamDrinker , (edited )

It will never be solved. Even the greatest hypothetical super intelligence is limited by what it can observe and process. Omniscience doesn’t exist in the physical world. Humans hallucinate too - all the time. It’s just that our approximations are usually correct, and then we don’t call them hallucinations anymore. For example, the signals coming from our feet take longer to process than those from our eyes, so our brain has to predict information to create a coherent experience. It’s also why we don’t notice our blinks, or why we don’t see the blind spot our eyes have.

AI, representing a more primitive version of our brains, will hallucinate far more, especially because it cannot verify anything in the real world and is limited by the data it has been given, which it has to treat as ultimate truth. The mistake was trying to turn AI into a source of truth.

Hallucinations shouldn’t be treated like a bug. They are a feature - just not one the big tech companies wanted.

When humans hallucinate on purpose (and not due to illness), we get imagination and dreams; fuel for fiction, but not for reality.

ClamDrinker ,

I’m not an expert in AI, I will admit. But I’m not a layman either. We’re all anonymous on here anyway. Why not leave a comment explaining what you disagree with?

ClamDrinker ,

Hallucinations in AI are fairly well understood as far as I’m aware. They’re explained at a high level on the Wikipedia page for the topic. And I’m honestly not making any objective assessment of the technology itself. I’m making a deduction based on the laws of nature and biological facts about real-life neural networks. (I do say AI is driven by the data it’s given, but that’s something even a layman might know.)

How to mitigate hallucinations is definitely something the experts are actively discussing and have had limited success with (and I certainly don’t have an answer there either), but a true fix should be impossible.

I can’t exactly say why I’m passionate about it. In part I want people to be informed about what AI is and is not, because knowledge about the technology allows us to make more informed decisions about the place AI takes in our society. But I’m also passionate about human psychology and creativity, and what we can learn about ourselves from the quirks we see in these technologies.

ClamDrinker , (edited )

I’m not sure where you think I’m giving it too much credit, because as far as I read it we already totally agree lol. You’re right, methods exist to diminish the effect of hallucinations - that’s what the scientific method is. Current AI has no physical body and can’t run experiments to verify objective reality. It can’t fact-check itself other than by being told by the humans training it what is correct (and humans are fallible), and even then, if it has gaps in what it knows, it will fill them in with something probable - which is likely going to be bullshit.

My point was just that to truly fix it would be to basically create an omniscient being, which cannot exist in our physical world. It will always have to make some assumptions - just like we do.

ClamDrinker , (edited )

Yes, a theoretical future AI that would be able to self-correct would eventually become more powerful than humans, especially if you could give it ways to run magnitudes more self-correcting mechanisms at the same time. But it would still be making ever so small assumptions when there is a gap in the information it has.

It could be humble enough to admit it doesn’t know, but it can still be mistaken and think it has the right answer when it doesn’t. It would feel nigh omniscient, but it would never truly be.

A round trip around the globe on glass fibre takes hundreds of milliseconds, so even if it has the truth on some matter, there’s no guarantee that didn’t change in the milliseconds it needed to become aware that the truth has changed. True omniscience simply cannot exist, since information (and in turn the truth encoded by that information) propagates at most at the speed of light.
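For a rough sense of scale, here’s the back-of-the-envelope version of that claim, assuming signals in fibre travel at about two thirds of the vacuum speed of light (refractive index ~1.5) and ignoring all routing and switching overhead:

```python
# Round trip to the far side of the Earth over ideal glass fibre.
c_vacuum_km_s = 300_000                 # speed of light in vacuum, km/s
fibre_km_s = c_vacuum_km_s * 2 / 3      # ~200,000 km/s inside glass fibre
antipode_km = 40_075 / 2                # half of Earth's circumference, km

round_trip_ms = 2 * antipode_km / fibre_km_s * 1000
print(f"{round_trip_ms:.0f} ms")        # ~200 ms before any real-world overhead
```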

a big mistake you are making here is stating that it must be fed information that it knows to be true, this is not inherently true. You can train a model on all of the wrong things to do, as long as it has the capability to understand this, it shouldn’t be a problem.

The dataset that encodes all wrong things would be infinite in size and constantly changing. It can theoretically exist, but realistically it never will. And if it were incomplete, the model would have to make assumptions at some point based on the incomplete data it has, which would open it up to being wrong - which we would call a hallucination.

ClamDrinker ,

Yes, it would be much better at mitigating it, and it would beat all humans at truth accuracy in general. And truths which can easily be individually proven and/or remain unchanged forever could basically be right 100% of the time. But not all truths are that straightforward.

What I mentioned can’t really be unlinked from the issue if you want to solve it completely. Have you ever found out later on that something you told someone else as fact turned out not to be so? Essentially, you ‘hallucinated’ a truth that never existed, but you were just that confident it was correct, so you shared and spread it. It’s how we get myths, popular belief, and folklore.

For those other truths, we simply take the truth to be whatever has reached a likelihood we consider certain. But the ideas and concepts in our minds constantly float around on that scale. And since we cannot really avoid talking to other people (or intelligent agents) to ascertain certain truths, misinterpretations and lies can sneak in and cause us to treat as truth that which is not. To avoid that would mean having to be pretty much everywhere at once to interpret the information straight from the source. But then things like how fast it can process all of that come into play. Without making guesses about what’s going to happen, you basically can’t function in reality.

ClamDrinker ,

The thing is, games have a minimum difficulty to be somewhat generally enjoyable, and the game designers have often built their game around this. The fun is generally in the obstacles providing real resistance that can be overcome by optimizing your strategy. It means that these obstacles need to be mentally picked apart by the player to proceed. They are built like puzzles.

This design philosophy - anyone who plays these games can tell you - is deeply rewarding if you go through it, because it requires genuine improvement that you can notice and be proud of. Hence there is often a limit to how much easier you can make games like these without losing that, because you clear the obstacle before even realizing it was standing in your way.

It’s often not as easy as just tweaking numbers. And often these development teams don’t have the time to rebalance a game for those lower difficulties, so they just don’t.

Honestly, the first wojak could be quite mad too, because making an easy game harder often misses the point as well: the game is just more difficult, but doesn’t actually provide you with that carefully crafted feeling of constant improvement. Instead, some easy games can become downright frustrating because obstacles feel “cheap” or “lacking depth” now that you have to spend a lot more time on them.

But making an easy game harder by just tweaking the numbers is definitely easier on the development team, and gives existing players a chance to re-experience the game, which wouldn’t happen the other way around. But it’s almost certainly not a better option for new players wanting a harder difficulty.

At the end of the day though, often there are ways to get what you want. Either by cheating, modding, or otherwise using ‘OP’ usables in the game. Do whatever you want to make the game more enjoyable to yourself. But if you make it too easy on yourself you might come out on the other end wondering why other people enjoyed the game so much more than you did.

ClamDrinker ,

Well by this logic, hurricanes and tornadoes must be targeting republican states. What’s the message being sent there? 🤔 At least you can somewhat design and build architecture against earthquakes…

ClamDrinker ,

I agree, but I guess it has to do with their relative unpredictability (as far as I understand). Hurricanes you can prepare for days in advance. And tornadoes you can at least ‘see coming’, to the point where if you’re unlucky you might lose your house, but not your life. Not sure how the numbers back that up (or if they can even be compared), but emotionally that feels like the answer.

ClamDrinker ,

Funnily enough, looking at the stats for the US from 2020 to now (averaged to annual data), 272 deaths per year were caused by storms, 63 by extreme temperatures, 56 by wildfires, and only 0.5 (so in the last 4 years only about 2 people) by earthquakes. source

So statistically, they should be more afraid of hurricanes and tornadoes. (But to be fair, the odds of dying to any of these are extremely low to begin with; car accidents are probably far more common.)

ClamDrinker ,

They said struggle - not that they couldn’t. Don’t just attribute such a horrible thing based on your own reading. You can have all the empathy for the Russian people but no empathy for the Russian state. After all, the Russian state is also directly responsible for the continuous cold-blooded murder of Ukrainian civilians. It’s not like they gave much warning on February 24th 2022, or in 2014.

ClamDrinker ,

It’s funny how something like this gets posted every few days and people keep falling for it like it’s somehow going to end AI. The people that make these models are acutely aware of how to avoid model collapse.

It’s totally fine for AI models to train on AI-generated content that is of high enough quality. Part of the research to train models is building datasets with a text description matching the content, and filtering out content that is not organic enough (or even specifically including it as a ‘bad’ example for the AI to avoid). AI can produce material indistinguishable from human work, and it produces material that wasn’t originally in the training data. There’s no reason that can’t be good training data itself.
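As a toy sketch of that filtering step (purely illustrative - the score_quality function stands in for a trained quality classifier, which is my assumption of how such a pipeline might be wired, not any lab’s actual code):

```python
# Gate samples on a quality score; keep rejects as explicit 'bad' examples
# instead of discarding them, so the model also learns what to avoid.
def build_training_set(samples, score_quality, threshold=0.8):
    keep, negatives = [], []
    for caption, image in samples:
        if score_quality(image) >= threshold:
            keep.append((caption, image))
        else:
            negatives.append(("low quality, artifacts", image))
    return keep, negatives
```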

George Carlin Estate Files Lawsuit Against Group Behind AI-Generated Stand-Up Special: ‘A Casual Theft of a Great American Artist’s Work’ (variety.com)

George Carlin’s estate has filed a lawsuit against the creators behind an AI-generated comedy special featuring a recreation of the comedian’s voice.

ClamDrinker ,

I agree, and I get it’s a funny way to put it, but in this case they started the video with a massive disclaimer that they were not Carlin and that it was AI. So it’s hard to argue they were putting words in his mouth. If anything, it sets a praiseworthy standard for disclosing when AI was involved, considering the hate mob that such a disclosure attracts.

ClamDrinker , (edited )

While the estate might have a fair case on whether or not this is infringement (courts simply have not ruled enough on AI to say), I think this is a silly way to characterize the people that made this. If you wanted to turn a profit from a dead person by using AI to copy their likeness, why Carlin? He’s beloved for sure, but he’s not very ‘marketable’. To those who have never seen him before, he could come across, without context, as a grumpy old man making aggressive statements. There are far better dead people to pick if your goal is to make a profit.

Which leads me to believe that he was in part picked because the creators of the video were genuine fans of his work (the video even states so, as far as I remember) and felt they could provide enough originality and creativity. George Carlin is truly a one-of-a-kind comedian whose words and jokes still inspire people today. Due to this video (and to an extent, the controversy), some people will be reminded of him. Some people will learn about him for the first time. His unique view on things can be extended to modern times. A view I feel we desperately need at times. None of that would be an issue as long as it was made excessively clear that this isn’t actually George. That it’s an homage. Which these people did. As far as I see, they could be legally in the wrong, but morally in the right. It’s unfair to characterize them purely by their usage of AI.

ClamDrinker ,

I mean, fair enough. But what living person titles their show “I’m Glad I’m Dead”? Especially since people that know George know he’s dead. It’s almost The Onion level of satire. And once the video starts, it immediately opens with a disclaimer that it’s not Carlin, but AI. Nobody would sit through the entire show only to be dumbfounded later that it wasn’t actually Carlin risen from the dead.

ClamDrinker ,

You’re right, it can lead to a flood of new material that could overshadow his old works. But that would basically require it to be as good as, if not better than, his old works, which I just don’t think will happen. Had nobody batted an eye at this, it would have just sunk into obscurity, as is the fate of many creative works. Should more shows be made, I think after the third people would just not care anymore. Most haven’t even bothered to watch the first, after all.

ClamDrinker , (edited )

I agree that George is one of the best stand-up comedians, but that doesn’t change that his material is very much counter-culture. It’s made to rub people the wrong way, to get them to think differently about why things are the way they are. That makes it inherently not as good a moneymaker as someone who tries to please all sides with their jokes. I’d like to believe that if he were alive today, he would do a beautiful piece on AI.

On your second point I have to wonder, though: who made it a headline? Who decided this was worth bringing attention to? Clearly, the controversy did not come from them. There is nothing controversial about an homage. But it is AI, and that got people talking. You can be of the opinion they did it for that reason, but I would argue that they simply expected the same lukewarm reception they had always gotten. After all, people don’t often volunteer to be at the center of hate. Even when the association pays off, experiencing that stuff has lasting mental effects on people.

And again, if they wanted to be controversial and stir up as much drama as possible, they could have done so much more. Just don’t disclose it’s AI even though it’s obviously AI, or make George do things out of character, like a product endorsement, or a piece about how religion is actually super cool. All of that would have gotten them 10x the hate and exposure they got now.

But instead, they made something that looks and plays like an homage, with obvious disclosure. The only milder thing they could have done is find someone whose voice naturally sounds like George and put him in a costume that looks like George, at which point nobody would have batted an eye. The intent would be the same; just the way it was achieved would be different.

ClamDrinker ,

For sure! Deceit should be punished. Ethical AI usage should not go without disclosure, so I think we must be understanding toward people choosing to be open about it, rather than forcing them to hide it to dodge hate.

I like Vernor Vinge’s take on it in one of his short stories where copyrights are lessened to 6 months and companies must quickly develop their new Worlds/Characters before they become public domain.

That’s an interesting idea. Although 6 months does sound like an awfully short time to actually develop something more grand. But I do think with fairer copyright limits we could also afford to provide more protections in the early days after a work’s release. It’s definitely worth discussing such ideas to make copyright better for everyone.

ClamDrinker ,

We can argue their motives all we want (I’m pretty uninterested in it personally), but we aren’t them and we don’t even know what the process was to make it

Yes, that is sort of my point. I’m not sure either, but neither was the person I responded to (in my first comment, before yours). And to make assumptions with such negative implications is very unhealthy, in my opinion.

and I think that is because the whole thing sure would seem less impressive if they just admitted that they wrote it.

It’s the first time I’ve heard someone suggest they passed off their own work as AI, but it could be true. Although some consider AI-assisted material to be the same as fully AI-generated. But again, we don’t know.

I laughed maybe once, because the whole thing was not very funny in addition to being a (reverse?) hack attempt by them to deliver bits of their own material as something Carlin would say.

I definitely don’t think it meets George’s level. But it was amusing to me. Which is about what I’d expect of an homage.

ClamDrinker ,

Completely true. But we cannot reasonably push the responsibility of the entire internet onto someone when they did their due diligence.

Like, some people post CoD footage to YouTube because it looks cool, and someone else either mistakenly or maliciously takes it and recontextualizes it as combat footage from active warzones to shock people. Then people start reposting that footage with a fake explanation text on top of it, furthering the misinformation cycle. Do we now blame the people sharing their CoD footage for what other people did with it? Misinformation and propaganda are something society must work together to combat.

If it really matters, people will be out there warning others that the pictures being posted are fake. In fact, that’s what happened after tragedies even before AI: people would post images claiming to show what happened, only for them to later be confirmed as being from some other tragedy. Or how some video games get fake leaks because someone rebranded fan-made content as a leak.

Eventually it becomes common knowledge or easy to prove as being fake. Take this picture for instance: https://lemmy.world/pictrs/image/5c11ddc9-f234-4743-8881-60be66bdc196.jpeg

It’s been well documented that the bottom image is fake, and as such anyone can now find out what was covered up. It’s up to society to speak up when the damage is too great.

ClamDrinker , (edited )

Healthy or not, my lived experience is that assuming people are motivated by the things people are typically motivated by (e.g. greed, the desire for fame) is more often correct than assuming people have pure motives.

Everyone likes praise to a certain extent, and desiring recognition for what you’ve made is independent of your intentions otherwise. My personal experience working with talented creative people is that the two are often intertwined. If you can make something that’s both fulfilling and economically sustainable, that’s what you’ll do. You can make something that’s extremely fulfilling, but if it doesn’t appeal to anyone but yourself, it doesn’t pay the bills. I’m not saying it’s impossible for them to not have that motivation, but in my opinion anyone alleged to be malicious must at some point be proven to be so. I have seen no such proof.

I really understand your second point, but… as with many things, some things require consent and some things don’t. Making a parody or an homage doesn’t (typically) require that consent. It would be nice to get it, but the man is dead, and even his children cannot speak for him other than as legal owners of his estate. I personally would like to believe he wouldn’t care one bit, and I would have the same basis as anyone else to defend that, because nobody can ask a dead man for his opinions. It’s clear his children do not like it, but unless they have a legal basis for that, it can be freely dismissed as not necessarily reflecting what George would stand behind.

I’ve watched pretty much every one of his shows, but I haven’t seen that documentary. I’ll see if I can watch it. But knowing George, he would have many words to exchange on both sides of the debate. The man was very much an advocate for freedom of creativity, but also very much in favor of artist protection. Open source AI has leveled the playing field for people that aren’t mega corporations to compete, but has also brought along insecurity and anxiety to creative fields. It’s not black and white.

In fact, there is a quote attributed to him which sort of speaks to this topic. (Although I must admit, the original source is a defunct newspaper and the Wayback Machine didn’t crawl the article.)

[On his work appearing on the Internet] It’s a conflicted feeling. I’m really a populist, down in the very center of me. I like the power people can accrue for themselves, and I like the idea of user-generated content and taking power from the corporations. The other half of the conflict, though, is that, traditionally speaking, artists are protected from copyright infringement. Fortunately, I don’t have to worry about solving this issue. It’s someone else’s job.

August 9, 2007, in Las Vegas CityLife - so just a little less than a year before his death, too.

EDIT: Minor clarification

ClamDrinker , (edited )

A complete false equivalence. Just because improper disclaimers exist doesn’t mean there aren’t legitimate reasons to use them. Impersonation requires intent, and a disclaimer is an explicit way to make clear that they are not attempting to do that, and to explicitly inform viewers who might have misunderstood. It’s why South Park has such a text at the start of every episode too. It’s a rather foolproof way to delegitimize any accusation of impersonation.

ClamDrinker ,

You’re right, South Park doesn’t need it either. But a disclaimer removes all doubt. The video doesn’t need a disclaimer either, but they made one anyway to remove all doubt. And no, they disclaimed any notion that they are George Carlin. Admitting to a crime is not what the disclaimer said; that much should be obvious.

ClamDrinker , (edited )

There’s another thing here which is that you seem to believe this was actually made in large part by an AI while simultaneously stating the motivations of humans. So which is it?

AI-assisted works are, funnily enough, mostly a human production at this point. If you asked AI to make another George Carlin special for you, it would suck extremely hard. AI requires humans to succeed; it does not succeed at being human. And as such, it’s a human work at the end of the day. My opinion is that if we were being truthful, this comedy special would likely be considered AI-assisted rather than fully AI-generated.

You seem really sure that I think this is fully (or largely) AI-generated, but that’s never been something I said or alluded to believing. I don’t believe that. I don’t even believe fully AI-generated works are worthy of being called true art. AI-assisted works, on the other hand, I do believe to be art. AI is a tool, and for it to be used for art it requires humans to provide input and make decisions for the result to be something people will actually enjoy. And that is clearly what was done here.

The primary beneficiary of all of the AI hype is Microsoft. Secondary beneficiary is Nvidia. These aren’t tiny companies.

“The primary beneficiaries of art hype are pencil makers, brush makers, canvas makers, and of course, Adobe for making photoshop, Samsung and Wacom for making drawing tablets. Not to mention the art investors selling art from museums and art galleries all over the world for millions. These aren’t tiny entities.”

See how ridiculous it is to make that argument? If something is popular, people and companies who are in a prime position to make money off it will try to do so; that is to be expected in our capitalist society. But small artists and small creators gain the most elevation from the advance of open-source AI. Big companies can already pour out enough money to bring any work they create to the highest standards. A small creator cannot, but they can get far more, and far better, results by using AI in their workflow. And because small creators often put far more heart and soul into their works, it allows them to compete with giants more easily. A clear win for small creators and artists.

Just to be extra clear: I don’t like OpenAI. I don’t like Microsoft. I don’t like Nvidia to a certain degree. Open-source AI is not their cup of tea. They like proprietary, closed-source AI. The kind where only they and the people that pay them get to use the advancements AI has made. That disgusts me. Open-source AI is the tool of choice for ethical AI.

ClamDrinker ,

The court might rule in favor of his estate for this reason. But honestly, I do think there are differences between a singer (whose voice becomes an instrument in their song) and a comedian (whose voice is used to communicate the ideas and jokes they want to tell). A different voice could tell the same jokes as Carlin and, if done with the same level of care for his emotions and cadence, could effectively create the same feeling as we know it. A song could literally be a different song if you swap an instrument. But the courts will have to rule.

ClamDrinker ,

I don’t disagree with that, but such differences can matter when it comes to ruling if imitation and parody are allowed, and to what extent.

ClamDrinker ,

Well then we agree. Let’s leave ridiculous arguments out of it. There are far better arguments to make.

ClamDrinker , (edited )

I mean, you ignored the entire rest of my comment to respond only to a hyperbole meant to illustrate that something is a bad argument. I’m sure they are making money off it, but small creators and artists can make relatively more money off it. You claim that is not ‘actually happening’, but that is your opinion, how you view things. I talk with artists daily, and they use AI when it’s convenient to them, when it saves them work or allows them to focus on work they actually like. Just like they use any other tool at their disposal.

I know there are some very big name artists on social media who are making a fuss about this stuff, but I highly question their motives with my point of view in mind. Of course it makes sense for someone with a big social media following to rally up their supporters so they can get a payday. I regularly see them speak complete lies to their followers, and of course it works. When you actually talk to artists in real life, you’ll get a far more nuanced response.

ClamDrinker ,

That’s a pretty sloppy reason. A nuanced topic is not well suited to be explained in anything but descriptive language. Especially if you care about people’s livelihoods and passion. I care about my artist friends, colleagues, and acquaintances. Hence I will support them in securing their endeavors in this changing landscape.

Artists are largely not computer experts and artists using AI are buying Microsoft or Adobe or using freebies and pondering paid upgrades. They are also renting rather than buying because everything’s a subscription service now.

I really don’t like this characterization of artists. They are not dumb, nor incapable of learning. Technical artists exist too. Installing open-source AI is relatively easy - pretty much down to pressing a button. And because it’s open source, it’s free. Using it to its fullest effect is where the skill comes in, and the artists I know are more than happy to develop their skills.

A far bigger market for AI is for non-artists and scammers to fill up Amazon’s bookstore and the broader Internet full of more trash than it already was.

The existence of bad usage of AI does not invalidate good usage of AI. The internet was already full of bad content before AI. The good stuff is what floats to the top. No sane person is going to pay to read some no-name AI-generated trash. But people will read a highly regarded book that just happened to be AI-assisted.

But the whole premise is silly. Did we demonize cars because bank robbers started using them to escape the police? Did we demonize cameras because people could take exact photographic copies of someone else’s work? No. We demonized those that misused the tool. AI is no different.

A scammer can generate thousands of worthless garbage images and texts before an artist assisted by AI can make a single work. Just like a burglar can make money more easily by breaking into someone’s house and stealing everything than by working a day job for a month. There’s a reason these things are illegal and/or unethical. But those are reflections of the people doing them, not of the tools they use.

ClamDrinker ,

Perhaps. The world can use more kindness when, despite everything, loneliness is at an all-time high. It’s not a fix, but maybe it can be a brake on someone’s downward spiral.

I’d prefer and love to see someone new match George Carlin’s level too, much more than someone trying to become him. I don’t think we’ve quite had a chance to savor the good side of AI yet, but hey, you’re entitled to your opinion.

ClamDrinker ,

Finally. A human-readable format. And pretty too.

ClamDrinker ,

Let’s be real - this isn’t going to change on its own. The only way for it to change is if everyone collectively took a stand against it, which simply won’t happen. The most reasonable thing to do is to focus your energy on collectives that actively reject such practices. Oh hey, you’re already in one: Lemmy, good job. As long as we work together to create a small corner of the internet that remains true to what the internet should be, we can grow it and create a better internet in the long term.

ClamDrinker ,

It’s a damn miracle this didn’t just kill everyone in that rather small room, if you watch the video. What the hell.

ClamDrinker ,

PC is typically easier to develop for because of the lack of strict (and frequently silly) platform requirements, which typically make game development more expensive and slower than it needs to be compared to just targeting PC. If that barrier to entry were reduced to PC’s level, you’d see a lot more games on consoles from smaller developers.

With current-gen consoles, pretty much every game starts as a PC game already, because that’s where the development and testing happens.

Rockstar here is the exception in that they are intentionally skipping PC - something that should be well within reach of a company their size, and something they are clearly capable of doing.

If another AAA game comes out with only PC support, I’ll be right there with you - but most game developers with the capability release for all major platforms now. Just not the small console indie studio called Rockstar Games, it seems.

Mozilla Senior Director of Content explained why Mozilla has taken an interest in the fediverse and Mastodon (techcrunch.com)

"the company looked at the history of social media over the past decade and didn’t like what it saw… existing companies that are only model motivated by profit and just insane user growth, and are willing to tolerate and amplify really toxic content because it looks like engagement… "

ClamDrinker ,

It’s because the current version has nothing wrong with it. If the Lemmy devs should choose to sabotage the Lemmy software, you’d be surprised how quickly a fork happens when a change pisses off all the instances and their owners. Instances will simply refuse to upgrade. And like most things, eventually some fork would win the race to become the dominant one, and the current Lemmy devs would essentially be disowned. Different forks also don’t necessarily mean API-breaking changes, so different forks would have no issue communicating (at least for a while).

ClamDrinker ,

This was my gut reaction as well, but don’t do this - the makers of uBlock Origin warn against it! https://www.reddit.com/r/uBlockOrigin/wiki/solutions/youtube/detection-faq/

Can’t I just hide the pop-up with uBO’s Picker?

No. Cosmetic filters don’t stop the message - they just temporarily hide it from view. The anti-adblock script will continue to run in the background and will eventually block you from watching videos. Please don’t use, share or recommend using any of those filters and don’t report any issues when using them.

ClamDrinker , (edited )

It’s the choice between trusting one company (or, if you self-host, trusting yourself) to have their security in order and properly encrypt the password vault, versus using one password for every site, which means you have to trust each of those sites equally - because if one of them leaks your password due to atrocious password policies (e.g. storing it in plain text), it’s leaked everywhere, and you need to remember every place you used it before.

Good password managers allow audits, and naturally they do still get hacked at times (which isn’t 100% preventable). Yet neither of these should result in passwords being leaked. Why? Because they properly secure your master password so it can’t be reverse-engineered to plain text, and without the master password your encrypted password vault is just a bunch of random bytes. And even in the extreme situation that it did happen, you’d know to switch to a better password manager, and you’d have a nice big list of all the places where you need to change your password, rather than trying to remember them all.

Human memory is fallible and we want to spend the least amount of effort; because of that, we usually make bad passwords. Your average site does not have its password security up to date (there’s almost a 0% chance that not one of your passwords can be found here). If your data is encrypted accordingly, it doesn’t matter if it gets leaked or stolen by some rogue employee, so long as they do not have your master password. So yes, I’d say that’s a good idea.
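To make that concrete, here’s a minimal sketch of the underlying pattern in Python with the cryptography package - illustrative only, not any particular manager’s real implementation: a key is derived from the master password (which itself is never stored), and without that key the vault is just random-looking bytes.

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(master_password: bytes, salt: bytes) -> bytes:
    # Slow, salted key derivation: brute-forcing the master password
    # from a stolen vault is made deliberately expensive.
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password))

salt = os.urandom(16)  # random salt, stored alongside the vault
key = derive_key(b"correct horse battery staple", salt)

vault = Fernet(key).encrypt(b'{"example.com": "hunter2"}')
# 'vault' is all a breach could leak: undecryptable without the master password.
print(Fernet(key).decrypt(vault))
```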

ClamDrinker ,

If it’s a fairly inconsequential service (no payment/personal info, nothing lost if it gets hacked), you can just generate a far shorter password - something like the sketch below. Even randomly generated passwords can be remembered eventually if you have to type them enough times, and that’s still better than reusing the same one everywhere.
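A quick sketch with Python’s standard secrets module (the length and alphabet are arbitrary picks for a low-stakes account):

```python
import secrets
import string

# Ten random characters: short enough to memorize through repeated
# typing, but still unique to this one service instead of reused.
alphabet = string.ascii_letters + string.digits
password = "".join(secrets.choice(alphabet) for _ in range(10))
print(password)
```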

If it’s not inconsequential, I’d question whether my money is well spent on a sadistic service that makes my life hell trying to maintain a minimum level of security. I would say that even if it weren’t a generated password that you have to type out.

ClamDrinker ,

If you use Stable Diffusion through a web UI (the same might exist for others as well), you might have access to a feature called ‘interrogate’, which finds an approximate prompt for an image. Can be useful if you need it for future images.

It can also be done online: huggingface.co/spaces/…/CLIP-Interrogator
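For anyone who prefers to run it locally, a minimal sketch using the clip-interrogator Python package (the model name and file path are illustrative defaults - my assumption of typical usage, so check the package docs):

```python
from PIL import Image
from clip_interrogator import Config, Interrogator

# ViT-L-14/openai is the CLIP variant matching Stable Diffusion 1.x.
ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))

image = Image.open("my_image.png").convert("RGB")
print(ci.interrogate(image))  # prints an approximate prompt for the image
```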

This new data poisoning tool lets artists fight back against generative AI (www.technologyreview.com)

A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways....

ClamDrinker ,

LLM is the wrong term. That’s Large Language Model. These are generative image models / text-to-image models.

Truthfully though, while the poisoned pixels will be there when the image is trained on, the model won’t ‘notice’ them unless you distort the image significantly (enough for humans to notice as well). Otherwise it won’t make much of a difference, because these models are often trained on a compressed and downsized version of the image (in what’s called latent space).
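To put a rough number on that compression, using Stable Diffusion’s autoencoder as the example (8x spatial downsampling into 4 latent channels):

```python
# Values per image: pixel space vs. latent space for a 512x512 image.
pixels = 512 * 512 * 3          # RGB pixel values: 786,432
latents = (512 // 8) ** 2 * 4   # 64 x 64 x 4 latent values: 16,384
print(f"{pixels / latents:.0f}x fewer values")  # 48x - subtle per-pixel
# perturbations largely vanish in the encode/downsample step
```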

ClamDrinker ,

You know this one’s dated because, by today’s standards, having only a C: drive is quite unusual. Hard to find statistics on it, but I’d wager most people have at least two storage devices; an SSD/HDD combo especially is pretty popular.

OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling's Harry Potter series (www.businessinsider.com)

A new research paper laid out ways in which AI developers should try and avoid showing LLMs have been trained on copyrighted material.

ClamDrinker , (edited )

This is just OpenAI covering their ass by attempting to block the most egregious and obvious outputs in legal gray areas, something they’ve been doing for a while - hence why their AI models are known to be massively censored. I wouldn’t call that ‘hiding’. It’s kind of hard to hide that it was trained on copyrighted material, since that’s common knowledge, really.

ClamDrinker ,

Get one of those pillows where you can remove or add stuffing - Be your own Walter White.

ClamDrinker ,

It’s a bit of a flawed comparison (AI vs a hammer) - but let me try.

If you put a single nail into wood with a hammer - which anyone with a hammer can also do, and even a hammer-swinging machine could do without human input - you can’t protect that.

If you put nails into wood with the hammer so that it shows a face, you can protect it. But you would still not be protecting the process of the single nail (even though the nail face is made up of repeating that process many times), you would specifically be protecting the identity of the face made of nails as your human artistic expression.

To bring it back to AI: if the AI can do it without sufficient input from a human author (e.g. only a simple prompt, no post-processing, no compositing, etc.), it’s likely not going to be protectable, since anyone can take the same AI model, use the same prompt, and get the same or a very similar result as you did (which would be the equivalent of putting a single nail into the wood).

Take the output, modify it, refine it, composite it, and you’re creating the hammer equivalent of a nail face. The end result was only possible because of your human input, and that means it can be protected.

ClamDrinker ,

It’s a good thing they weren’t making an argument then - but asking a (flawed) question. Just like comparing machine learning to stealing is a flawed comparison.
