
The Irony of 'You Wouldn't Download a Car' Making a Comeback in AI Debates

Those claiming AI training on copyrighted works is “theft” misunderstand key aspects of copyright law and AI technology. Copyright protects specific expressions of ideas, not the ideas themselves. When AI systems ingest copyrighted works, they’re extracting general patterns and concepts - the “Bob Dylan-ness” or “Hemingway-ness” - not copying specific text or images.

This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages. The AI discards the original text, keeping only abstract representations in “vector space”. When generating new content, the AI isn’t recreating copyrighted works, but producing new expressions inspired by the concepts it’s learned.
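
To make the "vector space" point concrete, here is a toy sketch (nothing like a production LLM's training pipeline, and the vectors here are random stand-ins for learned ones): after processing, all that remains is an array of numbers, not the source text.

```python
# Toy sketch: text goes in, only numeric vectors remain. The vectors here are
# random stand-ins; a real model would learn them, but either way no copy of
# the original sentence is stored.
import numpy as np

corpus = ["the quick brown fox", "the lazy dog sleeps"]
vocab = sorted({w for line in corpus for w in line.split()})
word_to_idx = {w: i for i, w in enumerate(vocab)}

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 8))  # one 8-dim vector per word

def embed(sentence: str):
    """Reduce a sentence to the mean of its word vectors."""
    return embeddings[[word_to_idx[w] for w in sentence.split()]].mean(axis=0)

print(embed("the quick brown fox"))  # just 8 floats; the text itself is gone
```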

This is fundamentally different from copying a book or song. It’s more like the long-standing artistic tradition of being influenced by others’ work. The law has always recognized that ideas themselves can’t be owned - only particular expressions of them.

Moreover, there’s precedent for this kind of use being considered “transformative” and thus fair use. The Google Books project, which scanned millions of books to create a searchable index, was ruled legal despite protests from authors and publishers. AI training is arguably even more transformative.

While it’s understandable that creators feel uneasy about this new technology, labeling it “theft” is both legally and technically inaccurate. We may need new ways to support and compensate creators in the AI age, but that doesn’t make the current use of copyrighted works for AI training illegal or unethical.

For those interested, this argument is nicely laid out by Damien Riehl in FLOSS Weekly episode 744. twit.tv/shows/floss-weekly/episodes/744

TommySoda ,

Here’s an experiment for you to try at home. Ask an AI model a question, copy a sentence or two of what they give back, and paste it into a search engine. The results may surprise you.
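
If you want to script it, here's a hedged sketch (assumes the `openai` package and an API key in your environment; the model name and prompt are just examples):

```python
# Sketch of the experiment: get a model reply, then search the web for an
# exact match of its first sentence.
from urllib.parse import quote_plus
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Describe the opening of Moby-Dick."}],
)
sentence = resp.choices[0].message.content.split(".")[0] + "."

# Open this URL and compare the top hits to the model's wording.
print('https://duckduckgo.com/?q=%22' + quote_plus(sentence) + '%22')
```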

And stop comparing AI to humans and then giving AI models more freedom. If I wrote a paper, I’d need to cite my sources. Where the fuck are your sources, ChatGPT? Oh right, we’re not allowed to see that, but you can take whatever you want from us. Sounds fair.

someguy3 ,

Can you just give us the TL;DR?

freeman ,

It’s not a breach of copyright or other IP law not to cite sources on your paper.

Getting your paper rejected for lacking sources is also not an infringement of your freedom. Being forced to pay damages and delete your paper from any public space would be an infringement of your freedom.

explore_broaden ,

I’m pretty sure citing sources isn’t really relevant to copyright violation: either you’re violating or you’re not. Saying where you copied from doesn’t change anything, and if you’re using some ideas with your own analysis and words, it isn’t a violation either way.

TommySoda ,

I mean, you’re not necessarily wrong. But that doesn’t change the fact that it’s still stealing, which was my point. Just because laws haven’t caught up to it yet doesn’t make it any less of a shitty thing to do.

Octopus1348 ,
@Octopus1348@lemy.lol avatar

When I analyze a melody I play on a piano, I see that it reflects the music I heard that day or sometimes, even music I heard and liked years ago.

Having parts similar or a part that is (coincidentally) identical to a part from another song is not stealing and does not infringe upon any law.

freeman ,

It’s not stealing; it’s not even ‘piracy’, which also is not stealing.

Copyright laws need to be scaled back so they don’t criminalize socially accepted behavior, not expanded.

kibiz0r ,

This isn’t even stealing cheese to run a sandwich shop.

It’s stealing cheese to melt it all together and run a cheese shop that undercuts the original cheese shops they stole from.

TheKMAP ,

Whatever happened to “copying isn’t stealing”?

I think the crux of the conversation is whether or not the world is better with ChatGPT. I say yes. We can tackle the disinformation in another effort.

Varyk ,

The tweet is good; the argument in your post body is completely wrong.

fancyl ,

Are the models that OpenAI creates open source? I don’t know enough about LLMs, but if OpenAI wants exemptions from the law, it should result in a public good (emphasis on public).

QuadratureSurfer ,
@QuadratureSurfer@lemmy.world avatar

The STT (speech to text) model that they created is open source (Whisper) as well as a few others:

github.com/openai/whisper

github.com/orgs/openai/repositories?type=all

WalnutLum ,

Those aren’t open source, neither by the OSI’s Open Source Definition nor by the OSI’s Open Source AI Definition.

The important part for the latter being a published listing of all the training data. (Trainers don’t have to provide the data, but they must provide at least a way to recreate the model given the same inputs).

Data information: Sufficiently detailed information about the data used to train the system, so that a skilled person can recreate a substantially equivalent system using the same or similar data. Data information shall be made available with licenses that comply with the Open Source Definition.

They are model-available if anything.

QuadratureSurfer ,
@QuadratureSurfer@lemmy.world avatar

I did a quick check on the license for Whisper:

Whisper’s code and model weights are released under the MIT License. See LICENSE for further details.

So that definitely meets the Open Source Definition on your first link.

And it looks like it also meets the definition of open source as per your second link.

Additional WER/CER metrics corresponding to the other models and datasets can be found in Appendix D.1, D.2, and D.4 of the paper, as well as the BLEU (Bilingual Evaluation Understudy) scores for translation in Appendix D.3.
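
For anyone who wants to kick the tires, a minimal usage sketch of that MIT-licensed package (assumes `pip install -U openai-whisper`, ffmpeg installed, and a local audio file; the filename here is just an example):

```python
import whisper  # the MIT-licensed package from github.com/openai/whisper

model = whisper.load_model("base")          # downloads the openly released weights
result = model.transcribe("interview.mp3")  # local speech-to-text, no API needed
print(result["text"])
```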

graycube ,

Nothing about OpenAI is open-source. The name is a misdirection.

If you use my IP without my permission and profit from it, then that is IP theft, whether or not you republish a plagiarized version.

dariusj18 ,

So I guess every reaction and review on the internet that is ad-supported or behind a paywall is theft too?

RicoBerto ,

No, we have rules on fair use and derivative works. Sometimes they fall on one side, sometimes another.

InvertedParallax ,

Fair use by humans.

There is no fair use by computers, otherwise we couldn’t have piracy laws.

masterspace ,

OpenAI does not publish their models openly. Other companies like Microsoft and Meta do.

lettruthout ,

If they can base their business on stealing, then we can steal their AI services, right?

LibertyLizard ,

Pirating isn’t stealing but yes the collective works of humanity should belong to humanity, not some slimy cabal of venture capitalists.

sorghum ,
@sorghum@sh.itjust.works avatar

Also, ingredients to a recipe aren’t covered under copyright law.

ProstheticBrain ,

ingredients to a recipe may well be subject to copyright, which is why food writers make sure their recipes are “unique” in some small way. Enough to make them different enough to avoid accusations of direct plagiarism.

E: removed unnecessary snark

General_Effort ,

In what country is that?

Under US law, you cannot copyright recipes. You can own a specific text in which you explain the recipe. But anyone can write down the same ingredients and instructions in a different way and own that text.

General_Effort ,

Yes, that’s exactly the point. It should belong to humanity, which means that anyone can use it to improve themselves. Or to create something nice for themselves or others. That’s exactly what AI companies are doing. And because it is not stealing, it is all still there for anyone else. Unless, of course, the copyrightists get their way.

WaxedWookie ,

Unlike regular piracy, accessing “their” product hosted on their servers using their power and compute is pretty clearly theft. Morally correct theft that I wholeheartedly support, but theft nonetheless.

LibertyLizard ,

Is that how this technology works? I’m not the most knowledgeable about tech stuff honestly (at least by Lemmy standards).

masterspace ,

How do you feel about Meta and Microsoft who do the same thing but publish their models open source for anyone to use?

lettruthout ,

Well, how long do you think that’s going to last? They are for-profit companies, after all.

masterspace ,

I mean, we’re having a discussion about what’s fair; my implicit question is whether that would be a fair regulation to impose.

WalnutLum ,

Those aren’t open source, neither by the OSI’s Open Source Definition nor by the OSI’s Open Source AI Definition.

The important part for the latter being a published listing of all the training data. (Trainers don’t have to provide the data, but they must provide at least a way to recreate the model given the same inputs).

Data information: Sufficiently detailed information about the data used to train the system, so that a skilled person can recreate a substantially equivalent system using the same or similar data. Data information shall be made available with licenses that comply with the Open Source Definition.

They are model-available if anything.

helenslunch ,
@helenslunch@feddit.nl avatar

Those claiming AI training on copyrighted works is “theft” misunderstand key aspects of copyright law and AI technology.

Or maybe they’re not talking about copyright law. They’re talking about basic concepts. Maybe copyright law needs to be brought into the 21st century?

dhork ,

Bullshit. AI are not human. We shouldn’t treat them as such. AI are not creative. They just regurgitate what they are trained on. We call what it does “learning”, but that doesn’t mean we should elevate what they do to be legally equal to human learning.

It’s this same kind of twisted logic that makes people think Corporations are People.

masterspace ,

Ok, ignore this specific company and technology.

In the abstract, if you wanted to make artificial intelligence, how would you do it without using the training data that we humans use to train our own intelligence?

We learn by reading copyrighted material. Do we pay for it? Sometimes. Sometimes a teacher read it a while ago and then just regurgitated basically the same copyrighted information back to us in a slightly changed form.

doctortran , (edited )

We learn by reading copyrighted material.

We are human beings. The comparison is false on its face, because what you all are calling AI isn’t in any conceivable way comparable to the complexity and versatility of a human mind, yet you continue to spit this lie out, over and over again, trying to play it up like it’s Data from Star Trek.

This model isn’t “learning” anything in any way that is even remotely like how humans learn. You are deliberately simplifying the complexity of the human brain to make that comparison.

Moreover, human beings make their own choices, they aren’t actual tools.

They pointed a tool at copyrighted works and told it to copy, do some math, and regurgitate it. What the AI “does” is not relevant, what the people that programmed it told it to do with that copyrighted information is what matters.

There is no intelligence here except theirs. There is no intent here except theirs.

masterspace , (edited )

We are human beings. The comparison is false on its face, because what you all are calling AI isn’t in any conceivable way comparable to the complexity and versatility of a human mind, yet you continue to spit this lie out, over and over again, trying to play it up like it’s Data from Star Trek.

If you fundamentally do not think that artificial intelligences can be created, the onus is on you to explain why it’s impossible to replicate the circuitry of our brains. Everything in science we’ve seen thus far has shown that we are merely physical beings that can be recreated physically.

Otherwise, I asked you to examine a thought experiment where you are trying to build an artificial intelligence, not necessarily an LLM.

This model isn’t “learning” anything in any way that is even remotely like how humans learn. You are deliberately simplifying the complexity of the human brain to make that comparison.

Or you are overcomplicating yourself to seem more important and special. Definitely no way that most people would be biased towards that, is there?

Moreover, human beings make their own choices, they aren’t actual tools.

Oh please do go ahead and show us your proof that free will exists! Thank god you finally solved that one! I heard people were really stressing about it for a while!

They pointed a tool at copyrighted works and told it to copy, do some math, and regurgitate it. What the AI “does” is not relevant, what the people that programmed it told it to do with that copyrighted information is what matters.

“I don’t know how this works but it’s math and that scares me so I’ll minimize it!”

pmc ,

If we have an AI that’s equivalent to humanity in capability of learning and creative output/transformation, it would be immoral to just use it as a tool. At least that’s how I see it.

masterspace ,

I think that’s a huge risk, but we’ve only ever seen a single, very specific type of intelligence, our own / that of animals that are pretty closely related to us.

Movies like Ex Machina and Her do a good job of pointing out that there is nothing that inherently means that an AI will be anything like us, even if they can appear that way or pass at tasks.

It’s entirely possible that we could develop an AI so specifically trained that it would provide the best script-editing notes but be incapable of anything else, including self-reflection or feeling loss.

drosophila ,

This model isn’t “learning” anything in any way that is even remotely like how humans learn. You are deliberately simplifying the complexity of the human brain to make that comparison.

I do think the complexity of artificial neural networks is overstated. A real neuron is a lot more complex than an artificial one, and real neurons are not simply feed forward like ANNs (which have to be because they are trained using back-propagation), but instead have their own spontaneous activity (which kinda implies that real neural networks don’t learn using stochastic gradient descent with back-propagation). But to say that there’s nothing at all comparable between the way humans learn and the way ANNs learn is wrong IMO.
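
For reference, here’s what that plain feed-forward / back-propagation training loop looks like in miniature (a toy two-layer network in numpy; real frameworks automate the gradient math):

```python
# Minimal feed-forward network trained with back-propagation and plain SGD.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))               # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)   # toy labels (same-sign task)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # output layer
lr = 0.1

for _ in range(1000):
    # Forward pass: activity flows strictly input -> hidden -> output.
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid output

    # Backward pass: errors propagate in the opposite direction.
    grad_out = (p - y[:, None]) / len(X)    # dLoss/dLogit for cross-entropy
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T * (1 - h**2)   # tanh derivative
    grad_W1 = X.T @ grad_h

    W2 -= lr * grad_W2; b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * grad_W1; b1 -= lr * grad_h.sum(axis=0)

print("accuracy after training:", ((p > 0.5).ravel() == y.astype(bool)).mean())
```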

If you read books such as V.S. Ramachandran and Sandra Blakeslee’s Phantoms in the Brain or Oliver Sacks’ The Man Who Mistook His Wife For a Hat you will see lots of descriptions of patients with anosognosia brought on by brain injury. These are people who, for example, are unable to see but also incapable of recognizing this inability. If you ask them to describe what they see in front of them they will make something up on the spot (in a process called confabulation) and not realize they’ve done it. They’ll tell you what they’ve made up while believing that they’re telling the truth. (Vision is just one example, anosognosia can manifest in many different cognitive domains).

It is V.S. Ramachandran’s belief that there are two processes that occur in the brain: a confabulator (or “yes-man”, so to speak) and an anomaly detector (or “critic”). The yes-man’s job is to offer up explanations for sensory input that fit within the existing mental model of the world, whereas the critic’s job is to advocate for changing the world-model to fit the sensory input. In patients with anosognosia something has gone wrong in the connection between the critic and the yes-man in a particular cognitive domain, and as a result the yes-man is the only one doing any work. Even in a healthy brain you can see the effects of the interplay between these two processes, such as with the placebo effect and in hallucinations brought on by sensory deprivation.

I think ANNs in general and LLMs in particular are similar to the yes-man process, but lack a critic to go along with it.

What implications does that have on copyright law? I don’t know. Real neurons in a petri dish have already been trained to play games like DOOM and control the yoke of a simulated airplane. If they were trained instead to somehow draw pictures what would the legal implications of that be?

There’s a belief that laws and political systems are derived from some sort of deep philosophical insight, but I think most of the time they’re really just whatever works in practice. So, what I’m trying to say is that we can just agree that what OpenAI does is bad and should be illegal without having to come up with a moral imperative that forces us to ban it.

Geobloke ,

And that’s all paid for. Think how much has been invested in just the average high school graduate; AI companies want all that, but for free.

masterspace ,

It’s not though.

A huge amount of what you learn, someone else paid for, then they taught that knowledge to the next person, and so on. By the time you learned it, it had effectively been pirated and copied by human brains several times before it got to you.

Literally anything you learned from a Reddit comment or a Stack Overflow post for instance.

Geobloke ,

If only there were a profession that exchanges knowledge for money. Someone who “teaches.” I wonder who would pay them.

Wiz ,

The thing is, they can have scads of free stuff that is not copyrighted. But they are greedy and want copyrighted stuff, too.

masterspace ,

We all should. Copyright is fucking horseshit.

It costs literally nothing to make a digital copy of something. There is ZERO reason to restrict access to things.

Wiz ,

You sound like someone who has not tried to make an artistic creation for profit.

masterspace ,

You sound like someone unwilling to think about a better system.

EldritchFeminity ,

The argument that these models learn in a way that’s similar to how humans do is absolutely false, and the idea that they discard their training data and produce new content is demonstrably incorrect. These models can and do regurgitate their training data, including copyrighted characters.

And these things don’t learn styles, techniques, or concepts. They effectively learn statistical averages and patterns and collage them together. I’ve gotten to the point where I can guess which model of image generator was used based on the same repeated mistakes they make every time. Take a look at any generated image, and you won’t be able to identify where a light source is, because the shadows come from all different directions. These things don’t understand the concept of a shadow or lighting; they just know that statistically lighter pixels are followed by darker pixels of the same hue, and that some places have collections of lighter pixels.

I recently heard about an AI that scientists had trained to identify pictures of wolves, and it was working with incredible accuracy. When they went in to figure out how it was distinguishing wolves from dogs like huskies so well, they found that it wasn’t looking at the wolves at all. 100% of the images of wolves in its training data had snowy backgrounds, so it was simply searching for concentrations of white pixels (and therefore snow) to determine whether or not a picture was of wolves.
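
That wolf/snow shortcut is easy to reproduce in miniature. A toy sketch with synthetic, purely illustrative data, where the “classifier” only ever looks at overall brightness:

```python
# Toy reproduction of the wolf/snow shortcut: each "image" is reduced to its
# mean pixel brightness. All data here is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
wolf_imgs = rng.normal(loc=0.8, scale=0.05, size=500)  # wolves shot on snow: bright
dog_imgs = rng.normal(loc=0.4, scale=0.05, size=500)   # huskies indoors: darker

threshold = 0.6  # the "learned" rule: lots of white pixels means wolf

def predict_wolf(brightness):
    return brightness > threshold

# Near-perfect accuracy on data that shares the spurious correlation...
acc = (predict_wolf(wolf_imgs).mean() + (~predict_wolf(dog_imgs)).mean()) / 2
print(f"accuracy: {acc:.0%}")

# ...but a wolf photographed on grass (dark background) is misclassified.
print("wolf on grass detected?", bool(predict_wolf(0.4)))
```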

Riccosuave ,
@Riccosuave@lemmy.world avatar

Even if they learned exactly like humans do, like so fucking what, right!? Humans have to pay EXORBITANT fees for higher education in this country. Arguing that your bot gets socialized education before the people do is fucking absurd.

v_krishna ,
@v_krishna@lemmy.ml avatar

That seems more like an argument for free higher education rather than restricting what corpuses a deep learning model can train on

Malfeasant ,

Tomato, tomato…

interdimensionalmeme ,

The solution is that any AI must always be released under a strong copyleft, and possibly that copyright should be abolished outright, as it has only served the powerful by allowing them to enclose humanity’s common intellectual heritage (see Disney’s looting and enclosure of ancestral children’s stories). If you choose to strengthen the current regime, don’t expect things to improve for you as an irrelevant, atomised individual.

Dran_Arcana ,

Devil’s Advocate:

How do we know that our brains don’t work the same way?

Why would it matter that we learn differently than a program learns?

Suppose someone has a photographic memory, should it be illegal for them to consume copyrighted works?

ricecake ,

Basing your argument around how the model or training system works doesn’t seem like the best way to frame your point to me. It invites a lot of mucking about in the details of how the systems do or don’t work, how humans learn, and what “learning” and “knowledge” actually are.

I’m a human as far as I know, and it’s trivial for me to regurgitate my training data. I regularly say things that are either directly references to things I’ve heard, or accidentally copy them, sometimes with errors.
Would you argue that I’m just a statistical collage of the things I’ve experienced, seen or read? My brain has as many copies of my training data in it as the AI model, namely zero, but “Captain Picard of the USS Enterprise sat down for a rousing game of chess with his friend Sherlock Holmes, and then Shakespeare came in dressed like Mickey Mouse and said ‘to be or not to be, that is the question, for tis nobler in the heart’ or something”. Direct copies of someone else’s work, as well as multiple copyright infringements.
I’m also shit at drawing with perspective. It comes across like a drunk toddler trying their hand at cubism.

Arguing about how the model works or the deficiencies of it to justify treating it differently just invites fixing those issues and repeating the same conversation later. What if we make one that does work how humans do in your opinion? Or it properly actually extracts the information in a way that isn’t just statistically inferred patterns, whatever the distinction there is? Does that suddenly make it different?

You don’t need to get bogged down in the muck of the technical to say that even if you concede every technical point, we can still say that a non-sentient machine learning system can be held to different standards with regard to copyright law than a sentient person. A person gets to buy a book, read it, and then carry around that information in their head and use it however they want. Not-A-Person does not get to read a book and hold that information without consent of the author.
Arguing why it’s bad for society for machines to mechanise the production of works inspired by others is more to the point.

Computers think the same way boats swim. Arguing about the difference between hands and propellers misses the point that you don’t want a shrimp boat in your swimming pool. I don’t care why they’re different, or that it technically did or didn’t violate the “free swim” policy, I care that it ruins the whole thing for the people it exists for in the first place.

I think all the AI stuff is cool, fun and interesting. I also think that letting it train on everything regardless of the creators wishes has too much opportunity to make everything garbage. Same for letting it produce content that isn’t labeled or cited.
If they can find a way to do and use the cool stuff without making things worse, they should focus on that.

spacesatan ,

Am I the only person who remembers that it was “you wouldn’t steal a car”, or has everyone just decided to pretend it was “you wouldn’t download a car” because that’s easier to dunk on?

roguetrick ,

You wouldn’t shoot a policeman and then steal his helmet.

C126 ,

These anti piracy commercials have gotten really mean.

JasonDJ ,

I’m pretty sure it’s either Mandela Effect or a massive gaslighting conspiracy. Though I guess that’s true for everything that’s collectively misremembered.

Cornelius_Wangenheim ,

People remember the parody, which is usually modified to be more recognizable. Like Darth Vader never said “Luke, I am your father”; in the movie it’s actually “No, I am your father”.

arin ,

Kids pay for books; OpenAI should also pay for access to the material used for training.

renrenPDX ,

Then OpenAI should pay for a copy, like we do.

mightyfoolish ,

Is there an official statement on whether OpenAI pays for at least one copy of whatever they throw into the bots?

Shanedino ,

Maybe if you paid for training data, they would let you use copyrighted data or something?

T156 ,

Had the company paid for the training data and/or left it as voluntary, there would be less of a problem with it to begin with.

Part of the problem is that they didn’t, but are still using it for commercial purposes.

andrew_bidlaw ,
@andrew_bidlaw@sh.itjust.works avatar

Their business strategy is built on top of the assumption that they won’t. They don’t want this door opened at all. It was a great deal for Google to buy Reddit’s data for some $mil, because it is a huge collection behind one entity. Now imagine negotiating with each individual site owner whose resources they scraped.

If that had been how it started, the development of these AI tools would have been much slower because of (1) data being added to the pile only after an agreement, (2) more expenses meaning less money for hardware expansion, and (3) investors and companies being less hyped up about the whole thing because it doesn’t grow like a mushroom cloud while following legal procedures. Also, (4) the ability to investigate and collect a public list of which sites they have agreements with is pretty damning, making its own news stories and conflicts.

Veneroso ,

We have hundreds of years of out-of-copyright books and newspapers. I look forward to interacting with old-timey AI.

“Fiddlesticks! These mechanical horses will never catch on! They’re far too loud and barely faster than a man can run!”

“A Woman’s place is raising children and tending to the house! If they get the vote, what will they demand next!? To earn a Man’s wage!?”

That last one is still relevant to today’s discourse somehow!?

Kolanaki ,
@Kolanaki@yiffit.net avatar

The ingredient thing is a bit amusing, because that’s basically how one of the major fast food chains got to be so big (I can’t remember which one it was ATM though; just that it wasn’t McDonald’s). They cut out the middle-man and just bought their own farm to start growing the vegetables and later on expanded to raising the animals used for the meat as well.

NeoNachtwaechter ,

Wait… they actually STOLE the cheese from the cows?

😆

VerbFlow ,
@VerbFlow@lemmy.world avatar

There are a few problems, tho.
