
The AI-focused COPIED Act would make removing digital watermarks illegal (as well as training any kind of AI on copyrighted content)

A bipartisan group of senators introduced a new bill to make it easier to authenticate and detect artificial intelligence-generated content and protect journalists and artists from having their work gobbled up by AI models without their permission.

The Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act) would direct the National Institute of Standards and Technology (NIST) to create standards and guidelines that help prove the origin of content and detect synthetic content, like through watermarking. It also directs the agency to create security measures to prevent tampering and requires AI tools for creative or journalistic content to let users attach information about their origin and prohibit that information from being removed. Under the bill, such content also could not be used to train AI models.

Content owners, including broadcasters, artists, and newspapers, could sue companies they believe used their materials without permission or tampered with authentication markers. State attorneys general and the Federal Trade Commission could also enforce the bill, which its backers say prohibits anyone from “removing, disabling, or tampering with content provenance information” outside of an exception for some security research purposes.

(A copy of the bill is in the article; here is the important part, imo:

Prohibits the use of “covered content” (digital representations of copyrighted works) with content provenance to either train an AI- /algorithm-based system or create synthetic content without the express, informed consent and adherence to the terms of use of such content, including compensation)

riodoro1 ,

So the rich have already scalped what they could. Now it can be made illegal

admin ,

Because even when some of the water has gotten out, you still go plug the dam.

The best moment was earlier. The second best moment is now.

Grimy OP ,

This is more akin to diverting a public river into private land so the landowner can charge everyone what they were getting for free.

The river cannot be dammed and this bill doesn’t aim to even try.

A better solution would be to make all models copyleft, so even if corporations dip their cup in the water, whatever they produce has to be thrown back in.

trollbearpig ,

Maybe I’m missing something, but I don’t understand what you guys mean by “the river cannot be dammed”. LLMs need to be retrained all the time to include new data and, in general, to get them to change their behavior in any way. Wouldn’t this bill apply to all these companies as soon as they retrain their models?

I mean, I get the point that old models would be exempt from the law since laws can’t be retroactive. But I don’t get how that’s such a big deal. These companies would be stuck with old models if they refuse to train new ones. And as much hype as there is around AI, current models are still shit for the most part.

Also, can you explain why you guys think this would stop open source models? I have always thought that the best solution to stop these fucking plagiarism machines was for the open source community to create an open source training set where people contribute their art/text/whatever. Does this law prevent this? Honestly, to me this panic sounds like people without any artistic talent wanted to steal the work of artists and are now mad they can’t do it.

Grimy OP ,

The game right now is about better training methods and curating current datasets, new data is not needed.

Obviously though, eventually they will want new data so their models aren’t stuck in the past, but this won’t stop them from getting it. There isn’t a future where individuals negotiate with Google on how much they get paid; all that data is already owned by the platform it’s being posted on. Almost all websites slap on their own copyright or something similar, even for images. DeviantArt and even Cara, the platform that’s supposed to be artist friendly, do this. Anything uploaded to Google Maps gets a copyright on it if I’m not mistaken, Reddit as well. This data will be made prohibitively expensive so as to create a moat and strengthen soft monopolies.

Public datasets are great but aren’t enough in most cases. This is also the equivalent of saying “well they diverted the river, why don’t you build yourself a stream”. It’s also problematic since, by its public nature, corporations can come over, dip their cup in the water, and throw it into their river. It brings down their costs while making sure nothing can actually compete with them.

Also worth noting that there is no worthy public dataset for videos. 98% of the data is owned by YouTube or Hollywood.

trollbearpig ,

My man, I think you are mixing a lot of things. Let’s go by parts.

First, you are right that almost all websites get some copyright rights when you post on their platforms. At best, some license the content as Creative Commons or similar licenses. But that’s not new; it has been this way forever. If people are surprised that they are paying with their data at this point, I don’t know what to say hahaha. The change with this law would be that no one, big tech companies or open source, gets to use this content for free to train new models, right?

Which brings me back to my previous question: this law applies to old data too, right? You say “new data is not needed” (which is not true for chat LLMs that want to include new data, for example), but old data is still needed to use the new methods or to curate the datasets. And most of this old data was acquired by ignoring copyright laws. What I get from this law is that no one, including these companies, gets to keep using this “illegally” acquired data now, right? I mean, I’m pretty sure this is the case since movie studios and similar are the ones pushing for this law; they will not go like “it’s ok you stole all our previous libraries, just don’t steal the new stuff” hahahaha.

I do get your point that the most likely end result is that movie studios, record labels, social media platforms, etc., will just start selling the rights to train on their data, and the only companies who will be able to afford this are the big tech companies. But still, I think this is a net positive (weird times for me to be on the side of these awful companies hahaha).

First of all, it means no one, including big tech companies, gets to steal content that is not theirs or given to them willingly. I’m particularly interested in open source code, but the same applies to indie art and any other form of art outside of the big companies. When we say that we want to stop the plagiarism, it’s not a joke. Tech companies are using LLMs to attack the open source community by stealing the code under the excuse of LLMs being transformative (bullshit of course). Any law that stops this is a positive to me.

And second of all, consider the 2 futures we have in front of us. Option one is we get laws like this, forcing AI to comply with copyright law, which basically means we maintain the current status quo for intellectual property. Not great obviously, but the alternative is so much worse. Option two is we allow people to use LLMs to steal all the intellectual property they want, which puts an end to basically any market incentive to produce art by humans. Again, the current copyright system is awful. But why do you guys want a system where we as individuals have to keep complying with copyright but any company can bypass that with an LLM? Or how do you guys think this is going to pan out if we just don’t regulate AI?

Grimy OP ,

Google already paid 6 million to Reddit for their dataset (preemptively since I’m guessing they are lobbying for laws like this), I didn’t get a dime. Who do you think this helps here?

The change with this law would be that no one, big tech companies or open source, gets to use this content for free to train new models right?

My point is that this essentially ensures that ONLY big tech companies will get to use the content. Do you think they mind spending a few million if it gives them a monopoly? They actively want this.

If it’s between the platform I use getting paid for my content while I get nothing (and then having to pay OpenAI to use a tool built with my content), or the platform and me both getting nothing while I get free AI, I will choose the latter.

There are two scenarios, and in both, AI massively brings up productivity and huge layoffs happen. The difference is that in one scenario, the tools are priced low enough that it’s economical to replace 5 workers with them, but high enough that those same workers can’t afford them and compete with the business that just fired them. A situation where no company can remain competitive without paying OpenAI or Google 50k a month is a dystopian nightmare.

Open source is the best way to make sure this doesn’t happen and while these laws are the smallest of speed bumps for big tech companies, it is a literal wall for FOSS.

The best solution would be to copyleft all models using public data, the second best would be to leave things as is. This isn’t a solution but regulatory capture.

trollbearpig ,

My man, I think you are delusional hahahaha. You are giving way too much credit to a technology that’s just a glorified autocomplete. But I guess I get your point: if you think that AI (and LLMs in particular hahahaha) is the way of the future and all that, then this is apocalyptic hahahahaha.

But you are delusional, my man. The only practical use so far for these stupid LLMs is autocomplete, which works great when it works. And bypassing copyright law by pretending it’s producing novel shit. But that’s a whole other discussion; time will show this is just another bubble like crypto hahahaha. For now, I hope they at least force everyone to stop plagiarising other people’s work with AI.

Grimy OP ,

Prohibits the use of “covered content” (digital representations of copyrighted works) with content provenance to either train an AI- /algorithm-based system or create synthetic content without the express, informed consent and adherence to the terms of use of such content, including compensation

This affects a lot more than just LLMs and essentially fucks any use of machine learning. You do not understand what you are defending. This kills Kaggle and Hugging Face overnight, since I figure corporations will be able to keep already-created datasets for internal use, but distribution will be a no-go.

You also have to be willfully blind to seriously think LLMs have no use cases. Ignoring the entertainment value, it’s a huge productivity boost, and chatbots using it are now commonplace on websites (I preferred when it was actual people, but that’s beside the point). I work in research and we are currently building a bunch of internal tools to use with our data.

Hahaha all you want, but you are defending something completely against your own self-interest and that of society.

trollbearpig ,

So you are saying that content scraped before the law is fair game to train new models? If so, it’s fucking terrible. But again, I doubt this is the case, since this would be against the interests of the big copyright holders. And if it’s not the case, you are just creating a storm in a glass of water, since this affects the companies too.

As a side point, I’m really curious about LLM uses. As a programmer, the only useful product I have seen so far is Copilot and similar tools. And I ended up disabling the fucking thing because it produces too much garbage hahaha. But I’m the first to admit I haven’t been following this hype cycle hahahaha, so I’m really curious what the big things will be. You clearly know so much, so do you want to enlighten me?

Grimy OP ,

This bill is being built with the interests of the big tech companies in mind imo; big copyright holders are just an afterthought. I figure since big tech spent quite a bit of money building those datasets and since they were built before the law, they will be able to keep using them as long as they don’t add anything new but I can’t be certain.

The use cases are vast. This is a huge boon for the indie gaming and animation industry. I’m seriously excited to have NPCs running on LLMs and don’t want to be forced into a subscription just to play my games. It’s also going to bring smart homes to another level. Systems can be built that are much stronger than Alexa without having to send all that insanely private data to Amazon. There’s a huge privacy issue if all the available models only run on Google’s or OpenAI’s cloud, but I won’t get into that (not to mention that these corporate LLMs will eventually be trained for advertisement and will essentially be poisoned to prefer whoever is paying their creator).

I’ll give some more concrete example with my work but it will be a bit vague to preserve my anonymity.

I work in research (I originally studied software engineering and robotics) and we have about 20 years’ worth of projects. None of it is standardized and it’s honestly a mess. I built a system in the space of a few days that grabs every one of those docs, reads through it with an LLM, and then classifies them doc by doc into an Excel sheet with a SharePoint link. I’ve got 20 columns in there: it summarizes them, chooses from a list of 30 document types I gave it, extracts related towns, people, companies, and domains, extracts the columns if there are any tables inside, and generally establishes a bunch of different relationships. It doesn’t sound like much, but doing it by hand would have been weeks of tedious work. My computer did it in 20 minutes using a local LLM, so sensitive client data doesn’t leave the building.
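The skeleton of a pipeline like that is small. Here is a minimal sketch in Python, with the local LLM call stubbed out so it runs on its own; the function names, document types, and the pattern-matching stand-in are all hypothetical, not the actual system:

```python
import csv
import io

# Hypothetical stand-in for a local LLM call (a real version would send each
# document's text to an on-prem model and parse its structured answer).
DOC_TYPES = ["report", "proposal", "invoice"]

def classify_with_llm(text: str) -> dict:
    """Summarize a document and pick a type from a fixed list."""
    doc_type = next((t for t in DOC_TYPES if t in text.lower()), "other")
    return {"summary": text[:40], "doc_type": doc_type}

def build_index(docs: dict) -> list:
    """One row per document: filename plus the model's extracted fields."""
    return [{"file": name, **classify_with_llm(text)} for name, text in docs.items()]

docs = {
    "q1.txt": "Quarterly report on regional projects.",
    "bid.txt": "Proposal for the new survey contract.",
}
rows = build_index(docs)

# Dump the rows as a spreadsheet (CSV here; an Excel export works the same way).
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["file", "doc_type", "summary"])
writer.writeheader()
writer.writerows(rows)
```

The real work is in prompting the model to return reliable structured fields; the loop-and-spreadsheet part stays this simple.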

Right now I’m working on a GraphRAG system that will take all those documents and turn them into vectors; then an LLM adds relationships to those vectors. It will be incorporated into an internal chatbot so people can ask questions and not only get a natural language answer but also the references where the information was found, with quick access to them. It’s vector search on steroids and will cost nothing to run. I’m planning on eventually training the chatbot itself on our data so it can have a better understanding of our research sector as well as direct access to all the documents.
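The retrieval step at the heart of a system like that can be sketched with a toy bag-of-words "embedding"; a real setup would use a proper embedding model and a vector store, but the score-and-rank flow is the same (the corpus and file names here are made up for illustration):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a real system would use a sentence
    # embedding model, but the retrieval logic below is unchanged.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: dict, k: int = 1) -> list:
    """Return the k document names most similar to the query; an LLM would
    then answer from those documents and cite them as references."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda name: cosine(q, embed(corpus[name])), reverse=True)
    return ranked[:k]

corpus = {
    "soils.txt": "soil sampling results for the northern sites",
    "roads.txt": "road maintenance budget for the county",
}
print(retrieve("where are the soil sampling results", corpus))  # ['soils.txt']
```

GraphRAG layers an entity-relationship graph on top of this, so answers can follow links between documents instead of relying on similarity alone.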

Next is building something that gets info automatically from the web. Sometimes we have to create long Excel sheets with a bunch of different data points. We stay at a state level usually, but it can sometimes mean 1000 businesses, and we have to google each one manually and find the info. It’s sometimes weeks of work and honestly sucks to do. LLMs are entirely capable of doing this kind of work and would take a few hours at most, again at no cost.

These things are seriously great whenever they’re dealing with data that isn’t just numbers and is hard to quantify. I hate Reddit and will never create an account there after what happened, but I still go daily to the localllama subreddit; it’s a great source of information if you want to keep abreast of what’s happening.

trollbearpig ,

I figure since big tech spent quite a bit of money building those datasets and since they were built before the law, they will be able to keep using them as long as they don’t add anything new but I can’t be certain.

This is a very weird assumption you are making, man. The quoted text you sent above pretty much says the opposite. It says everyone who wants to train their models with copyrighted data needs to get permission from the copyright holders. That is great for me, period. No one, not a big company nor the open source community, gets to steal the work of people producing art, code, etc. I honestly don’t get why you assume all the data scraped before would be exempt. Again, very weird assumption.

As for ML algorithms having uses, of course they have. Hell, pretty much every company I have worked with has used them for decades. But take a look at the examples you provided. None of them requires you or your company scraping a bunch of information from randoms on the internet. Especially not copyrighted art, literature, or code. And that’s the point here: you are acting like all of that stops with these laws, but that’s ridiculous.

Grimy OP ,

The article is pro corpo, I’m looking at the bill and it’s quite clear where it’s headed.

None of what I mentioned is possible without the LLM that’s at its heart. Just training an LLM is a million or two in compute power. We don’t get the next generation for free if laws like this tack on an extra 80 million. It was 6 million for Reddit, and that was when you could scrape it for free; that’s just a drop in the bucket.

afraid_of_zombies ,

Yeah, it is really messed up that Disney made untold tens of billions of dollars on public domain stories, effectively cut us off from our own culture, then extended the duration to indefinite. I wonder why nearly everyone was silent about this issue for multiple decades until it became cliche to pretend to care about furry porn creators.

Creatives have always been screwed; we are the first civilization to not only screw them but screw the general public too. As shit as it was in the past, you could at least just copy a freaken scroll.

Anyway you guys have fun defending some of the worst assholes in human history while acting like you care about people you weren’t even willing to give a buck a month to on patreon.

NeoNachtwaechter ,

LOL

So I take your photo, remove your watermark, put my own watermark on it, and then I sue you for removing my watermark.

General_Effort ,

Don’t be a fool. Of course, content corporations like Disney or the NYT are able to prove just when something was made.

NeoNachtwaechter ,

Don’t be a fool either.

Of course I am going to do this to you, not to Disney etc. because I am way better at creating proof than you are.

And of course Disney etc. are going to do this to you and me, because they are even better at creating proof than you and me are.

That’s how foolish this law is.

Womble ,

So what you’re saying is that this is a law designed to extend corporate control over information and culture even further?

General_Effort ,

This bill reads like it was written by Adobe.

This provenance labelling scheme already exists. Adobe was a major force behind it. (see here: en.wikipedia.org/…/Content_Authenticity_Initiativ… ). This bill would make it so that further development will be tax-funded through organizations like DARPA.

Of course, they are also against fair use. They pay license fees for AI training. For them, it means more cash flow.

explodicle ,

It’s pretty cheap to just time stamp everything.
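It is: hash the content and record the hash with a date, and you can later prove the content existed unchanged at that time. A minimal sketch (what actually makes the date trustworthy is publishing the hash to an append-only log or a timestamping service, which this sketch only mentions, not implements):

```python
import hashlib
import time

def timestamp_record(content: bytes, now=None) -> dict:
    """Build a record of a content hash plus a timestamp. Publishing the
    hash somewhere append-only is what makes the date verifiable."""
    digest = hashlib.sha256(content).hexdigest()
    return {"sha256": digest, "unix_time": now if now is not None else time.time()}

def verify(content: bytes, record: dict) -> bool:
    # The content matches the record iff its hash is unchanged.
    return hashlib.sha256(content).hexdigest() == record["sha256"]

record = timestamp_record(b"my original photo bytes", now=1700000000.0)
assert verify(b"my original photo bytes", record)
assert not verify(b"tampered bytes", record)
```

Note this proves priority, not authorship: whoever timestamps first has the older record, which is exactly the loophole the comments above are arguing about.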

cyd ,

If this passes, this would have the perverse effect of making China (and maybe to a lesser extent the Middle East) the leading suppliers of open source / open weight AI models…

Melt ,

China would be the world leader in making AI model trained on copyrighted content

catloaf ,

And as the vast majority of content is not licensed for AI model training, they would have an immensely larger dataset to train on.

Petter1 ,

Well, there is also Europe ✌🏻

General_Effort ,

No. In the EU, the lobbyists have already won. Major countries, like Germany, have always had very conservative copyright laws. I believe it’s one reason why their cultures are losing so hard.

Surprisingly, Japan has adopted a very sensible law on AI training.

afraid_of_zombies ,

I am just sitting here with my eye twitching thinking of all the code I have had to deal with from German companies over the years.

Doomsider ,

If you put something on the Internet you are giving up ownership of it. This is reality and companies taking advantage of this for AI have already proven this is true.

You are not going to be able to put the cat back in the bag. The whole concept of ownership over art, ideas, and our very culture was always ridiculous.

It is past time to do away with the joke of the legal framework we call IP law. It is merely a tool for monied interests to extract more obscene profit from our culture at this point.

There is only one way forward and that is sweeping privacy protections. No more data collection, no more targeted advertising, no more dark patterns. The problem is corporations are not going to let that happen without a fight.

nasi_goreng ,

deleted_by_author

    afraid_of_zombies ,

    Yeah in theory but in practice that isn’t happening. In theory the laws could be structured such that creatives are being paid fairly and distributors make some money and that the general public knows the stuff will be public domain in a relatively short period of time.

    No one is doing it and they had hundreds of years to figure out how to do it. You are asking us to take it on faith and I personally will not.

    LainTrain ,

    Incredibly well-put. IP is just land for the wannabe landlords of information and culture.

    They are just attempting to squeeze the working class dry, take the last freedoms we have so we have to use their corporate products.

    afraid_of_zombies ,

    Anyone supporting this had better also be against right to repair and in favor of jail time for anyone discussing a sporting event without written permission

    catloaf ,

    This sounds exactly like existing copyright law and DRM.

    Grimy OP ,

    It’s strengthening copyright laws by negating the transformative clause when dealing with AI

    admin ,

    Hopefully the next step: force every platform that deals in user generated content to give users the choice to exploit that content for a fraction of the profit, or to exclude their content from processing.

    It’s amazing how many people don’t realize that they themselves also hold copyright over their content, and that laws like these protect them as well.

    Grimy OP , (edited )

    This is essentially regulatory capture. The article is very lax on calling it what it is.

    A few things to consider:

    • Laws can’t be applied retroactively; this would essentially close the door behind OpenAI, Google, and Microsoft. OpenAI with Sora, in conjunction with the big Hollywood companies, will be the only ones able to do proper video generation.
    • Individuals will not be getting paid, databrokers will.
    • They can easily pay pennies to a third world artist to build them a dataset copying a style. Styles are not copyrightable.
    • The open source scene is completely dead in the water and so is fine tuning for individuals.

    Edit: This isn’t entirely true, there is more leeway for non commercial models, see comments below.

    • AI isn’t going away, all this does is force us and the economy into a subscription model.
    • Companies like Disney, Getty and Adobe reap everything.

    In a perfect world, this bill would be aiming to make all models copyleft instead but sadly, no one is lobbying for that in Washington and money talks.

    just_another_person ,

    deleted_by_moderator

    cm0002 ,

    Yup, I fucking knew it. I knew this is what would happen with everyone bitching about copyright this and that. I knew any legislation that came as a result was going to be bastardized and dressed up to make it look like it’s for everyone, when in reality it’s going to mostly benefit big corps that can afford licensing fees and teams of lawyers.

    People could not/would not understand how these AI models actually process images/text, or the concept of “if you post publicly, expect it to be used publicly”, and here we are…

    LainTrain , (edited )

    As always, the anprims/luddites/ecofashies (who downvoted me) are like an anvil to left-wing ideas of progress; we’re too busy arguing amongst ourselves to make a stand to protect open source AI from regulation.

    Honestly I blame Hbomberguy personally. People were a lot more open-minded before he tacked on that shitty little AI snark at the end of his plagiarism video.

    werefreeatlast ,

    Introducing Chat-Stupid! It’s just like Chat-GPT, but it asks for consent to any conversation with humans so it can legally learn… don’t disclose company secrets or it will legally learn those too.

    Kowowow ,

    Sure would be fun to expand things to include a section to not let normal people make art of copyrighted material or be an excuse to mess with fair use

    autotldr Bot ,

    This is the best summary I could come up with:


    A bipartisan group of senators introduced a new bill to make it easier to authenticate and detect artificial intelligence-generated content and protect journalists and artists from having their work gobbled up by AI models without their permission.

    Content owners, including broadcasters, artists, and newspapers, could sue companies they believe used their materials without permission or tampered with authentication markers.

    State attorneys general and the Federal Trade Commission could also enforce the bill, which its backers say prohibits anyone from “removing, disabling, or tampering with content provenance information” outside of an exception for some security research purposes.

    Senate Majority Leader Chuck Schumer (D-NY) led an effort to create an AI roadmap for the chamber, but made clear that new laws would be worked out in individual committees.

    “The capacity of AI to produce stunningly accurate digital representations of performers poses a real and present threat to the economic and reputational well-being and self-determination of our members,” SAG-AFTRA national executive director and chief negotiator Duncan Crabtree-Ireland said in a statement.

    “We need a fully transparent and accountable supply chain for generative Artificial Intelligence and the content it creates in order to protect everyone’s basic right to control the use of their face, voice, and persona.”


    The original article contains 384 words, the summary contains 203 words. Saved 47%. I’m a bot and I’m open source!
