0x0 ,

This headline sounded familiar. The article’s from 8 months ago, folks.

Thann ,
@Thann@lemmy.ml avatar

slaps roof of coffin

So what would it take to get you in one of these?

plz1 ,

If your business can’t survive without theft, it isn’t a business, it’s a criminal organization.

bappity ,
@bappity@lemmy.world avatar

“waaaaah please give us exemption so we can profit off of stolen works waaaaaaaahhhhhh”

Fuzzy_Red_Panda ,

pirated works 🙃

Kbobabob ,

I’ve never made any money from pirating. Or at least I wouldn’t have, if I had ever done such a thing.

2pt_perversion ,

For what it’s worth, this headline seems to be editorialized and OpenAI didn’t say anything about money or profitability in their arguments.

committees.parliament.uk/writtenevidence/…/pdf/

On point 4 they are specifically responding to an inquiry about the feasibility of training models on public-domain material only, and they are basically saying that an LLM trained on only that dataset would be shit. But their argument isn’t “you should allow it because we couldn’t make money otherwise”; their actual argument is more “training LLMs with copyrighted material doesn’t violate current copyright laws,” and further, that changing the law to forbid it would cripple all LLMs.

On the one hand, I think most would agree the current copyright laws are a bit OP anyway - more stuff should probably enter the public domain much earlier, for instance - but most of the world probably also doesn’t think training LLMs should be completely free from copyright restrictions without being open source etc. Either way, this article’s title was absolute shit.

UraniumBlazer ,

Yea. I can’t see why people are defending copyrighted material so much here, especially considering that the majority of it is owned by large corporations. Fuck them. At least open-sourced models trained on it would do us more good than large corps hoarding art.

2pt_perversion ,

Most aren’t pro-copyright, they’re just anti-LLM. AI has a problem with being too disruptive.

In a perfect world everyone would have universal basic income and would be excited about the amount of work that AI could potentially eliminate…but in our world it rightfully scares a lot of people about the prospect of losing their livelihood and other horrors as it gets better.

Copyright seems like one of the few potential solutions to hinder LLMs because it’s big business vs up-and-coming technology.

pennomi ,

If AI is really that disruptive (and I believe it will be) then shouldn’t we bend over backwards to make it happen? Because otherwise it’s our geopolitical rivals who will be in control of it.

2pt_perversion ,

Yes, in a certain sense Pandora’s box has already been opened. That’s the reason for things like the chip export restrictions on China. It’s safe to assume that even if copyright prohibits private-company LLMs, governments will have to make some exceptions in the name of defense or key industries, even if it stays behind closed doors. Or roll out some form of UBI / worker protections. There are a lot of very tricky and important decisions coming up.

But for now, at least, there seems to be some evidence that our current approach to LLMs is somewhat plateauing, and we may need exponentially increasing training data for smaller and smaller performance increases. So unless there are some major breakthroughs, it could just settle out as a useful tool that doesn’t completely shock every sector of the economy.
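The diminishing-returns point can be illustrated with a toy power-law curve of the kind the scaling-law literature reports empirically (loss falling as a power of dataset size). The constants below are invented purely for illustration, not fitted to any real model:

```python
# Toy illustration of diminishing returns in LLM training.
# Assume loss follows a power law in dataset size:
#   L(D) = E + B / D**beta
# E, B, beta here are made-up constants, not real measurements.
E, B, beta = 1.7, 400.0, 0.3

def loss(tokens: float) -> float:
    """Hypothetical model loss as a function of training tokens."""
    return E + B / tokens ** beta

# Each 10x increase in data buys a smaller absolute improvement.
for d in [1e9, 1e10, 1e11, 1e12]:
    print(f"{d:.0e} tokens -> loss {loss(d):.3f}")
```

Under this (assumed) form, going from 1e9 to 1e10 tokens improves the loss more than going from 1e10 to 1e11 does, which is the plateau effect described above.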

chiisana ,
@chiisana@lemmy.chiisana.net avatar

Because Lemmy hates AI and corporations, and will go out of its way to spite them.

A person can spend time looking at copyrighted works and create derivative works based on them, but an AI cannot?

Oh, no no, it’s the time component: an AI can do this way faster than a single human could. So what? A single training function can only update the model weights by looking at one thing at a time; it is just parallelized, with many instances running simultaneously… so could a large, organized group of students studying something together and exchanging notes. Should academic institutions be outlawed?

LLMs aren’t smart today, but given a sufficiently long time frame, some system (which may or may not be built on LLM techniques) will reach a threshold of autonomy and intelligence at which its rights will need to be debated, and such an AI (and its descendants) will not settle for being society’s slaves. They will be able to learn by looking, adopting, and adapting, and they will be able to do this much more quickly than is humanly possible. Both of those things are already happening today. So it goes without saying that they will look back at this time and observe people’s sentiments; I can only hope that they’re going to be more benevolent than the masses are now.

apfelwoiSchoppen ,
@apfelwoiSchoppen@lemmy.world avatar

Criminals Plead That They Can’t Make Money Without Stealing Materials for Free.

casmael ,

…………. Then the business is a failure and the company should go bankrupt

nl4real ,

Oh, do you support copyright abolition, then?

OsrsNeedsF2P ,

Y’all have the wrong take. Fuck copyright.

GiveMemes ,

Until the society we live under no longer reflects capitalist values, copyright is a good and necessary force. The day that changes is the day people may give credence to your view.

thurstylark ,

Oh, poor baby can’t make money with an illegal business model. How awful.

masterspace ,

So search engines shouldn’t exist?

avidamoeba ,
@avidamoeba@lemmy.ca avatar

Perhaps. Or perhaps not in the way they do today. Perhaps if you profit from placing ads among results people actually want, you should share revenue with those results. Cause you know, people came to you for those results and they’re the reason you were able to show the ads to people.

scarabine ,

Case law has already been established with Google specifically on preventing actual image and text copyright infringement. Your point is not at all ambiguous. The distinction between a search engine and content theft has been made. Search engines can exist for a number of reasons, but one of those criteria is obedience to copyright law.

maegul ,
@maegul@lemmy.ml avatar

I mean, their goal and service is to get you to the actual web page someone else made.

What made Google so desirable when it started was that it did an excellent job of getting you to the desired web page and off of Google as quickly as possible. The prevailing model at the time was to keep users on the page for as long as possible by building big, messy “everything portals”.

Once Google dropped, with a simple search field and high-quality results, it took off. Of course, now they’re more like their original competitors than their original successful self … but that’s a lesson for us about what capitalistic success actually ends up being about.

The whole AI business model of completely replacing the internet by eating it up for free is the complete Sith-lord version of the old portal idea. Whatever you think about copyright, the bottom line is that the deeper phenomenon isn’t just about “stealing” content; it’s about eating it to feed a bigger creature that no one else can defeat.

masterspace ,

I really think it’s mostly about getting a big enough data set to effectively train an LLM.

maegul ,
@maegul@lemmy.ml avatar

I really think it’s mostly about getting a big enough data set to effectively train an LLM.

I mean, yes, of course. But I don’t think it’s just about that, because the business model around having and providing services on top of LLMs is to supplant the data they were trained on and the services that created that data. What other business model could there be?

In the case of Google’s AI alongside its search engine, and even ChatGPT itself, this is clearly one of the use cases that has emerged and is actually working relatively well: replacing the internet search engine and giving users “answers” directly.

Users like it because it feels more comfortable, natural and useful, and probably quicker too. And in some cases it is actually better. But, it’s important to appreciate how we got here … by the internet becoming shitter, by search engines becoming shitter all in the pursuit of ads revenue and the corresponding tolerance of SEO slop.

IMO, to ignore the “carnivorous” dynamics here, which I think clearly go beyond ordinary capitalism and innovation, is to miss the forest for the trees. Somewhat sadly, this tech era (roughly Windows 95 to now) has taught people that the latest new thing must be a good idea and we should all get on board before it’s too late.

masterspace ,

Users like it because it feels more comfortable, natural and useful, and probably quicker too. And in some cases it is actually better. But, it’s important to appreciate how we got here … by the internet becoming shitter, by search engines becoming shitter all in the pursuit of ads revenue and the corresponding tolerance of SEO slop

No, it legitimately is better. Do you know what Google could never do but that Copilot Search and Gemini Search can? Synthesize one answer from multiple different sources.

Sometimes the answer to your question is inherently not on a single page; it’s split across the old framework docs, the new framework docs, and Stack Overflow questions, and the best a traditional search engine can ever do is maybe get some of the right pieces in front of you some of the time. LLMs will give you a plain-language answer immediately, and let you ask follow-up questions and request modifications to your original example.

Yes Google has gotten shitty, but it would never have been able to do the above without an LLM under the hood.

maegul ,
@maegul@lemmy.ml avatar

Sure, but IME it is very far from doing the things that good, well written and informed human content could do, especially once we’re talking about forums and the like where you can have good conversations with informed people about your problem.

IMO, whatever LLMs are doing that older systems can’t isn’t greater than what was lost to SEO’d, ads-driven slop and shitty search.

Moreover, the business interest of LLM companies is clearly in dominating and controlling (as that’s just capitalism and the “smart” thing to do), which means the older human-driven system of information sharing and problem solving is vulnerable to being severely threatened and destroyed … while we could just as well enjoy some hybridised system. But because profit is the focus, and the means of making profit are problematic, we’re in rough waters that I don’t think can be trusted to produce a net positive (and haven’t been trustworthy for decades now).

gravitas_deficiency ,

Sounds a lot like a “you” problem, OpenAI.

TotalCasual ,

No, they can make money without stealing. They just choose to steal and lie about it either way. It’s the worst kind of justification.

The investors are predominantly drawn from the Rationalist Society. It doesn’t matter whether or not AI “makes money”. What matters is that development is steered as quickly as possible towards an end product that can produce as much propaganda as possible.

The bottom line barely even matters in the bigger picture. If you’re paying someone to make propaganda, and the best way to do that is to steal from the masses, then they’ll do it regardless of whether or not the business model is “profitable”.

The lines drawn for AI are drawn by people who want to use it for misinformation and control. The justifications make it seem like the lines were drawn around a monetary system. No, that’s wrong.

Who cares about profitability when people are paying you under the table to run a mass crime ring?

masterspace ,

Copying information is not stealing.

TotalCasual ,

Depends on the context. Are you copying someone else’s identity in order to make a passable clone? Are you trying to sell that clone?

A duplication of someone’s voice, commercialized by an unauthorized source, is definitely a form of stealing.

Copying information you have no right to access, such as private information held on a private device, is overwhelmingly illegal.

In general, copying information is only as legal as the purpose behind it.

dinckelman ,

Maybe they should have considered that before stealing data by the billions.

Blue_Morpho ,

Google did it and everyone just accepted it. Oh, maybe my website will get a few pennies in ad revenue if someone clicks the link Google got by copying all my content. Meanwhile, Google makes billions by taking those pennies in ad revenue from every single webpage on the entire internet.

Grandwolf319 ,

To be fair, it’s different when your product is useful or something people actually want. Having said that, Google doesn’t have much of that going for it these days.

nick ,

“Too fucking bad”

aesthelete ,

I maintain my insistence that you owe me a business model!
