Anyone care to explain why people would care that they posted to a public forum that they don’t own, with content that is now further being shared for public benefit?
The argument that it’s your content falls apart as soon as you share it with the world.
It’s not shared for public benefit, though. OpenAI, despite the Open in their name, charges for access to their models. You either pay with money or (meta)data, depending on the model.
Legally, sure. You signed away your rights to your answers when you joined the forum. Morally, though?
People are pissed that SO, which was actively encouraging mods to use AI-detection software to prevent any LLM usage in posted questions and answers, is now selling the publicly accessible data, made by its users for free, to a closed-source for-profit entity that refuses to open itself up.
I can only really speak to reddit, but I think this applies to all of the user generated content websites. The original premise, that everyone agreed to, was the site provides a space and some tools and users provide content to fill it. As information gets added, it becomes a valuable resource for everyone. Ads and other revenue streams become a necessary evil in all this, but overall directly support the core use case.
Now that content is being packaged into large language models to be either put behind a paywall or packed into other non-freely available services. Since they no longer seem interested in supporting the model we all agreed on, I see no reason to continue adding value and since they provided tools to remove content I may as well use them.
But from the very beginning years ago, it was understood that when you post on these types of sites, the data is not yours, or at least you give them license to use it how they see fit. So for years people accepted that, but are now whining because they aren’t getting paid for something they gave away.
This is legal vs. rude. It certainly is legal, and it was in the terms of service, for them to use the data in any way they see fit. But it’s also rude to bait-and-switch from being a message board to being an AI data source company. Users were led to believe they were entering into an agreement with one type of company and are now in an agreement with a totally different one.
You can smugly tell people they shouldn’t have made that decision 15 years ago when they started, but a little empathy is also cool.
Additionally: When you owe your entire existence and value to user goodwill it might not be a great idea to be rude to them.
No, you can’t post something in public, have it appropriated by a mega corp for money, and then be prevented from deleting or modifying the very things you posted.
Well, it is important to comply with the terms of service established by the website. It is highly recommended to familiarize oneself with the legally binding documents of the platform, including the Terms of Service (Section 2.1), User Agreement (Section 4.2), and Community Guidelines (Section 3.1), which explicitly outline the obligations and restrictions imposed upon users. By refraining from engaging in activities explicitly prohibited within these sections, you will be better positioned to maintain compliance with the platform’s rules and regulations and not receive email bans in the future.
Tough to say. I honestly don’t know. The username is the classic word_wordNumber pattern that bots use. The comments are long, though, and they’re spaced far apart time-wise.
The comments are clearly ChatGPT. I know because I did it once to troll a sub too. I instantly recognize the pirate ‘swashbuckling’ comment in their profile history, the kind you get when you type ‘write a funny comment like a Redditor’.
The account reads like they’re pasting AI-generated responses to everything. Maybe it’s someone’s experiment. The prompt must include “You are a self-righteous asshole.”
I despise this use of mod power in response to a protest. It’s our content to be sabotaged if we want - if Stack Overlords disagree then to hell with them.
I’ll add Stack Overflow to my personal ban list, just below Reddit.
Once submitted to stack overflow/Reddit/literally every platform, it’s no longer your content. It sucks, but you’ve implicitly agreed to it when creating your account.
While true, it’s stupid that things are that way. They shouldn’t be able to hide behind the idea that “we’re not responsible for what our users publish, we’re more like a public forum” while also having total ownership over that content.
you’ve implicitly agreed to it when creating your account
Many people would agree with that, and probably most laws do. However, I doubt many users have actually bothered to read the unnecessarily long document, fewer have understood the legalese, and the terms have likely already been changed (“pray I don’t alter it any further”). That’s a low and shady bar of consent. It indeed sucks, and I think people should leave those platforms, but I’m also open to laws that would invalidate that part of the EULA.
You really don’t need anything near as complex as AI; a simple script could be configured to automatically close each issue as solved with a link to a randomly selected, unrelated issue.
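As a sketch of what such a script might look like, here’s a minimal Python version. The in-memory issue store and the `close_as_solved` helper are hypothetical stand-ins for a real tracker’s API, not any actual endpoint:

```python
import random

# Hypothetical stand-ins for a real issue tracker's data --
# a real script would fetch these over the tracker's API.
open_issues = [101, 102, 103]
all_issues = list(range(1, 200))

def close_as_solved(issue_id, related_id):
    """Close an issue with a canned 'solved' comment pointing elsewhere."""
    return {
        "id": issue_id,
        "state": "closed",
        "comment": f"Resolved, see issue #{related_id}.",
    }

def sweep():
    """Close every open issue, linking each to a random *other* issue."""
    results = []
    for issue_id in open_issues:
        # Pick any issue except the one being closed.
        related = random.choice([i for i in all_issues if i != issue_id])
        results.append(close_as_solved(issue_id, related))
    return results
```

The only real logic is the random pick that excludes the issue itself; everything else is plumbing around whatever API the tracker exposes.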
A malicious response by users would be to employ an LLM instructed to write plausible-sounding but very wrong answers to historical and current questions, then have an army of users upvote the known-wrong answers while downvoting accurate ones. That would poison the training data, I would think.
All use of generative AI (e.g., ChatGPT and other LLMs) is banned when posting content on Stack Overflow. This includes “asking” the question to an AI generator then copy-pasting its output as well as using an AI generator to “reword” your answers.
Interestingly, I see nothing in that policy that would disallow machine-generated downvotes on proper answers and machine-generated upvotes on incorrect ones. So even if LLMs are banned from posting questions or comments, it looks like Stack Overflow is perfectly fine with bots voting.
For years, the site had a standing policy that prevented the use of generative AI in writing or rewording any questions or answers posted. Moderators were allowed and encouraged to use AI-detection software when reviewing posts. Beginning last week, however, the company began a rapid about-face in its public policy towards AI.
I listened to an episode of The Daily on AI, and the stuff they fed into the engines included the entire Internet. They literally ran out of things to feed it. That’s why YouTube created their auto-generated subtitles: literally, so that they would have more material to feed into their LLMs. I fully expect Reddit to be bought out or merged within the next six months or so. They are desperate for more material to feed the machine. Everything is going to end up going to an LLM somewhere.
I think auto-generated subtitles were created to fulfill an FCC requirement for content captioning some years ago. They have, however, turned out super useful for LLM feeding.
This sort of thing is so self-sabotaging. The website already has your comment, and a license to use it. By deleting your stuff from the web you only ensure that the AI is definitely going to be the better resource to go to for answers.
Not when you’ve agreed to a terms of service that hands over ownership of your content to Stack Overflow, leaving you merely licensed to use your own content.
Also backups and deleted flags. Whatever comment you submitted is likely backed up already and even if you click the delete button you’re likely only just changing a flag.
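The “deleted flag” point is just the standard soft-delete pattern: clicking delete flips a flag on the row rather than removing it from storage. A minimal sketch (the names are illustrative, not Stack Overflow’s actual schema):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Comment:
    body: str
    deleted_at: Optional[datetime] = None  # None means still visible

comments = [Comment("first"), Comment("second")]

def soft_delete(comment: Comment) -> None:
    # The record stays in storage; only a timestamp flag changes.
    comment.deleted_at = datetime.now(timezone.utc)

def visible(store):
    """What the public UI shows: rows whose flag is unset."""
    return [c for c in store if c.deleted_at is None]
```

After `soft_delete(comments[0])`, the public view shrinks but the underlying list (and any backups taken earlier) still holds both rows, which is exactly why “delete” doesn’t pull your words out of a training set.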
At this point I’m assuming most if not all of these content deals are essentially retroactive. They already scraped the content and found it useful enough to try to secure future use, or at least exclude competitors.
Honestly? I’m down with that. And when the LLMs end up pricing themselves out of usefulness, we’ll still have the fediverse version. Having free sites on the net with solid crowd-sourced information is never a bad thing, even if other people pick up the data and use it.
It’s when private sites like Duolingo and Reddit crowd source the information and then slowly crank down the free aspect that we have the problems.
SO already was. Not even harvested as much as handed to them. Periodic data dumps and a general forced commitment to open information were a big part of the reason they won out over other sites that used to compete with them. SO most likely wouldn't have existed if Experts Exchange didn't paywall their entire site.
As with everything else, AI companies believe their training data operates under fair use, so they will discard the CC BY-SA 4.0 license requirements regardless of whether this deal exists. (And if a court ever finds it’s not fair use, they are so many layers of fucked that this situation won’t even register.)
Assuming the federated version allowed contributor-chosen licenses (similar to GitHub), any harvesting in violation of the license would be subject to legal action.
Contrast that with Stack Exchange, where I assume the terms dictated by Stack Exchange deprive contributors of recourse.
Smells too much like Duolingo. Here, everyone jump in and answer all the questions. Five years later: ooh, look at this gold mine of community data we own…
This was actually the whole original point of Duolingo. The founder previously created reCAPTCHA to crowdsource the digitization of scanned books.
His whole thing is crowdsourcing difficult tasks that machines struggle with by providing some sort of reason to do them (preventing spam at first, learning a language now).
From what I understand, Duolingo just got too popular, and the subscription service they offer made them enough money to be happy with.
Duolingo has been systematically enshittifying the free, ad-supported service. Now every time you fart, you get a big unskippable ad trying to get you to try their subscription free for 14 days, without telling you the price. They took all that crowdsourced data that contributors were never going to profit off of, and they’re making the app a miserable experience without a subscription.
Yeah but didn’t you see the sovereign citizens who think licenses are magic posting giant copyright notices after their posts? Lol
It’s so childish. AI tools will help billions of the poorest people access life-saving knowledge and services, and help open-source devs like myself create tools that free people from the clutches of capitalism. But these people like living in a world of inequity, because their generational wealth, earned from centuries of exploitation of the impoverished, allows them a better education, better healthcare, and better living standards than the billions of impoverished people on the planet. So they’ll fight to maintain their privilege, even if they’re fighting against their own life getting better too. The most pathetic thing is that they pretend to be fighting a moral crusade, as if using the answers they freely posted and never expected anything in return for is a real injustice!
And yes, I know people are going to pretend they think tech bros won’t allow poor people to use their tech, based on assuming that the way everything has always worked will suddenly flip into reverse at some point. Like how mobile phones are only for rich people, and only rich people can sell via the internet, and only rich people can start a YouTube channel…