
Hyperz ,
@Hyperz@beehaw.org avatar

It seems to me like StackOverflow is really shooting itself in the foot by allowing AI-generated answers. Even if we assume that all AI-generated answers are “correct”, doesn’t that completely defeat the purpose of the site? If I were seeking an answer to some Python-related problem, why wouldn’t I just go straight to ChatGPT or a similar language model? That way I also wouldn’t have to deal with some of the other issues that plague StackOverflow, such as “this question is a duplicate of <insert unrelated question> - closed!”.

14specks ,
@14specks@lemmy.ml avatar

I think what sites have been running into is that it’s difficult to tell what is and isn’t AI-generated, so enforcing a ban is difficult. Some would say it’s better to have an AI-generated response out in the open, where it can be verified and prioritized appropriately based on user feedback. If there’s a human-generated response that’s higher-quality, then that should win anyway, right? (Idk tbh)
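Something like this, I guess (a made-up sketch with hypothetical data, not StackOverflow’s actual ranking logic): if quality is decided purely by votes, an answer’s origin never enters the ranking at all.

```python
# Hypothetical sketch: rank answers by net vote score only.
# Authors and numbers are invented for illustration.
answers = [
    {"author": "human_expert", "up": 42, "down": 3},
    {"author": "maybe_a_bot",  "up": 5,  "down": 11},
]

# The higher-quality response "wins" regardless of who (or what) wrote it.
for a in sorted(answers, key=lambda a: a["up"] - a["down"], reverse=True):
    print(a["author"], a["up"] - a["down"])
```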

Hyperz ,
@Hyperz@beehaw.org avatar

Yeah, that’s a good point. I have no idea how you’d go about solving that problem. Right now you can still sometimes sort of tell when something was AI-generated. But if we extrapolate the past few years of advances in LLMs, say, 10 years into the future… there will be no telling what’s AI and what’s not. Where does that leave sites like StackOverflow, or indeed many other types of sites?

This then also makes me wonder how these models are going to be trained in the future. What happens when, for example, half of the training data is the output of previous models? How do you possibly steer/align future models and prevent compounding errors and bias? Strange times ahead.
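To get a feel for how errors could compound, here’s a toy sketch (just an analogy in plain Python, nothing to do with how LLMs are actually trained): fit a simple distribution, sample from the fit, refit on those samples, and repeat. Each generation sees only the previous generation’s output, and the fit drifts away from the original data.

```python
import random
import statistics

# Toy illustration of "training on model output": each generation fits
# a Gaussian to samples drawn from the previous generation's fitted
# Gaussian. The mean wanders and the spread tends to shrink over time.
random.seed(0)

mean, stdev = 0.0, 1.0  # generation 0: the "true" data distribution
for gen in range(1, 51):
    samples = [random.gauss(mean, stdev) for _ in range(25)]
    mean = statistics.mean(samples)    # refit using only synthetic data
    stdev = statistics.stdev(samples)
    if gen % 10 == 0:
        print(f"gen {gen:2d}: mean={mean:+.3f}  stdev={stdev:.3f}")
```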

14specks ,
@14specks@lemmy.ml avatar

> This then also makes me wonder how these models are going to be trained in the future. What happens when, for example, half of the training data is the output of previous models? How do you possibly steer/align future models and prevent compounding errors and bias? Strange times ahead.

Between this and deepfake tech, I’m kinda hoping for a light Butlerian Jihad that gets everyone to log tf off and exist in the real world, but that’s kind of a hot take.

Hyperz ,
@Hyperz@beehaw.org avatar

But then they’d have to break up with their AI girlfriends/boyfriends 🤔.

Spoiler: I wish I was joking.
