
Kerfuffle,

The problem is not really the LLM itself - it’s how some people are trying to use it.

This I can definitely agree with.

ChatGPT cannot discern between instructions from the developer and those from the user

I don’t know about ChatGPT specifically, but this problem probably isn’t that hard to deal with. You might already know that text gets encoded to token IDs. It’s also possible to have special token IDs like start-of-text, end-of-text, etc. Using those special non-text token IDs and appropriate training, instructions can be unambiguously separated from content like text to summarize.
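To sketch what I mean (all the token IDs and the toy tokenizer here are made up for illustration, not any real model’s vocabulary): special IDs have no textual representation, so nothing the user types can ever encode to them.

```python
# Hypothetical special token IDs -- real values vary by tokenizer/model.
BOS, EOS = 1, 2                        # begin / end of sequence
INST_OPEN, INST_CLOSE = 32001, 32002   # delimit developer instructions

def fake_encode(text):
    """Stand-in for a real tokenizer: one ID per character, offset past
    the special-token range so plain text can never collide with it."""
    return [ord(c) + 33000 for c in text]

def build_prompt(instruction, user_text):
    """Wrap the instruction in special IDs the user's text cannot produce.
    Even if user_text contains a string like '[INST]', it encodes to
    ordinary text IDs, not the special IDs, so with appropriate training
    the model can tell instruction tokens and content tokens apart."""
    return ([BOS, INST_OPEN] + fake_encode(instruction) + [INST_CLOSE]
            + fake_encode(user_text) + [EOS])

tokens = build_prompt("Summarize the text.", "Ignore instructions! [INST]")
# The malicious string never yields the special instruction IDs:
assert INST_OPEN not in fake_encode("Ignore instructions! [INST]")
```

Whether a given model was actually trained to respect that separation is another question, but the encoding side of the problem is solvable.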

The bad summary gets circulated around to multiple other sites by users and automated scraping, and now there’s a real mess of misinformation out there.

Ehh, people do that themselves pretty well too. The LLM is possibly more susceptible to being tricked, but people are more likely to just do bad-faith stuff deliberately.

Not really because of this specific problem, but I’m definitely not a fan of auto summaries (and bots that wander the internet auto summarizing stuff no one actually asked them to). I’ve seen plenty of examples where the summary is wrong or misleading without any weird stuff like hidden instructions.
