
Anthropic was supposed to be the good guy. It can’t be — unless government changes the incentives in the industry.

[L]ately, Anthropic has been in the headlines for less noble reasons: It’s pushing back on a landmark California bill to regulate AI. It’s taking money from Google and Amazon in a way that’s drawing antitrust scrutiny. And it’s being accused of aggressively scraping data from websites without permission, harming their performance.

It was supposed to be different from OpenAI, the maker of ChatGPT. In fact, all of Anthropic’s founders once worked at OpenAI but quit in part because of differences over safety culture there, and moved to spin up their own company that would build AI more responsibly.

An AI company may want to build safe systems, but in such a hype-filled industry, it faces enormous pressure to be first out of the gate. The company needs to pull in investors to supply the gargantuan sums of money needed to build top AI models, and to do that, it needs to satisfy them by showing a path to huge profits. Oh, and the stakes — should the tech go wrong — are much higher than with almost any previous technology.

CosmoNova

Tech firm that promised a better world turns bad guy. Where have I seen this before?

eestileib

The idea that AI is even on the radar of threats to humanity compared to climate change or nuclear war is fucking ridiculous.

filister

Money ruins even the best intentions.

snooggums (@snooggums@midwest.social)

Nah, they clearly never had the best intentions and just distanced themselves from criticism so they could pretend to be noble.
