
It’s practically impossible to run a big AI company ethically: Anthropic was supposed to be the good guy. It can’t be — unless government changes the incentives in the industry.

Archived version

The best clue might come from a 2022 paper written by the Anthropic team back when their startup was just a year old. They warned that the incentives in the AI industry — think profit and prestige — will push companies to “deploy large generative models despite high uncertainty about the full extent of what these models are capable of.” They argued that, if we want safe AI, the industry’s underlying incentive structure needs to change.

Well, at three years old, Anthropic is now the age of a toddler, and it’s experiencing many of the same growing pains that afflicted its older sibling OpenAI. In some ways, they’re the same tensions that have plagued all Silicon Valley tech startups that start out with a “don’t be evil” philosophy. Now, though, the tensions are turbocharged.

An AI company may want to build safe systems, but in such a hype-filled industry, it faces enormous pressure to be first out of the gate. The company needs to pull in investors to supply the gargantuan sums of money needed to build top AI models, and to do that, it needs to satisfy them by showing a path to huge profits. Oh, and the stakes — should the tech go wrong — are much higher than with almost any previous technology.

So a company like Anthropic has to wrestle with deep internal contradictions, and ultimately faces an existential question: Is it even possible to run an AI company that advances the state of the art while also truly prioritizing ethics and safety?

“I don’t think it’s possible,” futurist Amy Webb, the CEO of the Future Today Institute, told me a few months ago.

FaceDeer (@FaceDeer@fedia.io),

It’s impossible to run an AI company "ethically" because "ethics" is such a wibbly-wobbly, subjective thing, and because there are people who simply wish to use it as a weapon on one side of a debate or the other. I’ve seen goalposts shift around quite a lot in arguments over "ethical" AI.

MagicShel,

LLMs are non-deterministic. “What they are capable of” is stringing words together in a reasonable facsimile of knowledge. That’s it. The end.

Some might be better at it than others, but you can’t ever know the full breadth of words they might put together. It’s like worrying about what a million monkeys with a million typewriters might be capable of, or worrying about how to prevent them from typing certain things - you just can’t. There is no understanding of ethics or morality in there, and there can’t possibly be.
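To make that concrete, here’s a toy sketch (made-up token probabilities, nothing from any real model or API) of why sampling-based generation means you can never enumerate everything a model might say:

```python
import random

# Toy next-token distribution. A real LLM has tens of thousands of tokens,
# and the distribution shifts after every token it emits.
vocab = ["helpful", "odd", "poetic", "harmful"]
weights = [0.5, 0.2, 0.2, 0.1]

def sample_continuation(length: int = 5) -> str:
    # random.choices samples with replacement according to the weights,
    # so repeated calls can (and usually do) return different sequences.
    return " ".join(random.choices(vocab, weights=weights, k=length))

for _ in range(3):
    print(sample_continuation())  # different output on (almost) every run
```

Even with just four tokens there are 4^5 = 1024 possible five-word outputs; scale that up to a real vocabulary and context window and "knowing the full extent of what these models are capable of" stops being something anyone can check exhaustively.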

What are people expecting here?

sweng,

While an LLM itself has no concept of morality, it’s certainly possible to at least partially inject/enforce some morality when working with one, just as with any other tool. Why wouldn’t people expect that?

Consider guns: while they have no concept of morality, we still apply certain restrictions to them to make using them in an immoral way harder. Does it work perfectly? No. Should we abandon all rules and regulations because of that? Also no.

MagicShel,

Yes. Let’s consider guns. Is there any objective way to measure the moral range of actions one can take with a gun? No. I can murder someone in cold blood or I can defend myself. I can use it to defend my nation or I can use it to attack another - both of which might be moral or immoral depending on the circumstances.

You might remove the trigger, but then it can’t be used to feed yourself, while it could still be used to rob someone.

So what possible morality can you build into the gun to prevent immoral use? None. It’s a tool. It’s the nature of a gun. LLMs are the same. You can write laws about what people can and can’t do with them, but you can’t bake them into the tool and expect the tool now to be safe or useful for any particular purpose.

tardigrada (OP),

You can write laws about what people can and can’t do with them, but you can’t bake them into the tool and expect the tool now to be safe or useful for any particular purpose.

Yes, and that’s why the decision-making and responsibility (and accountability) must always rest with the human being, imo, especially when we deal with guns. And in health care. And in social policy. And all the other crucial issues.

sweng,

So what possible morality can you build into the gun to prevent immoral use?

You can’t build morality into it, as I said. You can build functionality into it that makes immoral use harder.

I can, e.g.:

  • limit the rounds per minute that can be fired
  • limit the type of ammunition that can be used
  • make it easier to determine which weapon was used to fire a shot
  • make it easier to detect the weapon before it is used
  • etc. etc.

Society considers e.g. hunting a moral use of weapons, while killing people usually isn’t one.

So banning ceramic, unmarked, silenced, full-automatic weapons firing armor-piercing bullets can certainly be an effective way of reducing the immoral use of a weapon.

snooggums (@snooggums@midwest.social),

Those changes reduce lethality or improve identification. They have nothing to do with morality and do NOT reduce the chance of immoral use.

sweng,

Well, I, and most lawmakers in the world, disagree with you then. Those restrictions certainly make e.g. killing humans harder (generally considered an immoral activity) while not affecting e.g. hunting (generally considered a moral activity).

MagicShel (edited),

None of those changes impacts the morality of a weapon’s use in any way. I’m happy to dwell on this gun analogy all you like because it’s fairly apt; however, there is one key difference central to my point: there is no way to do the equivalent of banning armor-piercing rounds with an LLM, or of making sure a gun is detectable by metal detectors, because, as I said, it is non-deterministic. You can’t inject programmatic controls.

Any tools we have for doing it sit outside the LLM itself (that’s the essential truth undercutting everything else), and even then none of them can possibly understand or reason about morality or ethics any more than the LLM can.
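To sketch what "outside the LLM itself" looks like in practice (everything here is hypothetical: the generate() stub stands in for a model call, and a crude blocklist stands in for a real moderation classifier):

```python
import random

def generate(prompt: str) -> str:
    # Stand-in for an LLM call. The real thing samples its output, so we
    # cannot enumerate in advance everything it might say.
    return random.choice([
        "Here is a harmless answer.",
        "Here is an answer containing a slur.",  # toy "bad" output
    ])

BLOCKLIST = {"slur"}  # crude stand-in for a separate moderation model

def guarded_generate(prompt: str) -> str:
    # The guardrail wraps the model from the outside: nothing inside the
    # model itself changes, and the filter understands no ethics at all,
    # it only pattern-matches.
    reply = generate(prompt)
    if any(term in reply.lower() for term in BLOCKLIST):
        return "[response withheld]"
    return reply

print(guarded_generate("tell me something"))
```

A wrapper like that is exactly what produces the over- and under-blocking in the examples below: it blocks by surface pattern, with no grasp of context or intent.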

Let me give an example. I can write the dirtiest, most disgusting smut imaginable on ChatGPT, but I can’t write a romance which in any way addresses the fact that a character might have a parent or sibling, because the simple juxtaposition of sex and family in the same body of work is considered dangerous. I can write a gangrape on Tuesday, but not a romance with my wife on Father’s Day. It is neither safe from being used in ways that weren’t intended, nor reliably usable for a mundane purpose.

Or go outside of sex. Create an AI that can’t use the N-word. But that word is part of the black experience and vernacular every day, so now the AI becomes less helpful to black users than to white ones. Sure, it doesn’t insult them, but it can’t address issues that are important to them. Take away that safety, though, and now white supremacists can use the tool to generate hate speech.

These examples are all necessarily crude for the sake of readability, but I’m hopeful that my point still comes across.

I’ve spent years thinking about this stuff and experimenting and trying to break out of any safety controls both in malicious and mundane ways. There’s probably a limit to how well we can see eye to eye on this, but it’s so aggravating to see people focusing on trying to do things that can’t effectively be done instead of figuring out how to adapt to this tool.

Apologies for any typos. This is long and my phone fucking hates me - no way some haven’t slipped through.
