
towerful ,

Follow sensible H&S rules.
Split the responsibility between the person that decided AI is able to do this task and the company that sold the AI saying it’s capable of this.

For the purchasing company, obviously start with the person that chose that AI, then spread the responsibility up the employment chain: the manager that approved it, the manager's manager, all the way to the executive office and the company as a whole.
If investigation shows that the purchasing company ignored sales advice, then it’s all on the purchasing company.

If the investigation shows that the purchasing company followed the sales advice, then the responsibility is split, unless the purchasing company can show that they did due diligence in the purchase.
For the supplier, start with the person that sold the tech. If the investigation shows that the engineers approved the sales pitch, responsibility goes up that engineer's employment chain. If the salesperson ignored the devs, then up the sales employment chain. Up to the executive level.

No scapegoats.
Whatever happens, the C-suite, the companies, and probably a lot of managers get hauled into court.
Make it rough for everyone in the chain of purchase and supply.
If the issue is a genuine mistake, then appropriate insurance will cover any damages. If the issue is actually fraud, then EVERYONE (and the company) from the level of handover upwards should be punished.
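The branching above could be sketched roughly like this (purely an illustration: the function name and the exact shares are made up, not part of the proposal):

```python
def assign_liability(ignored_sales_advice: bool,
                     did_due_diligence: bool) -> dict:
    """Hypothetical sketch of the liability split described above.

    Returns each party's share of responsibility (0.0 to 1.0).
    """
    if ignored_sales_advice:
        # Purchaser disregarded the supplier's advice: it's all on them.
        return {"purchaser": 1.0, "supplier": 0.0}
    if did_due_diligence:
        # Purchaser followed the advice and vetted the purchase,
        # so the supplier's claims were the failing component.
        return {"purchaser": 0.0, "supplier": 1.0}
    # Followed the advice but can't show due diligence: split it.
    return {"purchaser": 0.5, "supplier": 0.5}
```

In each case the share assigned to a company would then propagate up that company's own employment chain, as described above.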

Drusas ,

H&S?

NeoNachtwaechter ,

When these AIs make autonomous decisions that inadvertently cause harm – whether financial loss or actual injury – whom do we hold liable?

The person who allowed the AI to make these decisions autonomously.

We should do it as Asimov showed us: create “robot laws” that are similar to slavery laws:

In principle, the AI is a non-person and therefore a person must take responsibility.

nullPointer ,

If the source code for said accusing AI cannot be examined and audited by the defense, the state is denying the defendant their right to face their accuser. Mistrial.

JohnDClay ,

The person who decided to use the AI

chakan2 ,

There are going to be a lot of instances going forward where you don’t know you were interacting with an AI.

If there’s a quality check on the output, sure, they’re liable.

If a Tesla runs you into an ambulance at 80mph…the very expensive Tesla lawyers will win.

It’s a solid quandary.

JohnDClay ,

Why would the defendant's lawyer not know they're interacting with AI? Would the AI-generated content appear to be actual case law? How would that confusion happen?

tal ,

My guess is that it’s gonna wind up being a split, and it’s not going to be unique to “AI” relative to any other kind of device.

There’s going to be some kind of reasonable expectation for how a device using AI should act, and then if the device acts within those expectations and causes harm, it’s the person who decided to use it.

But if the device doesn't act within those expectations, then it's not on them; it may be on the device manufacturer.

JohnDClay ,

Yeah, if the company making the AI makes false claims about it, then it'd be at least partially on them.
