
Asafum ,

“Great! Billy doesn’t believe 9/11 was an inside job, but now the AI made him believe Bush was actually president in 1942 and that Obama was never president.”

In all seriousness I think an “unbiased” AI might be one of the few ways to reach people about this stuff because any Joe schmoe is just viewed as “believing what they want you to believe!” when they try to confront any conspiracy.

some_guy ,

The researchers think a deep understanding of a given theory is vital to tackling errant beliefs. “Canned” debunking attempts, they argue, are too broad to address “the specific evidence accepted by the believer,” which means they often fail. Because large language models like GPT-4 Turbo can quickly reference web-based material related to a particular belief or piece of “evidence,” they mimic an expert in that specific belief; in short, they become a more effective conversation partner and debunker than can be found at your Thanksgiving dinner table or heated Discord chat with friends.

This is great news. The emotional labor needed to talk these people down is mentally draining. Offloading it to software is a great use of the technology, one with real value.

LucidBoi ,

Another way of looking at it: “AI successfully used to manipulate people’s opinions on certain topics.” If it can persuade them to stop believing conspiracy theories, AI can also be used to make people believe conspiracy theories.

davidgro ,

Anything can be used to make people believe them. That’s not new or a challenge.

I’m genuinely surprised that removing such beliefs is feasible at all though.

Gradually_Adjusting , (edited )

Let me guess, the good news is that conspiracism can be cured but the bad news is that LLMs are able to shape human beliefs. I’ll go read now and edit if I was pleasantly incorrect.

Edit: They didn’t test the model’s ability to inculcate new conspiracies, obviously that’d be a fun day at the office for the ethics review board. But I bet with a malign LLM it’s very possible.

davidgro ,

A piece of paper dropped on the ground can ‘shape human beliefs’. That’s literally a tool used in warfare.

The news here is that conspiratorial thinking can be relieved at all.

Xeroxchasechase ,

"AI is just a tool; is a bit naïve. The power of this tool and the scope makes this tool a devastating potential. It’s a good idea to be concerned and talk about it.
