
Hirom (edited)

We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include […] damage to the welfare state and the disempowerment of the poor […]

I got an example of that yesterday, when calling FedEx over the phone to schedule a pickup.

Unlike calls I’ve made in the past, it was answered by a chatbot instead of a human. This raises various problems, in particular for already-disenfranchised people who rely on customer service being available over the phone.

  • A disclaimer says “This call may be recorded for training purposes”, with no request for consent and no way to opt out. Does that mean my voice and conversation will be fed into a database to develop DL/LLM AI systems? There’s no simple way to know, or to refuse without hanging up (ie opt-out = being denied service).
  • The chatbot had a hard time understanding my inquiry, forcing me to repeat/rephrase it multiple times. In the end it replied that it’s not possible to schedule pickups over the phone. This was possible back when humans answered the phone; now Internet access is required. Meaning disenfranchised people with no (easy/dependable) Internet access are being left out.
  • The chatbot used a generic greeting, didn’t introduce itself properly, and was evasive when I asked “are you a human?”. Not being forthcoming will confuse some older or non-tech-savvy people, who won’t easily realize it’s a robot, not a human.