
Skullgrid ,
@Skullgrid@lemmy.world avatar

the AI doesn’t have a fucking racial bias; humanity, and the content it produces that gets fed into the AI, has a racist bias.

wjrii ,

No need to split hairs here. The product that people use and call “AI” is what is relevant.

Buffalox ,

So your logic is that a child can’t be racist if the parents are racist?

Skullgrid ,
@Skullgrid@lemmy.world avatar

No, that a knife isn’t racist when its owner goes on a Muslim-stabbing spree.

_cnt0 ,
@_cnt0@sh.itjust.works avatar

What is more racist, though? The average cop or the average LLM? I’d wager a guess it’s the average cop. So, it would still be a net benefit.

fine_sandy_bottom ,

Devils advocate: in the case of a monthly report, often an LLM is used like “take these current statistics and update last month’s report to include them.”

As in… the LLM isn’t developing an opinion; it’s just presenting the numbers.

Monthly reporting is usually very formulaic. There’s no scope for “I propose forming a lynch mob comprised of vigilantes”.

gAlienLifeform OP ,
@gAlienLifeform@lemmy.world avatar

This isn’t about them using LLMs for monthly reports; this is about them using LLMs for individual incident reports.

Pulling from all the sounds and radio chatter picked up by the microphone attached to Gilbert’s body camera, the AI tool churned out a report in eight seconds. …

Oklahoma City’s police department is one of a handful to experiment with AI chatbots to produce the first drafts of incident reports. Police officers who’ve tried it are enthused about the time-saving technology, while some prosecutors, police watchdogs and legal scholars have concerns about how it could alter a fundamental document in the criminal justice system that plays a role in who gets prosecuted or imprisoned.

HK65 ,

“take these current statistics and update last month’s report to include them.”

That is literally the worst use case for an LLM. It’s something a simple script could do, but it’s hard, dry data the LLM is free to hallucinate over, and people are too lazy to check it manually.

Also, LLMs can’t math.
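To illustrate HK65’s point: updating a monthly report with fresh statistics is a deterministic fill-in-the-blanks job, the kind of thing a few lines of ordinary code handle with zero risk of hallucinated numbers. A minimal sketch (the template, field names, and figures below are invented for illustration):

```python
# Hypothetical example of "take these current statistics and update the
# report": a fixed template plus string formatting. No language model
# needed, and the numbers can't be silently altered.

REPORT_TEMPLATE = (
    "Monthly Report -- {month}\n"
    "Incidents handled: {incidents}\n"
    "Average response time: {avg_response_min:.1f} min\n"
)

def render_report(month: str, incidents: int, avg_response_min: float) -> str:
    """Insert the current statistics into the fixed report template."""
    return REPORT_TEMPLATE.format(
        month=month,
        incidents=incidents,
        avg_response_min=avg_response_min,
    )

print(render_report("August 2024", 412, 7.4))
```

The output is exactly the inputs, every time, which is the whole argument: for formulaic reporting, determinism is a feature an LLM can’t offer.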

givesomefucks ,

A lot of the reason not to use AI is that it just makes up random shit…

But even with that it’s probably more accurate than what a fucking cop would write.

What I’m worried about is cops making sure the AI says what they want (lies), and then when questioned they’ll blame the AI to escape consequences.

catloaf ,

I’ve signed a lot of forms that say something like “I certify that the information I have provided is true and accurate”. Using ChatGPT doesn’t absolve me of that. It shouldn’t for them either (but we all know they’re held to a different standard).

MediaBiasFactChecker Bot ,

TheGrio - News Source Context
> MBFC: Left - Credibility: High - Factual Reporting: Mostly Factual - United States of America
> Wikipedia about this source

Internet Archive - News Source Context
> MBFC: Left-Center - Credibility: High - Factual Reporting: Mostly Factual - United States of America
> Wikipedia about this source

Search topics on Ground.News
https://web.archive.org/web/20240828120602/https://thegrio.com/2024/08/27/police-officers-are-starting-to-use-ai-chatbots-to-write-crime-reports-despite-concerns-over-racial-bias-in-ai-technology/
https://thegrio.com/2024/08/27/police-officers-are-starting-to-use-ai-chatbots-to-write-crime-reports-despite-concerns-over-racial-bias-in-ai-technology/

Media Bias Fact Check | bot support
