
snooggums ,

This is absolutely in line with who buys into AI hype, and why it is infuriating to try to convince them that they are reading way too much into how it seems to know things, when all it is doing is returning results that are statistically likely to be found helpful by the audience it is designed for.

I have said that LLMs and other AI are designed to return what people want to see/hear. It doesn’t know anything and will never be useful as a knowledge base or an independently functioning diagnostic tool.

It certainly has uses, but it certainly isn’t going to solve all the things that are promoted by the AI hype train.

MagicShel ,

I don’t buy into it, but it’s so quick and easy to get an answer that, if it’s not something important, I’m guilty of using an LLM and calling it good enough.

There are no ads and no SEO. Yeah, it might very well be bullshit, but most Google results are also bullshit, depending on the subject. If it doesn’t matter, and it isn’t easy to tell whether I’m getting bullshit from a website, an LLM is good enough.

I took a picture of discolorations on a sidewalk and asked ChatGPT what was causing them, because my daughter was curious. Metal left on the surface rusts and leaves behind those streaks. But they all had holes in the middle, so we decided there were metallic rocks mixed into the surface that had rusted away.

Is that for sure right? I don’t know. I don’t really care. My daughter was happy with an answer and I’ve already warned her it could be bullshit. But curiosity was satisfied.

Gaywallet OP ,

Is that for sure right? I don’t know. I don’t really care. My daughter was happy with an answer and I’ve already warned her it could be bullshit. But curiosity was satisfied.

I’m not sure if you recognize this, but this is precisely how mentalism, psychics, and others in similar fields have always existed! Look no further than Pliny the Elder or Rasputin for folks who made a career out of magical and mystical explanations for everything and gained great status for it. ChatGPT is in many ways the modern version of these individuals, gaining status by having answers to everything which seem plausible enough.

MagicShel ,

She knows not to trust it. If the AI had suggested “God did it” or metaphysical bullshit I’d reevaluate. But I’m not sure how to even describe that to a Google search. Sending a picture and asking about it is really fucking easy. Important answers aren’t easy.

I mean I agree with you. It’s bullshit and untrustworthy. We have conversations about this. We have lots of conversations about it actually, because I caught her cheating at school using it so there’s a lot of supervision and talk about appropriate uses and not. And how we can inadvertently bias it by the questions we ask. It’s actually a great tool for learning skepticism.

But for some things, a reasonable answer just to satisfy your brain is fine, whether it’s right or not. I remember spending an entire year in chemistry learning absolute bullshit, only for the next year to be told that was all garbage and here’s how it really works. It’s fine.

snooggums ,

Yes, treating AI answers with the same skepticism as web search results is a decent way to make them useful. Unfortunately, the popular AI systems seem to be using many times as much energy to give answers that aren’t even as reliable as Google used to be.

Back in the day, Google used the same ‘was this information useful’ signal to rank results, before the SEO craze took off.

And yes, if the stains look like rust and there is a gap, then there was a ferrous rock in the mix that rusted away. I have a spot like that on my sidewalk and on a stone slab, and found out what caused it from someone who works with those materials!

lvxferre ,

That’s a good text. I’ve been comparing the “LLM smurt!” crowd with Christian evangelists, due to their common use of fallacies like inverting the burden of proof, moving goalposts, straw men, etc.

However, it seems that people who believe in psychics might be a more accurate comparison.

That said, LLMs are great tools for retrieving info when you aren’t too concerned about accuracy, or when you can check the accuracy yourself. For example, the ChatGPT output of prompts like

  • “Give me a few [language] words that can be used to translate the [language] word [word]”
  • “[Decline|Conjugate] the [language] word [word]”
  • “Spell-proof the following sentence: [sentence]”

is really good. I’m still concerned about the sheer inefficiency of the process though, energy-wise.
