JackbyDev ,

That’s like saying you can’t be 100% sure you’ll never have fake news at the top of search results. It’s just a fact.

crystalmerchant ,

Of course they can’t. Any product or feature is only as good as the data underneath it. Training data comes from the internet, and the internet is full of humans. Humans make and write weird shit, so the data the LLM ingests is weird, and that creates hallucinations.

StaySquared ,

I don’t know why they’re trying to shove AI down our throats. They need to take their time and let it evolve.

Snowclone , (edited)

Because it’s all corporations, and a huge part of the corporate capitalist system is infinite growth. They want returns, BIG ones. When? Right the fuck now. How do you do that? Well, AI would turn the world upside down like the dot-com boom. So they dump tons of money into AI. So… is the AI done? Oh no no no, we’re at machine learning; actual AI is pretty far down the road. What, we’re firing the AI department heads and releasing this machine learning software as 100%, all-the-way-done AI?

It’s the same reason Section 8 housing and low-cost housing don’t work under corporate capitalism. It’s profitable to take government money, and it’s profitable to have low-rent apartments. That’s not the problem; the problem is THEY NEED THE GROWTH NOW NOW NOW!!! If you own a condo with high-wage renters and add another $100 to the rent every year, you get more profit faster. No one wants to invest in a 10% increase over 5 years if they can invest in a 12% increase over 4 years. So no one ever invests in low-rent or Section 8 housing.

Blackmist ,

Seeing these systems just making shit up when they’re not sure of the answer is probably the closest they’ll ever come to human behaviour.

We’ve invented the virtual politician.

boatsnhos931 ,

Tim Cook… go take your meds and watch The Price Is Right.

kenkenken ,

Being 100 percent sure is itself a hallucination. He probably meant to say that he’s less than 80 percent sure.

chonglibloodsport ,

Everything these AIs output is a hallucination. Imagine if you were locked in a sensory deprivation tank, completely cut off from the outside world, and only had your brain fed the text of all books and internet sites. You would hallucinate everything about them too. You would have no idea what was real and what wasn’t because you’d lack any epistemic tools for confirming your knowledge.

That’s the biggest reason why AIs will always be bullshitters as long as they’re disembodied software programs running on a server. At best they can be a brain in a vat, which is a pure hallucination machine.

Voroxpete ,

Yeah, I try to make this point as often as I can. The notion that AI “hallucinates” only when it gives wrong answers really misleads people about how these programs actually work. It couches the problem in terms of human failings rather than getting at the underlying flaw in the whole concept.

LLMs are a really interesting area of research, but they never should have made it out of the lab. The fact that they did is purely because all science operates in the service of profit now. Imagine if OpenAI were able to rely on government funding instead of having to find a product to sell.

Excrubulent , (edited)

First of all, I agree with your point that it’s all hallucination.

However, I think a brain in a vat could confirm information about the world with direct sensors like cameras and access to real-time data, as well as the ability to talk to people and determine things like who was trustworthy. In reality we are brains in vats; we just have a fairly common interface that makes consensus reality possible.

The thing that really stops LLMs from being able to make judgements about what is true and what is not is that they cannot make any judgements whatsoever. Judging what is true is a deeply contextual and meaning-rich question. LLMs cannot understand context.

I think the moment an AI can understand context is the moment it begins to gain true sentience, because a capacity for understanding context is definitionally unbounded. Context means searching beyond the current information for further information. I think this context barrier is fundamental, and we won’t get truth-judging machines until we get actually-thinking machines.

kaffiene ,

I’m 100% sure he can’t. Or at least, not from LLMs specifically. I’m not an expert so feel free to ignore my opinion but from what I’ve read, “hallucinations” are a feature of the way LLMs work.
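
To make that concrete, here’s a toy sketch of the sampling step at the heart of an LLM (the vocabulary and scores are made up, and a real model does this over tens of thousands of tokens):

```python
import math
import random

# Toy next-token sampler. An LLM ultimately turns scores into
# probabilities and samples a continuation. Nothing in this mechanism
# checks whether the continuation is true, which is why fluent wrong
# answers ("hallucinations") fall out of normal operation.
vocab = ["Paris", "London", "Berlin"]  # hypothetical vocabulary
logits = [2.0, 1.5, 0.5]               # hypothetical model scores

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
token = random.choices(vocab, weights=probs)[0]
print(token)  # usually "Paris", but sometimes "London": sampled, not verified
```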

rottingleaf ,

One can have an expert system assisted by ML for classification. But that’s not an LLM.
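
Roughly the shape of that idea, sketched out (classify_image here is a hypothetical stand-in for a real trained classifier):

```python
# Sketch of an expert system assisted by ML: the classifier handles one
# narrow perception task, and explicit, auditable rules make the final
# call. No text generation is involved, so there's nothing to hallucinate.

def classify_image(image) -> str:
    # Hypothetical ML classifier; stubbed out so the sketch runs.
    return "rash"

def triage(image, has_fever: bool) -> str:
    label = classify_image(image)  # ML does classification only
    if label == "rash" and has_fever:
        return "possible infection: refer to a doctor"
    if label == "rash":
        return "likely contact dermatitis"
    return "no rule matched: escalate to a human"

print(triage(image=None, has_fever=True))
```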

Kolanaki ,

Here’s how you stop AI from hallucinating:

Turn it off.

Because everything they output is a hallucination. Just because sometimes those hallucinations are true to life doesn’t mean jack shit. Even a broken clock is right twice a day.

“Only feed it accurate information.”

Even that doesn’t work, because it just mixes and matches every element of its input to generate a new, novel output, which would inevitably be wrong.
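
A toy version of why “only accurate inputs” doesn’t save you; this bigram generator is trained on nothing but two true sentences and can still stitch them into a false one:

```python
import random
from collections import defaultdict

# Bigram text generator trained ONLY on accurate sentences.
corpus = [
    "paris is the capital of france",
    "berlin is the capital of germany",
]

chain = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)

word, out = "paris", ["paris"]
while word in chain:
    word = random.choice(chain[word])
    out.append(word)

# About half the time this prints "paris is the capital of germany":
# every piece came from accurate data; the recombination is the lie.
print(" ".join(out))
```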

john_lemmy ,

Yeah, just pull the plug. The amount of time we waste talking about this shit so these assholes can play another round of Monopoly is unbelievable.

flop_leash_973 ,

Well yeah, it’s using the same dataset as MS Copilot.

Spitting out inaccurate answers (I wish the media would stop feeding into it by calling it something that sounds less bad, like “hallucinations”) is not something that will go away until the LLM gains the ability to discern context.

cmrn ,

It’s insane how many people already take AI as more capable and accurate than any other medium. I’m not against AI, but I’m definitely against the bubble of worship some people have put it in.

baatliwala ,

Stupid headline. It’s like Tim Cook saying he’s not 100% sure Apple can stop the batteries in their devices from exploding. You do as much as you can to prevent it, but it might happen anyway, because that’s just how it is.

cybersandwich ,

Of course you are getting downvoted, because you are right and not being a reactionary douche like your average lemmizen.

eestileib ,

You mean we can’t teach a bullshit machine to stop bullshitting? I’m shocked.

dch82 ,

What you can do is try to filter out the garbage, but it’s basically like trying to find gold in food waste.

qaz ,

Saying anything else would be lying

NutWrench ,

If you want good AI, you need to spend money and send your AI to college. Have real humans interact with it, correct its logic, and make sure it understands sarcasm and logical fallacies.

Or, you can go the cheap route: train it on 10 years of Reddit sh*tposts and hope for the best.
