Cyberflunk ,

Holy shit. Dunning-Kruger is fully engaged in this post’s comments.

KillingTimeItself ,

it’s only going to get worse, especially as datasets deteriorate.

With things like Reddit being overrun by AI, and also selling AI training data, I can only imagine what a mess that’s going to cause.

vegetal ,

I think you are spot on. I tend to think the problems may begin to outnumber the potentials.

KillingTimeItself ,

and we haven’t even gotten into the problem of what happens when you have no more data to feed it. Do you make more? That’s an impossible task.

Natanael ,

There are already attempts to create synthetic data to train on.

Cyberflunk ,

Hallucination, like depression, is a multifaceted issue. Training data is only a piece of it. Quantized models and overfitted models rely on memorization at the cost of obviously correct training data. Poorly structured inference can confuse a model.

Rest assured, this isn’t just training data.

KillingTimeItself ,

yeah, there’s also this stuff, though I consider that to be more of a technical challenge than a hard limit.

OozingPositron ,

>The verge

Don’t take away the hallucinations, how am I supposed to do ERP with the models then?

Plopp ,

You do enterprise resource planning with AI hallucinations?

xia ,

Yeah! Just like water’s “wetness” problem. It’s kinda fundamental to how the tech operates.

ClamDrinker , (edited )

It will never be solved. Even the greatest hypothetical super intelligence is limited by what it can observe and process. Omniscience doesn’t exist in the physical world. Humans hallucinate too - all the time. It’s just that our approximations are usually correct, and then we don’t call it a hallucination anymore. But realistically, the signals coming from our feet take longer to process than those from our eyes, so our brain has to predict information to create the experience. It’s also why we don’t notice our blinks, or why we don’t see the blind spot our eyes have.

AI, being a more primitive version of our brains, will hallucinate far more, especially because it cannot verify anything in the real world and is limited by the data it has been given, which it has to treat as ultimate truth. The mistake was trying to turn AI into a source of truth.

Hallucinations shouldn’t be treated like a bug. They are a feature - just not one the big tech companies wanted.

When humans hallucinate on purpose (and not due to illness), we get imagination and dreams; fuel for fiction, but not for reality.

KeenFlame ,

Very long layman take. Why are there always so many of these on every AI post? What do you get from guesstimating how the technology works?

ClamDrinker ,

I’m not an expert in AI, I will admit. But I’m not a layman either. We’re all anonymous on here anyways. Why not leave a comment explaining what you disagree with?

KeenFlame ,

I just want to understand why people get so passionate about explaining how things work, especially in this field where even the experts themselves don’t fully understand how it works. It’s just an interesting phenomenon to me.

Fungah ,

The not-understanding-how-it-works thing isn’t universal in AI, from my understanding. And people understand how a lot of it works even then. There may be a few mysteries, but it’s not sacrificing chickens to Jupiter either.

KeenFlame ,

Nope, it’s actually not understood. Sorry to hear you don’t understand that

ClamDrinker ,

Hallucinations in AI are fairly well understood as far as I’m aware. They’re explained at a high level on the Wikipedia page for the topic. And I’m honestly not making any objective assessment of the technology itself. I’m making a deduction based on the laws of nature and biological facts about real-life neural networks. (I do say AI is driven by the data it’s given, but that’s something even a layman might know.)

How to mitigate hallucinations is definitely something the experts are actively discussing and have limited success in doing so (and I certainly don’t have an answer there either), but a true fix should be impossible.

I can’t exactly say why I’m passionate about it. In part I want people to be informed about what AI is and is not, because knowledge about the technology allows us to make more informed decision about the place AI takes in our society. But I’m also passionate about human psychology and creativity, and what we can learn about ourselves from the quirks we see in these technologies.

KeenFlame ,

Not really, no, because these aren’t biological, and the scientists who work with it are more interested in understanding why it works at all.

It is very interesting how the brain works, and our sensory processing is predictive in nature, but no, it’s not relevant to machine learning, which works completely differently.

Drewelite ,

Seems like there are a lot of half-baked ideas online about AI that come from assumptions based on some sci-fi ideal or something. People are shocked that an artificial intelligence gets things wrong when they themselves have probably made a handful of incorrect assumptions today. This Tom Scott talk is a great explanation of how truth can never be programmed into anything, and will never really be obtainable to humanity in the foreseeable future.

KeenFlame ,

Yeah! That’s probably a good portion of it, but exacerbated by the general hate for AI, which is understandable given the conglomerates’ abusive training-data practices.

mriormro ,

What exactly are your bona fides that you get to play the part of the exasperated “expert” here? And, more importantly, why should I give a fuck?

I constantly hear this shit from other self-appointed experts in this field as if no one is allowed to discuss, criticize, or form opinions on the implications of this technology besides those few who ‘truly understand’.

KeenFlame ,

Did you misread something? Nothing of what you said is relevant

Drewelite ,

Could not have said it better. The whole reason contemporary programs haven’t been able to adapt to the ambiguity of real-world situations is that they require rigidly defined parameters to function. LLMs and AI make assumptions and act on shaky info - that’s the whole point. If people waited for complete understanding of every circumstance and topic, we’d constantly be trapped in indecision. Without the ability to test their assumptions in the real world, LLMs will be like children.

GoodEye8 ,

I think you’re giving a glorified encyclopedia too much credit. The difference between us and “AI” is that we can approach knowledge from a problem-solving position. We do approximate the laws of physics, but we don’t blindly take our beliefs and run with them. We come up with a theory that then gets rigorously criticized, come up with ways to test that theory, stay critical of the test results, and eventually reach a consensus that, based on our understanding, that thing is true. We’ve built entire frameworks to reduce our “hallucinations”. The reason we even know we have blind spots is because we’re so critical of our own “hallucinations” that we end up deliberately looking for our blind spots.

But the “AI” doesn’t do that. It can’t do that. The “AI” can’t solve problems, it can’t be critical of itself or of the information it’s giving out. All our current “AI” can do is word-vomit itself into a reasonable answer. Sometimes the word vomit is factually correct, sometimes it’s just nonsense.

You are right that theoretically hallucinations cannot be solved, but in practice we ourselves have come up with solutions to minimize them. We could probably do something similar with “AI”, but not when the AI is just an LLM that fumbles into sentences.

ClamDrinker , (edited )

I’m not sure where you think I’m giving it too much credit, because as far as I read it we already totally agree lol. You’re right, methods exist to diminish the effect of hallucinations. That’s what the scientific method is. Current AI has no physical body and can’t run experiments to verify objective reality. It can’t fact-check itself other than by being told by the humans training it what is correct (and humans are fallible), and even then, if it has gaps in what it knows, it will fill them with something probable - which is likely going to be bullshit.

My point was just that to truly fix it would be to basically create an omniscient being, which cannot exist in our physical world. It will always have to make some assumptions - just like we do.

Eranziel ,

The fundamental difference is that the AI doesn’t know anything. It isn’t capable of understanding, and it doesn’t learn in the same sense that humans learn. An LLM is a (complex!) digital machine that guesses the next most likely word based on essentially statistics, nothing more, nothing less.

It doesn’t know what it’s saying, nor does it understand the subject matter, or what a human is, or what a hallucination is or why it has them. They are fundamentally incapable of even perceiving the problem, because they do not perceive anything aside from text in and text out.
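
To make the “next most likely word” claim concrete, here is a deliberately toy sketch: a bigram frequency table in Python. It is nothing like a real transformer (which learns vector representations of subword tokens), but the generation loop has the same shape - look at the context, then sample a statistically likely continuation.

```python
# Toy illustration of "guess the next most likely word from statistics":
# a bigram frequency table. Real LLMs are transformer networks over subword
# tokens, but the generation loop has the same shape: look at the context,
# sample a statistically likely continuation.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    words, weights = zip(*following[prev].items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation. The "model" has no idea what a cat or a mat
# is; it only knows which words co-occurred in its tiny "training data".
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```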

Drewelite ,

Many people’s entire thought process is an internal monologue. You think that voice is magic? It takes input and generates a conceptual internal dialogue based on what it’s previously experienced (training data for long term, context for short term). What do you mean when you say you understand something? What is the mechanism that your brain undergoes that’s defined as understanding?

Because for me it’s an internal conversation that asserts an assumption based on previous data and then attacks it with the next most probable counterargument, systematically, until what I consider a “good idea” emerges that is reasonably vetted. Then I test it in the real world via the scientific process. The results are added to my long-term memory (training data).

GoodEye8 ,

It doesn’t need to verify reality; it needs to be internally consistent, and it’s not.

For example, I was setting up a logging pipeline and one of the filters didn’t work. There was seemingly nothing wrong with the configuration itself, and after some more tests with dummy data I was able to get it working, but it still didn’t work with the actual input data. So I gave the working dummy example and the actual configuration to ChatGPT and asked why the actual configuration doesn’t work. After some prompts going over what I had already tried, it ended up giving me the exact same configuration I had presented as the problem. Humans wouldn’t (or at least shouldn’t) make that error, because it would be internally inconsistent; the problem statement can’t be the solution.

But the AI doesn’t have internal consistency because it doesn’t really think. It’s not making sure what it’s saying is logical based on the information it knows, it’s not trying to make assumptions to solve a problem, and it can’t even deduce that something true is actually true. All it can do is predict what we would perceive as the answer.

bastion ,

Indeed. It doesn’t even trend towards consistency.

It’s much like the pattern-matching layer of human consciousness. Its function isn’t to filter for truth, its function is to match knowns and potentials to patterns in its environment.

AI has no notion of critical thinking. It is purely positive “thinking”, in a technical sense - it is positing based on what it “knows”, but there is no genuine concept of self, nor even of critical thinking, nor even a non-conceptual logic or consistency filter.

KillingTimeItself ,

ok so to give you an um ackshually here.

Technically, if we were to develop a real artificial general intelligence, it would be limited to the amount of knowledge it has, but so is any given human. And its advantage would still be scale of operations compared to a human, since it can realistically operate on all known theoretical and practical information, whereas for a human that’s simply not possible.

Though presumably it would also be influenced, to some degree, by the AI posting we already have now; the question is how it responds to that, and how well it can tell the difference between that and real human posting.

the reason hallucinations are such a big problem currently is simply that it’s literally a predictive text model; it doesn’t know anything. That simply wouldn’t be true for an artificial general intelligence. Not that it couldn’t hallucinate, but it wouldn’t hallucinate to the same degree, and possibly with greater motives in mind.

A lot of the reason human biology tends to obfuscate certain things is simply the way it evolved, as well as its potential advantages in our lives. The reason we can’t see our blind spots is that it would be much more difficult to process things otherwise. It’s the same reason our eyesight is flipped, and the same reason pain is interpreted the way it is.

a big mistake you are making here is stating that it must be fed information that it knows to be true; this is not inherently true. You can train a model on all of the wrong things to do, and as long as it has the capability to understand this, it shouldn’t be a problem.

For predictive models? This is probably the case, but you can also poison the well so to speak, when it comes to those even.

ClamDrinker , (edited )

Yes, a theoretical future AI that would be able to self-correct would eventually become more powerful than humans, especially if you could give it ways to run magnitudes more self-correcting mechanisms at the same time. But it would still be making ever so small assumptions when there is a gap in the information it has.

It could be humble enough to admit it doesn’t know, but it can still be mistaken and think it has the right answer when it doesn’t. It would feel nigh omniscient, but it would never truly be.

A roundtrip around the globe on glass fibre takes hundreds of milliseconds, so even if it has the truth on some matter, there’s no guarantee that truth didn’t change in the milliseconds it took to become aware of it. True omniscience simply cannot exist, since information (and in turn the truth encoded by that information) also propagates at the speed of light.

a big mistake you are making here is stating that it must be fed information that it knows to be true; this is not inherently true. You can train a model on all of the wrong things to do, and as long as it has the capability to understand this, it shouldn’t be a problem.

The dataset that encodes all wrong things would be infinite in size, and constantly change. It can theoretically exist, but realistically it will never happen. And if it would be incomplete it has to make assumptions at some point based on the incomplete data it has, which would open it up to being wrong, which we would call a hallucination.

KillingTimeItself ,

It could be humble enough to admit it doesn’t know, but it can still be mistaken and think it has the right answer when it doesn’t. It would feel nigh omniscient, but it would never truly be.

yeah, and so are humans, so I mean, shit happens. Even then it’d likely be more accurate than a human, just based on the very fact that it knows more subjects than any given human - and than all humans alive, because its knowledge is based on the written works of the entirety of humanity, theoretically.

A roundtrip around the globe on glass fibre takes hundreds of milliseconds, so even if it has the truth on some matter, there’s no guarantee that truth didn’t change in the milliseconds it took to become aware of it. True omniscience simply cannot exist, since information (and in turn the truth encoded by that information) also propagates at the speed of light.

well yeah, if we’re defining the ultimate truth as something that propagates through the universe at the highest known speed possible, that would be how it works. Since it’s likely a device acting of its own accord, and/or responsive to humans, it likely wouldn’t matter, as it would just wait a few seconds anyway.

The dataset that encodes all wrong things would be infinite in size, and constantly change. It can theoretically exist, but realistically it will never happen. And if it would be incomplete it has to make assumptions at some point based on the incomplete data it has, which would open it up to being wrong, which we would call a hallucination.

at that scale, yes, but at this scale, with our current LLM technology, which is what I was talking about specifically, it wouldn’t matter. But even at that scale I don’t think it would classify as a hallucination, because a hallucination is a very specific type of being wrong. It’s literally pulling something out of thin air, and a theoretical general intelligence AI wouldn’t be pulling shit out of thin air; at best it would elaborate on what it knows already, which might be everything, or nothing, depending on the topic. But it shouldn’t just make something up out of thin air. It could very well be wrong about something, but that’s not likely to be a hallucination.

ClamDrinker ,

Yes, it would be much better at mitigating it and would beat all humans at truth accuracy in general. And truths which can easily be individually proven and/or remain unchanged forever can basically be right 100% of the time. But not all truths are that straightforward.

What I mentioned can’t really be unlinked from the issue, if you want to solve it completely. Have you ever found out later on that something you told someone else as fact turned out not to be so? Essentially, you ‘hallucinated’ a truth that never existed, but you were just that confident it was correct to share and spread it. It’s how we get myths, popular belief, and folklore.

For those other truths, we simply take the truth to be that which has reached a likelihood we consider certain. But ideas and concepts we have in our minds constantly float around on that scale. And since we cannot really avoid talking to other people (or intelligent agents) to ascertain certain truths, misinterpretations and lies can sneak in and cause us to treat as truth that which is not. To avoid that would mean having to be pretty much everywhere to personally interpret the information straight from the source. But then things like how fast it can process those things come into play. Without making guesses about what’s going to happen, you basically can’t function in reality.

HawlSera ,

You assume the physical world is all there is, or that the AI has any real intelligence at all. It’s a damn Chinese room.

SulaymanF ,

We also have to stop calling it hallucinations. The proper term in psychology for making stuff up like this is “confabulation.”

SolNine ,

The simple solution is not to rely upon AI. It’s like a misinformed relative after a jar of moonshine: they might be right some of the time, or they might be totally full of shit.

I honestly don’t know why people are obsessed with relying on AI; is it that difficult to look up the answer from a reliable source?

funkless_eck ,

because some jobs have to quickly produce a bunch of bullshit text that no one will read, or else parse a bunch of bullshit text for a single phrase in the midst of it all and put it in a report.

ZILtoid1991 ,

Sites like that can be blacklisted with web browser plugins. It vastly improved my DuckDuckGo experience for a while, but it’ll be a whack-a-mole game from both sides, and yet again my searches are littered with SEO garbage at best, and AI-generated SEO garbage full of made-up stuff at worst.

force ,

is it that difficult to look up the answer from a reliable source?

With the current state of search engines and their content (almost completely unrelated garbage and shitty blogs made in like 3 minutes, with 1/4 of the content poorly copy-pasted out of context from Stack Overflow and most of the rest being pop-ups and ads), YES

SEO ““engineers”” deserve the guillotine

sebinspace ,

If it keeps me from going to stack and interacting with those degenerates, yes

kromem ,

It’s not hallucination, it’s confabulation. Very similar in its nuances to what happens with stroke patients.

Just like the pretrained model trying to nuke people in wargames wasn’t malicious so much as it was like how anyone sitting in front of a big red button labeled ‘Nuke’ might be without a functioning prefrontal cortex to inhibit that exploratory thought.

Human brains are a delicate balance between fairly specialized subsystems.

Right now, ‘AI’ companies are mostly trying to do it all in one system at once. Yes, the current models are typically a “mixture of experts,” but it’s still all in one functional layer.

Hallucinations/confabulations are currently fairly solvable for LLMs. You just run the same query a bunch of times and see how consistent the answer is. If it’s making it up because it doesn’t know, the answers will be stochastic. If it knows the correct answer, they will be consistent. If it only partly knows, it will be somewhere in between (but in a way that can be fine-tuned to be detected by a classifier).

This adds a second layer across each of those variations. If you want to check whether something is safe, you’d also need to verify that answer isn’t a confabulation, so that’s more passes.

It gets to be a lot quite quickly.

As the tech scales (what’s being done with servers today will happen around 80% as well on smartphones in about two years), those extra passes aren’t going to need to be as massive.

This is a problem that will eventually go away, just not for a single pass at a single layer, which is 99% of the instances where people are complaining this is an issue.
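
A minimal sketch of that same-query consistency check, with `ask_model` as a hypothetical stand-in for whatever LLM call is actually in use (real SelfCheckGPT-style pipelines are more involved; this only shows the shape of the idea):

```python
# Minimal sketch of the same-query consistency check: sample several answers
# at a non-zero temperature and measure how much they agree. `ask_model` is a
# hypothetical stand-in for whatever LLM call is actually being used.
from difflib import SequenceMatcher
from itertools import combinations
from statistics import mean
from typing import Callable

def consistency_score(ask_model: Callable[[str], str], prompt: str, samples: int = 6) -> float:
    """Mean pairwise text similarity (0..1) across several answers to one prompt."""
    answers = [ask_model(prompt) for _ in range(samples)]
    return mean(SequenceMatcher(None, a, b).ratio()
                for a, b in combinations(answers, 2))

# Usage idea: a low score suggests the model is improvising (stochastic answers),
# a high score suggests it is reproducing something it consistently "knows".
# if consistency_score(ask_model, "When was the library released?") < 0.6:
#     treat_answer_as_possible_confabulation()   # hypothetical downstream step
```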

jj4211 ,

You just run the same query a bunch of times and see how consistent the answer is.

A lot of people are developing what I’d call superstitions about ways to overcome LLM limitations. I remember someone swearing they fixed the problem by appending “Ensure the response does not contain hallucinations” to every prompt.

In my experience, what you describe is not a reliable method. Sometimes it’s really attached to the same sort of mistakes for the same query. I’ve seen it double down: when instructed that a facet of the answer was incorrect and to revise, several times I’d get “sorry for the incorrect information”, followed by the exact same mistake. On the flip side, to the extent it “works”, it works on valid responses too, meaning that with an extra pass to ward off “hallucinations” you end up gaslighting the model and it changes the previously correct answer as if it were a hallucination.

kromem ,

How many times are you running it?

For the SelfCheckGPT paper, which was basically this method, it was very sample dependent, continuing to see improvement up to 20 samples (their limit), but especially up to around 6 iterations…

I’ve seen it double down: when instructed that a facet of the answer was incorrect and to revise, several times I’d get “sorry for the incorrect information”, followed by the exact same mistake.

You can’t continue with it in context or it ruins the entire methodology. You are reintroducing those tokens when you show it back to the model, and the models are terrible at self-correcting when instructed that they are incorrect, so the step is quite meritless anyway.

You need to run parallel queries and identify shared vs non-shared data points.

It really depends on the specific use case in terms of the full pipeline, but it works really well. Even with just around 5 samples and intermediate summarization steps it pretty much shuts down completely errant hallucinations. The only class of hallucinations it doesn’t do great with are the ones resulting from biases in the relationship between the query and the training data, but there’s other solutions for things like that.

And yes, it definitely does mean inadvertently eliminating false negatives, which is why a balance has to be struck in terms of design choices.
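
And a rough sketch of the “shared vs non-shared data points” filtering step across parallel answers. The sentence splitting and fuzzy matching here are crude placeholders, and the helper names are hypothetical; an actual pipeline might use an NLI model or the LLM itself as the judge of whether a claim is supported:

```python
# Sketch of the "shared vs non-shared data points" idea: keep only the claims
# that most parallel answers agree on. The sentence splitting and fuzzy match
# here are deliberately crude stand-ins for a real claim-verification step.
from difflib import SequenceMatcher

def supported(claim: str, answer: str, threshold: float = 0.8) -> bool:
    """Crude check: does any sentence of `answer` closely resemble `claim`?"""
    return any(SequenceMatcher(None, claim, s.strip()).ratio() >= threshold
               for s in answer.split(".") if s.strip())

def shared_claims(answers: list[str], min_support: int) -> list[str]:
    """Keep claims from the first answer that enough other answers also contain."""
    claims = [c.strip() for c in answers[0].split(".") if c.strip()]
    return [c for c in claims
            if sum(supported(c, other) for other in answers[1:]) >= min_support]

# e.g. keep only the claims that at least 3 of the 5 parallel samples agree on:
# vetted = shared_claims(parallel_answers, min_support=3)
```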

Alllo ,

without reading the article, this is the best summary I could come up with:

Mainstream government-tied media keeps hallucinating up facts. Republican, Democrat, doesn’t matter; they hallucinate up facts. Time to stop ignoring humans’ hallucination problem. At least with AI, they don’t have some subversive agenda beneath the surface when they do it. Time to help AI take over the world bbl

Wirlocke ,

I’m a bit annoyed at all the people being pedantic about the term hallucinate.

Programmers use preexisting concepts as allegory for computer concepts all the time.

Your file isn’t really a file, your desktop isn’t a desk, your recycling bin isn’t a recycling bin.

[Insert the entirety of Object Oriented Programming here]

Neural networks aren’t really neurons, genetic algorithms aren’t really genetics, and the LLM isn’t really hallucinating.

But it easily conveys what the bug is. It only personifies the LLM because the English language almost always personifies the subject. The moment you apply a verb to an object you imply it performed an action, unless you limit yourself to esoteric words/acronyms or use several words to overexplain every time.

calcopiritus , (edited )

It’s easily the worst problem with Lemmy. Sometimes one guy has an issue with something and suddenly the whole thread is about that thing, as if everyone thought about it. No, you didn’t think about it; you just read another person’s comment and made another one instead of replying to it.

I never heard anyone complain about the term “hallucination” for AIs, but suddenly in this one thread there are 100 clone comments instead of a single upvoted one.

I get it, you don’t like “hallucinate”, just upvote the existing comment about it and move on. If you have anything to add, reply to that comment.

I don’t know why this specific thing is so common on Lemmy though; I don’t think it happened on Reddit.

emptiestplace ,

I don’t know why this specific thing is so common on Lemmy though; I don’t think it happened on Reddit.

When you’re used to knowing a lot relative to the people around you, learning to listen sometimes becomes optional.

ZILtoid1991 ,

“Hallucination” pretty well describes my opinion of AI-generated “content”. I think everything they generate is a hallucination at best.

Garbage in, garbage out.

abrinael , (edited )

What I don’t like about it is that it makes it sound more benign than it is. Which also points to who decided to use that term - AI promoters/proponents.

Edit: it’s like all of the bills/acts in congress where they name them something like “The Protect Children Online Act” and you ask, “well, what does it do?” And they say something like, “it lets local police read all of your messages so they can look for any dangers to children.”

zalgotext ,

The term “hallucination” has been used for years in AI/ML academia. I was reading about AI hallucinations ten years ago when I was in college. The term was originally coined by researchers and mathematicians, not the snake oil salesmen pushing AI today.

abrinael ,

I had no idea about this. I studied neural networks briefly over 10 years ago, but hadn’t heard the term until the last year or two.

KeenFlame ,

We were talking about when it was coined, not when you heard it first

Wirlocke ,

In terms of LLM hallucination, it feels like the name very aptly describes the behavior and severity. It doesn’t downplay what’s happening because it’s generally accepted that having a source of information hallucinate is bad.

I feel like the alternatives would downplay the problem. A “glitch” is generic and common, “lying” is just inaccurate since that implies intent to deceive, and just being “wrong” doesn’t get across how elaborately wrong an LLM can be.

Hallucination fits pretty well and is also pretty evocative. I doubt that AI promoters want to effectively call their product schizophrenic, which is what most people think of when they hear “hallucination”.

Ultimately, all the sciences are full of analogous names to make conversations easier; it’s not always marketing. No different than when physicists say particles have “spin” or “color”, or that spacetime is a “fabric”, or [insert entirety of String theory]…

abrinael ,

After thinking about it more, I think the main issue I have with it is that it sort of anthropomorphises the AI, which is more of an issue in applications where you’re trying to convince the consumer that the product is actually intelligent. (Edit: in the human sense of intelligence rather than what we’ve seen associated with technology in the past.)

You may be right that people could have a negative view of the word “hallucination”. I don’t personally think of schizophrenia, but I don’t know what the majority think of when they hear the word.

Knock_Knock_Lemmy_In ,

You could invent a new word, but that doesn’t help people understand the problem.

You are looking for an existing word that describes providing unintentionally incorrect thoughts but is totally unrelated to humans. I suspect that word doesn’t exist. Every thinking word gets anthropomorphized.

ZILtoid1991 ,

They’re nowadays using it to humanize neural networks, and thus oversell their capabilities.

HawlSera , (edited )

The AI isn’t alive, it’s not hallucinating… We will likely never have true AI until we figure out the Hard Problem of Consciousness; until we know what makes a human alive, we can’t make a machine alive.

Hugin ,

Prisencolinensinainciusol is an Italian song that is complete gibberish but made to sound like an English-language song. That’s what AI is right now.

www.youtube.com/watch?v=RObuKTeHoxo

noughtnaut ,

Oh that is hilarious! Just on my first listen but I don’t quite get the lyrics - am deeply disappointed that the video doesn’t have subs. 🤭

Found it on Spotify. It’s so much worse with lyrics. Thank you for sharing a version without them! 🙏

FlyingSquid ,

The Italians actually have a name for that kind of gibberish talking that sounds real. I did some VO work on a project being directed by an Italian guy and he explained what he wanted me to do by explaining the term to me first. I’m afraid it’s been way too long since he told me for me to remember it though.

Another example would be the La Linea cartoons, where the main character speaks gibberish which seems to approximate Italian to my ears.

www.youtube.com/watch?v=ldff__DwMBc

yildolw ,

Jazz musicians have a name for gibberish talking that sounds real: scat

We have to stop ignoring AI’s scat problem

Gen Alpha has a name for gibberish talking that sounds real: skibidi toilet

We have to stop ignoring AI’s skibidi toilet problem

CrayonRosary ,

More importantly, we need to stop ignoring the hallucinatory testimony of eyewitnesses in criminal cases.

ALostInquirer ,

Why do tech journalists keep using the businesses’ language about AI, such as “hallucination”, instead of glitching/bugging/breaking?

superminerJG ,

hallucination refers to a specific bug (AI confidently BSing) rather than all bugs as a whole

ALostInquirer , (edited )

(AI confidently BSing)

Isn’t it more accurate to say it’s outputting incorrect information from a poorly processed prompt/query?

vithigar ,

No, because it’s not poorly processing anything. It’s not even really a bug. It’s doing exactly what it’s supposed to do: spit out words in the “shape” of an appropriate response to whatever was just said.

ALostInquirer ,

When I wrote “processing”, I meant it in the sense of getting to that “shape” of an appropriate response you describe. If I’d meant this in a conscious sense I would have written, “poorly understood prompt/query”, for what it’s worth, but I see where you were coming from.

Blackmist ,

Honestly, it’s the most human you’ll ever see it act.

It’s got upper management written all over it.

blazeknave ,

Ty. As soon as I saw the headline, I knew I wouldn’t be finding value in the article.

ALostInquirer ,

It’s not a bad article, honestly; I’m just tired of journalists and academics echoing the language of businesses and their marketing. “Hallucinations” aren’t accurate for this form of AI. These are sophisticated generative text tools, and in my opinion they lack any qualities that justify all this fluff terminology personifying them.

Also, frankly, I think students have found a better application for large-language-model AIs than many adults have, even those trying to deploy them. Students are using them to do their homework and to generate their papers - exactly one of the basic points of these tools. Too many adults are acting like these tools should be used in their present form as research aids, but their entire generative basis undermines their reliability for this. It’s trying to use the wrong tool for the job.

You don’t want any of the generative capacities of a large-language-model AI for research help; you’d instead want whatever text processing it may be able to do to assemble and provide accurate output.

xthexder ,

Because “hallucination” pretty much exactly describes what’s happening? All of your suggested terms are less descriptive of what the issue is.

The definition of hallucination:

A hallucination is a perception in the absence of an external stimulus.

In the case of generative AI, it’s generating output that doesn’t match its training data “stimulus”. Or in other words, false statements, or “facts” that don’t exist in reality.

ALostInquirer ,

perception

This is the problem I take with this: there’s no perception in this software. It’s faulty, misapplied software when one tries to employ it for generating reliable, factual summaries and responses.

xthexder ,

I have adopted the philosophy that human brains might not be as special as we’ve thought, and that the untrained behavior emerging from LLMs and image generators is so similar to human behaviors that I can’t help but think of it as an underdeveloped and handicapped mind.

I hypothesize that a human brain whose only perception of the world is the training data force-fed to it by a computer would have all the same problems the LLMs do right now.

To put it another way… The line that determines what is sentient and what is not is getting blurrier and blurrier. LLMs surpassed the Turing test a few years ago. We’re simulating the level of intelligence of a small animal today.

Danksy ,

It’s not a bug, it’s a natural consequence of the methodology. A language model won’t always be correct when it doesn’t know what it is saying.

ALostInquirer ,

Yeah, on further thought, and as I mention in other replies, my thoughts are shifting toward the real bug being how it’s marketed in many cases (as a digital assistant/research aid) and in turn how it’s used, or attempted to be used (as it’s marketed).

Danksy ,

I agree, it’s a massive issue. It’s a very complex topic that most people have no way of understanding. It is superb at generating text, and that makes it look smarter than it actually is, which is really dangerous. I think the creators of these models have a responsibility to communicate what these models can and can’t do, but unfortunately that is not profitable.

vrighter ,

it never knows what it’s saying

TheDarksteel94 ,

Oh, at some point it will lol

Danksy ,

That was what I was trying to say, I can see that the wording is ambiguous.

machinin ,

…wikipedia.org/…/Hallucination_(artificial_intell…

The term “hallucinations” originally came from computer researchers working with image-producing AI systems. I think you might be hallucinating yourself 😉

ALostInquirer ,

Fun part is, that article cites a paper mentioning misgivings with the terminology: AI Hallucinations: A Misnomer Worth Clarifying. So at the very least I’m not alone on this.

oldfemboy ,

I remember getting gaslit about AIs lying. So glad it’s getting attention.

whoreticulture ,

By who? I thought this was broadly known?
