PPQ ,

Reddit = google’s bitch

casual_turtle_stew_enjoyer ,

I just checked and they actually disabled AI Overview. LMAO

diffusive ,

Clearly fake: LLMs don’t mind citing their sources 😂

Treczoks ,

Dr. Google MD at its best.

Caspase8 ,

How is everyone getting this AI overview? All I get when I google something is the usual stuff.

justme ,

Haven’t used Google in ages, but I remember that big changes on, e.g., Facebook roll out gradually. So not all users get it at the same time.

Klear ,

You can also get it by clicking inspect element and writing whatever ragebait you can think of in there.

robocall ,
@robocall@lemmy.world avatar

How do you stop people from jumping off the Golden Gate Bridge? Use glue! - Google, probably

DragonOracleIX ,

Nah, it would say that glue is only useful for pizzas.

TheOakTree ,

Simply turn yourself into cheese and the bridge into pizza, and then the glue will work perfectly!

mrgreyeyes ,

People will die of hunger in the glue trap, but they’re not jumping!

ultratiem ,
@ultratiem@lemmy.ca avatar

As comical and memey as this is, it does illustrate the massive flaw in AI today: it doesn’t actually understand context or what it’s talking about beyond having a folder of info on the topic. It doesn’t know what a guitar is, so anything it recommends suffers from being sourced in a void, devoid of true meaning.

pelespirit ,
@pelespirit@sh.itjust.works avatar

anything it recommends suffers from being sourced in a void, devoid of true meaning.

You just described most of reddit, anything Meta, and what most reviews are like.

CanadaPlus , (edited )

Does anyone really know what a guitar is, completely? Like, I don’t know how they’re made, in detail, or what makes them sound good. I know saws and wide-bandwidth harmonics are respectively involved, but ChatGPT does too.

When it comes to AI, bold philosophical claims about knowledge stated as fact are kind of a pet peeve of mine.

CasualPenguin ,

It sounds like you could do with reading up on LLMs in order to know the difference between what it does and what you’re discussing.

CanadaPlus ,

Dude, I could implement a Transformer from memory. I know what I’m talking about.

Zron ,

You’re the one who made this philosophical.

I don’t need to know the details of engine timing, displacement, and mechanical linkages to look at a Honda civic and say “that’s a car, people use them to get from one place to another. They can be expensive to maintain and fuel, but in my country are basically required due to poor urban planning and no public transportation”

ChatGPT doesn’t know any of that about the car. All it “knows” is that when humans talked about cars, they brought up things like wheels, motors or engines, and transporting people. So when it generates its reply, those words are picked because they strongly associate with the word car in its training data.

All ChatGPT is, is really fancy predictive text. You feed it an input and it generates an output that will sound like something a human would write based on the prompt. It has no awareness of the topics it’s talking about. It has no capacity to think or ponder the questions you ask it. It’s a fancy lightbulb, instead of light, it outputs words. You flick the switch, words come out, you walk away, and it just sits there waiting for the next person to flick the switch.
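The “fancy predictive text” point can be made concrete with a toy sketch. This is nothing like a real transformer — just word-association statistics over a made-up corpus — but it shows how plausible-sounding output falls out of pure co-occurrence counts with no understanding anywhere:

```python
from collections import Counter, defaultdict

# Toy bigram "predictive text": counts which word follows which.
# The corpus is invented for illustration; no real model works this simply.
corpus = "cars have wheels cars have engines cars transport people".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    # Pick the most frequent follower; pure statistics, no understanding.
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict("cars"))  # → "have"
```

Ask it what follows “cars” and it says “have” — not because it knows anything about cars, but because that pairing was most frequent in the data it saw.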

CanadaPlus , (edited )

No man, what you’re saying is fundamentally philosophical. You didn’t say anything about the Chinese room or epistemology, but those are the things you’re implicitly talking about.

You might as well say humans are fancy predictive muscle movement. Sight, sound and touch come in, movement comes out, tuned by natural selection. You’d have about as much of a scientific leg to stand on. I mean, it’s not wrong, but it is one opinion on the nature of knowledge and consciousness among many.

Zron ,

I didn’t bring up Chinese rooms because it doesn’t matter.

We know how chatGPT works on the inside. It’s not a Chinese room. Attributing intent or understanding is anthropomorphizing a machine.

You can make a basic robot that turns on its wheels when a light sensor detects a certain amount of light. The robot will look like it flees when you shine a light at it. But it does not have any capacity to know what light is or why it should flee light. It will have behavior nearly identical to a cockroach, but have no reason for acting like a cockroach.

A cockroach can adapt its behavior based on its environment, the hypothetical robot can not.

ChatGPT is much like this robot, it has no capacity to adapt in real time or learn.

FruitLips ,

Feels reminiscent of stealing an Aboriginal, dressing them in formal attire then laughing derisively when the ‘savage’ can’t gracefully handle a fork. What is a brain, if not a computer?

CanadaPlus ,

Yeah, that’s spicier wording than I’d prefer, but there is a sense in which they’d never apply these high standards of understanding to another biological creature.

I wouldn’t mind considering the viewpoint on its own, but they put it like it’s an empirical fact rather than a (very controversial) interpretation.

milicent_bystandr ,

The other massive flaw it demonstrates in AI today is that it’s popular to dunk on it, so people make up lies like this meme and the internet laps them up.

Not saying AI search isn’t rubbish, but I understand this one is faked, and the tweeter who shared it issued an apology. And perhaps the glue one too.

dependencyinjection ,

There are cases of AI using NotTheOnion as a source for its answer.

It doesn’t understand context. That’s not to say I am saying it’s completely useless, hell I’m a software developer and our company uses CoPilot in Visual Studio Professional and it’s amazing.

People can criticise the flaws in it without doing it just because it’s popular to dunk on it. Don’t shill for AI; actually take a critical approach to its pros and cons.

milicent_bystandr ,

I think people do love to dunk on it. It’s the fashion, and it’s normal human behaviour to take something popular - especially popular with people you don’t like (e.g. in this case tech companies) - and call it stupid. Makes you feel superior and better.

There are definitely documented cases of LLM stupidity: I enjoyed one linked from a comment, where Meta’s(?) LLM trained specifically off academic papers was happy to report on the largest nuclear reactor made of cheese.

But any ‘news’ dumping on AI is popular at the moment, and fake criticism not only makes it harder to see a true picture of how good/bad the technology is doing now, but also muddies the water for people believing criticism later - maybe even helping the shills.

Annoyed_Crabby ,

It also doesn’t know what is true and what is BS unless it learns from curated sources. Truth needs to be verified and backed by fact; if an AI learns from unverified or unverifiable sources, it’s going to confidently repeat what it learned, just like an average redditor. That’s what makes it dangerous, as all these millionaires/billionaires keep hyping up the tech as something it isn’t.

Carighan ,
@Carighan@lemmy.world avatar

It’s called the Chinese Room, and it’s exactly what “AI” is. It recombines pieces of data into “answers” to a “question”, despite not understanding the question, the answer it gives, or the pieces it uses.

It has a very very complex chart of which elements in what combinations need to be in an answer for a question containing which elements in what combinations, but that’s all it does. It just sticks word barf together based on learned patterns with no understanding of words, language, context of meaning.

Valmond ,

Yeah but the proof was about consciousness, and a really bad one IMO.

I mean, we are probably not more advanced than computers, which would indicate that consciousness is needed to understand context, which seems very shaky.

kibiz0r , (edited )

I think it’s kind of strange.

Between quantification and consciousness, we tend to dismiss consciousness because it can’t be quantified.

Why don’t we dismiss quantification because it can’t explain consciousness?

Valmond ,

We can understand and poke on one but not the other I guess. I think so much more energy should be invested in understanding consciousness.

kromem ,

This image was faked. Check the post update.

Turns out that even for humans knowing what’s true or not on the Internet isn’t so simple.

ultratiem ,
@ultratiem@lemmy.ca avatar

Yes we know. We aren’t talking about the authenticity of the meme. We are talking about the fundamental problem with “AI”

kromem ,

You’re kind of missing the point. The problem doesn’t seem to be fundamental to just AI.

Much like how humans were so sure that getting theory-of-mind variations with transparent boxes wrong was an ‘AI’ problem, until researchers finally gave those problems to humans and half of them got them wrong too.

We saw something similar with vision models years ago when the models finally got representative enough they were able to successfully model and predict unknown optical illusions in humans too.

One of the issues with AI is the regression to the mean from the training data and the limited effectiveness of fine-tuning to bias it. So whenever you see a behavior in AI that’s also present in the training set, it becomes amorphous just how much of the problem is inherent to the architecture of the network and how much is poor isolation from the samples exhibiting those issues in the training data.

There’s an entire sub dedicated to “ate the onion” for example. For a model trained on social media data, it’s going to include plenty of examples of people treating the onion as an authoritative source and reacting to it. So when Gemini cites the Onion in a search summary, is it the network architecture doing something uniquely ‘AI’ or is it the model extending behaviors present in the training data?

While there are mechanical reasons confabulations occur, there are also data reasons which arise from human deficiencies as well.

MrMeanJavaBean ,

I just asked Google why its search is complete shit now. At least it isn’t being biased, lol 🤪 https://lemmy.world/pictrs/image/80d49433-320f-46eb-9ae8-5e456fc65cbe.jpeg

TheSlad ,

That’s not the AI though, that’s a snippet from an article about how Google search is shit now

CanadaPlus ,

Yeah, when Google starts trying to manipulate the meaning of results in its favour, instead of just traffic, things will be at a whole other level of scary.

lemmee_in ,

I’m glad it wasn’t us (lemmy users)

embed_me ,
@embed_me@programming.dev avatar

The suggestion would probably be to install linux and harbour radical thoughts

CanadaPlus , (edited )

It won’t make you happy, but it will make you less of a casual in the depressive basement gremlin game.

Valmond ,

U depressed? Bring out the guillotine !

slimarev92 ,

At this point I can’t tell the joke ones from the real ones.

Jimmyeatsausage ,

It’s all a joke

MalReynolds ,
@MalReynolds@slrpnk.net avatar

And now I’m thinking of the Comedian from Watchmen. Alan Moore knows the score…

veganpizza69 ,
@veganpizza69@lemmy.world avatar

simulAIcrum

CoggyMcFee ,

Neither can ChatGPT

thezeesystem ,

Idk, seems more helpful than the suicide hotline number. I called them many times only for them to tell me the same generic information, and they often hung up on me if I started to cry.

_number8_ ,

i like how the answers are the exact same generic unhelpful drivel you hear 20k times a month if you’re depressed as well. real improvement there. when people google that they want immediate relief, not fucking oh go for a walk every day, no shit. the triviality of the suggestion makes the depression worse because you know it’s going to do nothing the first week besides make you feel sweaty and looked at and alone. like if i’m feeling recovered enough to go walk every day then i’m already feeling good enough that i don’t need to be googling about depression tips. this shit drives me insane.

BradleyUffner ,

i like how the answers are the exact same generic unhelpful drivel you hear 20k times a month if you’re depressed as well.

It makes sense though. It was trained on that drivel.

Revan343 ,

when people google that they want immediate relief

Well, bad news as far as ‘immediate relief from depression’ goes.

Though I suppose there’s always ketamine.

elmz ,

Well, the Golden Gate suggestion is the immediate solution…

Raiderkev ,

I mean, they put nets on it now. This advice is outdated. Stupid AI.

captainlezbian ,

For some. Some of us would take at least a week to get there. Surely there must be a bridge on the east coast that works!

Seudo ,

The ultimate question of philosophy…

"Should I kill myself, or have a cup of coffee?
-Camus

barsoap ,

The trouble with ketamine is that once you reassociate, shit’s back to where it was. It can alleviate symptoms, and in very serious cases that might be called for, but it’s definitely not a cure. Taking drugs to lower a fever also alleviates symptoms, and in serious cases will save lives, but it’s not going to get rid of the bug causing the fever.

beeng ,

TIL im alone when I go for a walk

barsoap , (edited )
  1. Accept that your brain wants to do something different than what you had planned, thus
  2. Cancel all mid- to long-term appointments and
  3. Use the opportunity of not having that shit distracting you to reinforce good moment-to-moment habits. Like taking a walk today, because you can use the opportunity to buy fresh food today, to make a nice meal today, because that’s a good idea you can enjoy today while the back of your mind does its thing, which is not something you can do anything about in particular so stop worrying. And you probably don’t want to go shopping in pyjamas without taking a shower so that’s also dealt with. And with that,
  4. You have a way to set a minimum standard for yourself that will keep you away from an unproductive downward spiral and keep depression what it’s supposed to be, and that’s a fever to sweat out shitty ideas, concepts, and habits, none of which, let’s be honest, involve good food and a good shower. That’s not shitty shit you dislike.

The tl;dr is that depression doesn’t mean you need to suffer or anything. Unless you insist on clinging to the to be sweated out stuff, that is. The downregulating of vigour is global, yes, necessary to starve the BS, but if you don’t get your underwear in a twist over longer-term stuff your everyday might very well turn out to simply be laid back.

…OTOH yeah if this is your first time and you don’t have either a natural knack for it or the wherewithal to be spontaneously gullible enough to believe me, good luck.

Also clinical depression as in “my body just can’t produce the right neurotransmitters, physiologically” is a completely different beast. Also you might be depressive and not know it especially if you’re male because the usually quoted symptom set is female-typical.

fine_sandy_bottom ,

You’ve laid out your personal depression cure to someone stating that reading about other people’s depression cures is incredibly frustrating when you’re actually depressed.

It’s great that you’ve found a plan that works for you, but don’t minimise everyone else’s suffering by proposing your own therapy.

In most cases the best thing you can do to help is to try to understand how someone is feeling.

barsoap , (edited )

You’ve laid out your personal depression cure to someone stating that reading about other people’s depression cures is incredibly frustrating when you’re actually depressed.

That’s not what the complaint was about. The complaint was about the generic drivel. The population-based “We observed 1000 patients and those that did these things got better” stuff that ignores why those people ended up doing those things, ignorance of the underlying dynamics which also conveniently fits a “pull yourself up by the bootstraps” narrative. The kind of stuff that ignores what people are going through. Ignores what agency exists, and what doesn’t.

Read what I wrote not as a plan, “thou shalt get up at 6 and go on a brisk walk”; that’s BS and not what I wrote. Read it as an understanding of how things work dressed up as a plan. Going out and cooking food? Just an example; apply your own judgement of what’s good and proper for you moment to moment. You can read past the concrete examples, I believe in you.

In most cases the best thing you can do to help is to try to understand how someone is feeling.

The trick is to understand why you’re in that situation, what your grander self is doing, or at least trust it enough to ride along. Stop second-guessing the path you’re on and walk it, instead. You don’t really have a choice of path, but you do have a choice of footwear.

Or, differently put: What’s more important, understanding a feeling or where it’s coming from? Why it’s there? What it’s doing? What is its purpose? …what are the options? Knowing all this, many feelings will be more fleeting than you might think.

There’s an old Discordian parable, and actually read it, it’s not the one you think it is:

I dreamed that I was walking down the beach with the Goddess. And I looked back and saw footprints in the sand.
But sometimes there were two pairs of footprints, and sometimes there was only one. And the times when there was only one pair of footprints, those were my times of greatest trouble.
So I asked the Goddess, “Why, in my greatest need, did you abandon me?”
She replied, “I never left you. Those were the times when we both hopped on one foot.”
And lo, I was really embarrassed for bothering Her with such a stupid question.

kionite231 ,

The story at last was genuinely good.

fine_sandy_bottom ,

I mean this in the nicest possible way but you seem absolutely insufferable.

This is precisely the type of un-depress yourself advice that helps no one.

barsoap ,

I seem to be speaking Klingon. I never told anyone to “un-depress” themselves. Quite the contrary, I’m talking about the necessity to accept that it’ll be the path you’re walking on for, potentially, quite a while. All I’m telling you is that that path doesn’t have to be miserable, or a downward spiral.

Make a distinction between these two scenarios: One, someone has a fever. They get told “stop having a fever, lower your temperature, then you’ll be fine”. Second, same kind of fever, they get told “Accept that you have a fever. Make sure to drink enough and to make yourself otherwise comfortable in the moment. Ignore the idiot with the ‘un-fever yourself’ talk”.

_number8_ ,

again, this is all long term executive function that you are generally incapable of performing or even contemplating when depressed. maybe you can protestant-work-ethic yourself out of depression but that doesn’t mean everyone can. oh yeah lemme just keep being fucking harsh with myself, that’s the ticket.

what i want to hear is

  • take a bath
  • have chamomile tea, it binds to your GABA receptors
  • go outside to breathe the fresh air and look at the moon
  • etc

simple, actionable things that don’t have barely-hidden contempt or disinterest behind them

barsoap ,

I’m sorry what’s long-term executive function about cancelling your appointments? What’s harsh about it?

What about “take a bath” and “go outside to breathe” is less protestant-work-ethic than what I was saying?

The simple, actionable things are, precisely, the simple, actionable things. “Breathe in the fresh air” is not actionable when living in a city. “Sit on a bench and people-watch” is not actionable in the countryside. You know much better where you live, what simple things you could do right now. The point is not about the precise action, it’s about that it’s simple and actionable thus you should do it. Also, to a large degree, that it’s your idea, something you want.

HelixDab2 ,

when people google that they want immediate relief, not fucking oh go for a walk every day,

The problem is that there is no immediate relief that isn’t either a) suicide, or b) won’t make things worse in the long run. Even something like ECT doesn’t work instantly; it takes several treatments. Transcranial magnetic stimulation seems promising, but it’s not a frontline treatment. The generic shit is the stuff that actually works in the long run, things like getting therapy, exercising, going outside more, interacting with people in a positive way, and so on. “Self care”–isolating and doing easy, comfortable things–will make things worse in the long run.

dgriffith , (edited )

i like how the answers are the exact same generic unhelpful drivel you hear 20k times a month if you’re…

Searching for a solution to any problem on the internet.

There are a million ad-laden sites that, in answer to a technical question about your PC, suggest that you run antivirus, system file checker, oh and then just format and reinstall your operating system. That’s also 90 percent of the answers coming from “Microsoft volunteer support engineers” on Microsoft’s own support forums as well; just please like and upvote their answer if it helps you.

There are a million Instagram and TikTok videos showing obvious, trivial, shitty solutions to everyday problems as if they are revealing the secrets of the universe while gluing bottle tops and scraps of car tires together to make a television remote holder.

There are a trillion posts on Reddit from trolls and shitheads just doing it for teh lulz and Google is happily slurping this entire torrent of shit down and trying to regurgitate it as advice with no human oversight.

I reckon their search business has about two years left at this rate before the general public regards them as a joke.

Edit: and the shittification of the internet has all been Google’s doing. The need for sites to get higher up in Google’s PageRank™ or be forever invisible has absolutely ruined it. The torrent of garbage now needed to ensure that various algorithms favour your content has fucked it for everyone. Good job, Google.

ulterno ,
@ulterno@lemmy.kde.social avatar

I feel like the user’s suggestion of “jumping off the Golden Gate Bridge” would be more impactful in that case, you know, to awaken your survival instincts, which prevents depression.
But on the off chance that someone actually goes and jumps off, a professional would probably not give that advice.

HRDS_654 ,

Can we stop calling these glorified chat bots “AI” now?

Icalasari ,

These chatbots are AI - They tailor responses over time so long as previous messages are in memory, showing a limited level of learning

The issue is these chatbots either:

A) Get so little memory that they effectively don't even have short term memory, or

B) Are put in situations where that chat memory learning feature is moot

They are AI, they are just stupidly simple and inept AI that barely qualify

BradleyUffner ,

They have no memory actually. They are completely static. When you chat with them, every single previous prompt and response from that session is fed back through as if it were one large single prompt. They are just faking it behind a chat-like user interface. They most definitely do not learn anything after training is complete.
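A minimal sketch of that stateless loop, with a hypothetical fake_model standing in for the real model call: the client resends the entire transcript every turn, so any apparent “memory” lives in the prompt, not in the model:

```python
# Sketch of a stateless chat loop (fake_model is a made-up stand-in,
# not a real API): the model itself never changes between turns.
def fake_model(prompt: str) -> str:
    # A real LLM would generate text from the prompt here.
    return f"(reply to a prompt of {len(prompt)} chars)"

history = []  # the transcript lives client-side, not in the model

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # Every previous prompt and response is replayed as one large prompt.
    full_prompt = "\n".join(history)
    reply = fake_model(full_prompt)
    history.append(f"Assistant: {reply}")
    return reply

chat("Hello")
chat("What did I just say?")  # "remembered" only because it was resent
```

Drop the history list and the model has no idea the first message ever happened; that’s the faking-it behind the chat interface.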

WhiskyTangoFoxtrot ,

Their brain is a neural-net processor, a learning computer, but Skynet sets the switch to read-only when they’re sent out alone.

RealFknNito ,
@RealFknNito@lemmy.world avatar

… No. They’re instanced so that when a new person interacts with them, they don’t have the memories of interacting with the person before them. A clean slate, using only the training data in the form the developers want it to. It’s still AI, it’s just not your girlfriend. The fact you don’t realize that they do and can learn after their training data proves people just hate what they don’t understand. I get it, most people don’t even know the difference between a neural network and AI because who has the time for that? But if you just sit here and go “nuh uh they’re faking it” rather than push people and yourself to learn more, I invite you, cordially, to shut the fuck up.

Dipshits giving their opinions as fact is a scourge with no cure.

BradleyUffner ,

I love how confidently wrong you are!

RealFknNito ,
@RealFknNito@lemmy.world avatar

About which part? The part that they can remember and expand their training data to new interactions but often become corrupted by them so much so that the original intent behind the AI is irreversibly altered? That’s been around for about a decade. How about the fact they’re “not faking it” because the added capacity to compute and generate the new content has to have sophisticated plans just to continue running in a timely manner?

I’d love to know which part you took issue with but you seemingly took my advice to shut the fuck up and I do profoundly appreciate it.

BradleyUffner ,

That’s a completely different kind of AI. This story, and all the discussion up to this point, has been about the LLM based AIs being employed by Google search and ChatGPT.

RealFknNito ,
@RealFknNito@lemmy.world avatar

And I explained to you that these models aren’t incapable of learning, they’re given artificial restrictions not to in order to prevent what I linked from happening. They don’t learn to preserve the initial experience but are the exact same kind of AI. Generative.

BradleyUffner ,

You can “explain” it that way as much as you want, that doesn’t make it true.

fleckenstein ,
@fleckenstein@lizzy.rs avatar

@BradleyUffner @RealFknNito wow this is just like reddit 🍿

RealFknNito ,
@RealFknNito@lemmy.world avatar

The mere existence of the things I linked you is pretty overwhelming evidence. Feel free to refute it at your leisure.

filcuk ,

Corps sure can’t

unreasonabro , (edited )

I mean, this is amazing. In the world of the 80s and 90s that’d be enough to destroy a company, stripped down to parts and sold off. Now, in this cowardly new “too big to fail” world of Clarence Thomas-reeking doubletalk bullshit, this’ll result in no consequences whatsoever.

sagrotan ,
@sagrotan@lemmy.world avatar

Darwin machine, pt. 2
