
cordlesslamp ,

To be honest, I lost all interest in the new AMD CPUs because they fucking named the thing “AI” (with zero real-world application).

I’m in the market for a new PC next month and I’m gonna get the 7800X3D for my VR gaming needs.

oyo ,

LLMs: using statistics to generate reasonable-sounding wrong answers from bad data.

pumpkinseedoil ,

Often the answers are pretty good. But you never know if you got a good answer or a bad answer.

Blackmist ,

And the system doesn’t know either.

For me this is the major issue. A human is capable of saying “I don’t know”. LLMs don’t seem able to.

xantoxis ,

Accurate.

No matter what question you ask them, they have an answer. Even when you point out their answer was wrong, they just have a different answer. There’s no concept of not knowing the answer, because they don’t know anything in the first place.

Blackmist ,

The worst for me was a fairly simple programming question. The class it used didn’t exist.

“You are correct, that class was removed in OLD version. Try this updated code instead.”

It gave another made-up class name.

Then it repeated the same exchange with a newer version number.

It knows what answers smell like, and the same with excuses. Unfortunately there’s no way of knowing whether it’s actually bullshit until you take a whiff of it yourself.

GBU_28 ,

With a proper framework, decent assertions are possible.

  1. It must cite the source and provide the quote, not just a summary.
  2. An adversarial review must be conducted.

If that is done, the workload on the human is very low.

That said, it’s STILL imperfect, but it’s leagues better than one-shot question and answer.
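Roughly, the shape of that in code might be something like this (a minimal sketch; `call_llm` is a hypothetical placeholder for whichever chat client you use, and the prompts are illustrative, not a tested recipe):

```python
# Minimal sketch of the cite-the-source + adversarial-review pattern above.
# `call_llm` is a placeholder, and the prompts are illustrative only.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your own LLM client here")


def answer_with_citation(question: str, source_text: str) -> str:
    """Step 1: the answer must include a verbatim quote, not just a summary."""
    return call_llm(
        "Answer the question using ONLY the source below. "
        "After the answer, include the exact quote you relied on.\n\n"
        f"Question: {question}\n\nSource:\n{source_text}"
    )


def adversarial_review(answer: str, source_text: str) -> bool:
    """Step 2: a second, adversarial pass tries to poke holes in the answer."""
    verdict = call_llm(
        "You are a hostile reviewer. Does this answer misquote or overstate "
        "the source? Reply PASS or FAIL, followed by a one-line reason.\n\n"
        f"Source:\n{source_text}\n\nAnswer:\n{answer}"
    )
    return verdict.strip().upper().startswith("PASS")
```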

JCreazy ,

There are even companies slapping AI labels onto old tech with timers to trick people into buying it.

1995ToyotaCorolla ,

That one DankPods video of the “AI Rice cooker” comes to mind

JCreazy ,

Yeah that’s the one I saw

EvilBit ,

For what it’s worth, rice cookers have been touting “fuzzy logic” for like 30 years. The term “AI” is pretty much the same; it just wasn’t as buzzy back then.

expatriado ,

I like my AI compartmentalized. I’ve got a bookmark for chatGPT for when I want to ask a question, and then I close it. I don’t need a different flavor of the same thing everywhere.

psmgx ,

Cuz everyone knows it’s BS, or mostly BS with extra data mining

Grandwolf319 ,

I mean, it’s pretty obvious when they advertise the technology instead of the capabilities it could provide.

Still waiting for that first good use case for LLMs.

psivchaz ,

It is legitimately useful for getting started with using a new programming library or tool. Documentation is not always easy to understand or easy to search, so having an LLM generate a baseline (even if it’s got mistakes) or answer a few questions can save a lot of time.

Grandwolf319 ,

So I used to think that, but I gave it a try as I’m a software dev. I personally didn’t find it that useful, as in I wouldn’t pay for it.

Usually when I want to get started, I just look up a basic guide and just copy their entire example to get started. You could do that with chatGPT too but what if it gave you wrong answers?

I also asked it more specific questions about how to do X in tool Y. Something I couldn’t quickly google. Well it didn’t give me a correct answer. Mostly because that question was rather niche.

So my conclusion was that it may help people who don’t know how to google, or who are learning a very well-known tool/language with lots of good docs, but for those who already know how to use the industry tools, it was basically an expensive hint machine.

In all fairness, I’ll probably use it here and there, but I wouldn’t pay for it. Also, note my example was chatGPT specific. I’ve heard some companies might use it to make their docs more searchable which imo might be the first good use case (once it happens lol).

BassTurd ,

I just recently got Copilot in VS Code through work. I typed a comment that said, “create a new model in sqlalchemy named assets with the columns a, b, c, d”. It couldn’t know the proper data types to use, but it output everything perfectly, including my custom-defined annotations, except that it used the same annotation for every column, which I then had to update. As a test, that was great, but Copilot also picked up a SQL query I had written in a comment to reference as I was building my models, and it generated that entire model for me as well.

It didn’t do anything that I didn’t know how to do, but it saved on some typing effort. I use it mostly for its autocomplete functionality and for letting it suggest comments for me.
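To give an idea of the shape of that first test, the output looked roughly like this (an illustrative reconstruction, not the actual generated code; the column types here are placeholders):

```python
# Illustrative reconstruction only -- not the actual Copilot output.
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


# create a new model in sqlalchemy named assets with the columns a, b, c, d
class Asset(Base):
    __tablename__ = "assets"

    id: Mapped[int] = mapped_column(primary_key=True)
    # Copilot can't know the intended types, so it tends to reuse the same
    # annotation for every column; those then get corrected by hand.
    a: Mapped[str] = mapped_column()
    b: Mapped[str] = mapped_column()
    c: Mapped[str] = mapped_column()
    d: Mapped[str] = mapped_column()
```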

Grandwolf319 ,

That’s awesome, and I would probably find those tools useful.

Code generators have existed for a long time, but they’re usually free. These tools actually cost a lot of money, and generating code this way costs far more than the traditional way.

So idk if it would be worth it once the venture capital money dries up.

BassTurd ,

That’s fair. I don’t know if I will ever pay my own money for it, but if my company will, I’ll use it where it fits.

Dran_Arcana ,

I’m actually working on a vector DB RAG system for my own documentation. Even in its rudimentary stages, it’s been very helpful for finding functions in my own code when I don’t remember exactly which project I implemented them in but have a vague idea of what they did.

E.g.:

Have I ever written a bash function that orders non-semver GitHub branches?

Yes! In your ‘webwork automation’ project, starting on line 234, you wrote a function that sorts Git branches based on WebWork’s versioning conventions.
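The retrieval half of it is surprisingly small. Stripped down to a sketch, with `embed` standing in for whatever embedding model you use and an in-memory matrix standing in for the actual vector DB:

```python
# Bare-bones sketch of embedding + nearest-neighbour lookup over code snippets.
# `embed` is a placeholder for your embedding model; a real setup would keep
# the vectors in an actual vector DB rather than a NumPy matrix.
import numpy as np


def embed(text: str) -> np.ndarray:
    raise NotImplementedError("plug in your embedding model here")


def build_index(snippets: list[str]) -> np.ndarray:
    """Embed every code/doc snippet once, up front."""
    return np.vstack([embed(s) for s in snippets])


def search(query: str, snippets: list[str], index: np.ndarray, k: int = 3) -> list[str]:
    """Return the k snippets whose embeddings are most similar to the query."""
    q = embed(query)
    sims = (index @ q) / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    return [snippets[i] for i in np.argsort(sims)[::-1][:k]]
```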

beveradb ,

I’ve built a couple of useful products which leverage LLMs at one stage or another, but I don’t shout about it because I don’t see LLMs as something particularly exciting or relevant to consumers; to me they’re just another tool in my toolbox, whose efficacy I weigh up when trying to solve a particular problem. That said, I do think they’re a new tool which is genuinely valuable for natural-language problems. For example, my most recent product includes the capability to automatically create karaoke music videos, and the problem that long prevented me from bringing it to market was transcription quality: the ability to consistently get correct and complete lyrics for any song. Now, by using state-of-the-art transcription (which returns ~90% accurate results) plus an open-weight LLM with a fine-tuned prompt to correct the mistakes in that transcription, I’ve finally been able to create a product which produces high-quality results pretty consistently. Before LLMs that would’ve been much harder!
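In rough outline, a pipeline like that might look like the sketch below. Both functions are placeholders for whatever models get used, and passing the song title and artist as extra context is just one possible design, not necessarily how the actual product does it:

```python
# Rough outline of the transcription + LLM-correction pipeline described above.
# Both steps are placeholders; the title/artist context is an assumption.

def transcribe(audio_path: str) -> list[dict]:
    """Speech-to-text pass: return timestamped lyric lines (roughly 90% accurate)."""
    raise NotImplementedError("plug in your transcription model here")


def correct_transcription(lines: list[dict], title: str, artist: str) -> list[dict]:
    """LLM pass: fix mis-heard words and fill gaps while keeping timestamps intact."""
    raise NotImplementedError("plug in your open-weight LLM and correction prompt here")


def build_karaoke_lyrics(audio_path: str, title: str, artist: str) -> list[dict]:
    raw_lines = transcribe(audio_path)
    return correct_transcription(raw_lines, title, artist)
```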

Draedron ,

Wrote my last application with ChatGPT. Changed small stuff and got the job.

explodicle ,

Please write a full page cover letter that no human will read.

FlyingSquid ,

That’s because businesses are using AI to weed out resumes.

Basically you beat the system by using the system. That’s my plan too next time I look for work.

Empricorn ,

Haven’t you been watching the Olympics and seen Google’s ad for Gemini?

Premise: your daughter wants to write a letter to an athlete she admires. Instead of helping her as a parent, Gemini can magic-up a draft for her!

psivchaz ,

On the plus side for them, they can probably use Gemini to write their apology blog about how they missed the mark with that ad.

Cryophilia ,

Writing bad code that will hold together long enough for you to make your next career hop.

NABDad ,

I think the LLM could be decent at the task of being a fairly dumb personal assistant. An LLM interface to a robot that could go get the mail or get you a cup of coffee would be nice in an “unnecessary luxury” sort of way. Of course, that would eliminate the “unpaid intern to add experience to a resume” jobs. I’m not sure if that’s good or bad. I’m also not sure why anyone would want it, since unpaid interns are cheaper and probably more satisfying to abuse.

I can imagine an LLM being useful to simulate social interaction for people who would otherwise be completely alone. For example: elderly, childless people who have already had all their friends die or assholes that no human can stand being around.

Grandwolf319 ,

Is that really an LLM? Because using ML as part of a future AGI isn’t new; it was actually very promising and the cutting edge before chatGPT.

So, like, using ML for vision recognition to know that a video of a dog contains a dog, or just speech-to-text. I don’t think that’s what people mean these days when they say LLM. LLMs are more for storing data and giving it back to you in the form of mostly accurate guesses when prompted.

ML has a huge future, regardless of LLMs.

Entropywins ,

LLMs are ML… or did I miss something here?

nic2555 ,

Yes, but not all Machine Learning (ML) is LLMs. Machine learning refers to the general use of neural networks, while Large Language Models (LLMs) refer more to the ability of an application, or a bot, to understand natural language, deduce context from it, and act accordingly.

ML in general has many more uses than just powering LLMs.

EvilBit ,

I actually think the idea of interpreting intent and connecting to actual actions is where this whole LLM thing will turn a small corner, at least. Apple has something like the right idea: “What was the restaurant Paul recommended last week?” “Make an album of all the photos I shot in Belize.” Etc.

But 98% of GenAI hype is bullshit so far.

Grandwolf319 ,

How would it do that? Would LLMs not just take input as voice or text and then guess an output as text?

Wouldn’t the text output that is supposed to be commands for actions need to be correct and not a guess?

It’s the whole guessing part that makes LLMs not useful, so imo they should only be used to improve stuff we already need to guess.

EvilBit ,

One of the ways to mitigate the core issue of an LLM, which is confabulation/inaccuracy, is to have a layer of either confirmation or simple forgiveness intrinsic to the task. Use the favor test: if you asked a friend to do you a favor and perform these actions, they’d give you results that you could either look over yourself to confirm they’re correct enough, or that you’d be willing to live with despite minor errors. If that works for you, go for it. But if you’re doing something that absolutely 100% must be correct, you are entirely dependent on independently reviewing the results.

But one thing Apple is doing is training LLMs with action semantics, so you don’t have to think of its output as strictly textual. When you’re dealing with computers, the term “language” is much looser than you or I tend to understand it. You can have a “grammar” that is inclusive of the entirety of the English language but also includes commands and parameters, for example. So it will kinda speak English, but augmented with the ability to access data and perform actions within iOS as well.
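As a hedged sketch of what “English plus actions” can look like in practice: constrain the model to a known set of commands, have it emit them as JSON, and validate before anything executes. The action set below is invented purely for illustration and `call_llm` is a placeholder; this is not how Apple’s actual implementation works.

```python
# Toy sketch: the model may only emit one of a known set of actions as JSON,
# and nothing runs until the command is validated. Action names are invented
# for illustration; this is not Apple's implementation.
import json

ACTIONS = {
    "find_message": ["sender", "topic"],
    "create_album": ["location", "date_range"],
}


def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")


def interpret(request: str) -> dict:
    """Turn a natural-language request into one validated command."""
    reply = call_llm(
        'Reply with JSON of the form {"action": ..., "params": {...}}.\n'
        f"Allowed actions and their params: {ACTIONS}\n"
        f"Request: {request}"
    )
    command = json.loads(reply)
    if command.get("action") not in ACTIONS:  # reject anything outside the grammar
        raise ValueError(f"unknown action: {command.get('action')!r}")
    return command
```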

pumpkinseedoil ,

LLMs have greatly increased my coding speed: instead of writing everything myself, I let the AI write it and then only have to fix all the bugs.

Grandwolf319 ,

I’m glad. It depends on the dev, though. I love writing code, but debugging is annoying, so I’d rather take longer writing if it means fewer bugs.

Please note I’m also pro code generators (like Emmet).

TrickDacy ,

I don’t see any mention of details about the study participants, but I wouldn’t expect the general public to have this attitude.

jubilationtcornpone ,

I think there is potential for using AI as a knowledge base. If it saves me hours of having to scour the internet for answers on how to do certain things, I could see a lot of value in that.

The problem is that generative AI can’t determine fact from fiction, even though it has enough information to do so. For instance, I’ll ask ChatGPT how to do something and it will very confidently spit out a wrong answer 9/10 times. If I tell it that approach didn’t work, it will respond with “Sorry about that. You can’t do [x] with [y] because [z] reasons.” The reasons are often correct, but ChatGPT isn’t “intelligent” enough to ascertain that an approach will fail, based on data it already has, before suggesting it.

It will then proceed to suggest a variation of the same failed approach several more times. Every once in a while it will eventually pivot towards a workable suggestion.

So basically, this generation of AI is just Cliff Clavin from Cheers: able to string together coherent sentences of mostly bullshit.

verity_kindle ,

Cliffy didn’t hallucinate as much.

esc27 ,

They’ve overhyped the hell out of it and slapped those letters on everything, including a lot of half-baked ideas. Of course people are tired of it and beginning to associate AI with bad marketing.

This whole situation really does feel dotcommish. I suspect we will soon see an ai crash, then a decade or so later it will be ubiquitous but far less hyped.

Vent ,

Thing is, it already was ubiquitous before the AI “boom”. That’s why everything got an AI label added so quickly, because everything was already using machine learning! LLMs are new, but they’re just one form of AI and tbh they don’t do 90% of the stuff they’re marketed as and most things would be better off without them.

rottingleaf ,

What did they even expect, calling something “AI” when it’s no more “AI” than a Perl script that determines whether a picture contains more red than green or vice versa?

Anything making some kind of determination via technical means, including MCs and control systems, has been called AI.

When people start using the abbreviation as if it were “the” AI, naturally there’ll first be hype among clueless people, and then everybody will realize this is no different from what came before: just lots of data and computing power putting on a show.

baggachipz ,

Gartner Hype Cycle is the new Moore’s Law.

EherNicht ,

Who would have guessed?

Meron35 ,

Market shows that investors are actively turned on by products that use AI

SapphironZA ,

Market shows that the market buys into hype, not value.

riskable ,

Market shows that hype is a cycle and the AI hype is nearing its end.

rottingleaf ,

Customers worry about what they can do with it, while investors and spectators and vendors worry about buzzwords. Customers determine demand.

Sadly, what some of those customers want is to somehow improve their own business without thinking, and then they too care about buzzwords; that’s where the hype comes from.

USSEthernet ,

Prominent market investor arrested and charged for sexually assaulting AI robot

Wirlocke ,

I wonder if we’ll collectively start recognizing these tech-investor pump-and-dump patterns faster, given how many have happened in such a short amount of time already.

Crypto, Internet of Things, Self Driving Cars, NFTs, now AI.

It feels like the futurism sheen has started to waver. When everything’s a major revolution inserted into every product, and then isn’t, it gets exhausting.

TimeSquirrel ,

Internet of Things

This is very much not a hype and is very widely used. It's not just smart bulbs and toasters. It's burglar/fire alarms, HVAC monitoring, commercial building automation, access control, traffic infrastructure (cameras, signal lights), ATMs, emergency alerting (like how a 911 center dispatches a fire station, there are systems that can be connected to a jurisdiction's network as a secondary path to traditional radio tones) and anything else not a computer or cell phone connected to the Internet. Now even some cars are part of the IoT realm. You are completely surrounded by IoT without even realizing it.

Wirlocke ,

Huh, didn’t know that! I mainly mentioned it because it was crammed into products that didn’t need it, like fridges and toasters, where it’s usually seen as superfluous, much like AI.

DancingBear ,

I would beg to differ. I thoroughly enjoy downloading various toasting regimens. Everyone knows that a piece of white bread toasts differently than a slice of whole wheat. Now add sourdough home slice into the mix. It can get overwhelming quite quickly.

Don’t even get me started on English muffins.

With the toaster app I can keep all of my toasting regimens in one place, without having to wonder whether it’s going to toast my Pop-Tart as though it were a Hot Pocket.

barsoap ,

I mean, give the thing a USB interface so I can use an app to set timing presets instead of whatever UX nightmare it’d otherwise be, and I’m in. Nowadays it’s probably cheaper to throw in a MOSFET and a tiny chip than to use a bimetallic strip: fewer and less fickle parts, and when you already have the capability to be programmable, why not use it? Connecting it to an actual network? Get out of here.

DancingBear ,

Yea, I’m being a little facetious, I hope that’s coming through lol

verity_kindle ,

Bagels are a whole different set of data than bread. New bread toasts much more slowly than old bread.

kinsnik ,

I think the dot-com bubble is the closest comparison, honestly. There can be some genuinely useful products (mostly dealing with how we interact with a system, not actually trying to use AI to magically solve a problem; it is shit at that), but the hype is way too large.

affiliate ,

don’t forget Big Data

explodicle ,

TimeSquirrel made a good point about Internet of Things, but Crypto and Self Driving Cars are still booming too.

IMHO it’s a marketing problem. They’re major evolutions taking root over decades. I think AI will gradually become as useful as lasers.

Cornelius_Wangenheim ,

It’s more of a macroeconomic issue. There’s too much investor money chasing too few good investments. Until our laws stop favoring the investor class, we’re going to keep getting more and more of these bubbles, regardless of what they are.

snekerpimp ,

No shit, because we all see that AI is just technospeak for “harvest all your info”.

Frozengyro ,

Not to mention the output is usually dog shit.

blarth ,

I refuse to use Facebook anymore, but my wife and others do. Apparently the search box is now a Meta AI box, and it pisses them off every time. They want the original search back.

nossaquesapao ,

That’s another thing companies don’t seem to understand. A lot of them aren’t creating new products and services that use AI; they’re removing the existing ones that people use daily and enjoy and forcing some AI alternative on them. Of course people are going to be pissed off!

riskable ,

To be fair, I love my dog but he has the same output 🤷

barsquid ,

Yes the cost is sending all of your data to the harvest, but what price can you put on having a virtual dumbass that is frequently wrong?

Capricorn_Geriatric ,

More like “instead of making something that gets the job done, expect our unfinished product to complain and not do whatever it’s supposed to”. Or just plain false advertising.

Either way, not a good look and I’m glad it’s not just us lemmings who care.

tourist ,
  • a monthly service fee

for the price of a cup of coffee

DudeDudenson ,

Doubt the general consumer thinks that. I’m sure most of them are turned away by the unreliability and how ham-fisted most implementations are.

rustyfish ,

I barely trust organics. Some CEO being rock hard about his newest repertoire of buzzwords doesn’t help.

NABDad ,

Think of the savings if you replace the CEO with an AI!

octopus_ink ,
RootBeerGuy ,

She looks so done with it. It is amazing how tone-deaf and incapable of detecting emotions the higher-ups must have been to OK that image. Not blaming anyone lower down who approved this; they’re probably all fed up too and were happy to use it.

verity_kindle ,

Plus, it’s way too cold at her vast and empty warehouse hot desk, because she’s wearing at least two sweaters. Please let this lady have a cubicle of her own with a little space heater.

veeesix ,

Is that a real Copilot ad?

octopus_ink ,

Yep. Give me time and I’ll dig up the link.

octopus_ink , (edited )

This is the link I had I believe, but it’s not loading for me now. Either it will work for you, or they pulled it. www.instagram.com/microsoft365/p/C7j8ipnxIiI/?img… (comments were brutal IIRC)

Related article about it: futurism.com/microsoft-brags-ai-attend-three-meet…

veeesix ,

The post is still there.

I just can’t see anyone contributing anything meaningful to a meeting when they’re split across three different conversations. If that’s the case for this hypothetical employee, she’s part of the problem.

octopus_ink ,

I just can’t see anyone contributing anything meaningful to a meeting when they’re split across three different conversations. If that’s the case for this hypothetical employee, she’s part of the problem.

I think the whole idea is that the AI handles two of those meetings for her (somehow). But yes, I try to put myself in the mind of someone who is enthused to finally be able to “attend” three meetings at once, and I just can’t. I have a good job that I mostly enjoy, and I’m usually enthusiastic about my work. No fucking way.

The only people who could want this are the 1% (and wanna-be 1%), and they want it so the rest of us can attend three meetings at once to increase their wealth even faster.

FlyingSquid ,

It’s people who brag about how hard they work and how many hours they work when other people say they hate their jobs.

And those people make me laugh. Oh really? You worked 80 hours last week? I “worked” 40, which meant about 4 hours of actual work a day, clocked out at 5 on the dot every day and spent time with my family.

barsquid ,

I’m never contributing anything meaningful to the meetings I am continuously added to, so it would be nice to have an AI stand in. I could do the goddamn job I originally applied for instead of scrums, special project scrums, and meta scrums.

Grandwolf319 ,

I mean, that’s exactly the advantage of Slack over meetings, but that doesn’t tickle middle management’s fancy as much.
