
Churbleyimyam ,

I think AI has mostly been about luring investors into pumping up share prices rather than offering something of genuine value to consumers.

Some people are gonna lose a lot of other people’s money over it.

themurphy ,

Definitely. Many companies have implemented AI without thinking with 3 brain cells.

Great and useful implementation of AI exists, but it’s like 1/100 right now in products.

floofloof ,

If my employer is anything to go by, much of it is just unimaginative businesspeople who are afraid of missing out on what everyone else is selling.

At work we were instructed to shove ChatGPT into our systems about a month after it became a thing. It makes no sense in our system, and many of us advised management it was irresponsible, since it gives people advice on very sensitive matters without any guarantee that the advice is any good. But no matter, we had to shove it in there, with small print to cover our asses. I bet no one even uses it, but sales can tell customers the product is “AI-driven”.

PerogiBoi ,

Before they laid me off, my old company laid off our entire HR and Comms teams in exchange for ChatGPT Enterprise.

“We can just have an AI chatbot for HR and pay inquiries, and ask DALL-E to create icons and other content.”

A friend who still works there told me they’re hiring a bunch of “prompt engineers” to improve the quality of the AI outputs haha

verity_kindle ,

I’m sorry. Hope you find a better job, on the inevitable downswing of the hype, when someone realizes that a prompt can’t replace a person in customer service. Customers will invest more time, i.e., even wait in a purposely engineered holding music hell, to have a real person listen to them.

themurphy ,

That’s an even worse ‘use case’ than I could imagine.

HR should be one of the most protected fields against AI, because you actually need a human resource.

And “prompt engineer” is so stupid. The “job” is only necessary because the AI doesn’t understand what you want well enough. The only productive person you could hire would be a programmer or someone who could actually tinker with the AI.

spiderman ,

Yeah, AI can make some products better, but most products that use it these days don’t actually need it. It’s annoying to use products that actively shovel in AI when they don’t need it.

Lost_My_Mind ,

Ya know what product MIGHT be better with AI?

Toasters. They have ONE JOB, and everybody agrees their toaster is crap. But you’re not going to buy another toaster, because that too will be crap.

How about a toaster, that accurately, and evenly toasts your bread, and then DOESN’T give you a heart attack at 5am when you’re still half asleep???

IS THAT TOO MUCH TO ASK???

grue ,

Sweet, I’m the one who gets to link the obligatory Technology Connections toaster video!

paw ,

Aw man, now I want this toaster.

SolarMonkey ,

I said the exact same thing months ago when I saw that video. I don’t even use a toaster.

T156 ,

Nah. We already have AI toasters, and they’re ambitious, but rubbish.

Adding AI is just serious overkill for a toaster, especially when it wouldn’t add anything meaningful, not compared to just designing the toaster better.

verity_kindle ,

It only needs one string of conditions that it can understand: don’t catch on fire. Turn yourself off IF smoke.

BorgDrone ,

AI toasters are a Bad Idea

verity_kindle ,

This is the visionary we need. Take my venture capital millions on a magic carpet ride, time traveler!

peto ,

A lot of it is follow-the-leader-type bullshit. Companies in areas where AI is actually beneficial have already been implementing it for years, quietly, because it isn’t something new or exceptional. It’s just the tool you use for solving certain problems.

Investors going to bubble though.

SlopppyEngineer ,

Yes, I’m getting some serious dot-com bubble vibes from the whole AI thing. But the dot-com boom produced Amazon, so every company is basically going all-in hoping to be the next Amazon, even though most will end up like Pets.com. It’s a risk they’re willing to take.

slaacaa ,

“You might lose all your money, but that is a risk I’m willing to take”

  • visionary AI techbro talking to investors
SlopppyEngineer ,

Investors pump money into a bunch of companies so that the chance of at least one of them making it big, and paying them back for all the failed investments, is almost guaranteed. That’s what taking risks is all about.
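The portfolio math behind that bet can be sketched with made-up numbers (the stake sizes and the 30x winner below are purely illustrative):

```python
# Toy venture-portfolio math: spread equal bets across many startups
# and let one outlier pay for all the failures. All numbers invented.
stakes = [1.0] * 10              # ten equal investments
payouts = [0.0] * 9 + [30.0]     # nine total losses, one 30x winner

invested = sum(stakes)
returned = sum(payouts)
multiple = returned / invested   # overall return despite a 90% failure rate
print(multiple)
```

Even with nine of ten bets going to zero, the single outlier makes the portfolio a 3x.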

verity_kindle ,

Sure, but it SEEMS that some investors are relying on buzzwords and hype, without research, ignoring the fundamentals of investing, i.e., beyond the ever-evolving claims of the CEO: is the company well managed? What is its cash flow, and where will it be a year from now? Do the upper-level managers have coke habits?

slaacaa ,

You’re right, but these fundamentals don’t really matter anymore; investors are buying hype and hoping to sell an even bigger hype for more money later.

Aceticon ,

Seeing the whole thing as Knowingly Trading in Hype is actually a really good insight.

Certainly it neatly explains a lot.

rottingleaf ,

Also called a Ponzi scheme, where every participant knows it’s a scam but hopes to find a few more fools before it crashes, and to leave with a positive balance.

Churbleyimyam ,

If the whole sector turns out to be garbage it won’t matter which particular set of companies within it you invest in; you will get burned if you cash out after everyone else.

barsoap ,

OpenAI will fail. StabilityAI will fail. CivitAI will prevail, mark my words.

riskable ,

My doorbell camera manufacturer now advertises their products as using “Local AI,” meaning they’re not relying on a cloud service to look at your video in order to detect humans/faces/etc. Honestly, it seems like a good (marketing) move.

SLVRDRGN ,

I tried to find the advert but I see this on YouTube a lot - an Adobe AI ad which depicts, without shame, AI writing out a newsletter/promo for a business owner’s new product (cookies or ice cream or something), showing the owner putting no effort into their personal product and a customer happily consuming because they were attracted by the thoughtless promo.

How are producers/consumers okay with everything being so mediocre??

lvxferre ,

As I mentioned in another post, about the same topic:

Slapping the words “artificial intelligence” onto your product makes you look like one of those shady used-car salesmen: in the best case it’s misleading; in the worst, it’s actually true but poorly done.

Meron35 ,

Market shows that investors are actively turned on by products that use AI

SapphironZA ,

Market shows that the market buys into hype, not value.

riskable ,

Market shows that hype is a cycle and the AI hype is nearing its end.

rottingleaf ,

Customers worry about what they can do with it, while investors and spectators and vendors worry about buzzwords. Customers determine demand.

Sadly, what some of those customers want is to somehow improve their own business without thinking, and then they too care about buzzwords. That’s how the hype comes.

USSEthernet ,

Prominent market investor arrested and charged for sexually assaulting AI robot

snekerpimp ,

No shit, because we all see that AI is just technospeak for “harvest all your info”.

Frozengyro ,

Not to mention the output is usually dogshit

blarth ,

I refuse to use Facebook anymore, but my wife and others do. Apparently the search box is now a Meta AI box, and it pisses them off every time. They want the original search back.

nossaquesapao ,

That’s another thing companies don’t seem to understand. A lot of them aren’t creating new products and services that use AI; they’re removing the existing ones that people use daily and enjoy, and forcing some AI alternative on them. Of course people are going to be pissed off!

riskable ,

To be fair, I love my dog but he has the same output 🤷

barsquid ,

Yes the cost is sending all of your data to the harvest, but what price can you put on having a virtual dumbass that is frequently wrong?

Capricorn_Geriatric ,

More like “instead of making something that gets the job done, expect our unfinished product to complain and not do whatever it’s supposed to.” Or just plain false advertising.

Either way, not a good look and I’m glad it’s not just us lemmings who care.

tourist ,
  • a monthly service fee

for the price of a cup of coffee

DudeDudenson ,

Doubt the general consumer thinks that. I’m sure most of them are turned away by the unreliability and how ham-fisted most implementations are.

oyo ,

LLMs: using statistics to generate reasonable-sounding wrong answers from bad data.

pumpkinseedoil ,

Often the answers are pretty good. But you never know if you got a good answer or a bad answer.

Blackmist ,

And the system doesn’t know either.

For me this is the major issue. A human is capable of saying “I don’t know”. LLMs don’t seem able to.

xantoxis ,

Accurate.

No matter what question you ask them, they have an answer. Even when you point out their answer was wrong, they just have a different answer. There’s no concept of not knowing the answer, because they don’t know anything in the first place.

Blackmist ,

The worst for me was a fairly simple programming question. The class it used didn’t exist.

“You are correct, that class was removed in OLD version. Try this updated code instead.”

Gave another made up class name.

Repeated with a newer version number.

It knows what answers smell like, and the same with excuses. Unfortunately there’s no way of knowing whether it’s actually bullshit until you take a whiff of it yourself.

GBU_28 ,

With a proper framework, decent assertions are possible.

  1. It must cite the source and provide the quote, not just a summary.
  2. An adversarial review must be conducted.

If that is done, the workload on the human is very low.

That said, it’s STILL imperfect, but it’s leagues better than one-shot question-and-answer.
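The first rule is mechanically checkable: if the model must provide a verbatim quote, a plain substring check can reject paraphrased or invented citations before a human ever looks. A minimal sketch (the function name and example source text are invented for illustration):

```python
def citation_is_verbatim(quote: str, source: str) -> bool:
    """Return True only if the model's quoted passage appears
    word-for-word in the cited source (case/whitespace-insensitive)."""
    def norm(s: str) -> str:
        return " ".join(s.lower().split())
    return norm(quote) in norm(source)

# A paraphrased or hallucinated "quote" fails the check outright.
source = "The quarterly report showed revenue grew 12% year over year."
print(citation_is_verbatim("revenue grew 12% year over year", source))  # True
print(citation_is_verbatim("revenue grew 15%", source))                 # False
```

The adversarial-review step would then only need to judge whether the (now provably real) quote actually supports the summary.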

esc27 ,

They’ve overhyped the hell out of it and slapped those letters on everything, including a lot of half-baked ideas. Of course people are tired of it and beginning to associate AI with bad marketing.

This whole situation really does feel dotcommish. I suspect we will soon see an ai crash, then a decade or so later it will be ubiquitous but far less hyped.

Vent ,

Thing is, it already was ubiquitous before the AI “boom”. That’s why everything got an AI label added so quickly: everything was already using machine learning! LLMs are new, but they’re just one form of AI, and tbh they don’t do 90% of the stuff they’re marketed as; most things would be better off without them.

rottingleaf ,

What did they even expect, calling something “AI” when it’s no more “AI” than a Perl script determining whether a picture contains more red color than green or vice versa.

Anything making some kind of determination via technical means, including MCs and control systems, has been called AI.

When people start using the abbreviation as if it were “the” AI, naturally there’ll first be hype among clueless people, and then everybody will understand that this is no different from what came before. Just lots of data and computing power putting on a show.

baggachipz ,

Gartner Hype Cycle is the new Moore’s Law.

JCreazy ,

There are even companies slapping AI labels onto old tech with timers to trick people into buying it.

1995ToyotaCorolla ,

That one DankPods video of the “AI Rice cooker” comes to mind

JCreazy ,

Yeah that’s the one I saw

EvilBit ,

For what it’s worth, rice cookers have been touting “fuzzy logic” for like 30 years. The term “AI” is pretty much the same, it just wasn’t as buzzy back then.

thesohoriots ,

For the love of god, defund MBAs.

PriorityMotif ,

Give them a box of crayons to eat so the adults can get some work done

aphonefriend ,

Fallout was right.

Grandwolf319 ,

I mean, pretty obvious if they advertise the technology instead of the capabilities it could provide.

Still waiting for that first good use case for LLMs.

psivchaz ,

It is legitimately useful for getting started with using a new programming library or tool. Documentation is not always easy to understand or easy to search, so having an LLM generate a baseline (even if it’s got mistakes) or answer a few questions can save a lot of time.

Grandwolf319 ,

So I used to think that, but I gave it a try as I’m a software dev. I personally didn’t find it that useful, as in I wouldn’t pay for it.

Usually when I want to get started, I just look up a basic guide and just copy their entire example to get started. You could do that with chatGPT too but what if it gave you wrong answers?

I also asked it more specific questions about how to do X in tool Y. Something I couldn’t quickly google. Well it didn’t give me a correct answer. Mostly because that question was rather niche.

So my conclusion was that it may help people who don’t know how to google, or who are learning a very well-known tool/language with lots of good docs, but for those who already know how to use the industry tools, it’s basically an expensive hint machine.

In all fairness, I’ll probably use it here and there, but I wouldn’t pay for it. Also, note my example was chatGPT specific. I’ve heard some companies might use it to make their docs more searchable which imo might be the first good use case (once it happens lol).

BassTurd ,

I just recently got Copilot in VS Code through work. I typed a comment that said, “create a new model in sqlalchemy named assets with the columns a, b, c, d”. It couldn’t know the proper data types to use, but it output everything perfectly, including using my custom-defined annotations, only it used the same annotation for every column, which I then had to update. As a test, that was great, but Copilot also picked up a SQL query I had written in a comment to reference as I was making my models, and it generated that entire model for me as well.

It didn’t do anything that I didn’t know how to do, but it saved on some typing effort. I use it mostly for its auto complete functionality and letting it suggest comments for me.
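For a sense of what that comment-driven scaffold looks like, here’s a dependency-free sketch: a plain dataclass stands in for the generated SQLAlchemy model, and the field types are invented, since (as noted above) the tool can’t know them from the comment alone:

```python
from dataclasses import dataclass, fields

# Hypothetical expansion of the comment
#   "create a new model named assets with the columns a, b, c, d"
# A dataclass stands in for the real SQLAlchemy model here; the types
# below are guesses the tool would have to make (and you'd correct).
@dataclass
class Asset:
    a: int
    b: str
    c: str
    d: float

print([f.name for f in fields(Asset)])  # the four generated columns
```

The value is exactly as described: the boilerplate appears instantly, and the human only fixes the parts the comment couldn’t specify.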

Grandwolf319 ,

That’s awesome, and I would probably would find those tools useful.

Code generators have existed for a long time, but they’re usually free. These tools actually cost a lot of money; it costs way more to generate code this way than the traditional way.

So idk if it would be worth it once the venture capitalist money dries up.

BassTurd ,

That’s fair. I don’t know if I will ever pay my own money for it, but if my company will, I’ll use it where it fits.

Dran_Arcana ,

I’m actually working on a vector DB RAG system for my own documentation. Even in its rudimentary stages, it’s been very helpful for finding functions in my own code when I don’t remember exactly which project I implemented them in but have a vague idea of what they did.

E.g

Have I ever written a bash function that orders non-symver GitHub branches?

Yes! In your ‘webwork automation’ project, starting on line 234, you wrote a function that sorts Git branches based on WebWork’s versioning conventions.
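A minimal stand-in for that kind of lookup, using a stdlib bag-of-words vector and cosine similarity in place of a real embedding model and vector DB (the indexed snippets and project names below are invented for illustration):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Crude stand-in for an embedding model: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented index entries standing in for snippets pulled from past projects.
snippets = [
    {"project": "backup scripts", "doc": "rotate and compress old log archives"},
    {"project": "webwork automation", "doc": "sort git branches by custom versioning conventions"},
]

def search(query: str) -> dict:
    q = embed(query)
    return max(snippets, key=lambda s: cosine(q, embed(s["doc"])))

print(search("function that orders non-semver git branches")["project"])
```

A real system swaps `embed` for a sentence-embedding model and `snippets` for a vector store, but the retrieval shape is the same: embed the question, rank stored chunks by similarity, return the best match with its metadata.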

beveradb ,

I’ve built a couple of useful products which leverage LLMs at one stage or another, but I don’t shout about it cos I don’t see LLMs as something particularly exciting or relevant to consumers. To me they’re just another tool in my toolbox whose efficacy I weigh when trying to solve a particular problem. I do think they’re a new tool that is genuinely valuable when dealing with natural language problems.

For example, my most recent product includes the capability to automatically create karaoke music videos. The problem that long prevented me from bringing it to market was transcription quality: the ability to consistently get correct and complete lyrics for any song. Now, by using state-of-the-art transcription (which returns ~90% accurate results) plus an open-weight LLM with a fine-tuned prompt to correct the mistakes in that transcription, I’ve finally been able to create a product that produces high-quality results pretty consistently. Before LLMs that would’ve been much harder!

Draedron ,

Wrote my last application with ChatGPT. Changed small stuff and got the job.

explodicle ,

Please write a full page cover letter that no human will read.

FlyingSquid ,

That’s because businesses are using AI to weed out resumes.

Basically you beat the system by using the system. That’s my plan too next time I look for work.

Empricorn ,

Haven’t you been watching the Olympics and seen Google’s ad for Gemini?

Premise: your daughter wants to write a letter to an athlete she admires. Instead of helping her as a parent, Gemini can magic-up a draft for her!

psivchaz ,

On the plus side for them, they can probably use Gemini to write their apology blog about how they missed the mark with that ad.

Cryophilia ,

Writing bad code that will hold together long enough for you to make your next career hop.

NABDad ,

I think the LLM could be decent at the task of being a fairly dumb personal assistant. An LLM interface to a robot that could go get the mail or get you a cup of coffee would be nice in an “unnecessary luxury” sort of way. Of course, that would eliminate the “unpaid intern adding experience to a resume” jobs. I’m not sure if that’s good or bad. I’m also not sure why anyone would want it, since unpaid interns are cheaper and probably more satisfying to abuse.

I can imagine an LLM being useful to simulate social interaction for people who would otherwise be completely alone. For example: elderly, childless people who have already had all their friends die or assholes that no human can stand being around.

Grandwolf319 ,

Is that really an LLM? Using ML as part of a future AGI isn’t new; it was very promising and cutting-edge before ChatGPT.

So, like, using ML for vision recognition to know a video of a dog contains a dog. Or just speech-to-text. I don’t think that’s what people mean these days when they say LLM. LLMs are more about storing data and returning it as plausible-sounding guesses when prompted.

ML has a huge future, regardless of LLMs.

Entropywins ,

LLMs are ML… or did I miss something here?

nic2555 ,

Yes, but not all machine learning (ML) is LLMs. Machine learning refers to the general use of neural networks, while large language models (LLMs) refer more to the ability of an application, or a bot, to understand natural language, deduce context from it, and act accordingly.

ML in general has many more uses than just powering LLMs.

EvilBit ,

I actually think the idea of interpreting intent and connecting to actual actions is where this whole LLM thing will turn a small corner, at least. Apple has something like the right idea: “What was the restaurant Paul recommended last week?” “Make an album of all the photos I shot in Belize.” Etc.

But 98% of GenAI hype is bullshit so far.

Grandwolf319 ,

How would it do that? Wouldn’t LLMs just take input as voice or text and then guess an output as text?

Wouldn’t the text output that’s supposed to serve as commands for actions need to be correct, not a guess?

It’s the whole guessing part that makes LLMs not useful, so IMO they should only be used to improve things we already have to guess at.

EvilBit ,

One of the ways to mitigate the core issue of an LLM, which is confabulation/inaccuracy, is to have a layer of either confirmation or simply forgiveness intrinsic to the task. Use the favor test. If you asked a friend to do you a favor and perform these actions, they’d give you results that you can either/both look over yourself to confirm they’re correct enough, or you’re willing to simply live with minor errors. If that works for you, go for it. But if you’re doing something that absolutely 100% must be correct, you are entirely dependent on independently reviewing the results.

But one thing Apple is doing is training LLMs with action semantics, so you don’t have to think of its output as strictly textual. When you’re dealing with computers, the term “language” is much looser than you or I tend to understand it. You can have a “grammar” that is inclusive of the entirety of the English language but also includes commands and parameters, for example. So it will kinda speak English, but augmented with the ability to access data and perform actions within iOS as well.
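One hedged sketch of that “grammar plus actions” idea: constrain the model to emit a structured call, then validate it against an allow-list before anything executes, so a wrong guess is rejected rather than acted on. The action names and handlers below are invented; they are not Apple’s API:

```python
import json

# Invented allow-list of actions the assistant may request.
ACTIONS = {
    "find_recommendation": lambda sender, topic: f"searching {sender}'s messages about {topic}",
    "make_album": lambda place: f"creating an album of photos from {place}",
}

def dispatch(model_output: str) -> str:
    """Parse the model's structured output and run it only if it names
    a known action with known arguments; free-form text is rejected."""
    call = json.loads(model_output)
    handler = ACTIONS.get(call.get("action"))
    if handler is None:
        raise ValueError(f"unknown action: {call.get('action')!r}")
    return handler(**call.get("args", {}))

print(dispatch('{"action": "make_album", "args": {"place": "Belize"}}'))
```

This is the “favor test” made concrete: the model’s guess is confined to a vocabulary where every possible output is either a valid, reviewable action or an error.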

pumpkinseedoil ,

LLM have greatly increased my coding speed: instead of writing everything myself I let AI write it and then only have to fix all the bugs

Grandwolf319 ,

I’m glad. Depends on the dev. I love writing code but debugging is annoying so I would prefer to take longer writing if it means less bugs.

Please note I’m also pro code generators (like emmet).

KingThrillgore ,

Take the hint, MBAs.

xantoxis ,

They don’t care. At the moment AI is cheap for them (because some other investor is paying for it). As long as they believe AI reduces their operating costs*, and as long as they’re convinced every other company will follow suit, it doesn’t matter if consumers like it less. Modern history is a long string of companies making things worse and selling them to us anyway because there’s no alternatives. Because every competitor is doing it, too, except the ones that are prohibitively expensive.

[*] Lol, it doesn’t do that either

simpleslipeagle ,

Assuming MBAs can do math might be a mistake. I’ve worked on an MBA pet project that squandered millions in worker time and opportunity cost to save 30k MRC…

xantoxis ,

Eh, they understand “number go down”

MataVatnik ,

I read an article that said, of the top 10 Harvard MBA grads, 8 of them had gone on to tank the companies they were CEOs of. Or something ridiculous like that.

BradleyUffner ,

LLM based AI was a fun toy when it first broke. Everyone was curious and wanted to play with it, which made it seem super popular. Now that the novelty has worn off, most people are bored and unimpressed with it. The problem is that the tech bros invested so much money in it and they are unwilling to take the loss. They are trying to force it so that they can say they didn’t waste their money.

2pt_perversion ,

Honestly, they’re still impressive and useful; it’s just hype-train overload, plus attempts to implement them in areas where they either don’t fit or don’t work well enough yet.

GratefullyGodless ,

AI does a good job of generating character portraits for my TTRPG games. But, really, beyond that I haven’t found a good use for it.

FlyingSquid ,

Many of us who are old enough saw it as an advanced version of ELIZA and used it with the same level of amusement until that amusement faded (pretty quick) because it got old.

If anything, they are less impressive because tricking people into thinking a computer is actually having a conversation with them has been around for a long time.

MataVatnik ,

Are you like 80?

FlyingSquid ,

No, 47. Believe it or not, the first PCs came out when I was a young whippersnapper.

werefreeatlast ,

Also just listening and reading what people say. We don’t want fucking AI anything. We understand what it might do. We don’t want it.

the_post_of_tom_joad ,

Yeah, these buttsniffers can’t possibly conceive the truth that they made something people don’t want, let alone admit it. Check this out:

“When AI is mentioned, it tends to lower emotional trust, which in turn decreases purchase intentions” - some marketing stinklipper

“We found emotional trust plays a critical role in how consumers perceive AI-powered products”.

Ok, first of all, how is this person serious? Fire this person please, cuz this gibberish sounds like an LLM wrote it. Like, for real, WTF even is “emotional trust”? Dude, is that a real term? So you mean we see your lies

(wheeze)

Sorry, brain overheated there. These fucks are so far up their own asses man… the mind just boggles

DonPiano ,

You’re mad that someone investigates and elaborates on why using LLM marketing bullshit is a bad idea? Weird.

yemmly , (edited )

This is because the AI of today is a shit sandwich that we’re being told is peanut butter and jelly.

For those who like to party: All the current “AI” technologies use statistics to approximate semantics. They can’t just be semantic, because we don’t know how meaning works or what gives rise to it. So the public is put off because they have an intuitive sense of the ruse.

As long as the mechanics of meaning remain a mystery, “AI” will be parlor tricks.

yemmly ,

And I don’t mean to denigrate data science. It is important and powerful. And real machine intelligence may one day emerge from it (or data science may one day point the way). But data science just isn’t AI.

Wirlocke ,

I wonder if we’ll collectively start seeing through these tech-investor pump-and-dump patterns faster, given how many have happened in such a short amount of time already.

Crypto, Internet of Things, Self Driving Cars, NFTs, now AI.

It feels like the futurism sheen has started to waver. When everything’s a major revolution inserted into every product, then isn’t, it gets exhausting.

TimeSquirrel ,

Internet of Things

This is very much not a hype and is very widely used. It's not just smart bulbs and toasters. It's burglar/fire alarms, HVAC monitoring, commercial building automation, access control, traffic infrastructure (cameras, signal lights), ATMs, emergency alerting (like how a 911 center dispatches a fire station, there are systems that can be connected to a jurisdiction's network as a secondary path to traditional radio tones) and anything else not a computer or cell phone connected to the Internet. Now even some cars are part of the IoT realm. You are completely surrounded by IoT without even realizing it.

Wirlocke ,

Huh, didn’t know that! I mainly mentioned it for the fact that it was crammed into products that didn’t need it, like fridges and toasters where it’s usually seen as superfluous, much like AI.

DancingBear ,

I would beg to differ. I thoroughly enjoy downloading various toasting regimens. Everyone knows that a piece of white bread toasts differently than a slice of whole wheat. Now add a sourdough home slice into the mix. It can get overwhelming quite quickly.

Don’t even get me started on English muffins.

With the toaster app I can keep all of my toasting regimens in one place, without having to wonder whether it’s going to toast my Pop-Tart as though it were a Hot Pocket.

barsoap ,

I mean, give the thing a USB interface so I can use an app to set timing presets instead of whatever UX nightmare it’d otherwise be, and I’m in. Nowadays it’s probably cheaper to throw in a MOSFET and a tiny chip than to use a bimetallic strip: fewer and less fickle parts, and when you already have the capability to be programmable, why not use it. Connecting it to an actual network? Get out of here.

DancingBear ,

Yea I’m being a little facetious I hope it is coming through lol

verity_kindle ,

Bagels are a whole different set of data than bread. New bread toasts much more slowly than old bread.

kinsnik ,

I think that the dot com bubble is the closest, honestly. There can be some kind of useful products (mostly dealing with how we interact with a system, not actually trying to use AI to magically solve a problem; it is shit at that), but the hype is way too large

affiliate ,

don’t forget Big Data

explodicle ,

TimeSquirrel made a good point about Internet of Things, but Crypto and Self Driving Cars are still booming too.

IMHO it’s a marketing problem. They’re major evolutions taking root over decades. I think AI will gradually become as useful as lasers.

Cornelius_Wangenheim ,

It’s more of a macroeconomic issue. There’s too much investor money chasing too few good investments. Until our laws stop favoring the investor class, we’re going to keep getting more and more of these bubbles, regardless of what they are.

AceFuzzLord ,

In other news, AI bros convince CEOs and investors that polls saying people don’t like AI are out of touch with reality and those people actually want more AI, as proven by an AI that only outputs what those same AI bros want.

Just waiting for that to pop up in the news some time soon.
