
StaySquared ,

Lots of companies jumping the gun… laying off so many people only to realize they’re going to need those people back. AI is still in its infancy, using it to replace an actual human is a dumb dumb move.

MartianRecon ,

‘That’s a problem for the next CEO’ -current CEO

schnurrito ,

LLMs aren’t virtual dumbasses who are constantly wrong, they are bullshit generators. They are sometimes right, sometimes wrong, but don’t really care either way and will say wrong things just as confidently as right things.

Meowie_Gamer ,
@Meowie_Gamer@lemmy.world avatar

Why are tech CEOs always so out of touch…

Knock_Knock_Lemmy_In ,

CEOs are generally the sales(wo)men.

megopie ,

Because they’re playing a role, like an actor; they’re not presenting their own personal opinions. They’re vocalizing and embodying the output of a series of complex internal mechanisms, a slow-moving, self-optimizing system beyond the comprehension of any individual working within it.

Much like AIs, it often outputs stupid shit.

uis ,

CEOs(dumbasses who are constantly wrong): rush replacing everyone with AI before everyone replaces them with AI

JasonDJ ,

I don’t think any programmer would be dumb enough to take that bait.

They would be held personally liable for any business decision that costs the stockholders.

theredknight ,

Lol you haven’t met consultants

skuzz ,

Funny thing is, the CEOs are exactly the ones to be replaced with AI. Mediocre talent that is sometimes wrong. Perfect place for an AI, and the AI could come to the next decision much faster at a fraction of the cost.

Krauerking ,

So, I’d say there is a slight issue with replacing all decision makers with AI, because Walmart and Amazon already do it for employee efficiency. It means the staff are micromanaged and treated like machines, the same way the computer is.

Walmart employees are moved around the floor like Roombas so they never interact with each other, and customers have no real way of getting hold of someone. Warehouse workers are overworked by bullshit notions of efficiency.

Now I get that it could be fixed by having the AI systems designed to be more empathetic but who is choosing how they are programmed? The board still?

We just need good bosses who still interact with their employees on their level. We don’t need AI “replacing” anyone pretty much anywhere, but it can be used as a helpful tool.

skuzz ,

Yeah, apologies, I was being a bit glib there. Honestly, I kinda subscribe to the Star Trek: Insurrection Ba’ku people’s philosophy. “We believe that when you create a machine to do the work of a man, you take something away from the man.”

While it makes sense to take some tasks, like dangerous mining or assembly line work, away from humans, interaction roles and decision-making roles both seem like they should remain very human.

In the same way that nuclear missile launches during the Cold War always had real humans as the last line before a missile would actually be fired.

I see AI as becoming specialized tools for each job. You’re repairing a lawn mower; you have an AI multimeter-type device that you connect to some test points and converse with in some fashion to troubleshoot. All offline, and very limited in capabilities. The tech bros, meanwhile, think they created digital Jesus, and they are desperate to figure out what Bible to jam him into. Meanwhile, corps across the planet are in a rush to get rid of their customer service roles en masse. Can you imagine 911 dispatch being replaced with AI? The human component is 100% needed there. (Albeit an extreme comparison.)

Didros ,

CEOs are obsessed with value derived free of all that messy human labor. It would make sense if they didn’t still want the people they fired to pay money to talk to the robots.

Tiltinyall ,

I think what they are obsessed with is capitalizing on every new tech trend as fast as they can, security be damned.

darkphotonstudio ,

All those middle managers will be displaced.

captain_aggravated ,
@captain_aggravated@sh.itjust.works avatar

Holy shit, that’s it. GPT is Wheatley from Portal 2. The moron you attach to a computer system to make it into an idiot.

AVincentInSpace ,

I AM NOT! A! MORON!

Watch, hold on, I’ll prove it! I’ll perform a feat of brute strength in a blind rage that will end up hurting me in the long run! Then later when I find out that massive fall didn’t actually kill you and you fought your way back up through 2km worth of test chambers powered by sheer spite to come and confront me, I’ll act like nothing happened and beg you for your help because I have no idea how to run this place and it’s falling apart and the robot test subjects I built don’t work at all!

Huh? Could a moron do that?

andioop ,

Hey now, I found Wheatley charming. AI in real life, not so much.

captain_aggravated ,
@captain_aggravated@sh.itjust.works avatar

Wheatley is a great character, he’s just got a minor case of serious brain damage.

cumskin_genocide ,

I like using AI to summarize meetings

agressivelyPassive ,

Summary: nothing of value

cumskin_genocide ,

For you

Midnitte ,

I think we need to have a meeting about the summary of the meeting.

Swedneck ,
@Swedneck@discuss.tchncs.de avatar

it’s like how techbros constantly want to reinvent transportation: if they assigned an AI to give them an answer it would just say “build more railways and trains” and they’d throw it out a window in anger

PlexSheep ,

Everything becomes train if you reiterate long enough.

MagicShel ,

Longcar is looooooooonnnnnng.

luciole ,
@luciole@beehaw.org avatar

Or a crab.

agressivelyPassive ,

They re-invent everything for no reason. Every mundane device has been “re-invented” using big data, blockchain, VR, now AI and in a few years probably quantum-something.

The entire tech world fundamentally ran out of ideas. The usual pipeline is basic research > applied research > products, but since money only gets thrown at products, there’s nothing left to do research. So the tech bros have to re-iterate on the same concepts again and again.

ChickenLadyLovesLife ,

there’s nothing left to do research.

There’s still military robots, unfortunately.

skuzz ,

Reimagined

Ftfy (/s)

RecluseRamble , (edited )

More like:

Computer scientist: We have made a text generator

Everyone: tExT iS iNtElLiGeNcE

Persen ,

That’s why nonverbal (and sometimes speaking) autistic people are considered stupid even by professionals.

pantyhosewimp ,
Persen ,

Wow, this looks worth reading. I’ll read it if I remember.

pantyhosewimp ,

It’s also a movie, with Daniel Day-Lewis. He’s kinda hard to forget.

abracaDavid ,

Oh come on. It’s called AI, as in artificial intelligence. None of these companies have ever called it a text generator, even though that’s what it is.

jorp ,

I get that it’s cool to hate on how AI is being shoved in our faces everywhere and I agree with that sentiment, but the technology is better than what you’re giving it credit for.

You don’t have to diminish the accomplishments of the actual people who studied and built these impressive things to point out that business are bandwagoning and rushing to get to market to satisfy investors. like with most technologies it’s capitalism that’s the problem.

LLMs emulate neural structures and have incredible natural language parsing capabilities that we’ve never even come close to achieving before. The prompt hacks alone are an incredibly interesting glimpse at how close these things come to “understanding.” They’re more like social engineering than any other kind of hack.

AppleTea ,

The trouble with phrases like ‘neural structures’ and ‘language parsing’ is that these descriptions still play into the “AI” narrative that’s been used to oversell large language models.

Fundamentally, these are statistical weights randomly wired up to other statistical weights, tested and pruned against a huge database. That isn’t language parsing, it’s still just brute-force calculation. The understanding comes from us, from people assigning linguistic meaning to patterns in binary.

jorp ,

Brain structures aren’t so dissimilar. Unless you believe there’s some metaphysical quality to consciousness, this kind of technology is how we will achieve general AI.

AProfessional ,

This is all theoretical. Today it’s quite basic with billions thrown at the problem. Maybe in decades these ideas can be expanded on.

AppleTea ,

Living, growing, changing cells are pretty damn dissimilar to static circuitry. Neural networks are based on an oversimplified model of neuron cells. The model ignores the fact neurons are constantly growing, shifting, and breaking connections with one another, and flat out does not consider structures and interactions within the cells.

Metaphysics is not required to make the observation that computer programmes are orders of magnitude less complex than a brain.

ChickenLadyLovesLife ,

Neural networks are based on an oversimplified model of neuron cells.

As a programmer who has studied neuroanatomy and the structure/function of neurons themselves, I remain astonished at how unlike real biological nervous systems computer neural networks still are. It’s like the whole field is based on one person’s poor understanding of the state of biological knowledge in the late 1970s. That doesn’t mean it’s not effective in some ways as it is, but you’d think there’d be more experimentation in neural networks based on current biological knowledge.

areyouevenreal ,

What sort of differences are we looking at exactly?

ChickenLadyLovesLife ,

The one thing that stands out to me the most is that programmatic “neurons” are basically passive units that weigh inputs and decide to fire or not. The whole net is exposed to the input, the firing decisions are worked through the net, and then whatever output is triggered. In biological neural nets, most neurons are always firing at some rate and the inputs from pre-synaptic neurons affect that rate, so in a sense the passed information is coded as a change in rate rather than as an all-or-nothing decision to fire or not fire as is the case with (most) programmatic neurons. Implementing something like this in code would be more complicated, but it could produce something much more like a living organism which is always doing something rather than passively waiting for an input to produce some output.

And TBF there probably are a lot of people doing this kind of thing, but if so they don’t get much press.
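The rate-coding distinction described above can be sketched in a few lines. This is only a toy illustration with made-up numbers, not how either biological neurons or production neural nets are actually implemented:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """Standard ANN unit: a weighted sum squashed by a sigmoid.
    It produces one static output per forward pass and is silent otherwise."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def rate_coded_neuron(inputs, weights, baseline_hz=5.0, max_hz=100.0):
    """Toy rate-coded unit: always 'firing' at some baseline rate, with
    inputs shifting that rate up or down. The information lives in the
    rate change, not in a one-shot fire/no-fire decision."""
    drive = sum(w * x for w, x in zip(weights, inputs))
    rate = baseline_hz + drive * (max_hz - baseline_hz)
    return max(0.0, min(max_hz, rate))  # clamp to a plausible range

# With zero input, the artificial unit just outputs sigmoid(bias),
# while the rate-coded unit is still "doing something" at its baseline.
print(artificial_neuron([0.0], [1.0], 0.0))  # 0.5
print(rate_coded_neuron([0.0], [1.0]))       # 5.0
```

A network of units like the second one is never idle, which is closer to the always-active organism described above, at the cost of a more complicated simulation loop.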

areyouevenreal ,

Pretty much all artificial neural nets I have seen don’t do all-or-nothing activation. They all seem to encode activation states as continuous (floating-point) values. I think this is to mimic the effects of variable firing rates.

The idea of a neural network doing stuff in the background is interesting though.

NikkiDimes ,

The fact that you believe software based neural networks are, as you put it, “static circuitry” betrays your apparent knowledge on the subject. I agree that many people overblow LLM tech, but many people like yourself grossly underestimate it as well.

CompassRed , (edited )

Language parsing is a routine process that doesn’t require AI and it’s something we have been doing for decades. That phrase in no way plays into the hype of AI. Also, the weights may be random initially (though not uniformly random), but the way they are connected and relate to each other is not random. And after training, the weights are no longer random at all, so I don’t see the point in bringing that up. Finally, machine learning models are not brute-force calculators. If they were, they would take billions of years to respond to even the simplest prompt because they would have to evaluate every possible response (even the nonsensical ones) before returning the best answer. They’re better described as a greedy algorithm than a brute force algorithm.

I’m not going to get into an argument about whether these AIs understand anything, largely because I don’t have a strong opinion on the matter, but also because that would require a definition of understanding which is an unsolved problem in philosophy. You can wax poetic about how humans are the only ones with true understanding and that LLMs are encoded in binary (which is somehow related to the point you’re making in some unspecified way); however, your comment reveals how little you know about LLMs, machine learning, computer science, and the relevant philosophy in general. Your understanding of these AIs is just as shallow as those who claim that LLMs are intelligent agents of free will complete with conscious experience - you just happen to land closer to the mark.
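The greedy-versus-brute-force point above can be made concrete with a toy example. The three-token vocabulary and the scoring function here are made-up stand-ins for a real model:

```python
import itertools

VOCAB = ["the", "cat", "sat"]  # hypothetical three-token vocabulary

def score(seq):
    """Made-up stand-in for a model's scoring of a token sequence."""
    return sum(len(tok) for tok in seq) - seq.count("the")

def brute_force(length):
    """Score every possible sequence: |V|**length candidates (27 here)."""
    return max(itertools.product(VOCAB, repeat=length), key=score)

def greedy(length):
    """Commit to the best next token one step at a time:
    only |V| * length candidate scores (9 here)."""
    seq = []
    for _ in range(length):
        seq.append(max(VOCAB, key=lambda t: score(seq + [t])))
    return tuple(seq)
```

With a real model the brute-force count explodes (a 50,000-token vocabulary and a 100-token reply gives 50000**100 candidate sequences), which is why generation is greedy, or a near-greedy sampling variant, rather than exhaustive.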

marcos ,

It is parsing and querying into a huge statistical database.

Both done at the same time and in an opaque manner. But that doesn’t make it any less parsing and querying.

Sweetpeaches69 ,

We don’t have to diminish their accomplishments, no; we choose to.

JackbyDev ,

It’s a shit post, relax

iAvicenna ,
@iAvicenna@lemmy.world avatar

I think tech CEOs can empathise with chatgpt on how uninformed its opinions are and how well it can bullshit

TachyonTele ,

Great. It’s going to run for president now.

iAvicenna ,
@iAvicenna@lemmy.world avatar

about time: chatgpt the president that we don’t need but that we deserve

explodicle ,

Even a bot trained on Redditors would be better than either major candidate.

TachyonTele ,

I like the sentiment, but that’s not really true. Biden is a life long politician, which means he knows how things work and who to talk to.

Trump is an angry spiteful asshole that just wants to hear his own voice.

iAvicenna ,
@iAvicenna@lemmy.world avatar

so is Trump better than chatgpt or not? still can’t decide

TachyonTele ,

I don’t have an answer to that either lol

SatouKazuma ,

I’d say no, because the difference to me lies in Trump being actively malicious, and ChatGPT essentially being random, as far as the lay public is concerned.

ChickenLadyLovesLife ,

Wait, chatgpt was convicted of multiple felonies?

iAvicenna ,
@iAvicenna@lemmy.world avatar

If it starts giving advice like google AI it soon will be

Sparky ,
@Sparky@lemmy.blahaj.zone avatar

That’s the end user! /s

Sam_Bass ,

Why go virtual when reality exists?

FiniteBanjo ,

They’re convinced that AI might be cheaper for the same result, partly because power and water are subsidized more than humans are.

Couldbealeotard ,
@Couldbealeotard@lemmy.world avatar

Have you seen the film Dark Star? Bomb number 20 gets stuck in the release bay with the detonation countdown still running, so they have to spacewalk out and convince the AI not to explode.

FiniteBanjo ,

There’s a great Trevor Something Does Not Exist song that samples the conversation, called Outro.

Snowclone ,

They put new AI controls on our traffic lights. It cost the city a fuck ton more money than fixing our dilapidated public pool. Now no one tries to turn left at a light, because the lights don’t activate. We threw out a perfectly good timer no one was complaining about.

But no one from Silicon Valley is lobbying cities to buy pool equipment, I guess.

Hobbes_Dent ,

Nah, that need dat water to cool the AI for the light.

MIDItheKID ,

Linus was ahead of the game on this one. Nvidia should start building data centers next to public pools. Cool the systems and warm the pools.

captain_aggravated ,
@captain_aggravated@sh.itjust.works avatar

I’ve seen a video of at least one spa that does that. They mine bitcoin on rigs immersed in mineral oil, with a heat exchanger to the spa’s water system. I’m struggling to imagine that’s enough heat, especially piped a distance through the building, to run several hot tubs, and I’m kind of dubious about that particular load, but hey.

areyouevenreal , (edited )

A large data centre can use over 100 MW at the high end. Certainly enough to heat a swimming pool or three; in fact, pool heating is normally measured in kW, not MW.
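A back-of-envelope check, using assumed round numbers (100 MW for the data centre, roughly 30 kW for a commercial pool heater), and the fact that virtually all of a data centre's electrical draw ends up as heat:

```python
DATA_CENTRE_MW = 100    # assumed high-end data centre draw
POOL_HEATER_KW = 30     # assumed commercial pool heater rating

# Nearly all of that electrical power becomes waste heat, so in
# principle it could stand in for thousands of pool heaters.
pools_heated = (DATA_CENTRE_MW * 1000) / POOL_HEATER_KW
print(round(pools_heated))  # 3333
```

The practical losses (piping the heat any distance, seasonal demand) cut into that number heavily, but the order of magnitude supports the point: one data centre dwarfs one pool.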

captain_aggravated ,
@captain_aggravated@sh.itjust.works avatar

“A large data centre” this wasn’t. I saw a couple washing machine-sized vats of oil-soaked computers.

areyouevenreal ,

If all it’s running is a hot tub that sounds reasonable. This bitcoin miner uses over 3kW: aozhiminer.com/…/high-profit-110th-95th-bitmain-a…

Sigma_ ,

This is so dumb that I totally believe it

Daxtron2 ,

Using CV for automatic lights is not a new thing.

dan ,
@dan@upvote.au avatar

A lot of people in Silicon Valley don’t like this AI stuff either :)

makingStuffForFun ,

We are a small software company. We’re trying to find a useful use case; currently we can’t. However, we’re watching closely, given the rate at which it’s improving.

lazynooblet ,
@lazynooblet@lazysoci.al avatar

Whilst it’s a shame this implementation sucks, I wish we would get intelligent traffic light controls that worked. Sitting at a light for 90 seconds in the dead of night without a car in sight is frustrating.

lemmyvore ,

That was a solved problem 20 years ago lol. We made working systems for this in our lab at Uni, it was one of our course group projects. It used combinations of sensors and microcontrollers.

It’s not really the kind of problem that requires AI. You can do it with AI and image recognition or live traffic data but that’s more fitting for complex tasks like adjusting the entire grid live based on traffic conditions. It’s massively overkill for dead time switches.

Even for grid optimization you shouldn’t jump into AI head first. It’s much better long term to analyze the underlying causes of grid congestion and come up with holistic solutions that address those problems, which often translate into low-tech or zero-tech solutions. I’ve seen intersections massively improved by a couple of signs, some markings and a handful of plastic poles.

Throwing AI at problems is sort of a “spray and pray” approach that often goes about as badly as you can expect.
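The sensor-plus-microcontroller approach described above is essentially a small state machine. A minimal sketch, with hypothetical timings and a single quiet side road yielding to a main road:

```python
class DemandActuatedLight:
    """Minimal demand-actuated signal: the main road holds green until
    a vehicle is detected waiting on the side road, which then gets a
    short green phase. No AI involved, just a sensor and a timer."""

    MIN_GREEN = 10   # seconds the main road must hold green before yielding
    SIDE_GREEN = 8   # seconds granted to the side road per detection

    def __init__(self, now=0.0):
        self.state = "MAIN_GREEN"
        self.last_change = now

    def tick(self, now, side_sensor_active):
        elapsed = now - self.last_change
        if self.state == "MAIN_GREEN":
            # Yield only if a car is actually waiting and the main road
            # has been served its minimum green time.
            if side_sensor_active and elapsed >= self.MIN_GREEN:
                self.state, self.last_change = "SIDE_GREEN", now
        elif self.state == "SIDE_GREEN":
            if elapsed >= self.SIDE_GREEN:
                self.state, self.last_change = "MAIN_GREEN", now
        return self.state

light = DemandActuatedLight()
print(light.tick(5, True))    # MAIN_GREEN (minimum green not yet served)
print(light.tick(15, True))   # SIDE_GREEN (car waiting, minimum served)
print(light.tick(24, False))  # MAIN_GREEN (side phase expired)
```

Real controllers add pedestrian calls, yellow/all-red clearance intervals, and fail-safes, but the core logic really is this small, which is the point: no model training required for a dead-time switch.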

MagicShel ,

That was a problem solved seventy years ago. If there’s no one around, just go. No one cares.

MonkeMischief ,

Throwing AI at problems is sort of a “spray and pray” approach that often goes about as badly as you can expect.

I can see the headlines now: “New social media trend where people are asking traffic light Ai to solve the traveling salesman problem is causing massive traffic jams and record electricity costs for the city.”

SkyeStarfall ,

You need to really specify what is meant by “AI” here. Chances are it’s probably some form of smart traffic lights to improve traffic flow. Which is not all that special. It has nothing to do with LLMs

Snowclone ,

Honestly I’m not sure. We had circular sensors for a long time, about the size of a tall drinking glass; now there are rectangular sensors, about twice the size of a cell phone, with a bend (an arc) to them. I know they weren’t being used as cameras at all before; no one was getting tickets with pictures from them, and it’s a small town. What exactly the new system is, I’m not sure. Our local news all went out of business, so it’s all word of mouth, or going to town hall meetings.

SatouKazuma ,

I’m guessing it’s some sort of image recognition, and maybe a switch under the pavement telling the light when a car has rolled up.

MonkeMischief ,

It’s funny because this is what I was afraid of with “AI” threatening humanity.

Not that we’d get super-intelligences running Terminators, but that we’d be using black-box “I dunno how it does it, we just trained it and let it go” tech in civilization-critical applications because it sounded cool to people with more dollars than brain cells.

ristoril_zip ,

I read a pretty convincing article title and subheading implying that the best use for so called “AI” would be to replace all corporate CEOs with it.

I didn’t read the article but given how I’ve seen most CEOs behave it would probably be trivial to automate their behavior. Pursue short term profit boosts with no eye to the long term, cut workers and/or pay and/or benefits at every opportunity, attempt to deny unionization to the employees, tell the board and shareholders that everything is great, tell the employees that everything sucks, …

snooggums ,
@snooggums@midwest.social avatar

Then some hackers get in and reprogram the AI CEOs to value long term profit and employee training and productivity. The company grows and is massively profitable until some venture capitalists swoop in and kill the company to feed from the carcass.

Faydaikin ,
@Faydaikin@beehaw.org avatar

If your company is successful, that’s gonna happen anyway.

cerement ,
@cerement@slrpnk.net avatar

when workers go on strike, they call in the police, strikebreakers, National Guard, even bomb whole neighborhoods – but when a CEO takes a week off, no one even notices …

Semi_Hemi_Demigod ,
@Semi_Hemi_Demigod@lemmy.world avatar

Like that time in Ireland when the banks closed to protest a law and life went on just fine without them.

Swedneck ,
@Swedneck@discuss.tchncs.de avatar

Most CEOs could be automated with a random number generator that runs on a combustion engine fueled by burning dollar bills.
