qaz ,

I would pay extra to be able to run open LLMs locally on Linux. I wouldn’t pay for Microsoft’s Copilot stuff that’s shoehorned into every interface imaginable while also causing privacy and security issues. The context matters.

Blue_Morpho ,

That’s why NPUs are actually a good thing. The ability to run LLMs locally instead of sending everything to Microsoft/OpenAI for data mining will be great.

smokescreen ,

Pay more for a shitty ChatGPT clone in your operating system that can get exploited to hack your device. I see no flaw in this at all.

ArchRecord , (edited )

And when traditional AI programs can be run on much lower end hardware with the same speed and quality, those chips will have no use. (Spoiler alert, it’s happening right now.)

Corporations, for some reason, can’t fathom why people wouldn’t want to pay hundreds of dollars more just for a chip that can run AI models they won’t need most of the time.

If I want to use an AI model, I will, but if you keep developing shitty features that nobody wants using it, just because “AI = new & innovative,” then I have no incentive to use it. LLMs are useful to me sometimes, but an LLM that tries to summarize the activity on my computer isn’t that useful to me, so I’m not going to pay extra for a chip that I won’t even use for that purpose.

Natanael ,

You borked your link

Natanael ,

That still needs an FPGA. While they certainly seem to be able to use smaller ones, adding an FPGA chip will still add cost.

ArchRecord ,

Whoops, no clue how that happened, fixed!

capital ,

My old ass GTX 1060 runs some of the open source language models. I imagine the more recent cards would handle them easily.

What’s the “AI” hardware supposed to do that any gamer with recent hardware can’t?
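For scale, the main constraint on a card like the 1060 is fitting the model’s weights into its 6 GB of VRAM. A back-of-the-envelope sketch (the 7B-parameter and quantization figures here are illustrative assumptions, not from the comment above):

```python
def model_weight_gb(params_billion: float, bits_per_weight: int) -> float:
    """Rough weight-only memory footprint of a quantized LLM.

    Ignores the KV cache and activations, which add real overhead in practice.
    """
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 7B-parameter model at 4-bit quantization fits a 6 GB GTX 1060...
print(model_weight_gb(7, 4))   # → 3.5 (GB of weights)

# ...while the same model at fp16 would not.
print(model_weight_gb(7, 16))  # → 14.0 (GB of weights)
```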

Appoxo ,

Run it faster.
A CPU can also compute graphics, but you wait significantly longer than with hardware-accelerated graphics hardware.

FMT99 ,

Show the actual use case in a convincing way and people will line up around the block. Generating some funny pictures or making generic suggestions about your calendar won’t cut it.

overload ,

I completely agree. There are some killer AI apps, but why should AI run on my OS? Recall is a complete disaster of a product and I hope it doesn’t see the light of day, but I’ve no doubt that there’s a place for AI on the PC.

Whatever application there is in AI at the OS level, it needs to be a trustless system that the user has complete control of. I’d be all for an Open source AI running at that level, but Microsoft is not going to do that because they want to ensure that they control your OS data.

PriorityMotif ,

Machine learning in the os is a great value add for medium to large companies as it will allow them to track real productivity of office workers and easily replace them. Say goodbye to middle management.

overload , (edited )

I think it could definitely automate some roles where you aren’t necessarily thinking and all decisions are made based on information available on the PC. For sure these exist, but some decisions need human input, and I’m not sure how you automate those roles away just because software sees what’s happening on the PC every day.

If anything, I think this feature will be used to spy on users at work and see when keystrokes fall below a certain level each day, but I’m sure that’s already possible for companies to do (they just don’t).

chicken ,

I can’t tell how good any of this stuff is because none of the language they’re using to describe performance makes sense in comparison with running AI models on a GPU. How big a model can this stuff run, how does it compare to the graphics cards people use for AI now?
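For rough scale, the figures below are vendor marketing numbers that aren’t directly comparable across precisions or methodologies, so treat them as order-of-magnitude assumptions:

```python
# Vendor-claimed throughput in trillions of ops/sec (TOPS).
# Marketing figures; precision and sparsity assumptions differ between
# vendors, so these are order-of-magnitude comparisons only.
COPILOT_PLUS_NPU_TOPS = 40    # Microsoft's stated minimum NPU spec for Copilot+ PCs
RTX_4090_AI_TOPS = 1300       # NVIDIA's advertised "AI TOPS" (approximate)

print(RTX_4090_AI_TOPS / COPILOT_PLUS_NPU_TOPS)  # → 32.5
```

The other limit is memory: a discrete GPU has dedicated VRAM, while NPUs share system RAM, which caps usable model size as much as raw TOPS do.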

UltraMagnus0001 ,

Fuck, they won’t even upgrade to TPM for Windows 11.

lost_faith ,

Still haven’t turned mine on, don’t want no surprises after a long day at work.

ZILtoid1991 ,

A big letdown for me is that, with some rare exceptions, those extra AI features are useless outside of AI. Some NPUs are straight-up DSPs that could easily run OpenCL code; others are either designed to handle only machine-learning number formats rather than normal floating point, or are CPU extensions that are just even bigger vector multipliers for select datatypes (like AMX).
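To illustrate the “ML-only number formats” point: many accelerators work in bfloat16, which keeps float32’s 8-bit exponent but truncates the mantissa to 7 bits. A minimal sketch of that truncation in pure Python (bfloat16 is my example format here, chosen as a common case):

```python
import struct

def to_bf16(x: float) -> float:
    # Pack as float32, keep only the top 16 bits (sign + 8-bit exponent
    # + 7-bit mantissa), zero the rest -- this mimics bfloat16 truncation.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

print(to_bf16(1.0))         # → 1.0 (exactly representable)
print(to_bf16(3.14159265))  # → 3.140625 (only ~2-3 decimal digits survive)
```

Fine for neural-net weights, where the dynamic range matters more than precision, but a poor fit for general-purpose numeric code.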

JokeDeity ,

The other 26% were bots answering.

some_guy ,

16%

mlg ,

Even DLSS only works great for some types of games.

Although there have been some clever uses of it, lots of games could gain a lot from proper efficiency of the game engine.

War Thunder runs like total crap on even the highest-end hardware, yet World of Warships has much more detailed ships and textures, running fine off an HDD and graphics older than the GTX 7xx series.

Meanwhile on Linux, Compiz still runs crazy window effects and the 3D cube desktop much better and faster than KDE. It’s so good I even recommend it for old devices with any kind of GPU, because the hardware acceleration will make your desktop fast and responsive compared to even the lightest window managers like Openbox.

TF2 went from 32 bit to 64 bit and had immediate gains in performance upwards of 50% and almost entirely removing stuttering issues from the game.

Batman Arkham Knight ran on a heavily modified version of Unreal 3 which was insane for the time.

Most modern games and applications really don’t need the latest and greatest hardware, they just need to be efficiently programmed which is sometimes almost an art itself. Slapping on “AI” to reduce the work is sort of a lazy solution that will have side effects because you’re effectively predicting the output.

Buelldozer ,

I’m fine with NPUs / TPUs (AI-enhancing hardware) being included with systems because it’s useful for more than just OS shenanigans and commercial generative AI. Do I want Microsoft CoPilot Recall running on that hardware? No.

However I’ve bought TPUs for things like Frigate servers and various ML projects. For gamers there’s some really cool use cases out there for using local LLMs to generate NPC responses in RPGs. For “Smart Home” enthusiasts things like Home Assistant will be rolling out support for local LLMs later this year to make voice commands more context aware.
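For the Frigate case above, pointing detection at a Coral TPU is a small config change. A sketch based on Frigate’s documented `edgetpu` detector (the device value depends on your hardware):

```yaml
# Sketch of a Frigate detector section using a Coral TPU.
detectors:
  coral:
    type: edgetpu
    device: usb   # or pci / an explicit device id for M.2 Corals
```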

So do I want that hardware in there so I can use it MYSELF for other things? Yes, yes I do. You probably will eventually too.

Codilingus ,

I wish someone would make software that utilizes things like an M.2 Coral TPU to enhance gameplay, like frame gen or upscaling for games and videos. Some GPUs are even starting to put M.2 slots on the GPU itself, in case the latency from a motherboard M.2 slot to the GPU over PCIe is too high.

RememberTheApollo_ ,

When a decent GPU is ~$1k alone and then someone wants you to pay more for a feature that offers no tangible benefit, why the hell would anyone want it? I haven’t bought a prebuilt PC for over 25 years; I build my own, and for family and friends. I’m building another next week for family, and AI isn’t even on the radar. If anything, this one is going to be anti-AI and get a Linux dual-boot as well as sticking with Win10; no way am I subjecting family to that Win11 adware.

meathorse ,

Bro, just add it to the pile of rubbish over there next to the 3D movies and curved TVs

KomfortablesKissen ,

The other 16% either don’t know what AI is or are trying to sell it. A combination of both is possible. And likely.

Xenny ,

As with any proprietary hardware on a GPU, it all comes down to third-party software support, and classically, if the market isn’t there, it’s not supported.

Appoxo ,

Assuming there’s no catch-on after 3-4 cycles, I’d say the tech is either not mature enough, too expensive with too little results, or (as you said) there’s generally no interest in it.

Maybe it needs a bit of maturing and a re-introduction at a later point.
