Okay, but hear me out. What if the OS got way worse, and then I told you that paying me for the AI feature would restore it to a near-baseline level of original performance? What then, eh?
Maybe people doing AI development who want the option of running local models.
But baking AI into all hardware is dumb. Very few want it. SaaS AI is a thing. To the degree SaaS AI doesn’t offer the privacy of local AI, networked local AI on devices you don’t fully control offers even less. So it makes no sense for people who value convenience. It offers no value for people who want privacy. It only offers value to people doing software development who need more playground options, and I can go buy a graphics card myself, thank you very much.
I’m interested in hardware that can better run local models. Right now the best bet is a GPU, but I’d be interested in a laptop with dedicated chips for AI that would work with pytorch. I’m a novice but I know it takes forever on my current laptop.
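For context on why it "takes forever": PyTorch only uses an accelerator it actually has a backend for, and on most laptops everything silently falls back to the CPU. A minimal sketch of that fallback logic, where `pick_device` and its preference order are my own illustration; the real probes in PyTorch are calls like `torch.cuda.is_available()` and `torch.backends.mps.is_available()`:

```python
def pick_device(available: dict) -> str:
    """Return the first usable backend in rough order of speed.

    In real PyTorch these flags would come from probes like
    torch.cuda.is_available() ("cuda") or
    torch.backends.mps.is_available() ("mps" on Apple Silicon).
    Most laptop NPUs aren't exposed to PyTorch at all, which is
    exactly the problem being complained about here.
    """
    for name in ("cuda", "mps", "cpu"):
        if available.get(name, False):
            return name
    return "cpu"

# On a laptop with no discrete GPU this falls through to the CPU,
# which is why training feels so slow.
print(pick_device({"cuda": False, "mps": False, "cpu": True}))
```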
I would if the hardware was powerful enough to do interesting or useful things, and there was software that did interesting or useful things. Like, I’d rather run an AI model locally to remove backgrounds from images or upscale them than send images to Adobe’s servers (this is just an example; I don’t use Adobe products and don’t know if this is what Adobe does). I’d also rather do OCR locally and quickly than send it to a server. Same with translations. There are a lot of use-cases for “AI” models.
A big letdown for me is that, except in some rare cases, those extra AI features are useless outside of AI. Some NPUs are straight-up DSPs that could easily run OpenCL code; others are designed so they can’t handle normal floating-point numbers at all, only formats meant for machine learning; and some are CPU extensions that are just even bigger vector multipliers for select datatypes (AMX).
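The "ML-only number formats" point can be made concrete with bfloat16, a format a lot of these ML accelerators are built around: it is just a float32 with the bottom 16 mantissa bits thrown away, so it keeps float32's range but loses most of its precision. A pure-Python sketch of the conversion for finite values (using round-to-nearest-even, which is what hardware conversion typically does; NaN handling is omitted):

```python
import struct

def to_bfloat16(x: float) -> float:
    """Squash a float32 value to bfloat16 precision (finite values only).

    bfloat16 is the top 16 bits of the IEEE float32 bit pattern:
    same sign and 8-bit exponent, but only 7 explicit mantissa bits.
    """
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    # Round-to-nearest-even: bias depends on the lowest surviving bit.
    bias = 0x7FFF + ((bits >> 16) & 1)
    bits = (bits + bias) & 0xFFFF0000
    return struct.unpack(">f", struct.pack(">I", bits))[0]

# Only ~2-3 significant decimal digits survive:
print(to_bfloat16(3.14159265))  # 3.140625
print(to_bfloat16(1.001))      # 1.0 -- the step size near 1 is 2**-7
```

Fine for neural-net weights, useless for general numeric code, which is why these units can't just be repurposed as extra FPUs.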
Raytracing is something I’d pay for even unasked, assuming it meaningfully impacts quality and doesn’t demand outlandish prices.
And they’d need to put it in unasked and cooperate with devs else it won’t catch on quickly enough.
Remember Nvidia Ansel?
As with any proprietary hardware on a GPU, it all comes down to third-party software support, and classically, if the market isn’t there, it doesn’t get supported.
Assuming there’s no catch-on after 3-4 cycles, I’d say the tech is either not mature enough, too expensive for too little result, or (as you said) there’s generally no interest in it.
Maybe it needs a bit of maturing and a re-introduction at a later point.
I can’t tell how good any of this stuff is because none of the language they’re using to describe performance makes sense in comparison with running AI models on a GPU. How big a model can this stuff run, how does it compare to the graphics cards people use for AI now?
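One back-of-the-envelope number that does translate across hardware is weight memory: roughly parameter count times bits per weight divided by 8, before activations and KV cache. Whether a model runs at all is mostly a question of RAM/VRAM, not the TOPS figures on the spec sheet. A quick sketch (the parameter counts and quantization levels below are just common examples, not claims about any specific product):

```python
def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate storage for the weights alone, in GB.

    1e9 params * (bits/8) bytes each = params_billions * bits / 8 GB.
    Activations and KV cache add more on top of this.
    """
    return params_billions * bits_per_weight / 8

# A 7B-parameter model: ~14 GB at 16-bit, ~3.5 GB at 4-bit.
# So hardware with 8 GB of memory only runs it quantized,
# no matter how many "AI TOPS" the marketing quotes.
for bits in (16, 8, 4):
    print(f"7B at {bits}-bit: {weight_memory_gb(7, bits)} GB")
```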
Even DLSS only works great for some types of games.
Although there have been some clever uses of it, lots of games would gain more from a properly efficient game engine.
War Thunder runs like total crap on even the highest-end hardware, yet World of Warships has much more detailed ships and textures and runs fine off an HDD on pre-GTX 7XX graphics.
Meanwhile on Linux, Compiz still runs crazy window effects and the 3D cube desktop much better and faster than KDE. It’s so good I even recommend it for old devices with any kind of GPU, because the hardware acceleration will make your desktop fast and responsive compared to even the lightest window managers like Openbox.
TF2 went from 32 bit to 64 bit and had immediate gains in performance upwards of 50% and almost entirely removing stuttering issues from the game.
Batman Arkham Knight ran on a heavily modified version of Unreal 3 which was insane for the time.
Most modern games and applications really don’t need the latest and greatest hardware, they just need to be efficiently programmed which is sometimes almost an art itself. Slapping on “AI” to reduce the work is sort of a lazy solution that will have side effects because you’re effectively predicting the output.
When a decent GPU alone is ~$1k, and then someone wants you to pay more for a feature that offers no tangible benefit, why the hell would anyone want that? I haven’t bought a PC for over 25 years; I build my own and for family and friends. I’m building another next week for family, and AI isn’t even on the radar. If anything, this one is going to be anti-AI and get a Linux dual-boot as well as sticking with Win10, no way am I subjecting family to that Win11 adware.