IoT devices are already getting owned at staggering rates. Adding a learning model that currently cannot be secured is absolutely going to happen, and it's going to cause a whole new batch of breaches.
A processor that isn’t Turing complete isn’t a security problem the way the TPM you referenced is; a TPM includes a CPU. If a processor is Turing complete, it’s called a CPU.
Is it Turing complete? I don’t know. I haven’t seen block diagrams showing that the computational units have their own CPU.
CPUs also have coprocessors to speed up floating-point operations. That doesn’t necessarily make it a security problem.
I would pay extra to be able to run open LLMs locally on Linux. I wouldn’t pay for Microsoft’s Copilot stuff that’s shoehorned into every interface imaginable while also causing privacy and security issues. The context matters.
That’s why NPUs are actually a good thing. The ability to run LLMs locally instead of sending everything to Microsoft/OpenAI for data mining will be great.
I could see a use for local text gen, but that apparently takes quite a bit more than what desktop PCs can offer if you want actually good results and speed. Generally though, I'd rather have separate extension cards for this. Making it part of other processors is just going to increase their price, even for those who have no use for it.
Yes, I know - that's my point. But you need the necessary hardware to run those models with decent performance. Waiting a minute to produce some vaguely relevant gibberish is not going to be of much use. You could also use generative text for other applications, such as video game NPCs; all those otherwise useless drones you see in a lot of open-world titles could gain a lot of depth.
Show the actual use case in a convincing way and people will line up around the block. Generating some funny pictures or making generic suggestions about your calendar won’t cut it.
I completely agree. There are some killer AI apps, but why should AI run on my OS? Recall is a complete disaster of a product and I hope it doesn’t see the light of day, but I’ve no doubt that there’s a place for AI on the PC.
Whatever application there is for AI at the OS level, it needs to be a trustless system that the user has complete control of. I’d be all for an open-source AI running at that level, but Microsoft is not going to do that, because they want to ensure that they control your OS data.
Machine learning in the os is a great value add for medium to large companies as it will allow them to track real productivity of office workers and easily replace them. Say goodbye to middle management.
I think it could definitely automate some roles where you aren’t necessarily thinking and all decisions are made from information available on the PC. Those certainly exist, but some decisions need human input; I’m not sure how you automate those roles away just because you can see what happens on the PC every day.
If anything, I think this feature will be used to spy on users at work and see when keystrokes fall below a certain level each day, though I’m sure that’s already possible for companies to do (they just don’t).
I would pay for AI-enhanced hardware…but I haven’t yet seen anything that AI is enhancing, just an emerging product being tacked on to everything they can for an added premium.
It’s hardware specifically designed for running AI tasks. Like neural networks.
An NPU, or Neural Processing Unit, is a dedicated processor or processing unit on a larger SoC designed specifically for accelerating neural network operations and AI tasks. Unlike general-purpose CPUs and GPUs, NPUs are optimized for data-driven parallel computing, making them highly efficient at processing massive multimedia data like video and images and at running data through neural networks.
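To make "optimized for data-driven parallel computing" concrete, here's a toy sketch of the core workload NPUs accelerate: a neural-network layer reduced to a quantized integer matrix multiply. All shapes and scales are made up for illustration; real NPUs run fused int8/int4 kernels in hardware, not NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend float32 weights and activations from a trained layer.
W = rng.standard_normal((4, 8)).astype(np.float32)
x = rng.standard_normal(8).astype(np.float32)

# Symmetric int8 quantization: scale so the max magnitude maps to 127.
w_scale = np.abs(W).max() / 127.0
x_scale = np.abs(x).max() / 127.0
W_q = np.round(W / w_scale).astype(np.int8)
x_q = np.round(x / x_scale).astype(np.int8)

# Integer multiply-accumulate (int32 accumulators, as NPU MAC arrays
# use), then dequantize the result back to float.
acc = W_q.astype(np.int32) @ x_q.astype(np.int32)
y_approx = acc * (w_scale * x_scale)

# Close to the float32 reference, at a fraction of the memory traffic.
y_ref = W @ x
print(np.max(np.abs(y_approx - y_ref)))  # small quantization error
```

The point is that the whole layer collapses to huge regular arrays of multiply-accumulates with no branching, which is exactly what a grid of dedicated MAC units does far more efficiently than a general-purpose core.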
It does seem like a good translator for the less human-readable stuff like regex and such. I’ve dabbled with it a bit, but I’m a technical artist and haven’t found much use for it in the things I do.
In the 2010s, it was cramming a phone app and WiFi into things to try to justify the higher price, while also spying on users in new ways. The device may even have a screen for basically no reason.
In the 2020s, it’s those same useless features, now with a bit of software with a flashy name that removes even more control from the user and lets the manufacturer spy on the user even further.
My Samsung A71 has had devil AI since day one. You know that feature where you can mostly use fingerprint unlock, but once a day or so it asks for the actual passcode for added security? My A71’s AI has a 100% success rate at picking the most inconvenient time to ask for the passcode instead of letting me do my thing.
Mine still takes several seconds to boot Android TV just so it can display the HDMI input, even when it’s not connected to the internet. It has to stay plugged into power, because after a power cut it needs to boot Android TV all over again.
My old dumb TV did that in a second without booting an entire OS. Next time I need a big screen, it will be a computer monitor.
I don’t have a TV, but doesn’t a smart TV require internet access? Why not just… not give it internet access? Or do they come with their own mobile data plans now meaning you can’t even turn off the internet access?
They continually try to get on the Internet; it’s basically malware at this point. The on-board SoC is also usually comically underpowered, so the menus stutter.
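If your router lets you write firewall rules, you can make "no internet for the TV" stick even when the TV keeps trying. A hypothetical sketch for a Linux-based router using nftables (the MAC address is a placeholder; many consumer routers expose the same thing in their UI as a device block list or parental controls):

```
# /etc/nftables.conf fragment (hypothetical): drop all forwarded
# traffic coming from the TV's MAC address, so it stays LAN-only.
table inet filter {
  chain forward {
    type filter hook forward priority 0; policy accept;
    ether saddr aa:bb:cc:dd:ee:ff drop
  }
}
```

The TV still sees the local network (so casting and local media servers keep working), but nothing it sends gets routed out to the internet.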
Even switching to other inputs right after boot (because power-on can’t be called a simple power-on anymore), the TV is slow.
I recently had the pleasure of interacting with a TV from ~2017 or 2018. God was it slow. Especially loading native apps (Samsung 50"-ish TV)
I like my chromecast. At least that was properly specced. Now if only HDMI and CEC would work like I’d like to :|
Signage TVs are good for this. They’re designed to run 24/7 in store windows displaying advertisements or animated menus, so they’re a bit pricey, and don’t expect any fancy features like HDR, but they’ve got no smarts whatsoever. What they do have is a slot you can shove your own smart gadget into, with a connector that breaks out power, HDMI, etc., which someone has made a Raspberry Pi Compute Module carrier board for. So if you’re into, say, Jellyfin, you can make it smart completely under your own control with e.g. LibreELEC. Here’s a video from Jeff Geerling going into more detail: youtu.be/-epPf7D8oMk
Alternatively, if you want HDR and high refresh rates, you’re okay with a smallish TV, and you’re really willing to splash out, ASUS ROG makes 48" 4K 10-bit gaming monitors for around $1700 US. HDMI is HDMI, you can plug whatever you want into there.
Right now it’s easier to find projectors without AI features and a smart OS. Before long, though, it’s going to be harder to find ones without a smart OS and AI upscaling.