Chrome being worse than Firefox doesn’t make Firefox’s default telemetry, adware, and DoH to Cloudflare good. When the bar is Chrome, essentially any browser passes.
Telemetry you can’t easily disable (it requires modifying about:config and can change on update), Glean (nastier than anything in Chrome), DoH to Cloudflare, Pocket (adware), Anonym.
The charge controller’s idea of what’s going on is totally independent of what’s going on in the CPU. It doesn’t know and doesn’t care about your OS.
Multiple calibration cycles are pointless. Doing it once every few months should be enough. Or doing it never is fine too. I had one laptop (a ThinkPad L480) that would get out of calibration, such that the charge controller would go straight from 45% charge to 1%.
What’s happening is that lithium batteries have a very steady voltage for most of their usage. The voltage mostly changes in the top and bottom ~10% of charge. Everything else in the middle is guesswork: the charge controller has to measure and count every drop of current going in and out of the battery. Measuring is done with a current meter: you put a very low-value resistor in line and measure the microvolts of drop across it. You can have a high-precision current meter, or you can have one that “doesn’t burn a lot of power in the dropper resistor”, but not both. Some systems have too inaccurate a meter. Some have phantom draws that aren’t well accounted for (like the battery’s own internal resistance and self-drain). If the battery spends all of its time in the “voltage never changes” region, the current counter’s guess will diverge from reality.
When you discharge and recharge the battery, you are forcing its current counter to realign itself with reality. Whatever it thinks is left in the battery: nope, that’s really zero once we drop to ~3.2 volts.
It’s not true that precision measurements are impossible with low-value resistors; a lot of measurement equipment works exactly like that. It might just be more expensive than what the manufacturer is willing to budget for.
Thinking about it, the SoC idea could stop at the southern boundary of the chipset in x86 systems.
Include the DDR memory controller, PCI controller, USB controllers, iGPU, etc. Most of those have migrated into x86 CPUs now anyway (I remember having north and south bridge chipsets!)
Leave the rest of the system (NICs, dGPUs, etc.) on the relevant buses.
I’m both surprised and not surprised that ever since the M1, Intel seems to just be doing nothing in the consumer space. Certainly losing their contract with Apple was a blow to their sales, and with AMD doing pretty well these days, ARM slowly taking over the server space where backwards compatibility isn’t as significant, and now Qualcomm coming to eat the windows market, Intel just seems like a dying beast. Unless they do something magical, who will want an Intel processor in 5 years?
I haven’t wanted an Intel processor for years. Their “innovation” is driven by marketing rather than technical prowess.
The latest batch of 13900K, and again 14900K, power-envelope microcode bullshit was the last straw.
They were more interested in something they could brand as a competitor to Ryzen. Then they left everyone who bought one (and I bought three at work) holding the bag.
We’ve not made the same mistake again.
Intel dying and its corpse being consumed by its competitors is a fairy tale ending.
I also haven’t wanted an Intel processor in a while. They used to be best in class for laptops prior to the M1, but they’re basically last now, behind Apple, AMD, and Qualcomm. They might win in a few specific benchmarks that matter very little to people, and are still the default option in most gaming laptops. For desktop use the Ryzen family is much more compelling. For servers they still seem to have an advantage, but that’s also an industry that requires the kind of long-term contracts Intel has the infrastructure for more than its competitors do; ARM is gaining ground there too, with exceptional performance per watt.
Is that a developer licence thing? I know GitHub recently announced Windows Arm runners that would be available to non-teams/enterprise tiers later this year.
It isn’t as simple as just compiling. Large programs like games then need to be tested to make sure the code doesn’t have bugs on ARM. Developers often use assembly to optimize performance, so those portions would need to be rewritten as well. And Apple has been the only large installed base of performant consumer ARM hardware in laptops or desktops, with nothing comparable on Windows. So there hasn’t been a strong install base to encourage many developers to port their stuff to Windows on ARM.
Yeah this has been our (well, my) statement on requests to put out ARM binaries for Pulsar. Typically we only put binaries out for systems we actually have within the team so we can test on real hardware and replicate issues. I would be hesitant to put out Windows ARM builds when, as far as I know, we don’t have such a device. If there was a sudden clamouring for it then we could maybe purchase a device out of the funds pot.
The reason I was asking whether it was to do with developer licences is that we have already dealt with differences between x86 and ARM macOS builds: the former seems to happily run unsigned apps after a few clicks, whereas the latter makes you run commands in the terminal, which is not a great user experience.
That is why I was wondering if the ARM builds for Windows require signing, or else would refuse to install on consumer ARM systems at all. The reason we don’t sign at the moment is simply the exorbitant cost of the certificates, something we would have to re-evaluate if signing became a requirement.
It doesn’t usually work that well in practice. I have been running an M1 MBA for the last couple of years (Asahi Arch and now the Asahi Fedora spin). More complex pieces of software typically have build systems and dependencies that are not compatible, or that just make hunting everything down a hassle.
That said there is a ton of software that is available for arm64 on Linux so it’s really not that bad of an experience. And there are usually alternatives available for software that cannot be found.
I can’t say I’m one who shares that sentiment seeing as the only two projects I’m involved with happen to be Electron based (by chance rather than intention). Hell, one of them is Pulsar which is a continuation of Atom which literally invented Electron.
Until RISC-V is at least as performant as two-year-old top-of-the-line hardware, it isn’t going to be of interest to most end users. Right now it is mostly hobbyist hardware.
I also think a lot of trust is being put into it that is going to be misplaced. Just because the ISA is open doesn’t mean anything about the developed hardware.
RISC-V is currently already being used in MCUs such as the popular ESP32 line. So I’d say it’s looking pretty good for RISC-V. Instruction sets don’t really matter in the end though, it’s just licensing for the producer to deal with. It’s not like you’ll be able to make a CPU or even something on the level of old 8-bit MCUs at home any time soon and RISC-V IC designs are typically proprietary too.
Actually, it was completely wiped apart from a few fragments on the recovery D: drive. And thanks to the disk management system being the worst I have ever seen, it put the C: drive into a 26 GB partition instead of the 210 GB it’s supposed to be in, which is now D:.
Arm is not any better than x86 when it comes to instructions. There’s a reason we stuck to x86 for a very long time. Arm is great because of its power efficiency.
That power efficiency is a direct result of the instructions. Namely, smaller chips due to the reduced instruction set, in contrast to x86’s (legacy-bearing) complex instruction set.
Yes, I understand that and agree, but the reason x86 dominated is those QoL instructions that x86 has. On ARM you need to write more code to do the same thing x86 does; OTOH, if you don’t need to write a complex application, that isn’t a bad thing.
You don’t need to write more code. It’s just that code compiles to more explicit/numerous machine instructions. A difference in architecture is only really relevant if you’re writing assembly or something like it.
Sorry, I should have been more specific. I am talking about assembly code. I will again state that I am pro-arm, and wish I was posting this from an arm laptop running a distro.
It’s really not; x86 (CISC) CPUs could be just as efficient as ARM (RISC) CPUs, since instruction sets (despite popular consensus) don’t really influence performance or efficiency. It’s just that the x86 CPU oligopoly had little interest in producing power-efficient CPUs, while ARM chip manufacturers were mostly making chips for phones and embedded devices, which made them focus on power efficiency instead of relentlessly maximizing performance. I expect the next few generations of Intel and AMD x86-based laptop CPUs to approach the power efficiency Apple and Qualcomm have to offer.
All else being equal, a complex decoding pipeline does reduce the efficiency of a processor. It’s likely not the most important aspect, but eventually there will be a point where it does become an issue once larger efficiency problems are addressed.
Yeah, but you could improve the not-ideal encoding with a relatively simple update; no need to throw out all the tools, great compatibility, and working binaries that Intel and AMD already have.
Well, not exactly. You have to remove instructions at some point. That’s what Intel’s x86-S is supposed to be. You lose some backwards compatibility but they’re chosen to have the least impact on most users.
Arm is better because there are more than three companies who can design and manufacture one.
Edit: And only one of the three x86 manufacturers is worth a damn, and it ain’t Intel.
Edit2: On further checking, VIA sold its CPU design division (Centaur) to Intel in 2021. VIA now makes things like SBCs, some with Intel, some ARM. So there’s only two x86 manufacturers around anymore.
We stuck to x86 forever because backwards compatibility and because nobody had anything better. Now manufacturers do have something better, and it’s fast enough that emulation is good enough for backwards compatibility.
As an aside, I think I’ve seen Windows 11 being extremely intolerant of allowing another OS to set up the dual boot. I think someone responded to an inquiry about it that, sarcastically, Microsoft is jealous other OSes are being used more, so they fuck with anything and need to be the install that establishes the boot loader. Not sure how much of that to take seriously, but let’s face it: they’re dying and not going without a hissy fit.