Superficial feedback, but I can’t read more than 3 lines without syntax highlighting. The lines here are kept short, which works for the text but makes code even harder to read because of the extra line breaks. Maybe Codeberg allows HTML embedding.
Now for a comment on the content itself, how is that different from aliases in ~/.bashrc? I personally have a bunch of commands that are basically wrapped or shortcuts around existing ones with my default parameters.
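To make the comparison concrete, here’s a minimal sketch of that ~/.bashrc approach (the commands and defaults are just illustrative, not from the article):

```shell
# Shortcuts around existing commands with my preferred defaults
alias ll='ls -lAh'          # long listing, hidden files, human-readable sizes
alias gs='git status -sb'   # terse git status

# A shell function works better when an argument needs to land
# in the middle of the command rather than at the end
mkcd() { mkdir -p "$1" && cd "$1"; }
```

Functions are worth mentioning alongside plain aliases, since aliases can only prepend fixed text to your arguments.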
Finally, if the result is visual, like dmenu which I only use a bit in the PinePhone, then please start by sharing a screenshot of the result.
Anyway, thanks for sharing, always exciting to learn from others how they make THEIR systems theirs!
I am sorry, I don’t know how to do syntax highlighting in HTML. If it helps, you can check it on Codeberg (link in the table of contents and also mid-text); there you can choose your preferred highlighting.
Yes, it is similar to aliases; I covered that bit in the section on executing stuff. My problem back when I used aliases was that I sometimes couldn’t remember the ones I had set (more than 50 at one point), and that’s exactly why programs like navi and cheats exist. I used to use navi, but it needed its own binding to call it (ctrl+g by default), whereas this way I have only one binding, and that helps develop great muscle memory. Also, aliases can only mimic the behaviour of the Type or Exec sections; for the others, you would need something else.
and yes, the result is indeed graphical, I will add screenshots
Pretty standard compared to OSs like Android and iOS. I think the mobile OSs, at least recently, have done better at this; they don’t ask for permission until they need it. Want to import bookmarks? I need file system access for that. Want to open your webcam? I need device access. Doing it all upfront leads to all the problems mentioned in this thread: unclear as to why, easy to forget what access you’ve given, no ability to deny a subset of options, etc.
Since they started targeting the PC segment with these chips to take on Apple’s insanely priced M-class chips, and Amazon’s and Google’s custom ARM datacenter chips.
They partnered with Canonical to do the first run of development for kernel support in the past year, and now it sounds like they’re moving to get the graphics driver developed and upstreamed.
Graphics drivers for the sc8280xp are already a thing. There are bigger issues with conveniently daily-driving Linux at the moment. Off the top of my head:
firmware update path
dtb update/loading path
no virtualization
no universal dock compatibility
missing HDMI/DP features
I suspect that these issues are common between their ARM chips and will be addressed for both almost simultaneously. But I have no real insight into the kernel development, and their documentation is only shared with Linaro, so one can only guess.
Until recently, that “support” had been barely maintained forks of the Linux kernel that were rarely updated, and it was so locked down that custom ROM support was a pipe dream on Snapdragon processors. Which, to be fair, is par for the course on most ARM chipsets (it’s the reason a lot of custom ROMs for Android have extremely old and outdated kernels).
I’m glad to see more ARM companies moving towards working with upstream projects, and not just making working on their stuff a PITA to protect “Trade Secrets” or some bullshit like that.
You are very wrong here. They open-source a lot of things and they even used to have their own open-source modified version of Android for their phone chips.
Oh it’s ok. Broadcom is a very bad company in terms of open-source and Linux support. Their most known products are WiFi modules for laptops. Qualcomm on the other hand is probably one of the most open-source friendly commercial companies and it’s known for very popular mobile processors such as the Snapdragon series.
I wouldn’t call Qualcomm great for FOSS. It’s just better than absolutely terrible. Also, Broadcom is a terrible company all around: they buy others and then wring them dry.
If the X Elite mainline kernel support pans out, Qualcomm may become top tier in terms of support. It would certainly make them the most important Linux ARM chip. We will see.
You mean like what they’re doing to VMware and canning perpetual licenses the second they took over? I guess in some ways they are actually great for FOSS, because I’ve never seen more interest by Enterprise in Proxmox before they made that decision.
As the article says, there is no graphics driver yet, so nobody is experimenting with these chips in the gaming world yet in that sense 😉
Maybe somebody is prototyping a Windows platform in the meantime, and I haven’t seen the benchmarks, but I would be surprised if these chips could outperform AMD’s similar APU packages.
Not sure why you’d want an ARM-based handheld to play PC games at this point in time. Pretty much all PC games are available in x86 only, and any efficiency gains these fancy new ARM chips supposedly have will be lost when translating x86 to ARM.
If both AMD/Intel and Qualcomm do a good job with their core design and the same process node is used, I don’t see how a translation layer can be any faster than a CPU natively supporting the architecture. Any efficiency advantages ARM supposedly has over x86 architecturally will vanish in such a scenario.
I actually think the efficiency of these new Snapdragon chips is a bit overhyped, especially under sustained load scenarios (like gaming). Efficiency cores won’t do much for gaming, and their iGPU doesn’t seem like anything special.
We need a lot more testing with proper test setups. Currently, reviewers mostly test these chips and compare them against other chips in completely different devices with a different thermal solution and at different levels of power draw (TDP won’t help you much as it basically never matches actual power draw). Keep in mind the Snapdragon X Elite can be configured for up to “80W TDP”.
Burst performance from a Cinebench run doesn’t tell the real story and comparing runtimes for watching YouTube videos on supposedly similar laptops doesn’t even come close to representing battery life in a gaming scenario.
Give it a few years/generations and then maybe, but currently I’m pretty sure the 7840U comfortably stomps the X Elite in gaming scenarios with both being configured to a similar level of actual power draw. And the 7840U/8840U is AMD’s outgoing generation, their new (horribly named) chips should improve performance/watt by quite a bit.
Not what I am saying. I said that it is not a given that translation means less performance.
In theory you can achieve similar or even higher performance, all depending on how well or how bad the original machine code is. Especially when you can optimize it for a specific architecture or even a specific CPU.
And yes, ARM has been shown to be more power efficient than x86 CPUs even under higher load (not just low-powered embedded stuff).
Wine/Proton on Linux occasionally beats Windows on the same hardware in gaming, because there are inefficiencies in the original environment that aren’t replicated unnecessarily.
It’s not quite the same with CPU instruction translation, but the main efficiency gain from ARM is being designed to idle everything it can idle while this hasn’t been a design goal of x86 for ages. A substantial factor to efficiency is figuring out what you don’t have to do, and ARM is better suited for that.
As you said yourself, it’s not the same thing. Proton can occasionally beat Windows because Vulkan might be more efficient doing certain things compared to DirectX (same with other APIs getting translated to other API calls). This is all way more abstract compared to CPU instruction sets.
If Qualcomm actually managed to somehow accurately (!) run x86 code faster on their ARM hardware compared to native x86 CPUs on the same process node and around the same release date, it would mean they are insanely far ahead (or, depending on how you look at it, Intel/AMD insanely far behind).
And as I said, any efficiency gains in idle won’t matter for gaming scenarios, as neither the CPU nor the GPU idle at any point during gameplay.
With all that being said: I think Qualcomm did a great job and ARM on laptops (outside of Apple) might finally be here to stay. But they won’t replace x86 laptops anytime soon, and it’ll take even longer to make a dent in the PC gaming market because DIY suddenly becomes very relevant. So I don’t think (“PC”) gaming handhelds should move to ARM anytime soon.
It’s not that uncommon in specialty hardware, with CPU instruction extensions for a different architecture made available specifically for translation. Some stuff can be translated quite efficiently on a normal CPU of a different architecture; some stuff needs hardware acceleration. I think Microsoft has done this on some Surface devices.
Rephrasing you: “Pretty much all PC games are available for Windows only, and any efficiency gains this fancy free Linux OS supposedly has will be lost when translating from Windows to Linux.”
Obviously. Games can be compiled for Linux natively, and the same can be done for Linux on ARM. Open-source games like Xonotic already do this, and proprietary games like War Thunder are compiled for ELBRUS, so I’m sure they can be compiled for ARM too. If Valve wanted, they could release their games compiled for ARM tomorrow.
Porting games to a different architecture is normally quite a bit more involved than just recompiling them, especially when architecture-agnostic code wasn’t a design goal of the original game code. No, Valve couldn’t release all their games natively running on ARM tomorrow, the process would take more time.
But even if Valve were to recompile all their games for ARM, many other studios wouldn’t just because a few gaming handhelds would benefit from it. The market share of these devices wouldn’t be big enough to justify the cost. Very few of the games that run on Steam Deck are actually native Linux versions, studios just rarely bother porting their games over.
I’m not saying ARM chips can’t be faster or otherwise better (more efficient) at running games, but it just doesn’t make sense to release an ARM-based handheld intended for “PC” gaming in the current landscape of games.
Apple can comparatively easily force an architecture transition because they control the software and the hardware. If Apple decided tomorrow to only sell RISC-V based Macs and abandon ARM, developers for the platform would have to release RISC-V builds of their software, because at some point nobody could run it natively anymore as current Macs got replaced by RISC-V Macs over time. Valve does not control the full hardware and software stack of the PC market, so they’d have a very hard time trying to force such a move. If Valve released an ARM-based gaming handheld, other manufacturers would still continue offering x86-based handhelds with newer and newer CPUs (new x86 hardware will keep being developed for the foreseeable future), and instead of Valve forcing developers to port their games to native ARM, they’d probably lose market share to those other handhelds, as people naturally buy the device that runs current games best right now.
In a “perfect world” where all games would natively support ARM right now an ARM-based handheld for PC gaming could obviously work. That simply isn’t the world we live in right now though. Sure we could ramble on about “if this and that”, it’s just not the reality.
If you use a Debian-based Linux (Ubuntu, Mint, others), Mozilla recommends getting the package directly from their repository rather than from Flatpak or other repos.
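For reference, Mozilla’s documented setup for their APT repository looks roughly like this (a sketch based on their published instructions; check their site for the current signing key and paths, as these details can change):

```shell
# Import Mozilla's APT repo signing key (the keyring dir may already exist)
sudo install -d -m 0755 /etc/apt/keyrings
wget -q https://packages.mozilla.org/apt/repo-signing-key.gpg -O- \
  | sudo tee /etc/apt/keyrings/packages.mozilla.org.asc > /dev/null

# Add the repository and prioritize it over distro-provided packages
echo "deb [signed-by=/etc/apt/keyrings/packages.mozilla.org.asc] https://packages.mozilla.org/apt mozilla main" \
  | sudo tee /etc/apt/sources.list.d/mozilla.list > /dev/null
printf 'Package: *\nPin: origin packages.mozilla.org\nPin-Priority: 1000\n' \
  | sudo tee /etc/apt/preferences.d/mozilla > /dev/null

sudo apt-get update && sudo apt-get install firefox
```

The pin matters on Ubuntu, where the distro’s own `firefox` package is a transition stub pointing at the snap.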
Personally, I saw a major performance increase on my low-powered laptop when I switched from flatpak to the Mozilla package.
I would buy one if it weren’t for the fact that it will take many years for the software I use to get ARM support on Linux distros. I just don’t feel like having to fix so many packages.
In defense of this warning: when I first put my application on Flathub, it had the warning because of how its file I/O worked (it didn’t support XDG portals, so it needed home folder access to save properly). That actually motivated me to get things working with portals, drop the extra permissions, and get the green “safe” marker.
A lot of apps will always be “unsafe” because they do things that require hardware access, though, so I could see them wanting something more nuanced.
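On the user side, the permissions an app ships with aren’t final, either; something like this lets you inspect and tighten them per app (org.example.App is a placeholder ID):

```shell
# Show the static permissions an app was granted at install time
flatpak info --show-permissions org.example.App

# Revoke home-directory access for just this app (per-user override)
flatpak override --user --nofilesystem=home org.example.App

# Undo all per-user overrides for the app if something breaks
flatpak override --user --reset org.example.App
```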
Maybe you are just dealing with the new Plasma 6.1 feature for multi-monitor setups? It’s pretty useful, but I find it annoying too, and thankfully this is KDE, so there’s always the possibility to make it your way.
I’m referring to the issue outlined here. Thanks for the link to that problem; I haven’t encountered it, but then I haven’t played with Wayland/KDE6 all that much yet.
What render settings are you using? Kdenlive doesn’t use Movit for rendering but rather Melt. You can try AMD VAAPI under Hardware Accelerated in the Render menu. Some other tweaks you can do include enabling Parallel Processing (for your CPU, it can go up to 8), changing Custom Quality, and changing Encoder Speed (though the last two options do affect quality so experiment with what works for you)
Battery calibration is supposed to help the battery’s firmware figure out how low the battery can go. It also tends to hurt your battery, so you should avoid performing these calibrations and keep the charge between 20% and 80% as much as you can.
It seems what you’re trying to do is improve the OS’s battery estimation on a new machine. In that case, I’d just try to live with the possible insecurity of not knowing whether the machine has 15 or 25 minutes left.
Afaik in almost all cases the battery monitors work independently of the OS (on the BIOS level I guess) so you should only do the calibration if you notice issues with the monitoring and not more often than like once a year. Also if the battery is very worn out, there’s no way to get accurate measures.
There’s two things you might be talking about here:
The old way of making sure nickel cadmium batteries didn’t degrade, which was to discharge them all the way and charge them back up all the way. Your new laptop is almost certainly using lithium ion batteries which are chemically “damaged” more through that process than just leaving them plugged up all the time.
You could be talking about the old way of dealing with charge controllers, where the controller relied on the BIOS or OS to tell it what to do and didn’t “know” how to respond to batteries at different stages of charge. This hasn’t been the situation for like fifteen years. Nowadays charge controllers go “yup, ready to go boss, 12345 mAh of charge, 90%” when some BIOS or OS polls them.
You don’t even need to manually keep your battery in the 20-80 range nowadays since almost every charge controller automatically monitors temperature and adjusts charging parameters to not damage the battery. It’s not like the old days where the charge controller was just an IC controlling a FET acting as a sluice gate between the battery and the power brick.
Heck, lithium ion batteries nowadays last longest the longer they’re plugged in. Running them to <10% every charge cycle actually diminishes battery life!
You don’t even need to manually keep your battery in the 20-80 range nowadays since almost every charge controller automatically monitors temperature and adjusts charging parameters to not damage the battery.
Sort of. The charge controller will limit charging current if too far outside normal temperature ranges. But it will still charge all the way to 100% unless you manually limit that with the settings on your device.
Heck, lithium ion batteries nowadays last longest the longer they’re plugged in.
That’s actually incorrect, charging a Li-ion battery to 100% is significantly worse for it than charging to 80%, and keeping it at 100% plugged in is even worse. Which is why most devices will have the option to stop charging at 80% or near there instead of going all the way to 100%.
Charging while warm is also much worse than charging below 50 degrees F or so.
While you’re right that going all the way up to the 4.2 V the battery is rated for is worse than if it just stayed at 4 V, by not discharging it halfway or more you’re reducing the number of charge cycles, which directly correlates with longer life.
Ultimately, absent a charge controller or OS that does that for you, the easiest way for a user to extend battery life without going psycho mode is to charge they phone, eat hot chip and lie.
I know all the Macs and iPhones have that predictive charging thing where, if you always leave your phone or computer plugged in overnight, they’ll keep it around 80% or so till about an hour before you wake up and then charge the rest of the way.
Windows computers have something called smart charging but I don’t have any experience with it.
There’s a bunch of different ways to control charging in Linux.
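One of the simpler routes is the kernel’s sysfs interface, where the platform driver supports it. A sketch (BAT0 and the threshold file are assumptions; whether they exist depends on your vendor’s ACPI driver, e.g. thinkpad_acpi):

```shell
# Cap charging at 80% where the platform driver exposes the knob
THRESHOLD=/sys/class/power_supply/BAT0/charge_control_end_threshold
if [ -f "$THRESHOLD" ]; then
    echo 80 | sudo tee "$THRESHOLD"
else
    echo "No charge threshold control exposed on this machine"
fi
```

Tools like TLP wrap this same interface and handle per-vendor quirks for you.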
It really seems like this is a solved problem and I’m glad to not be worrying about plugging and unplugging my phone to maximize my battery life.