It would be so cool if someone created the Debian of the RPM/Enterprise Linux world, and all the other distros in that “family” used it as a rock-solid upstream base.
Yeah, and I love it. However, after knowing the deb and the rpm worlds for the 20 years I’ve been using Linux, I believe it is too late for these two sides to unite and work together.
Even setting aside the different file extensions, there are multiple incompatible repos using the same package format. Take RHEL vs SuSE vs Fedora, or Ubuntu vs Debian.
They don't hype it as much as (I think) they should on that webpage, but VanillaOS does this thing with its package manager, Apx, where it allows you to install applications from various distros via containers and run them all side by side seamlessly. It's neat.
What about the packages that are not available in flatpak? I assume there must be some packages that are only available in certain corners of the internet?
Yeah, that's what I mean. You can use flatpak (or snap if you swing that way), but you can also install applications via containers. They're still not installed on the OS; even "native" applications get installed via a container. So if the application you want is maintained for Arch in the AUR, you can add the --aur flag to the apx command and it will install that version instead of the default, which is Ubuntu. This also works for Fedora applications.
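From what I remember, the commands look roughly like this (the package name is just an example, and the exact flag placement may differ between apx versions, so check apx --help):

$ apx install htop          # installs into the default Ubuntu container
$ apx install --aur htop    # pulls the AUR build via an Arch container instead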
Doesn't that result in a lot of wasted space from duplicated dependencies? Don't get me wrong, this looks great on paper, which is why I desperately need to find fault with it before I start distrohopping again.
I'm sure it does to some degree, though I don't know if it's enough to matter on modern computers, and isn't that what flatpak does, too? (duplicating dependencies)
In any event, if you don't need an application from a specific distro there's no reason to create that container. The non-ubuntu ones get created when they're needed. (And I think the next version of VanillaOS will be debian-based, not ubuntu; in case that matters.)
Flatpaks aren't the only option in Silverblue: you can also layer packages using 'rpm-ostree' (requires a reboot though), and you can also use toolbx (or even better, distrobox) to create an easy-to-use container that you can do anything with. With distrobox you can install an app inside of a fedora/ubuntu/arch/other container, and then use a simple terminal command to expose that app to your host system as if it was installed natively.
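For anyone curious, the workflow looks roughly like this (the container name and packages are just examples):

$ rpm-ostree install htop    # layer onto the base image; takes effect after a reboot
$ distrobox create --name dev --image ubuntu:22.04
$ distrobox enter dev        # then, inside the container:
$ sudo apt install -y gimp
$ distrobox-export --app gimp    # makes it show up on the host like a native app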
I'm on Silverblue and I have mostly flatpaks plus a handful of layered packages as my base system. Then I have a couple of distrobox ubuntu containers, one for software development (lots of libraries and build tools) and one for music production (with Yabridge and Wine). Because the base system is immutable I've never had a problem that prevented my computer from booting, and if I ever do, it's extremely easy to roll back to before the last update. I've had a couple of issues working with containers in the past, but not big ones, and much of that comes down to my own user error.
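If an update ever does go sideways, recovery is basically one command (or just picking the previous deployment from the boot menu):

$ rpm-ostree rollback    # make the previous deployment the default again, then reboot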
I definitely recommend Silverblue for anyone who wants a rock solid, practically unbreakable Linux system.
I wonder how well it integrates with hardware. Arch with the pacman packages has been the only distro where I could get ROCm working reliably. I'd love to make a "ROCm container" and dump all that mess into its own sandbox.
In fact, the thing I really want is more of a "Qubes but not for security tryhards" (aka I can actually use Wayland AND game on it) where everything gets its own container, mainly for organizational purposes, but something like this sounds like a fair compromise.
Again - I have no idea how good its hardware support is. I assume 3d accel and whatnot would be fine because it's widely used, but dunno if anyone has tried running ROCm on it.
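Untested, but in theory the "ROCm container" could just be a distrobox built from an Arch image, since that's where it worked for you (package names are from Arch's repos; distrobox shares /dev with the host by default, so the GPU should at least be visible, though I haven't verified ROCm specifically):

$ distrobox create --name rocm --image archlinux:latest
$ distrobox enter rocm
$ sudo pacman -S rocm-hip-sdk rocm-opencl-runtime    # Arch's ROCm stack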
That is actually awesome. It sounds like the Fedora alien (?) but probably more reliable. Cool. Adding VanillaOS to the list of potential new OSes that make computing easy and fun.
I have an arguably bad piece of advice, but one I hadn’t seen in skimming the replies.
You could always install Windows in a VM. Libvirt and virt-manager offer a pleasant GUI experience, so it’s easy to do. If you give the VM a heavy resource allotment (while leaving a reasonable amount for the host) it should still perform well. The VM video driver is the only place you take a not-insignificant performance hit, but for A/V manipulation I don’t think it’ll matter, unless you use GPU-based video encoding, in which case it’ll be CPU-bound and therefore slower. You can potentially do PCI passthrough to your GPU, but that adds complexity.
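virt-manager can do all of this from the GUI, but for reference, creating such a VM from the CLI looks roughly like this (sizes, paths, and the os-variant value are just examples; memory is in MiB and disk size in GiB, and the accepted os-variant names depend on your osinfo database):

$ virt-install --name win11 --memory 16384 --vcpus 8 \
      --disk size=120 --cdrom ~/isos/Win11.iso --os-variant win11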
A big downside here is that as far as Windows is concerned, this is different “hardware” so it won’t activate based on your physical device. As I recall, it only allows the use of one core while unactivated which is pretty much unusable. So a pretty hefty expense relative to a personal VM, I think. But it is an option.
“A big downside here is that as far as Windows is concerned, this is different “hardware” so it won’t activate based on your physical device.”
You can transfer a Windows license from another installation, so in OP’s situation, from the original installation. During Windows setup, select the ‘I don’t have a license key’ option, then once Windows is installed, go into settings, click the Windows isn’t activated option, and go through the activation troubleshooter.
I can’t remember exactly where, but somewhere in there is the option to transfer the license from another installation. It has to be the same version of Windows.
The license transfer also depends on what edition was being used. OEM licenses may be stuck with the hardware; traditionally you could take a retail license to a new install.
If you switch everything you can to flatpaks and use distrobox for other apps before you switch, you’re pretty close (distrobox is better than toolbox, and I recommend layering it if you do switch to Silverblue).
Anything can be layered onto Silverblue if it can’t be installed another way. I’ve found it works well.
Sounds dope. I love OpenSuse. I almost made it my main OS, but got kicked in the ass installing graphics drivers and the fixes were many and too annoying.
I had a reasonably good time getting NVIDIA drivers installed. I found the instructions here. I installed the newest drivers using the following command + a reboot:

transactional-update -i pkg in nvidia-driver-G06-kmp-default nvidia-video-G06 nvidia-gl-G06 nvidia-compute-G06 nvidia-utils-G06 nvidia-compute-utils-G06

The OpenSUSE guide doesn’t include compute-utils, which is needed if you want to run nvidia-smi. I haven’t tried installing a full CUDA SDK, so ymmv there.
I also like them just for the sake of tidiness. Some apps like Steam tend to make a big mess of dependencies all over the place, so it’s nice to have that all contained in one place. It does take up more space but I have a reasonably big hard drive so it’s kind of negligible for me.
It certainly has simplified things for me! To get anything so up to date, I would need to use something like Arch or the AUR, which is fine, but I find using Arch unappealing.
The benefit of the testing branch is that it's still not quite so bleeding edge, and updating from testing every week means you'll never have to install new stable releases; you'll already be running them.
While the testing branch is stable, if you want even more assurance of consistent stability, use the Devuan testing branch, which is Debian without systemd.
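For reference, tracking testing is just a matter of pointing your sources at it, e.g.:

deb http://deb.debian.org/debian testing main

(Devuan's equivalent repo lives under deb.devuan.org/merged.)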
thanks for the answer! In your experience, is Devuan more stable than classical Debian? I’ve never used a non-systemd distro, so I don’t really know what to expect from it
Yes, Devuan is more stable. It's not modified or forked, it's still Debian .deb files but with a different init system.
The difference is that systemd is one thing that handles everything. The other inits are launched or initiated each time something starts, on an individual basis.
I have heard that systemd has greatly improved, but a different init starts a new process ID for each separate program, so if something locks or freezes, it affects that one individual init process. With systemd, which runs system-wide to handle everything, if one program locks, systemd has to make adjustments for the whole system to fix the problem.
I also tried Artix, which is native Arch without systemd, and while it was still a rolling system like Arch, I found Artix to run smoother or lighter than Arch.
Some people find the command line easier with systemd because it is one centralized control system. I say no: what you gain in ease of management you lose in optimal performance and precise control over each individual process, as opposed to systemd being a blanket system. I want Firefox running in a process isolated from the one the Plasma desktop is running in, each with its own init process started only when it starts, not controlled by a shared resource.
I’ve been using it for 5 years on laptop and desktop and I’ve had very few issues since then. Imo it offers the best trade-off between up to date packages (and availability of packages and repos), rolling release and stability. I don’t see any reason to switch distros anytime soon.
More details: I’m using xfce and I’ve installed firefox from the unstable branch (via apt pinning) because I wanted it to be more bleeding edge.
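In case it's useful to anyone, the pinning setup is roughly: add unstable as a source, then give it a low default priority so that only firefox gets pulled from there. Something like this (the priorities are illustrative):

# /etc/apt/sources.list.d/unstable.list
deb http://deb.debian.org/debian unstable main

# /etc/apt/preferences.d/firefox
Package: firefox
Pin: release a=unstable
Pin-Priority: 990

Package: *
Pin: release a=unstable
Pin-Priority: 100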
thanks for the answer! I have installed it on a VM and noticed that only firefox-esr is present, which is a couple of versions behind. Why isn’t a “normal” firefox package included? And also, does installing firefox from the unstable branch cause any problems for other packages, conflicts, etc, or is it completely safe?
Yes, that’s why I installed it from unstable. The ESR version is an older version with added security patches. I’m not sure why exactly they do it like that, and I don’t think it’s a good idea. I’d say a browser should be as up to date as possible for both bug fixes AND new features. But it worked flawlessly using the “unstable” firefox package. Another option would be the flatpak, but that’s not as well integrated into the system - last time I tried it, the font rendering in the browser was awkward. I use some other flatpaks though, most notably GIMP and Inkscape, which work really well and are very up to date that way.
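If anyone wants to try the flatpak route anyway, it’s just:

$ flatpak install flathub org.mozilla.firefox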
So $ sudo in general any time I need to run something as root?
I’ll have to think about that some more. I think I rather dislike “forcing” sudo on all commands as root.
I typed the post in a minute and published, so it definitely isn’t the most coherent or well thought out post.
I’m currently using # for commands executed by the root user or sudo.
Currently, I only use sudo if the command depends on one of its features. Like the example above where I execute a command as the www-data user.
My dilemma was whether to use $ sudo or # sudo for those few cases. But based on yours and other comments, it might make sense to use $ sudo for commands executed as root as well.
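To illustrate, the two candidate conventions side by side (the command itself is just a made-up example):

$ sudo systemctl restart nginx    <- the “$ sudo” convention
# systemctl restart nginx         <- the same command with the “#” root-prompt convention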
As much as I love my MacBook I’d like a laptop with the same kind of hardware running Ubuntu or Fedora even more. I have a desktop just for that but it sits in the office where I never go so it’s pretty much just a box that I connect to when I need to test something on x86 as opposed to ARM.
The non-root user probably doesn’t have permission to run the sudo command as www-data user, but root does.
You are wrong. E.g. in Debian (and Ubuntu) the default sudoers file contains
%sudo ALL=(ALL:ALL) ALL
That means that any user in the sudo group is permitted to execute any command as any other user. The same goes for Red Hat/Fedora, but the group name there is wheel.
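You can see it in action, assuming your user is in the sudo group:

$ sudo -u www-data whoami
www-data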
I use it on a couple devices. It’s more stable than arch and certainly easier to use. It can sometimes be a bit finicky with third party repos. However Debian testing isn’t guaranteed to be stable, so things may break on your system. That being said I really haven’t had many problems.
There are a couple weeks/months before a new version is released where testing stops getting feature updates, as the packages are frozen.