when it’s the main reason why so many people use Arch Linux?
The AUR is one reason I use Arch, but not the only one. Besides the AUR, Arch has many other advantages from my point of view. Like, for example, the wiki, which users of other distributions rely on too. Or the many vanilla packages. Or that you can easily create your own packages through PKGBUILD files. Or that, in my own experience, Arch is quite problem-free to use despite shipping very current packages.
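For context, a PKGBUILD is just a shell script with some well-known variables and a `package()` function that makepkg sources. Here is a minimal sketch for an imaginary package called `hello-aur` (the name and contents are made up for illustration; a real PKGBUILD would usually also set `url=`, `sha256sums=`, and fetch actual sources):

```shell
# Minimal PKGBUILD sketch for an imaginary package "hello-aur".
# Everything here is hypothetical, for illustration only.
pkgname=hello-aur
pkgver=1.0
pkgrel=1
pkgdesc="Toy example package"
arch=('any')
license=('MIT')
source=()

package() {
  # makepkg calls this with $pkgdir set to the staging root
  install -d "$pkgdir/usr/bin"
  printf '#!/bin/sh\necho hello\n' > "$pkgdir/usr/bin/hello-aur"
  chmod 755 "$pkgdir/usr/bin/hello-aur"
}
```

Running `makepkg -si` in the directory containing this file would build the package and install it with pacman.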
One reason other distributions don’t have something like the AUR could be that the AUR is not an official offering, so no verification is done in advance either. It has happened at least once that someone manipulated PKGBUILD files in bad faith (lists.archlinux.org/pipermail/…/034151.html). The wiki doesn’t warn against its use for nothing.
However, it is much easier for the user to check the files in the AUR in advance than it is with, for example, ready-made packages in an unofficial PPA.
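As a rough illustration of that first-pass check (the function name and pattern list below are my own invention, not any official tool), a few lines of shell are enough to surface the spots in a downloaded PKGBUILD worth a closer read before running makepkg:

```shell
# Hypothetical first-pass check: print the lines of a PKGBUILD that
# download, decode, or eval things, so you know where to look closer.
# The pattern list is illustrative, not a real security audit.
audit_pkgbuild() {
  grep -nE 'curl|wget|base64|eval|/dev/(tcp|udp)' "$1" \
    || echo "no flagged lines in $1"
}
```

For example, `audit_pkgbuild PKGBUILD` after cloning a package’s AUR repository. Nothing replaces actually reading the file, of course.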
Arch has many other advantages from my point of view. Like for example the wiki that also users of other distributions use.
I remember when I started using #! and then Debian with Openbox. It didn’t matter what problem I had, the answer and solution were always in the Arch Wiki.
Depends on what you mean by security/privacy. You can use Tails or whatever and have everything encrypted, and then just be logging into your Facebook account on Chrome without an ad blocker.
Most Linux distros are secure enough for the average person who isn’t being targeted by some crazy state-level actor. If you’re particularly concerned, stick with a distro that has a security team, like Debian. As for privacy, that has more to do with the sites you browse and have accounts with, but obviously avoid Google (I just use Firefox instead of Chrome), use an ad blocker like uBlock Origin, along with maybe something like Decentraleyes.
Chrome phones home a lot. It’s not a good idea to use it if you care about privacy. Firefox even has metrics enabled by default, although you can turn them off.
I can throw in a vote for Debian stable as well. I’ve recently installed Debian 12 and I’ve been blown away by how great it’s been compared to my recent Fedora 38 out-of-the-box experience.
What kind of hardware are you running it on? I’ve started using Debian for servers, but I’m still using Fedora for laptops, currently. I am always curious about different options.
I don’t use wifi; it did work out of the box, however. The only thing that required additional setup was the Nvidia card, but the driver was available in the repos.
If you do end up testing it out on a laptop let me know how it goes. I have a Windows laptop lying around here somewhere that could use some love.
<pre style="background-color:#ffffff;">
<span style="color:#323232;">abbr -a -- ttime "date '+It is %-H %M and %S seconds' | espeak >/dev/null 2>/dev/null" # imported from a universal variable, see `help abbr`
</span></pre>
openSUSE has OBS, Fedora has COPR, and I’m pretty sure both Gentoo and NixOS have similar stuff. Do Ubuntu’s PPAs count? Flatpaks and AppImages are also similar, although they are more limited and they aren’t exactly “standard” packages.
OBS and COPR don’t even come close to the AUR in terms of ease of use. The AUR is one searchable index; OBS and COPR are more like separate repositories that you have to find and add manually. There are multiple people building the same packages, and you have to figure out which one you want to rely on. You also can’t easily edit the packaging instructions and rebuild a package if it doesn’t work for you.
PPAs are fundamentally flawed. Since each repository is separate, they only care to maintain consistency internally, plus the packages of the Ubuntu version they were based on.
Adding a PPA and using its packages on your system takes your dependency tree into a “cul de sac” where only that PPA is reliable.
But of course people use multiple PPAs so what happens is that the dependency tree grows increasingly unrecoverable.
Eventually you get the dreaded “requires X but cannot be installed” errors, which pretty much mean you’ve hit a dead end. You can recover your system from it (aptitude can propose solutions), but the fixes are extremely invasive: they basically come down to uninstalling and reinstalling thousands of packages to bring your tree back to a manageable state.
I admit I haven’t used Ubuntu in years, so I didn’t think they were that bad. Thanks for the info, it taught me about a dependency-hell scenario I’d never thought about before.
Debian technically has the same issue but people who want Debian usually stick to stable + backports so it’s less frequent.
Yeah that’s why distributions which put all their community packages in one place with the same dependencies are more resilient in this respect.
Arch’s AUR is not perfect either, you can have packages that list dependencies badly or replace core packages so you can still mess up but in a different way.
NixOS seems to have hit on a very robust formula that lets packages coexist with minimal friction.
I wanted to use an up-to-date version of FFmpeg and had to download the binary from the website. I wanted to install a program that needed the latest version of KDE, so I had to add a PPA which updated a LOT of packages, and in the end it would break many other apps installed from other PPAs.
At some point I realized using Arch was just much less work than worrying myself about all the dependencies that could break when you don’t stick to what’s available in their official repositories.
Probably for the same reasons there are so many packaging formats in the first place. If everyone settled on deb, rpm, or Arch-style tar packages, then we wouldn’t need the AUR, Flatpak, Snap, AppImage, or anything else.
Don’t know. The AUR is a big reason I use Arch. Obviously there are PPAs/OBS or whatever, but they’re not implemented nearly as well. With the AUR I don’t need to go searching for new repos or mess with repo priorities (fun times on SUSE…), since everything is in one place and there are procedures for taking over orphaned packages. I use about twenty or so packages from it, many of them not packaged for any other distro. Personally I’m not interested in using Flatpak, since two package management systems is not my idea of KISS. Poor man’s AUR imo :).
For my needs I found that Flatpak just werks for anything not in the distro’s repos. And for the really obscure stuff I’ve used, I could just build from source.
Having to build from source is exactly why I don’t think the AUR has a replacement. There are many similar package managers, but none as extensive. Like the NUR for NixOS.
The point is that you want management: easy ways to create images, backups, moving containers between hosts, orchestration, network management, and sometimes not only containers but also virtual machines. LXD does all of that very well, and if you don’t need those features you might as well use systemd-nspawn.
They’ve overtaken Proxmox. Not sure if you’ve been following, but they now have a WebUI, and the entire solution is orders of magnitude better than the crap Proxmox has been offering.
Oh, bullshit. The minimal interface that Ubuntu offers isn’t even a pimple on the Proxmox front end, and it doesn’t touch Proxmox’s filesystem, clustering abilities, or backup solution, which is the equivalent of Veeam IMO.
There you are, calling bullshit on my post while deleting your own where you clearly demonstrated close to no experience with LXD and its clustering capabilities. lol
The minimal interface that Ubuntu offers
Once again your ineptitude is palpable. Ubuntu doesn’t offer anything, the WebUI is a part of LXD.
And yes, LXD’s WebUI, released “yesterday”, is objectively better than Proxmox’s, and it does touch storage and clustering.
I haven’t been following, but that’s actually good to hear; Proxmox needs a better UI.
LXD, I suppose, for the migration, but for any more complex orchestration I think you’re moving to k8s or something more serious. LXD just has an odd “not enough but too much” feature set for me; I like things either push-button or let-me-do-it, and this is kind of both.
for any more complex orchestration I think you’re moving to k8s or something more serious
I guess it depends on your use case. If you’re talking about “regular” applications, LXD/LXC might not be your best fit. LXD/LXC seems to be very good for lower-level, infrastructure-related solutions. In contrast, whatever is typically deployed with k8s is mostly immutable, very reproducible, and kind of runs at a very high level.
LXD is more about what might power that “higher level” layer: mutable containers, virtual machines, and very complex stacks that you can’t deploy with Docker most of the time. As expected, people with those needs lean heavily on cloud-init and Ansible to get the reproducibility and automated deployment capabilities that the Docker “crowd” usually likes.
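As a sketch of that workflow (the container name and package list are invented, and this assumes a host with LXD already initialized, so it’s shown for illustration rather than as something directly runnable here), a container can be pre-seeded with cloud-init at launch and then handed over to Ansible:

```shell
# Hypothetical: launch an Ubuntu container with cloud-init user-data
# so it comes up with Docker already installed. "web01" and the
# package list are made up for this example.
lxc launch ubuntu:22.04 web01 --config=user.user-data='#cloud-config
packages:
  - docker.io
  - python3        # so Ansible can manage the container afterwards
'
```

Once cloud-init finishes inside the container, an Ansible playbook can target it like any other host.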
I do that with LXD, but I have written Ansible playbooks (almost like a Dockerfile?) to automate the LXD containers. You could probably write some automation for scaling as well, but that’s not something I’ve done; I have just opted for high availability with Ceph and keepalived. Whatever works for your use case :) I do use some Docker, but it’s still nested inside LXD…
I also use playbooks to deploy some stuff with LXD, but my end users only like Docker, so I kind of set up the infrastructure that allows them to deploy Docker on top of LXD containers that are deployed using Ansible.