Because you’re relying on compatibility between older Debian software (systemd, etc) and newer versions installed in the chroot. Things get weird quickly.
Consider a nested privileged container instead (LXC or similar) and cross your fingers that Debian systemd and Arch systemd play nice.
If the above fails, just make a VM and share the GPU with GVT-g (or, failing that, pass through the entire GPU).
If all of that fails, install Arch to a USB-attached SSD or something.
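Before going down the GVT-g road, it's worth checking whether the kernel actually exposes mediated device types for your iGPU. A minimal sketch, assuming the integrated GPU sits at the usual Intel PCI address 0000:00:02.0:

```shell
# Path assumes the integrated GPU is at the usual PCI address 00:02.0.
GVT_DIR=/sys/bus/pci/devices/0000:00:02.0/mdev_supported_types

check_gvt() {
    # Prints "yes" if the kernel exposes GVT-g mediated device types here.
    if [ -d "$GVT_DIR" ]; then echo yes; else echo no; fi
}

echo "GVT-g available: $(check_gvt)"
# If "no", make sure the i915 driver was loaded with i915.enable_gvt=1.
```

If the directory exists, each entry under it is a vGPU type you can instantiate for the VM.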
A container is just a term for a set of isolation mechanisms bundled together: file system isolation (chroot), network isolation, process isolation, device isolation…
One of them is of course chroot; yes, containers use exactly the same chroot functionality.
So to answer your question: no, you don’t need a fully isolated container. You can use only chroot.
You just need to pass through all the required devices, match the driver version running in the kernel with the userspace driver files in your container, and avoid having more than one app with full unrestricted access to the GPU, as that would cause issues (I don’t know the details, so I can’t help you with that part).
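For the chroot route, the usual minimum is bind-mounting the host's pseudo-filesystems (which brings the DRM device nodes along) before entering it. A dry-run sketch: the /mnt/arch path is an assumption, and the `run` wrapper only prints each command so you can review them before executing as root:

```shell
# Hypothetical chroot location -- point this at your unpacked Arch tree.
CHROOT=/mnt/arch

run() {
    # Dry run: print each command; drop the echo (and run as root) to execute.
    echo "$@"
}

run mount --bind /dev "$CHROOT/dev"
run mount --bind /dev/pts "$CHROOT/dev/pts"
run mount -t proc proc "$CHROOT/proc"
run mount -t sysfs sysfs "$CHROOT/sys"
# /dev/dri (card0, renderD128, ...) rides along with the /dev bind mount.
run chroot "$CHROOT" /bin/bash
```

The /dev bind is what hands the GPU nodes to the chroot; everything else is there so drivers and tools that poke /proc and /sys don't fall over.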
Good is the enemy of excellent. X11 works well for most users (almost all users?). You can see the same thing with the adoption of other standards, like the C++ standards and IPv6, which can feel like it takes forever.
Another thing: I think one of the X11 maintainers mentioned, IIRC, that they have been fairly gentle with deprecation. Some commercial company would have deprecated X11 long ago and left you with a Wayland session that is inferior in some ways.
In comparison to the alternatives we had at the time, Linux was a fucking tank. Once it was up, you could expect to get 6 months to years of uptime unless you were installing new tools or changing hardware (no real USB/SATA yet, so hardware was a reboot situation).
If you got a Win98 machine up, it would eventually just hang. Yes, some could go a while, but if you used it for general use it would crash the kernel out eventually. Same for MacOS (the OG MacOS).
The only real competition for stability was other UNIX systems, and few of those were available to the general public at a reasonable price point.
VAX/VMS was still around then, and as far as I recall, that was the king for uptime.
Linux back then supported much less hardware. I can remember, even in the early aughts, there were whole families of popular wireless network chipsets that weren’t supported.
VAX/VMS was such a beast! The hardware wasn’t readily available to the public, though.
Oh, the wireless chipsets in the 90’s into about 2005 or so… that was a bad time for anyone trying to run wireless. Hell, MS Windows didn’t even have network drivers baked in until what, WinXP? Wiring computers together in the 90’s was such a trial, on both the hardware and software fronts.
I was lucky to score a 3Com 3c905B 10/100 Fast Ethernet card from a bussy in 1996. That was well supported across the board (Linux and Windows), and the PCI memory-mapped I/O and IRQ settings were well documented.
Edit: buddy, not a hussy, though he kinda was… Your call in how you want to read it.
Do you remember the article about some university that accidentally walled in a network server? It ran for years until they needed to put hands on it for something. They had to play the “follow the Ethernet cable” game until it went through the sheetrock into a dead space.
Daily updates with a rolling distro may cause issues, but a stable system that wasn’t tinkered with would run and run and run. Our Linux fileserver at work had a 2-year uptime; we only broke that for some drive additions and other adjustments, otherwise it would have just kept on chugging along without interaction. My Debian ARM NAS runs without incident; the only shutdowns it sees are when I move equipment to different rooms or want to reroute power cables. Otherwise it would just always be working fine.
Hell, my home server, running on low-end Xeon hardware, had uptime numbers around 3 years… then there was a power cut. The next down day was another power cut a year or so later. Total: around 8 years running with 5 outages, all but one due to power loss (the other was the Ubuntu 16.04 to 18.04 upgrade).
Just updated to Ubuntu server 20.04 so uptime is only 7 days at this point.
If you’re using an Intel chip look into GVT-g and consider running Arch from a VM, that’ll be the closest thing to native.
The unfortunate thing about running an Arch container on a Debian host is that you’re relying on an older kernel and an older systemd on the host side, and I’ve found that often causes compatibility problems inside the Arch container. If you are very, very lucky, Arch will just work inside the container, but IME that’s fairly rare: systemd often has breaking changes across releases, and Arch tends to be at least several releases ahead of Debian.
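If you do try the container route, systemd-nspawn is the least-friction way to boot an Arch tree under a Debian host. A sketch, assuming the Arch filesystem is unpacked at the hypothetical path /var/lib/machines/arch:

```shell
# Hypothetical location of the unpacked Arch root filesystem.
MACHINE_DIR=/var/lib/machines/arch

# -b boots the container's own systemd as init;
# --bind=/dev/dri hands the DRM render/card nodes to the container.
NSPAWN="systemd-nspawn -D $MACHINE_DIR --bind=/dev/dri -b"

echo "$NSPAWN"   # run this as root to actually boot the container
```

Whether the containerized Arch systemd then cooperates with the host's older one is exactly the compatibility gamble described above.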
If you’re using an Intel chip look into GVT-g and consider running Arch from a VM, that’ll be the closest thing to native.
I want to start a clean Arch Wayland session on one of my ttys, and I want Arch to have full direct hardware access.
The unfortunate thing about running an Arch container from a Debian host is that you’re relying on an older kernel
I use the latest Linux-libre kernel on my Debian machine and everything works well.
If you are very, very lucky Arch will just work inside the container, but IME that’s fairly rare as systemd often has breaking changes over several releases (and Arch tends to be at least several releases ahead of Debian.)
As I mentioned earlier, I tried running Archbox. It is basically a script to easily set up a chroot. The main problem was that the compositor couldn’t connect to the Wayland socket.
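That failure is usually because XDG_RUNTIME_DIR (normally /run/user/&lt;uid&gt;, created by the host's logind) doesn't exist inside the chroot, so the compositor has nowhere to create or find the Wayland socket. A dry-run sketch of one workaround; the paths, the UID, and `my-compositor` are assumptions, and the `run` wrapper only prints each command:

```shell
CHROOT=/mnt/arch      # hypothetical chroot location
XRD=/run/user/1000    # match your UID: /run/user/$(id -u)

run() { echo "$@"; }  # dry run: drop the echo (and run as root) to execute

run mkdir -p "$CHROOT$XRD"
run mount --bind "$XRD" "$CHROOT$XRD"
# Inside the chroot, point the session at the same runtime dir;
# "my-compositor" stands in for whatever compositor you launch.
run chroot "$CHROOT" env XDG_RUNTIME_DIR="$XRD" my-compositor
```

Note this only shares the host's runtime directory; starting a fresh compositor on a bare tty may additionally need seat/DRM permissions from the host's logind.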
I think you’re right on this, but I’m thinking it’s more of an Nvidia issue than a Wayland one… Going to sleep under X11 works on the first try; however, resuming from sleep showed the following screen (kernel panic? with mentions of Nvidia): https://sh.itjust.works/pictrs/image/8023983c-3e74-477c-b69a-6ff66a8d5918.jpeg
It’s either that or just a black screen. I think this warrants a driver reinstall; I also installed some CUDA stuff, so I will have to check that out…
An “Education, Professional Development, & Credentials” section. Something like:

Open Source Computer Science Coursework
Completed XX hours of coursework through ABCD, EFGH, HIJK Universities
Relevant Coursework: Linear Algebra (Princeton); Machine Learning (Stanford); Cryptography (Stanford)
It would weigh less than my traditional degrees, but if pressed on it (unlikely), I would describe exactly what this is: an effort to liberate CS education in the spirit of the Free Software movement, using synchronous and asynchronous learning methodology in an online learning platform from accredited, reputable universities.
At this point in my career, it would show continued aptitude for growth and professional development, since it’s been close to two decades since my first degree.
Also, at this point, I’ve seen people put shit like Strayer U and ITT Tech and Liberty on their resume and get hired for very high paying jobs. Honestly I would take this over that trash.
Even 15 years ago, most lower-level undergrad coursework was 150+ students in a lecture hall where the professor would pull up Blackboard and just load the slideshow. It was only at the 300+ level that class sizes shrank and interpersonal relationships sort of mattered.
My wife’s graduate degree, a few years later but still over a decade ago, was almost entirely online; they only met in person to discuss their progress toward the capstone. And she has a nice prestigious degree with a very expensive university name on it, she walked across the stage at that university, and nowhere does that diploma read “Online.”
I have a lot of beef with the US university system. Change has to start somewhere.
Running journalctl -r -u systemd-suspend.service does not suggest anything is wrong, just normal status messages. I will check whether I need a BIOS update; maybe it’s really out of date.
Edit: yeah, current BIOS is F7c (Apr 2022), most recent is F10 (Dec 2023). Will do that.
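A couple of other journal queries that tend to surface resume crashes better than the suspend unit's own log (the flags are standard journalctl options; the commands are shown dry-run style, but they're read-only and safe to run directly):

```shell
# Kernel messages from the previous boot -- resume panics land here,
# since the machine usually has to be hard-reset afterwards.
q1="journalctl -k -b -1"

# Previous boot's full log, filtered for the nvidia driver
# (-g greps; an all-lowercase pattern matches case-insensitively).
q2="journalctl -b -1 -g nvidia"

printf '%s\n' "$q1" "$q2"
```

If the panic never made it to disk, adding Storage=persistent to /etc/systemd/journald.conf keeps logs across reboots.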
I 100% agree. I don't mind design refreshes. I think I'm in the minority of loving the current Firefox logo.
But this just sucks. They really took their unique, clever wordmark logo (which was still very modern and minimal!) and replaced it with a bland, trendy 2022 typeface.
I know this is super petty, but this might convince me to find another password manager and method for syncing tabs. Might try librewolf, too. Rebranding invites users to re-evaluate their view on a brand, and mine isn't changing for the better.
I think that question can be answered by the recent horrible series of decisions by them. Mozilla really has been captured by the roots of enshittification at this point.
Ew, the new one sucks. Why can’t they spend the money that they have on important stuff instead of changing logos every couple of years? Especially considering that their funding is going to dry up because of the Google antitrust case?