Interested in Linux, FOSS, data storage systems, unfucking our society and a bit of gaming.

I help maintain Nixpkgs.

github.com/Atemu
reddit.com/u/Atemu12 (Probably won’t be active much anymore.)


Atemu, to asklemmy in chmod -R hit me, again... Anyone else who faced these sys-admin woopsies?

Note that all of this is in the context of backups; duplicates for the purpose of restoring the originals in case something happens to them. Though it is at least possible to use an indexed cold-storage system like the one I describe for more frequent access, I would find that very inconvenient for “hot” data.

> how would you use an index’d storage base if the drives weren’t connected

You look at your index to find where the data you need is located, connect that single location (i.e. plug in the drive) and copy the data back to the place it went missing from.

The difference is that, with an index, you gain granularity. If you only need file A, you don’t need to connect all 12 backup drives, just the one that has file A on it.
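
A minimal sketch of what such an index can look like (the format and drive labels here are made up; any consistent scheme works):

    # index.txt: one line per file, mapping path -> drive label
    #   photos/2019/trip.tar    drive-07
    #   taxes/2021.pdf          drive-02
    $ grep '^photos/2019/trip.tar' index.txt
    photos/2019/trip.tar    drive-07
    # -> plug in drive-07, mount it, copy the file back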

Atemu, to linux in Linux kernel 4.14 gets a life extension, thanks to OpenELA

Amen.

Atemu, to linux in Swap causing very slow boot (and systemd says the swap partition became active after 500k years)

Could you upload the output of systemd-analyze plot?
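
For reference, that command renders the boot timeline into an SVG file you can attach:

    $ systemd-analyze plot > bootup.svg
    # bootup.svg shows when each unit started and how long it took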

Atemu, to asklemmy in chmod -R hit me, again... Anyone else who faced these sys-admin woopsies?

> The problem is that i didn’t mean to write to the hdd, but to a usb stick; i typed the wrong letter out of habit from the old pc.

For that issue, I recommend never relying on unstable device names like /dev/sdX and always using the stable names under /dev/disk/by-id/ instead.
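
The names under /dev/disk/by-id/ encode bus, model and serial number, so it’s obvious which device you’re about to write to. For example (the device name here is hypothetical):

    # stable names; ata-* are internal disks, usb-* are USB devices
    $ ls -l /dev/disk/by-id/
    # write to the stick via its stable name instead of guessing /dev/sdX
    $ sudo dd if=backup.img of=/dev/disk/by-id/usb-Vendor_Model_1234-0:0 bs=4M status=progress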

> As for the hard drives, I’m already trying to do that, for bigger files i just break them up with split. I’m just waiting until i have enough disks to do that.

I’d highly recommend starting to back up the most important data ASAP rather than waiting until you’re able to back up all of it.

Atemu, to asklemmy in chmod -R hit me, again... Anyone else who faced these sys-admin woopsies?

That would require all of those disks to be connected at once, which is a logistical nightmare. It would be hard even with modern drives, but also consider that we’re talking about IDE drives here; it’s hard enough to connect one of them to a modern system, let alone 12 simultaneously.

With an index, you also gain the ability to lose and restore partial data. With a RAID array it’s all or nothing; you waste a bunch of space on being able to restore everything at once. With an index, you can simply check which data was lost and prepare another copy of that data on a spare drive.
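
One low-tech way to do that check, sketched with per-drive checksum manifests (git-annex automates this; the file names are made up):

    # verify a drive against the manifest recorded when it was written
    $ sha256sum -c drive-07.sha256 > check.log
    # anything not OK was lost or corrupted and needs a fresh copy
    $ grep -v ': OK$' check.log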

Atemu, to linux in Linux kernel 4.14 gets a life extension, thanks to OpenELA

I feel it was a direct reply to the comment above.

At no point did it mention livepatching.

> Dinosaurs don’t want to give up their extended LTS kernels because upgrading is a hassle and often requires rebooting, occasionally to a bad state.

No, dinosaurs want LTS because it’s stable; it’s in the name.

You can’t load your proprietary shitware kernel module into any kernel other than the one whose ABI it was built for. You can’t run your proprietary legacy heap-of-crap service on newer kernels where the kernel APIs behave slightly differently.

> how can you bring your userbase forward so you don’t have to keep slapping security patches onto an ancient kernel?

That still has nothing to do with livepatching.

Atemu, to selfhosted in Can a Raspberry Pi 5 with 8 GB of RAM handle my needs?

You probably could. Though I don’t see the point in powering a home server over PoE.

A random SBC in the closet? WAP? Sure. Not a home server though.

Atemu, to technology in This was the first result on Google

.

Atemu, to technology in This was the first result on Google

10% worse efficiency > no refrigerator

Atemu, to steam in Introducing Steam Families

It depends on whether the game wants that or not; a game must explicitly opt in to it. If it weren’t Steam offering their extremely nonintrusive DRM, those games would likely use more intrusive DRM systems instead, such as their own launchers or worse.
It also somehow doesn’t feel right to call it “DRM” since it has none of the downsides of “traditional” DRM systems: it works offline, it doesn’t cause performance issues and it doesn’t get in your way (at least it has never once gotten in mine).

I’d much rather launch the games through Steam anyways though. Do you manually open the games’ locations and then open their executables or what? A nice GUI with favourites, friends and a big “play” button is just a lot better IMHO.

Atemu, to linux in Linux kernel 4.14 gets a life extension, thanks to OpenELA

Kernel livepatching is super niche and I don’t see what it has to do with the topic at hand.

Atemu, to asklemmy in chmod -R hit me, again... Anyone else who faced these sys-admin woopsies?

> I’m trying to do that; but all of the newer drives i have are being used in machines, while the ones that arent connected to anything are old 80gb ide drives, so they aren’t really practical to backup 1tb of data on.

It’s possible to make that work through discipline and mechanism.

You’d need about 12 of them, but if you carved your data into <80 GB chunks, you could store each chunk on a separate scrap drive and thereby back up 1 TB of data.

Individual files >80GB are a bit more tricky but can also be handled by splitting them into parts.
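
A sketch with split; 75G is an example chunk size that leaves headroom on an “80 GB” drive (which holds less than 80 GB in practice):

    # record a checksum first so reassembly can be verified later
    $ sha256sum big-file.img > big-file.img.sha256
    # break the oversized file into drive-sized parts (suffixes aa, ab, ...)
    $ split -b 75G big-file.img big-file.img.part.
    # store each part on a different drive; to restore, concatenate in glob order
    $ cat big-file.img.part.* > big-file.img
    $ sha256sum -c big-file.img.sha256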

What such a system requires is rigorous documentation of where stuff is; an index. I use git-annex for this purpose, which comes with many mechanisms to aid this sort of setup, but it’s quite a beast in terms of complexity. You could do every important thing it does manually, with discipline, without unreasonable effort.
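
To give an idea, a minimal git-annex sketch (the repo paths and drive names are made up):

    # track files with git-annex on the main machine
    $ git init && git annex init "main"
    $ git annex add . && git commit -m "track data"
    # put a clone on a scrap drive and register it as a remote
    $ git clone . /mnt/drive01/repo
    $ (cd /mnt/drive01/repo && git annex init "drive01")
    $ git remote add drive01 /mnt/drive01/repo
    # copy a file's content onto that drive, then ask the index where things are
    $ git annex copy somefile --to drive01
    $ git annex whereis somefile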

> For the most part i prevented myself from doing the same mistake again by adding a 1gb swap partition at the beginning of the disk, so it doesn’t immediatly kill the partition if i mess up again.

Another good practice is to attempt any changes on a test model first. You’d create a sparse test image (truncate -s 1TB disk.img), attach it via loopback and apply the same partition and filesystem layout that your actual disk has. Then you attempt any changes you plan to make on that loopback device first and verify that its filesystems still work.
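
Roughly like this, where /dev/sdX stands in for your real disk and the layout commands are illustrative:

    # sparse 1 TB image; only blocks you actually write consume disk space
    $ truncate -s 1TB disk.img
    # attach it as a loop device and scan it for partitions
    $ sudo losetup --find --show --partscan disk.img    # prints e.g. /dev/loop0
    # replicate the real disk's partition table onto the loop device
    $ sudo sfdisk -d /dev/sdX | sudo sfdisk /dev/loop0
    $ sudo mkfs.ext4 /dev/loop0p1
    # ...attempt the risky change against /dev/loop0*, then verify
    $ sudo fsck -f /dev/loop0p1
    $ sudo losetup -d /dev/loop0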

Atemu, to asklemmy in chmod -R hit me, again... Anyone else who faced these sys-admin woopsies?

I use scrapped drives for my cold backups; you can make it work.

Though in case of extreme financial inability, I’d make an exception to the “no backup, no pity” rule ;)

Atemu, to asklemmy in chmod -R hit me, again... Anyone else who faced these sys-admin woopsies?

Am I the only one around here who does backups?

Atemu, to linux in "Stable distro" meaning

You’ve highlighted it pretty well. But you’re wrong about one thing.

Stable means the packages’ interfaces remain stable. I mean that term in a very broad sense; a GUI layout would be included in my definition of an interface here.

The only feasible way of achieving that goal is to freeze the software versions and abstain from updating them. This creates a lot of work because, newsflash: the world around you is not stable (not at all), so some parts must be updated regardless. The most critical of these are security patches. Stable distros do backport those, but usually only bother with the “important” ones because it’s so much effort.

Another aspect of this is that you usually can’t introduce new functionality without risking breaking older interfaces, so stable distros simply don’t receive new features.

Windows 95 is one of the most stable operating systems in the world, but there’s a reason you’re not using it (besides the security issues): at some point, you do need newer versions of interfaces to, well, interface with the world. There are newer versions of software with additional features, new communication standards and newer hardware platforms that you might want or need.

As an example: Even if you backported and patched all security issues, Firefox from 10 years ago would be quite useless today as it doesn’t implement many of the standard interfaces that the modern web requires.

What you are wrong about, though, is that stable means no breakage or that things “run smoothly”. That’s not the case; stable only means the same sources of breakage and the same level of roughness. No new sources of breakage are introduced by the distro, but the existing ones remain.
Stable distros do try to fix some bugs, but what is and isn’t a bug isn’t always easy to determine, as one man’s bug is another man’s feature. If things weren’t running smoothly before, a stable distro will ensure that they run similarly roughly tomorrow, but not any worse.
Well, for the parts the distro can control, that is. Things outside the distro’s control will inevitably break. Tools made for interfacing with third-party services will break when those services inevitably change their interfaces, regardless of how stable the distro is (or rather, precisely because of how stable the distro is).

Stable interfaces and no local regressions are what characterise stable distros. Not “no breakage”, “system stability” or whatever; those qualities are independent of stable vs. fresh distros and a lot more nuanced.
