There have been multiple accounts created with the sole purpose of posting unsolicited advertising in posts or replies.

Accounts which solely or persistently post advertisements may be terminated.

Interested in Linux, FOSS, data storage systems, unfucking our society and a bit of gaming.

I help maintain Nixpkgs.

github.com/Atemu
reddit.com/u/Atemu12 (Probably won’t be active much anymore.)


Atemu ,
@Atemu@lemmy.ml avatar

I don’t know about timeshift but it appears to have a configuration tab for snapper.

Atemu ,
@Atemu@lemmy.ml avatar

It actually is. The file gets opened by bash, and bash passes the file descriptor to cat, but cat is the program that instructs the kernel to write to the device.

Modern cat even does reflink copies on supported filesystems.
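For illustration, a minimal sketch of the kind of command under discussion (file names are placeholders):

  # bash opens copy.img and hands the file descriptor to cat; cat tells the kernel to write the data
  cat source.img > copy.img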

Atemu ,
@Atemu@lemmy.ml avatar

systemd has become like the JavaScript of init systems

Likening systemd to JavaScript is incredibly inappropriate.

systemd now handles DNS, cron, bootloader, and is a suite of tools tightly coupled with the init system

No. Except for the cron replacement, all of those are stand-alone tools that can be run with systemd, run without systemd, or replaced with any alternative.

They just happen to be developed under the systemd project umbrella and are obviously tested to work well with one another.

This argument is especially weird for systemd-boot; it’s not even a Linux program ffs.

There are some components that are harder to replace with alternatives, but mostly because no good alternatives exist. systemd might be partially to blame here for how hard it is to run those parts independently or replace them with equivalents, and you could certainly criticize it for that, but you didn’t even mention one of them.

Truth be told, the birth of systemd really heralded in the death of the UNIX philosophy

There is no truth in this sentence.

Doing one thing only, and doing it well, while looking good on paper and oftentimes being a good general rule of thumb, doesn’t apply to modern application development, for better and worse.

What? Please google “Microservices”.


Your whole wall of text hinges on the assumption that systemd is a simple “init system”; a root process spawning a set of other processes. This is false.

systemd (as in: PID1) does service management, not init. It happens to also fit the “job description” of init because starting and cleaning up dead services also falls under the responsibility of a service manager, but reducing it to just an init system is plain wrong. All the other things are handled by separate components/processes.

Thus, it still follows the “unix philosophy”. The “one thing” it does simply isn’t what you think it does.
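A quick way to see that split on a typical systemd machine (which of these daemons actually run depends on your distro and configuration):

  # PID 1 is the service manager; journald, resolved, logind etc. are separate processes
  ps -o pid,comm -C systemd,systemd-journald,systemd-resolved,systemd-logind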

It’s like saying cp doesn’t follow the UNIX philosophy because you could copy files with cat. cat is soo much simpler to understand, why would anyone ever use the bloated cp? Must be the pesky commercial influence of Bell labs!

Truth be told, the birth of cp really heralded in the death of the UNIX philosophy.

Atemu ,
@Atemu@lemmy.ml avatar

except for hdds without cache

The “cache” on HDDs is extremely tiny. Maybe a few seconds worth of sequential access at max. It does not exist to cache significant amounts of data for much longer than that.

At the sizes at which bcache is used, you could permanently hold almost all of your performance-critical data on flash storage while having enough space for tonnes of performance-uncritical data; all in the same storage “package”.

Atemu ,
@Atemu@lemmy.ml avatar

Note that bcache and bcachefs are different things. The latter is extremely new and not ready for “production” yet. This post is about bcache.

Atemu ,
@Atemu@lemmy.ml avatar

Isn’t that the cloud shit?

Atemu ,
@Atemu@lemmy.ml avatar

I thought about switching the router to a dedicated one without a wireless access point

Is there a reason for this? Unless it has specific issues you’d like to fix, I’d just keep using the current router and simply disable its WiFi.

Atemu ,
@Atemu@lemmy.ml avatar

It might be underpowered, it might not be. Just test it out? Do you notice performance issues related to your router?

Atemu ,
@Atemu@lemmy.ml avatar

If they decide to shut down or raise prices or whatever, you can reevaluate and move.

Move at how many hundred $ per TB?

Atemu ,
@Atemu@lemmy.ml avatar

Things like RAID can effectively cover the hardware failure side

Note that RAID only covers one specific hardware failure. To the point where IMHO, you cannot consider it a data security measure, only a data availability one.

Atemu ,
@Atemu@lemmy.ml avatar

The only reason Blu-ray still exists is that you can’t buy (as in: own) movies in a high quality format otherwise.

If the publishers got the sticks out of their arses and offered file downloads for purchase, I wouldn’t see a single reason to buy a physical disk other than sentimentalism.

Atemu ,
@Atemu@lemmy.ml avatar

I use multiple offline HDDs with a policy to keep n copies between them because it’s by far the cheapest way to still own the data. It requires regular checks because HDDs are likely to fail after a decade or so and a bunch of HDDs are a pain to manage, so you will need tooling for this. I use git-annex for this purpose but it’s not particularly user-friendly.
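As a rough sketch of that kind of setup (the remote name and copy count here are made up; check the git-annex docs for the details):

  git annex init "laptop"
  git annex numcopies 2            # require at least 2 copies across all drives
  git annex copy --auto --to hdd1  # top up copies on a currently attached offline HDD
  git annex fsck --from hdd1       # periodically verify that drive’s contents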

Atemu ,
@Atemu@lemmy.ml avatar

Tapes only make financial sense if you’re in the hundreds of TB.

Atemu ,
@Atemu@lemmy.ml avatar

There are much worse ways for a RAID controller to fail than suddenly not doing anything. What if it doesn’t notice it has failed and continues to write to a subset of devices only? Great recipe for data corruption right there.

Bad RAID controller/HBA, CPU, RAM, motherboard, PSU are all hardware failures that RAID does very little (if anything) to mitigate. One localised incident in any of them could make all of your drives turn into magic smoke or make bits go bad.

You cannot rely on that sort of setup for data security. It only really mitigates one relatively common hardware failure to push storage system uptime above 99.9%. That has a place in some scenarios where storage “only” being 99.9% available has a significant impact on total availability, but you’d first have to demonstrate that that is the case.

Atemu ,
@Atemu@lemmy.ml avatar

Things like a house fire would presumably destroy a series of optical disks which would make most any in house option non-functional.

Well, it makes any option that only uses a single location non-functional. Having two copies at home and one at a distant location (as recommended by the 3-2-1 backup rule of thumb) mitigates this issue.

Network based backups could also fail to transmit data securely and accurately as well

Absolutely. Though the network is usually assumed to be unreliable from the get-go, so mitigations usually already exist here (E2EE, checksums, ECC).

really any sort of replication solution needs validation if the data is of significant value

Absolutely correct. An untested backup is probably better than nothing but most definitely worse than a tested backup.

and have a way to recover if someone does a ‘sudo rm -rf /’ accidentally.

Certainly something that must be mitigated but this is getting out of “hardware failure” territory now ;)

Atemu ,
@Atemu@lemmy.ml avatar

AMD platform support is coming to coreboot in the next few years, consumer platforms much later, and even there I’m doubtful it’d come to your laptop in particular.

Get a Frame.work with Intel chip if you want coreboot on a modern laptop soon-ish. I know the guy working on that port ;)

Atemu ,
@Atemu@lemmy.ml avatar

What you’re doing is perfectly fine.

It is however more of a mitigation for bad distro installers than general good practice. If the distro installers preserved /home, you could keep it all in one partition. Because such “bad” distro installers still exist, it is good practice if you know that you might install such a distro.

If you were installing “manually” and had full control over this, I’d advocate for a single partition because it simplifies storage. Especially with the likes of btrfs you can have multiple storage locations inside one partition with decent separation between them.
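A minimal sketch of what that can look like with btrfs (device name and subvolume names are examples):

  mkfs.btrfs /dev/sda2
  mount /dev/sda2 /mnt
  btrfs subvolume create /mnt/@
  btrfs subvolume create /mnt/@home
  # then mount the subvolumes via fstab entries using subvol=@ and subvol=@home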

Does alien.top do anything other than mirror reddit comments?

I have just realised that alien.top seems to be mirroring reddit accounts, posts and comments without labelling them as such. What is the point of this one-way mirroring? As soon as users realise, they are going to just leave. There is no point having a discussion with a bot that cannot respond.

Atemu ,
@Atemu@lemmy.ml avatar
  1. The right to be forgotten applies to PII. Comments can contain PII but usually don’t.
  2. The right to be forgotten applies to your private relationship with a company. Comments in public forums are, well, public. You can’t force the public to forget what you said.
Atemu ,
@Atemu@lemmy.ml avatar

In this case you could make a very clear case that alien.top is infringing on copyright because those users only gave Reddit a worldwide irrevocable perpetual license to their postings, not anyone else.

Atemu ,
@Atemu@lemmy.ml avatar

User agreements aren’t really enforceable

[citation needed]

if reddit got their way, then that means publications can no longer cite Twitter comments.

Why would publications no longer be able to execute their right of fair use?

Atemu ,
@Atemu@lemmy.ml avatar

In regular FHS distros, an upgrade to libxyz can be done without an update to its dependants a, b and c. The libxyz.so is updated in-place and newly run processes of a, b and c will use the new shared object code.

In Nix’ model, changing a dependency in any way changes all of its dependants too. The package a that depends on libxyz 1.0.0 is treated as entirely different from the otherwise same package a that depends on libxyz 1.0.1 or libxyz 1.0.0 with a patch applied/new dependency/patch applied to the compiler/anything.

Nix encodes everything that could in any way influence a package’s content into that package’s “version”. That’s the hash in every Nix store path (i.e. /nix/store/5jlfqjgr34crcljr8r93kwg2rk5psj9a-bash-interactive-5.2-p15/bin/bash). The version number in the end is just there to inform humans of a path’s contents; as far as Nix is concerned, it’s just an arbitrary name string.

Therefore, any update to “core” dependencies requires a rebuild of all dependants. For very central core packages such as glibc, that means almost all packages in existence. Because those packages are “different” from the packages on your system without the update, you must download them all again and, because they have different hashes, they will be in separate paths in your Nix store.

This is what allows Nix to have parallel “installation” of any version of any package and roll back your entire config to a previous state because your entire system is treated as a “package” with the same semantics as described above.

Unless you have harsh data caps, extremely slow connections or are extremely tight on disk space, this isn’t much of a concern though.
Additionally, you can always “garbage collect” old paths that are no longer referenced and Nix can deduplicate whole files that are 1:1 the same across the whole Nix store.
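For illustration, assuming a Nix/NixOS system, this is how you can peek at the store path (and its hash) behind a binary and run the cleanup mentioned above:

  readlink -f "$(which bash)"                  # shows the /nix/store/<hash>-... path in use
  nix-collect-garbage --delete-older-than 30d  # drop store paths no generation references anymore
  nix-store --optimise                         # deduplicate identical files via hard links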

Atemu ,
@Atemu@lemmy.ml avatar

Any distro that ships relatively recent libraries and kernels.

With the exception of Debian, RHEL, SLES and the like, pretty much everything.

Atemu ,
@Atemu@lemmy.ml avatar

I don’t use deodorant, I use a generic antitranspirant. It prevents sweating but you don’t stink of …whatever the fuck those super artificial “manly” smells are supposed to be.

Atemu ,
@Atemu@lemmy.ml avatar

Oh, curious. They are called “Antitranspirant” in German.

Latin words are almost always 1:1 the same in German and English (modulo suffix) and this appears to be derived from Latin too, so I had assumed it’d be the same but, in this specific case, it’s not.

Atemu ,
@Atemu@lemmy.ml avatar

Me too lol

Atemu ,
@Atemu@lemmy.ml avatar

As in, build a NixOS VM that’s otherwise the exact same as your current system but with a different DE enabled. nixos-rebuild build-vm
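Roughly this workflow (the exact option to toggle depends on the DE; the run script is named after your hostname):

  # after enabling the other DE in /etc/nixos/configuration.nix:
  nixos-rebuild build-vm
  ./result/bin/run-*-vm   # boots a throwaway VM of that configuration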

Atemu ,
@Atemu@lemmy.ml avatar

Guix might also be able to do this but I don’t think the others can.

This relies on NixOS’ declarative configuration which Silverblue and the like do not have; they are configured imperatively.

Atemu ,
@Atemu@lemmy.ml avatar

Well, you can roll back with a switch too; no reboot required.

The VM protects you from accidental state modification however (e.g. programs enabled by some DE by default writing their config files everywhere) and its ephemeral nature makes a few things easier.
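For reference, the in-place rollback looks like this:

  nixos-rebuild switch --rollback   # activate the previous system generation without rebooting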

Atemu ,
@Atemu@lemmy.ml avatar

Post the journal after wakeup, not before.
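Something like this, assuming the machine stays up after resume, captures the relevant part:

  journalctl -b --no-pager > journal-after-wakeup.txt   # full log of the current boot, including the resume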

Atemu ,
@Atemu@lemmy.ml avatar

Unless you have specific needs for compute, I’d go with that.

You really ought to look into idle power though. At $0.1/kWh, 1W is about $1/year. You can extrapolate from there.
TDP doesn’t matter here but the i3 is likely more efficient under load.

The shipping cost is quite extreme though. Not sure I’d pay that.

Atemu , (edited )
@Atemu@lemmy.ml avatar

Well, unlike us, they’re obviously living in a country which massively subsidises energy cost. But it seems they either haven’t done the math properly or their measuring device is broken, because even they shouldn’t be paying just pennies per month.

You can do the calculation for yearly cost yourself; it’s not too hard. The two variables you need are energy price and power.

Let’s say you’ve got 30W idle power draw at 0.4€/kWh. That comes out to ~105€/year if you ran it 24/7.
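The same calculation in shell form, using the example numbers from above:

  echo "30 * 24 * 365 / 1000 * 0.40" | bc -l   # 30 W, 24/7, 0.40 €/kWh → ≈105.12 €/year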

You can plug in arbitrary values yourself: numbat.dev/?q=1+year+

Atemu ,
@Atemu@lemmy.ml avatar

TL;DR Amazon is building a Linux distro that starts a chromium to run react native apps. Apparently, you need hundreds of people for that.

Atemu ,
@Atemu@lemmy.ml avatar

These optimizations are what enabled

[citation needed]

Atemu ,
@Atemu@lemmy.ml avatar

With btrfs I would set up a regular scrubbing job to find and fix possible data errors automatically.

This only works for minor errors caused by tiny physical changes. A buggy USB drive dropping out and losing writes it claimed to have written can kill a btrfs (sometimes unfixably so), especially in a multi-device scenario.
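For reference, a manual scrub run looks roughly like this (mount point is an example):

  btrfs scrub start /mnt/data
  btrfs scrub status /mnt/data   # shows progress and any corrected/uncorrectable errors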

Atemu ,
@Atemu@lemmy.ml avatar

I got a tiny Lenovo M720Q (i5-8400T / 8RAM / 128NVME / 1Tb 2,5" HDD) that I want to set as my home server with the ability to add 2 more drives (for RAID5 if possible) later using its two USB 3.1 Gen 2 (10gbps).

Do not use USB drives in a multi-device scenario. Best avoid actively using them at all. Use USB drives for at most daily backups.

I wouldn’t advocate for RAID5. I’d also advocate against RAID to begin with in a homelab setting unless you have special uptime requirements (e.g. often away from home for prolonged periods) or an insane amount of drives.

I will mostly use 40/128GB of its capacity with no idea how to make use of the rest.

I use spare SSD space for write-through bcache. You need to make the decision to use it early on because you need to format the HDDs with bcache beneath the FS and post-formatting conversions are hairy at best.
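A rough sketch of such a setup, not a definitive guide; device names are examples and the UUID is the one printed by make-bcache (or bcache-super-show):

  make-bcache -B /dev/sdb                            # HDD becomes the backing device
  make-bcache -C /dev/nvme0n1p3                      # spare SSD space becomes the cache device
  echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach   # attach the cache set to the backing device
  cat /sys/block/bcache0/bcache/cache_mode           # writethrough is the default mode
  mkfs.btrfs /dev/bcache0                            # the filesystem goes on top of the bcache device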

most of what I read online predate kernel 6,2 (which improved BTRFS RAID56 reliability).

Still unstable and only for testing purposes. Assume it will eat your data.

Atemu ,
@Atemu@lemmy.ml avatar

The problem is on the logic level. What happens when a drive drops out but the other does not? Well, it will continue to receive writes because a setup like this is tolerant to such a fault.

Now imagine both connections are flaky and the currently available drive drops out as well. Our setup isn’t that fault tolerant, so the FS goes read-only and throws IO errors on read.
But, as the sysadmin takes a look, the drive that first dropped out re-appears, so they mount the filesystem again from that drive and continue the workload.

Now we have a split brain. The drive that dropped out first missed the changes that happened to the other drive. When the other drive comes back, they’ll have diverged states. Btrfs can’t merge these.

That’s just one possible way this can go wrong. A simpler one I alluded to is a lost write, where a drive will claim to have permanently written something but, if power was cut at that moment and the same sector is read upon restart, it will not actually contain the new data. If that happens to all copies of a metadata chunk, goodbye btrfs.

Atemu ,
@Atemu@lemmy.ml avatar

It’s not the mixing that’s bad, it’s using USB in any kind of multi-device setup or even using USB drives for active workloads at all.

Atemu ,
@Atemu@lemmy.ml avatar

There is none. NTFS is a filesystem you should only use if you need Windows compatibility anyways. Even though Linux natively supports it these days, it’s still primarily a Windows filesystem.

Atemu ,
@Atemu@lemmy.ml avatar

If you’re only using this filesystem on Linux anyways, absolutely.

Atemu ,
@Atemu@lemmy.ml avatar

From what I’ve seen, that’s a great way to corrupt your filesystem.

Atemu ,
@Atemu@lemmy.ml avatar

I’m still in the process of optimizing stuff around Linux (e.g. media drive filesystem)

What do you mean by that?

Atemu ,
@Atemu@lemmy.ml avatar

I dont want weird archives or anything, just to copy my filesystem to another drive.

For proper backups, you do want “weird archives” with integrity checks, versioning, deduplication and compression. Regular files cannot offer that (at least not efficiently so).

Atemu ,
@Atemu@lemmy.ml avatar

Even with btrfs “weird archives” such as Borg’s or restic’s are preferred for backups.
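For example, a minimal restic workflow (repository and source paths are placeholders):

  restic -r /mnt/backup/repo init
  restic -r /mnt/backup/repo backup /home/me
  restic -r /mnt/backup/repo check   # verify repository integrity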

Atemu ,
@Atemu@lemmy.ml avatar

It’s proprietary I guess? That’s never good.

Atemu ,
@Atemu@lemmy.ml avatar

Note how I said that it’s never good. Being proprietary might not be bad in some cases but it’s never good either.

Atemu ,
@Atemu@lemmy.ml avatar

Your image might be too large, some instances have size restrictions.
