
Interested in Linux, FOSS, data storage systems, unfucking our society and a bit of gaming.

I help maintain Nixpkgs.

github.com/Atemu
reddit.com/u/Atemu12 (Probably won’t be active much anymore.)


Atemu ,

As per btrfs fi df /home, used space is 82.86 GiB, not 83.21 GiB.

That’s just used data. The global used metric likely incorporates metadata etc. too. System as well as the GlobalReserve are probably accounted as fully used since they’re, well, reserved.
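
If you want to see where exactly the difference comes from, btrfs can break the allocation down by block-group type (assuming a reasonably recent btrfs-progs):

# shows Data, Metadata, System and GlobalReserve allocation and usage in one view
sudo btrfs filesystem usage /home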

As per btrfs fi du -s /home , used space is 63.11 GiB.


     Total   Exclusive  Set shared  Filename
  63.11GiB    13.64GiB    49.01GiB  /home

While according to du -hs /home, 64GiB is used.

Likely compression or inline extents. btrfs only reports apparent size to du and friends unfortunately.

Also, maximum space used should be close to 72 GiB as per btrfs fi du -s / and 73 GiB as per du -hs /, if btrfs fi usage includes all subvolumes. ‘/home’ and ‘/’ are on separate subvolumes.

Your home has a lot of shared extents which indicates to me that you have at least one snapshot of it.

You also wrote 13.6GiB of new data to your home since the snapshot. Assuming a similar amount of data was deleted/overwritten since, that would add up to 76GiB. If there’s perhaps one or two more snapshots, that would explain the rest.

Snapshots are “free” only so long as you don’t write or delete any data in the origin.
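
If you want to check that theory, something along these lines should show it (compsize is a separate tool you may have to install; adjust the paths to your setup):

# list snapshot subvolumes on the filesystem containing /home
sudo btrfs subvolume list -s /home
# compare apparent size against actual (compressed) on-disk size
sudo compsize /home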

Do I have Burnout?

I really just need to talk about this to someone. I’m in college and I’ve always loved to learn, but now I don’t feel motivated to do my school work or to study, but at the same time, when a test rolls around and I don’t know how to answer the questions I get stressed and care about trying to do well. I’ve also...

Atemu ,

I was in a similar situation as you. I pulled through and let me tell you: It did not get better :(

I managed to get good grades and such because I already knew most of the core concepts and had some luck in the intelligence lottery w.r.t. the specific kinds of intelligence beneficial in my field, which enabled me to learn the new parts quite easily. If that’s not the case for you, YMMV.

If I were doing it again, I’d stop if possible (a degree was not strictly required in my situation but beneficial). Failing that, I’d find a way to make it less stressful (i.e. fewer classes).

N100 Mini PC w/ 3xNVMe?

Not sure why this doesn’t exist. I don’t need 12TB of storage. When I had a Google account I never even crossed 15GB. 1TB should be plenty for myself and my family. I want to use NVMe since it is quieter and smaller. 2230 drives would be ideal. But I want 1 boot drive and 2 x storage drives in RAID. I guess I could...

Atemu ,

I do like the idea of using USB drives for storage, though…

I wholeheartedly don’t.

Atemu ,

Well that depends on how you define malware ;)

Atemu ,

Then let’s ship your PC, that’s how containers work, right?

Atemu ,

They are quite solid, but be aware that the web UI is dog slow and the menus are weirdly designed.

Atemu ,

You can say removed. Especially when literally referring to the word itself?

Not on lemmy.ml apparently…

Atemu ,

I’ve got three hard problems preventing me from using Wayland (sway/wlroots) right now:

  1. No global shortcuts for applications, especially legacy applications; I need teamspeak3 to be able to read my PTT keys in any application. Yes, I know that could be used to keylog (so the default should be off), but let me make that decision.
  2. Button-to-pixel latency is significantly worse. I don’t need V-Sync in the terminal or Emacs. Let me use immediate presentation in those applications.
  3. VRR is weird. I’d love it if desktop apps were V-synced via VRR, but the way it currently works, idle apps make the display drop to 48Hz (because they don’t refresh) while the refresh rate never goes back up when typing, further exacerbating button-to-pixel delay.
Atemu ,

If I can get the portal to just forward every keypress (or a configurable subset) to an xwayland window, that’d work for me. (I am aware of the security implications.)

Atemu ,

Yeah and that’s great but my point is that I don’t see an obvious way to use it for that in its current implementation. I’m sure you could build it but it’s simply not built yet.

Atemu ,

Without any cold hard data, this isn’t worth discussing.

Atemu ,

That is just a specific type of drive failure and only certain software RAID solutions are able to even detect corruption through the use of checksums. Typical “dumb” RAID will happily pass on corrupted data returned by the drives.

RAID only serves to prevent downtime due to drive failure. If your system has very high uptime requirements and a drive just dropping out must not affect the availability of your system, that’s where you use RAID.

If you want to preserve data however, there are much greater hazards than drive failure: Ransomware, user error, machine failure (PSU blows up), facility failure (basement flooded) are all similarly likely. RAID protects against exactly none of those.

Proper backups do provide at least decent mitigation against most of these hazards in addition to failure of any one drive.

If you love your data, you make backups of it.

With a handful of modern drives (<~10) and a restore time of 1 week, you can expect storage uptime of >99.68%. If you don’t need more than that, you don’t need RAID. I’d also argue that if you do indeed need more than that, you probably also need higher uptime in other components than the drives through redundant computers at which point the benefit of RAID in any one of those redundant computers diminishes.
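
For a rough sketch of where a number like that comes from (assuming an annualised failure rate of about 1.7% per drive, which is in the ballpark of published fleet statistics, and one failure at a time):

expected failures per year ≈ 10 drives × 0.017 ≈ 0.17
expected downtime per year ≈ 0.17 × 1 week ≈ 1.2 days
uptime ≈ 1 − 1.2 / 365 ≈ 99.67%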

Ingenious ways to measure power draw

So I wanted to get myself a Kill-a-watt. Being who I am, I wanted information regarding its accuracy, especially at low power draws. I found a comparison with industry-grade equipment (Fluke is about the best out there in handheld electrical meters). It’s not encouraging, so I thought about a more proper meter, but it’s...

Atemu ,

Yes. Low power draws add up. 5W here, 10W there, and you’re already looking at >3€ per month.
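
To put a number on that (assuming electricity at roughly 0.30€/kWh, a typical European household rate):

15W × 24h × 30 days = 10.8 kWh per month
10.8 kWh × 0.30€/kWh ≈ 3.24€ per month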

Atemu ,

The problem is that it’s not just 15W; I merely used that as an example of how even just two “low power” devices can cause an effect that you can measure in dollars rather than cents.

Atemu ,

I am ashamed of GitLab.

Don’t be. GitLab has to comply with the law.

It’s the law that’s broken, not GitLab.

It’s absolutely ridiculous they took it down even though Nintendo didn’t DMCA the Suyu project directly.

Um, no. If shitty corpo X (ab)uses the DMCA to send you a takedown notice for some project and you also host a fork of the same project, you must take down the fork too.

“You see, while this might be the exact same code, the name is totally different, so we don’t have to take it down!” will not hold up in court.

Whether the DMCA request is valid or not is an entirely separate question. You must still comply or open yourself up to legal liabilities.

The process to object to the validity of the request is included in the screenshot.

Atemu ,

This isn’t about copyright, it’s about whether the software’s purpose is to break DRM. Ninty argued that Yuzu’s primary purpose is to enable copyright infringement, which is forbidden under the DMCA; the DMCA forbids not just the infringement itself but even building tools that enable it. The latter is the critical (and IMHO insane) part.

Now, all of that is obviously BS but Ninty SLAPPed Yuzu to death, so it doesn’t matter what’s just or unjust; they win. God bless corporate America.

Atemu ,

In the screenshot it says that GitLab received a DMCA request.

How does data sent over the internet know where to go?

I saw a map of undersea internet cables the other day and it’s crazy how many branches there are. It got me wondering - if I’m (based in the UK) playing an online game from someone in Japan for example, how is the route worked out? Does my ISP know that to get to place X, the data has to be routed via cable 1, cable 2 etc....

Atemu , (edited )

Your home router probably has no clue where that is, so it goes to its upstream router and asks if they know, this process repeats until one figures it out and you get a route.

That’s not how that works. The router merely sends the packet to the next directly connected router.

Let’s take a simplified example:

If you were in the middle of bumfuck nowhere, USA and wanted to send a packet to Kyouto, Japan, your router would send the packet to another router it’s connected to on the west coast*. From your router’s perspective, that’s it; it just sends it over and never “thinks” about that packet again.
The router on the west coast receives the packet, looks at the headers, sees that it’s supposed to go to Japan and sends it over a link to Hawaii.
The router in Hawaii again looks at the packet, sees that it’s supposed to go to Japan and sends it over its link to Toukyou.
The router in Toukyou then sends it over its link to Kyouto and it’ll be locally routed further to the exact host from there but you get the idea.

This is generally how IP routing works; always one hop to the next.

What I haven’t explained is how your router knows that it can reach Kyouto via the west coast or how the west coast knows that it can reach Kyouto via Hawaii.
This is where routing protocols come in. You can look up how exactly these work in detail but what’s important is their purpose: Build a “map” of the internet which you can look at to tell which way to send a packet at each intersection depending on its destination.

In operation, each router then simply looks at the one intersection it represents on the “map” and can then decide which way (link) to send each individual packet over.
The “map” (routing table) is continuously updated as conditions change.

Never at any point do routers establish a fixed route from one point to another or anything resembling a connection; the internet protocol is explicitly connectionless.

  * In reality, there will be a few local routers between the gateway router sitting in your home and the big router that has a big link to the west coast.
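
You can watch this hop-by-hop behaviour from a Linux box (traceroute may need to be installed); the address below is just a documentation-range placeholder, so substitute a real destination:

# which next hop and interface would this host pick for that destination?
ip route get 203.0.113.7
# show the hand-off from router to router along the way
traceroute 203.0.113.7
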
Atemu ,

Depends on how much of our needs would be covered. Not needing to work to survive is different from not needing to work to live a comfortable life which is again different from living a luxurious life.

Atemu ,

Could you upload the output of systemd-analyze plot?
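
(In case it helps: systemd-analyze plot writes an SVG timeline of the boot process to stdout, so something like the following captures it.)

systemd-analyze plot > boot.svg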

Atemu ,

Kernel livepatching is super niche and I don’t see what it has to do with the topic at hand.

Atemu ,

I feel it was a direct reply to the comment above.

At no point did it mention livepatching.

Dinosaurs don’t want to give up their extended LTS kernels because upgrading is a hassle and often requires rebooting, occasionally to a bad state.

No, Dinosaurs want LTS because it’s stable; it’s in the name.

You can’t load your proprietary shitware kernel module into any kernel other than the one whose ABI it was built for. You can’t run your proprietary legacy heap-of-crap service on newer kernels where the kernel APIs function slightly differently.

how can you bring your userbase forward so you don’t have to keep slapping security patches onto an ancient kernel?

That still has nothing to do with livepatching.

Atemu ,

Amen.

Atemu ,

You probably could. Though I don’t see the point in powering a home server over PoE.

A random SBC in the closet? WAP? Sure. Not a home server though.

Atemu ,

It depends on whether the game wants that or not; it must explicitly opt-in to that. If it wasn’t Steam offering their extremely nonintrusive DRM, those games would likely use more intrusive DRM systems instead such as their own launchers or worse.
It also somehow doesn’t feel right to call it “DRM” since it has none of the downsides of “traditional” DRM systems: It works offline, it doesn’t cause performance issues and doesn’t get in your way (at least it never even once got in mine).

I’d much rather launch the games through Steam anyways though. Do you manually open the games’ locations and then open their executables or what? A nice GUI with favourites, friends and a big “play” button is just a lot better IMHO.

Atemu ,

Am I the only one around here who does backups?

Atemu ,

I use scrapped drives for my cold backups, you can make it work.

Though in case of extreme financial inability, I’d make an exception to the “no backup, no pity” rule ;)

Atemu ,

I’m trying to do that; but all of the newer drives i have are being used in machines, while the ones that arent connected to anything are old 80gb ide drives, so they aren’t really practical to backup 1tb of data on.

It’s possible to make that work; through discipline and mechanism.

You’d need like 12 of them, but if you carve your data into <80GB chunks, you can store each chunk on a separate scrap drive and thereby back up 1TB of data.

Individual files >80GB are a bit more tricky but can also be handled by splitting them into parts.
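
With standard tools, that could look something like this (file names made up):

# cut a large image into 70GiB pieces that each fit on one scrap drive
split --bytes=70G big-backup.img big-backup.img.part-
# later, reassemble it
cat big-backup.img.part-* > big-backup.img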

What such a system requires is rigorous documentation of where stuff is; an index. I use git-annex for this purpose, which comes with many mechanisms to aid this sort of setup, but it’s quite a beast in terms of complexity. You could do every important thing it does manually, through discipline, without unreasonable effort.
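
As a rough sketch of how the git-annex flavour of this could look, assuming one repository as the index and one clone per scrap drive (all names below are made up):

# the index repository on your main machine; move your files in, then:
git init ~/annex && cd ~/annex
git annex init "index"
git annex add .                          # records checksums, replaces files with symlinks
git commit -m "track files"
# a clone on one of the scrap drives acts as a remote
git clone ~/annex /mnt/scrap01/annex
(cd /mnt/scrap01/annex && git annex init "scrap01")
git remote add scrap01 /mnt/scrap01/annex
git fetch scrap01                        # lets git-annex learn about the clone
git annex copy --to=scrap01 some-chunk/  # put that chunk's content onto the drive
git annex whereis some-chunk/            # the index: which drive(s) hold what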

For the most part i prevented myself from doing the same mistake again by adding a 1gb swap partition at the beginning of the disk, so it doesn’t immediatly kill the partition if i mess up again.

Another good practice is to attempt any changes on a test model. You’d create a sparse test image (truncate -s 1TB disk.img), mount via loopback and apply the same partition and filesystem layout that your actual disk has. Then you first attempt any changes you plan to do on that loopback device and then verify its filesystems still work.
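
For example (device names are illustrative; double-check them on your system):

truncate -s 1T disk.img                           # sparse 1TB file, occupies almost no real space
sudo losetup --find --show --partscan disk.img    # prints the loop device, e.g. /dev/loop0
sudo sfdisk -d /dev/sdX | sudo sfdisk /dev/loop0  # replay the real disk's partition table onto it
sudo mkfs.ext4 /dev/loop0p1                       # recreate the filesystem(s)
# now rehearse the risky change against /dev/loop0 and fsck the result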

Atemu ,

That would require all of those disks to be connected at once which is a logistical nightmare. It would be hard with modern drives already but also consider that we’re talking IDE drives here; it’s hard enough to connect one of them to a modern system, let alone 12 simultaneously.

With an index, you also gain the ability to lose and restore partial data. With a RAID array it’s all or nothing, which requires wasting a bunch of space to be able to restore everything at once. Using an index, you can simply check which data was lost and prepare another copy of that data on a spare drive.

Atemu ,

The problem is that i didn’t mean to write to the hdd, but to a usb stick; i typed the wrong letter out of habit from the old pc.

For that issue, I recommend never using unstable device names and always using /dev/disk/by-id/.
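
For example (the ID below is made up; yours will contain the stick’s model and serial number):

ls -l /dev/disk/by-id/
sudo dd if=image.iso of=/dev/disk/by-id/usb-Vendor_Stick_1234-0:0 bs=4M status=progress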

As for the hard drives, I’m already trying to do that, for bigger files i just break them up with split. I’m just waiting until i have enough disks to do that.

I’d highly recommend to start backing up the most important data ASAP rather than waiting to be able to back up all data.

Atemu ,

Note that all of this is in the context of backups; duplicates for the purpose of restoring the originals in case something happens to them. Though it is at least possible to use an indexed cold-storage system like the one I describe for more frequent access, I would find that very inconvenient for “hot” data.

how would you use an index’d storage base if the drives weren’t connected

You take a look at your index to see where the data you need is located, connect to that single location (i.e. plug in a drive) and then copy it back to the place it went missing from.

The difference is that, with an index, you gain granularity. If you only need file A, you don’t need to connect all 12 backup drives, just the one that has file A on it.

Atemu ,

You’ve highlighted it pretty well. But you’re wrong about one thing.

Stable means the packages’ interfaces remain stable. I mean that term in a very broad sense; a GUI layout would be included in my definition of an interface here.

The only feasible way of achieving that goal is to freeze the versions of the software and abstain from updating it. This creates a lot of work because newsflash: The world around you is not stable (not at all). Some parts must be updated because of changes in the world around you. The most critical here is security patches. Stable distros do backport those but usually only bother porting “important” security patches because it’s so much effort.

Another aspect of this is that you usually can’t introduce support for these changes (new features, standards, hardware) without risking breakage of older interfaces, so stable distros simply don’t receive new features.

Windows 95 is one of the most stable operating systems in the world but there’s a reason you’re not using that (besides the security issues): At some point, you do need newer versions of interfaces to, well, interface with the world. There’s newer versions of software with additional features, new communications standards and newer hardware platforms that you might want/need.

As an example: Even if you backported and patched all security issues, Firefox from 10 years ago would be quite useless today as it doesn’t implement many of the standard interfaces that the modern web requires.

What you are wrong about though is that stable means no breakage or that things “run smoothly”. That’s not the case; stable only means that it’s the same sources of breakage and same level of roughness. No new sources of breakage are introduced by the distro but the existing ones remain.
Stable distros do try and fix some bugs but what is and isn’t a bug sometimes isn’t easily determined as one man’s bug is another man’s feature. If things weren’t running smoothly before, stable distros will ensure that they will run similarly roughly tomorrow but not any worse.
Well, for parts that the distro can control that is. Things outside the distro’s control will inevitably break. Tools made for interfacing with 3rd party services will break when those 3rd party services inevitably change their interface; regardless of how stable the distro is (or rather precisely because of how stable the distro is).

Stable interfaces and no local regressions are what characterises stable distros. Not “no breakage”, “system stability” or whatever; those qualities are independent of stable vs. fresh distros and a lot more nuanced.

Atemu OP ,

That tool does not claim to support custom resolutions in any way.

Linux distro for selfhosting server

So I have been running a fair amount of selfhosted services over the last decade or so. I have always been running this on a Ubuntu LTS distribution running on an Intel NUC machine. Most, if not all of my services run in a docker container, and using a docker compose file that brings everything up. The server is headless. I...

Atemu ,

If you’re using containers for everything anyways, the distro you use doesn’t much matter.

If Ubuntu works for you and switching away would mean significant effort, I see no reason to switch outside of curiosity.

Atemu ,

While these can help with other issues, they will do nothing if the driver has an unrecoverable issue.

Atemu ,

amdgpu has a recovery mechanism built in that can be triggered using sudo cat /sys/kernel/debug/dri/N/amdgpu_gpu_recover where N is the number of the DRI device in question. You could bind a shortcut to doing that I presume.

Atemu ,

Which part of the path does not exist?

Is it the correct GPU number?
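
A way to check (debugfs is only readable as root, so a plain ls/cat without sudo will fail):

sudo ls /sys/kernel/debug/dri/
sudo cat /sys/kernel/debug/dri/0/name   # should mention amdgpu if 0 is the right card
sudo cat /sys/kernel/debug/dri/1/name   # otherwise try the other node(s)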

Atemu ,

Yeah, Xorg (and the apps) will likely die. There is a Wayland protocol in the works to be able to gracefully handle driver resets but I’m not sure about its implementation status.

Atemu ,

The operating system is explicitly not virtualised with containers.

What you’ve described is closer to paravirtualisation where it’s still a separate operating system in the guest but the hardware doesn’t pretend to be physical anymore and is explicitly a software interface.
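
An easy way to see that for yourself, assuming Docker (or Podman) is installed:

uname -r                           # kernel version on the host
docker run --rm alpine uname -r    # same kernel version inside the container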

Atemu , (edited )

Google has massive swing; there’s a whole industry around getting Google to prefer your low quality crap nobody wants to see over others’ low quality crap nobody wants to see.

If Google has finally figured out a metric to measure “helpfulness” of a website and punishes unhelpful websites, a bunch of dogshit that would have otherwise gotten top spots may have been banished to page 2.
Reddit results would naturally creep up because of that (and therefore get a lot more clicks), even if they didn’t change at all.

Atemu ,

Source?

Atemu ,

A really, really cool solution for a problem nobody has.

Atemu ,

Do you have a media center and/or server already? It’s a bit overkill for the former but would be well suited as the latter with its dedicated GPU, which your NAS might not have/you may not want to have in your NAS.

What do you think about Abstract Wikipedia?

Wikifunctions is a new site that has been added to the list of sites operated by WMF. I definitely see uses for it in automating updates on Wikipedia and bots (and also for programmers to reference), but their goal is to translate Wikipedia articles to more languages by writing them in code that has a lot of linguistic...

Atemu ,

Languages simply don’t agree on how to split the usage of words. Or grammatical case. Or if, when and how to do agreement.

Just for the sake of example: how are they going to keep track of case in a way that doesn’t break Hindi, or Basque, or English, or Guarani? Or grammatical gender for a word like “milk”? (not even the Romance languages agree in it.) At a certain point, it gets simply easier to write the article in all those languages than to code something to make it for you.

I don’t know what the WMF is planning here but what you’re pointing out is precisely what abstraction would solve.

If you had an abstract way to represent a sentence, you would be independent of any one order or case or whatever other grammatical feature. In the end you obviously do need actual sentences with these features. To get those, you’d build a mechanism that converts the abstract sentence representation into concrete sentences for specific languages, each correctly constructed according to that language’s rules.

Same with gender. What you’d store would not be that e.g. some german sentence is talking about the feminine milk but rather that it’s talking about the abstract concept of milk. How exactly that abstract concept is represented in words would then be up to individual languages to decide.

I have absolutely no idea whether what I’m talking about here would be practical to implement but in theory it could work.

Atemu ,

Somewhere inside that abstraction you’ll need to have the pieces of info that Spanish “leche” [milk] is feminine, that Zulu “ubisi” [milk] is class 11, that English predicative uses the ACC form, so goes on.

Of course you do. The beauty of abstraction is that these language-specific parts can be factored into generic language-specific components. The information you’re actually trying to convey can be denoted without any language-specific parts or exceptions and that’s the important part for Wikipedia’s purpose of knowledge preservation and presentation.

you’ll need people to mark a multitude of distinctions in their sentences, when writing them down, that the abstraction layer would demand for other languages. Such as tagging the “I” in “I see a boy” as “+masculine, +older-person, +informal” so Japanese correctly conveys it as “ore” instead of “boku”, "atashi, “watashi” etc.

For writing a story or prose, I agree.

For the purpose of writing Wikipedia articles, this specifically and explicitly does not matter very much. Wikipedia strives to have one unified way of writing within a language. Whether the “I” is masculine or not would be a parameter that would be applied to all text equally (assuming I-narrator was the standard on Wikipedia).

Even the idea of “abstract concept of milk” doesn’t work as well as it sounds like, because languages will split even the abstract concepts in different ways. For example, does the abstract concept associated with a living pig includes its flesh?

If your article talks about the concept of a living pig in some way and in the context of that article, it doesn’t matter whether the flesh is included, then you simply use the default word/phrase that the language uses to convey the concept of a pig.

If it did matter, you’d explicitly describe the concept of “a living pig with its flesh” instead of the more generic concept of a living pig. If that happened to be the default of the target language or the target language didn’t differentiate between the two concepts, both concepts would turn into the same terms in that specific language.

The same applies to your example of the different forms of “I” in Japanese. To create an appropriate Japanese “rendering” of an abstract sentence, you’d use the abstract concept of “a nerdy, shy kid referring to itself” as e.g. the subject. The Japanese language “renderer” would turn that into a sentence like ”僕は。。。” while the English “renderer” would simply produce “I …”.

A language is not an agent; it doesn’t “do” something. You’d need people to actively insert those pieces of info for each language, that’s perhaps doable for the most spoken ones, but those are the ones that would benefit the least from this.

Yes, of course they would have to do that. The cool thing is that it’d only have to be done once, in a generic manner, and from that point on you could use that definition to “render” any abstract article into any language you like.

You must also keep in mind that this effort has to be measured relative to the alternatives. In this case, the alternative is to translate each and every article and all changes done to them into every available language. At the scale of Wikipedia, that is not an easy task and it’s been made clear that that’s simply not happening.

(Okay, another alternative would be to remain on the status quo with its divergent versions of what are supposed to be the same articles containing the same information.)
