Multiple accounts have been created with the sole purpose of posting advertisements or replies containing unsolicited advertising.

Accounts which solely or persistently post advertisements may be terminated.

Interested in Linux, FOSS, data storage systems, unfucking our society and a bit of gaming.

I help maintain Nixpkgs.

github.com/Atemu
reddit.com/u/Atemu12 (Probably won’t be active much anymore.)


What is this block-rate-estim?? Suddenly came to life

While I was writing a shell script (doing this the past several days) just a few minutes ago, my PC fans spun up for no apparent reason. I thought it might be the baloo process, but looking at the running processes I see it’s named block-rate-estim. It takes 6.2% CPU time and has been running for minutes, on my modern 8...

Atemu ,

FreeTube won’t have anything to do with H.265, as YouTube does not serve that format in any way.

Atemu ,

If you’re worried about that, I can recommend a service like Tailscale which does not require permanently open ports to the outside world, offering quite a bit more security than an exposed traditional VPN server.

Atemu ,

It’s a central server (that you could actually self-host publicly if you wanted to) whose purpose is to facilitate P2P connections between your devices.

If you were outside your home network and wanted to connect to your server from your laptop, both devices would be connected to the TS server independently. When attempting to send IP packets between the devices, the initiating device (i.e. your laptop) would establish a direct WireGuard tunnel to the receiving device. This process is managed by the individual devices, while the central TS service merely facilitates communication between them for the purpose of establishing this connection.
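
For illustration, a minimal sketch of what that looks like in practice (the hostname is made up; both machines are assumed to be logged into the same tailnet):

    # on the laptop: bring the Tailscale client up
    tailscale up

    # list peers and their connection state
    tailscale status

    # reports whether packets to "homeserver" take a direct
    # WireGuard path or fall back through a DERP relay
    tailscale ping homeserver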

Atemu ,

TS is a lot easier to set up than WG and requires neither a publicly accessible IP address nor any open ports whatsoever. It’s not really comparable to setting up WG yourself, especially w.r.t. security.

Atemu ,

Good luck packaging new stuff

Packaging is generally hard on any distro.

Compared to a traditional distro, though, the packaging difficulty distribution is quite skewed with Nix. Packages that follow common conventions are quite a lot easier to package thanks to the abstractions Nixpkgs has built for said conventions, while some packages are near impossible to package due to the unique constraints Nix (rightfully) enforces.

good luck creating new options

Creating options is really simple actually. Had I known earlier that you could do that, I would have done so when I was starting out.

Creating good options APIs is an art to be mastered but you don’t need to do that to get something going.
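
To back that up, a minimal sketch of a NixOS module declaring a custom option (the name services.myService.greeting is made up for illustration):

    { lib, config, ... }:

    {
      # declare the option
      options.services.myService.greeting = lib.mkOption {
        type = lib.types.str;
        default = "hello";
        description = "Greeting to write to /etc/greeting.";
      };

      # use its value elsewhere in the system configuration
      config.environment.etc."greeting".text =
        config.services.myService.greeting;
    }

Import the file into your configuration and the option can be set like any other.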

good luck cross-compiling

Have you ever tried cross-compiling on a traditional distro? Cross-compiling using Nixpkgs is quite easy in comparison.
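
For instance, building GNU hello for aarch64 from an x86_64 machine is a one-liner (a sketch assuming flake-enabled Nix; the package and targets are arbitrary examples):

    # cross-compile using the pre-wired cross package sets in Nixpkgs
    nix build nixpkgs#pkgsCross.aarch64-multiplatform.hello

    # other targets work the same way, e.g. RISC-V
    nix build nixpkgs#pkgsCross.riscv64.hello

No manual toolchain setup, no sysroot wrangling; Nixpkgs builds (or substitutes) the cross toolchain for you.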

actually good luck understanding how to configure existing packages

Yeah, no way to do so other than to read the source.

It’s usually quite understandable without knowing the exact details though; just look at the function arguments.
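
For illustration, a sketch of the two common knobs; the package and argument names here are hypothetical, so check the actual function arguments of the package you care about:

    # flip a build flag that the package's function exposes
    somePackage.override { enableFeatureX = true; }

    # or modify the derivation itself, e.g. apply a patch on top
    somePackage.overrideAttrs (old: {
      patches = (old.patches or [ ]) ++ [ ./my-fix.patch ];
    })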

Also beats having no option to configure packages at all. Good luck slightly modifying an Arch package. It has no abstractions for this whatsoever; you have to copy and edit the source. Oh and you need to keep it up to date yourself too.

Gentoo-like standardised flags would be great and are being worked on.

good luck getting any kind of PR merged without the say-so of a chosen few

Hi, one of the “chosen few” here: That’s a security feature.

Not a particularly good one, mind you, but a security feature nonetheless.

There’s also a merge bot now running in the wild which allows maintainers of packages to merge automatic updates on their maintained packages, alleviating this a bit.

have fun understanding why some random package is being installed and/or compiled when you switch to a new configuration.

It can be mysterious sometimes but once you know the tools, you can directly introspect the dependency tree that is core to the concept of Nix and figure out exactly what’s happening.

I’m not aware of the existence of any such tools in traditional distros though. What do you do on e.g. Arch if your hourly shot of -Syu goes off and fetches some package you’ve never seen before due to an update to some other package? Manually look at PKGBUILDs?
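
The tools I mean are along these lines (store paths abbreviated; a sketch, not an exhaustive list):

    # show the dependency chain that pulls a package into your system
    nix why-depends /run/current-system /nix/store/...-somepackage

    # list everything the system closure depends on at runtime
    nix-store --query --requisites /run/current-system

    # or the reverse direction: what references a given store path
    nix-store --query --referrers /nix/store/...-somepackage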

Atemu ,

;)

Atemu ,

As it says on the website, this is still in development and not actually ready for use by mere mortals quite yet. It hopefully will be at some point though as that is its explicit goal.

Atemu ,

Oh I’m sure your health insurance would love to know the condition of your teeth to increase your rates.

Atemu ,

Realtek LAN is usually not too bad.

For WiFi, you want MediaTek or Intel though.

Atemu ,

Steam is its own package manager, and native games usually assume that an FHS-conformant environment is present. Neither of those meshes well with Nix, which notoriously has nothing comparable to an FHS and usually requires everything to be defined in its terms.
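
Nixpkgs does ship an FHS-emulating wrapper for exactly this case though; a hedged example, with the binary name made up:

    # run a random dynamically linked binary inside a Steam-like
    # FHS environment (requires allowing unfree packages)
    nix run nixpkgs#steam-run -- ./SomeNativeGame.x86_64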

Atemu ,

Yes, yes they will. If you’re the sole user, they’d identify you from your behaviour anyways.

I don’t think an internet proxy will help very much w.r.t. privacy, but it will make you a lot more susceptible to being blocked.

Atemu ,

v3 is worth it though

[citation needed]

Sometimes the improvements are not apparent by normal benchmarks, but would have an overall impact - for instance, if you use filesystem compression, with the optimisations it means you now have lower I/O latency, and so on.

Those would show up in any benchmark that is sensitive to I/O latency.

Also, again, [citation needed] that march optimisations measurably lower I/O latency for compressed I/O. For that to happen it is a necessary condition that compression is a significant component in I/O latency to begin with. If 99% of the time was spent waiting for the device to write the data, optimising the 1% of time spent on compression by even as much as 20% would not gain you anything of significance. This is obviously an exaggerated example but, given how absolutely dog slow most I/O devices are compared to how fast CPUs are these days, not entirely unrealistic.
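
Spelled out with Amdahl’s law, where p is the fraction of total time spent compressing and s the speedup of that part (using the exaggerated 1%/20% numbers from above):

    S_{\text{total}} = \frac{1}{(1 - p) + \frac{p}{s}} = \frac{1}{0.99 + \frac{0.01}{1.2}} \approx 1.002

In other words, a 20% faster compressor buys you about 0.2% lower total latency in that scenario.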

Generally, the effect of such esoteric “optimisations” is so small that the length of your unix username has a greater effect on real-world performance. I wish I was kidding.
You have to account for a lot of variables and measurement biases if you want to make factual claims about them. You can observe performance differences on the order of 5-10% just due to slight memory layout changes from different compile flags, without any actual performance improvement due to the change in code generation.

That’s not my opinion; that’s rather well-established fact. Read here:

So far, I have yet to see data that shows a significant performance increase from march optimisations which either controlled for the measurement bias or showed an effect that couldn’t be explained by measurement bias alone.

There might be an improvement and my personal hypothesis is that there is at least a small one but, so far, we don’t actually know.

More importantly, if you’re a laptop user, this could mean better battery life since using more efficient instructions, so certain stuff that might’ve taken 4 CPU cycles could be done in 2 etc.

The more realistic case is that an execution that would have taken 4 CPU cycles on average would then take 3.9 CPU cycles.

I don’t have data on how power scales with varying cycles/task at a constant task/time but I doubt it’s linear, especially with all the complexities surrounding speculative execution.

In my own experience on both my Zen 2 and Zen 4 machines, v3/v4 packages made a visible difference.

“visible” in what way? March optimisations are hardly visible in controlled synthetic tests…

It really doesn’t make sense that you’re spending so much money buying a fancy CPU, but not making use of half of its features…

These features cater towards specialised workloads, not general purpose computing.

Applications which facilitate such specialised workloads and are performance-critical usually have hand-written assembly for the critical paths where these specialised instructions can make a difference. Generic compiler optimisations will do precisely nothing to improve performance in that case.

I’d worry more about your applications not making any use of all the cores you’ve paid good money for. Spoiler alert: Compiler optimisations don’t help with that problem one bit.

Atemu ,

I’d define “bloat” as functionality (as in: program code) present on my system that I cannot imagine ever needing to use.

There will never be a system that is perfectly tailored to my needs because there will always be some piece of functional code that I have no intention of using. Therefore, any system is “bloated”; the question is to what degree.

The degree depends on which kind of resource the “bloat” uses and how much of it. The more significant the resource usage, the more significant the effect of the “bloat”. The kind of resource used determines how critical a given amount of usage is: 5% of power, CPU, I/O, RAM, or disk usage each have a different degree of criticality, for instance.

Some examples:

  • This system has a calendar app installed by default. I don’t use it, so it’s certainly bloat, but I also don’t care because it’s just a few megs on disk at worst and that doesn’t hurt me in any way.
  • Firefox frequently uses most of my RAM and >1% CPU util at “idle” but it’s a useful application that I use all the time, so it’s not bloat.
  • The most critical resource usage of systemd (pid1) on my system is RAM, which is <0.1%. It provides tonnes of essential features required on a modern system and is therefore not even worth thinking about when it comes to bloat.
  • I just noticed that mbrola voices sneaked into my closure again, which is some 700MiB of voice synthesis data for languages I have no need for. Quite a lot of storage for something I’ll never use. This is significant bloat. It appears Firefox is drawing it in, but it looks like that can be disabled via an override (sketch below), so I’ll do that right now.
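
For the record, the kind of override I mean; whether the wrapper argument is really called speechSynthesisSupport is an assumption on my part, so check the wrapper’s source:

    # build Firefox without the speechd/mbrola dependency chain
    firefox.override { speechSynthesisSupport = false; }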

Atemu ,

Which version was that introduced in? 6.9?

Stopping a badly behaved bot the wrong way.

I host a few small low-traffic websites for local interests. I do this for free - and some of them are for a friend who died last year but didn’t want all his work to vanish. They don’t get so many views, so I was surprised when I happened to glance at munin and saw my bandwidth usage had gone up a lot....

Atemu ,

While I wouldn’t put it past tech bros to use such unethical measures for their latest grift, it’s not a given that it’s actually claudebot. Anyone can claim to be claudebot, googlebot, boredsquirrelbot or anything else. In fact it could very well be a competitor aiming to harm Claude’s reputation.

Atemu OP ,

Nvidia has been slowly trying to open up a little over the years: first GBM support in the proprietary driver, then the open OOT module, and finally GSP firmware for the kernel, allowing an OSS kernel module to exist.

The OSS graphics community has obviously shown that it doesn’t want Nvidia’s open module (which is tied to the proprietary driver anyways) and would rather build out its own OSS drivers atop an adapted Nouveau/NOVA. Perhaps Nvidia finally realised this?

I’m sceptical too but for now this appears to be an actually good move from Nvidia?

Atemu OP ,

Yeah, from what I’ve heard, that has been the largest pain point all these years.

Atemu OP ,

Are you sure you replied to the correct comment?

Atemu OP ,

He had the last 6 months or so to work on it. He resigned from the Nouveau project and RH in September and likely joined Nvidia a little while later, where he would have had plenty of time to work on this patch series.

Atemu ,

Why is this being downvoted? It’s clearly labelled as Japanese; if you don’t want to see foreign languages, filter them out.

Atemu ,

Not that I can tell; just an explanation of how df works on Linux and macOS.

Atemu ,

Thank you for your thoughts, I really enjoyed reading them :)

Atemu ,

Note that the 1660 Ti and the 1060 are from entirely different product generations; one is Turing, the other Pascal.

Atemu ,

Statistically it should always be better by now because the resource hog that is called windows slows older systems down.

That’s not how any of this works.

Atemu ,

Plenty more benchmark worse. What’s your point exactly?

Linux 6.10 To Merge NTSYNC Driver For Emulating Windows NT Synchronization Primitives (www.phoronix.com)

Going through my usual scanning of all the “-next” Git subsystem branches of new code set to be introduced for the next Linux kernel merge window, a very notable addition was just queued up… Linux 6.10 is set to merge the NTSYNC driver for emulating the Microsoft Windows NT synchronization primitives within the kernel for...

Atemu ,

Old Reddit absolutely had its issues. The new and new-new designs are just decisively worse, however.

[HELP] Option for Variable Refresh is gone after installing new graphics card (PowerColor 6750 XT)

EDIT: The audio issue on Wayland seems to have magically resolved itself after several reboots, so while I never figured out why the option for VRR disappeared in the Xorg session, I’ve resorted to using Wayland and everything seems to be as it should....

Atemu ,

Does it work if you enable VRR via xorg config?

Which xorg driver are/were you using, amdgpu or modesetting?
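
If not, this is the kind of snippet I mean; a sketch assuming the amdgpu DDX (the file name is arbitrary):

    # /etc/X11/xorg.conf.d/20-amdgpu.conf
    Section "Device"
        Identifier "AMD"
        Driver "amdgpu"
        Option "VariableRefresh" "true"
    EndSection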

Help with HDD

I have a 4TB HDD that I use to store music, films, images, and text files. I have a 250GB SDD that I use to install my OS and video games. So far I didn’t have any problem with this setup, obviously it’s a bit slower when it reads the HDD but nothing too serious, but lately it’s gotten way worse, where it just lags too...

Atemu ,

Monitor I/O on the drive; is anything using it while your system is idle?

What’s I/O like when loading an album?
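
A sketch of how I’d check, assuming the HDD shows up as sda (adjust to your device):

    # per-device utilisation and latency, refreshed every second
    iostat -x 1 sda

    # which processes are generating the I/O
    sudo iotop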

Atemu ,

This reads like a phrase from Half as Interesting.

Atemu ,

The process for this is to set your prefix to the /boot partition using the (hd1,gpt1) syntax (use ls to find it) and then load the “normal” module. From then on, you have regular GRUB again and should be able to boot your OS to properly fix GRUB.
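
Roughly like this at the rescue prompt (the partition is just an example, find yours with ls; if /boot is not a separate partition, the path would be (hd1,gpt1)/boot/grub instead):

    grub rescue> ls
    grub rescue> set root=(hd1,gpt1)
    grub rescue> set prefix=(hd1,gpt1)/grub
    grub rescue> insmod normal
    grub rescue> normal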

Atemu ,

It’s too early to tell; you must investigate further.

Atemu ,

XZ is a slog to compress and decompress but compresses a bit smaller than zstd.

zstd is quite quick to compress, very quick to decompress, scales to many cores (xz compresses single-threaded by default) and scales a lot further on the quicker end of the compression speed <-> file size trade-off spectrum, all while using the same format.
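
As a rough illustration of that trade-off (standard flags; levels picked arbitrarily):

    # xz: best ratio, slow; single-threaded unless you pass -T
    xz -9 -k file
    xz -T0 -9 -k file

    # zstd: multi-threaded, huge speed range within one format
    zstd -T0 -19 file   # slow end, approaching xz-like ratios
    zstd -T0 -3 file    # quick end, still decompresses very fast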

How the xz backdoor highlights a major flaw in Nix (shadeyg56.vercel.app)

The main issue is the handling of security updates within the Nixpkgs ecosystem, which relies on Nix’s CI system, Hydra, to test and build packages. Due to the extensive number of packages in the Nixpkgs repository, the process can be slow, causing delays in the release of updates. As an example, the updated xz 5.4.6 package...

Atemu ,

This has nothing to do with “unstable” or the specific channel. It could have happened on the stable channel too; depending on the timing.

Atemu ,

It was not vulnerable to this particular attack because the attack didn’t specifically target Nixpkgs. It could have very well done so if they had wanted to.

Atemu ,

This blog post misses entirely that this has nothing to do with the unstable channel. It just happened to only affect unstable this time because it gets updates first. If we had found out about the xz backdoor two months later (totally possible; we were really lucky this time), this would have affected a stable channel in exactly the same way. (It’d be slightly worse actually because that’d be a potentially breaking change too but I digress.)

I see two ways to “fix” this:

  • Throw a shitton of money at builders. I could see this getting staging-next rebuild times down to just 1-2 days, which I’d say is almost acceptable. This could even be a temporary thing to reduce cost: quickly renting an extremely large on-demand fleet from some cloud provider for a day whenever a critical world rebuild needs to be done, which shouldn’t be too often.
  • Implement pure grafting for important security patches through a second overlay-like mechanism.
Atemu ,

This would better be done in the front-end rather than a comment bot.

Atemu ,

I don’t like the Piped bot at all.

What gets posted on the internet should be the canonical source of some content, not a proxy for it. If users prefer a proxy, they should configure their clients to redirect to the proxy. Piped instances come and go, and the entire project is at the mercy of Google tolerating it/not taking action against it, so it could be gone tomorrow.

I use Piped myself. I have a client-side configuration which simply redirects all YouTube links to my Piped instance. No need for any bots here.

What do you think about Abstract Wikipedia?

Wikifunctions is a new site that has been added to the list of sites operated by WMF. I definitely see uses for it in automating updates on Wikipedia and bots (and also for programmers to reference), but their goal is to translate Wikipedia articles to more languages by writing them in code that has a lot of linguistic...

Atemu ,

The writer will need to tag things down, to minimal details, for the sake of languages that they don’t care about.

Sure, and that’s likely a good bit of work.

However, you must consider the alternative which is translating the entire text to dozens of languages and doing the same for any update done to said text. I’d assume that to be even more work by at least one order of magnitude.

Many languages are quite similar to one another. An article written in the hypothetical abstract language and tuned on an abstract level to produce good results in German would likely produce good results in Dutch too, and likely wouldn’t need much tweaking for good results in e.g. English. This has the potential to save a ton of work.

This issue affects languages as a whole, and sometimes in ways that you can’t arbitrate through a fixed writing style because they convey meaning.

The point of the abstract language would be to convey the meaning without requiring a language-specific writing style. The language-specific writing style to convey the specified meaning would be up to the language-specific “renderers”.

(For example: if you don’t encode the social gender into the 3rd person pronouns, English breaks.)

That’s up to the English “renderer” to do. If it decides to use a pronoun for e.g. a subject that identifies as male, it’d use “he”. All the abstract language’s “sentence” would contain is the concept of a male-identifying subject. (It probably shouldn’t even encode the fact that a pronoun is used as usage of pronouns instead of nouns is also language-specific. Though I guess it could be an optional tag.)

Often there’s no such thing as the “default”. The example with pig/pork is one of those cases - if whoever is writing the article doesn’t account for the fact that English uses two concepts (pig vs. pork) for what Spanish uses one (cerdo = puerco etc.), and assumes the default (“pig”), you’ll end with stuff like *“pig consumption has increased” (i.e. “pork consumption has decreased”). And the abstraction layer has no way to know if the human is talking about some living animal or its flesh.

No, that’d simply be a mistake in building the abstract sentence. The concept of a pig was used rather than the concept of edible meat made from pig which would have been the correct subject to use in this sentence.

Mistakes like this will happen and I’d even consider them likely to happen but the cool thing here is that “pig consumption has increased”, while obviously slightly wrong, would still be quite comprehensible. That’s an insane advantage considering that this would apply to any language for which a generic “renderer” was implemented.


It ends like that story about a map so large that it represents the terrain accurately being as big as the terrain, thus useless.

As I said in the top, you’ll end with a “map” that is as large as the “terrain”, thus useless. (Or: spending way more effort explicitly describing all concepts that it’s simply easier to translate it by hand.)

I don’t see how that would necessarily be the case. Most sentences on Wikipedia are of a descriptive nature and follow rather simple structures, complicated only for the purpose of aiding text flow. Let’s take the first sentence of the Wikipedia article on Lemmy:

Lemmy is a free and open-source software for running self-hosted social news aggregation and discussion forums.

This could be represented in a hypothetical abstract sentence like this:


    (explanation
     (proper-noun "lemmy")
     (software-facilitating
      :kind FOSS
      :purpose (purposes
                (apply-property 'self-hosted '(news-aggregation-platform discussion-forum)))))

(IDK why I chose lisp to represent this but it felt surprisingly natural.)

What this says is that this sentence explains the concept of lemmy by equating it with the concept of a software which facilitates the combination of multiple purposes.

A language-specific “renderer” such as the English one would then take this abstract representation and turn it into an English sentence:

The concept of an explanation of a thing would then be turned into an explanation sentence. Explanation sentences depend on what it is that is being explained. In this case, the subject is specifically marked as a proper noun which is usually explained using a structure like “<explained thing> is <explanation>”. (An explanation for a different type of word could use a different structure.) Because it’s a proper noun and at the beginning of a sentence, “Lemmy” would be capitalised.

Next the explanation part which is declared as a concept of being software of the kind FOSS facilitating some purpose. The combined concept of an object and its purpose is represented as “<object> for the purpose of <purpose>” in English. The object is FOSS here and specifically a software facilitating some purpose, so the English “renderer” can expand this into “free and open-source software for the purpose of facilitating <purpose>”.

The purpose given is the purpose of having multiple purposes and this concept simply combines multiple purposes into one.
The purposes are two objects to which a property has been applied. In English, the concept of applying a property is represented as “a <property as adjective> <object>”, so in this case “a self-hosted news-aggregation platform” and “a self-hosted online discussion forum”. These purposes are then combined using the standard English method of combining multiple objects, which is listing them: “a self-hosted news-aggregation platform and a self-hosted online discussion forum”. Because both purposes have the same adjective applied, the English “renderer” would likely make the stylistic choice of implicitly applying it to both, which is permitted in English: “a self-hosted news-aggregation platform and online discussion forum”.

It would then be able to piece together this English sentence: “Lemmy is a free and open-source software for the purposes of facilitating a self-hosted news-aggregation platform and online discussion forum.”

You could be even more specific in the abstract sentence in order to get exactly the original sentence but this is also a perfectly valid sentence for explaining Lemmy in English. All just from declaring concepts in an abstract way and transforming that abstract representation into natural language text using static rules.

  • All magazines