There have been multiple accounts created with the sole purpose of posting advertisements or replies containing unsolicited advertising.

Accounts that solely or persistently post advertisements may be terminated.

Interested in Linux, FOSS, data storage systems, unfucking our society and a bit of gaming.

Nixpkgs committer. (Probably won’t be active much anymore.)


Linux 6.10 To Merge NTSYNC Driver For Emulating Windows NT Synchronization Primitives

Going through my usual scanning of all the “-next” Git subsystem branches of new code set to be introduced for the next Linux kernel merge window, a very notable addition was just queued up… Linux 6.10 is set to merge the NTSYNC driver for emulating the Microsoft Windows NT synchronization primitives within the kernel for...

Atemu , avatar

Old reddit absolutely had its issues. The new and newnew design is just decisively worse however.

[HELP] Option for Variable Refresh is gone after installing new graphics card (PowerColor 6750 XT)

Howdy. I just installed a new graphics card in my gaming rig, and now the option for Variable Refresh Rate is gone from the Display Settings when I log into a Gnome Xorg session. I swapped out my trusty Vega 64 for a new PowerColor 6750 XT. Before the swap, I always signed into an Xorg session and the option for Variable Refresh...

Atemu , avatar

Does it work if you enable VRR via xorg config?

Which xorg driver are/were you using, amdgpu or modesetting?
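If it's the amdgpu DDX, one way to force VRR on via config is a snippet along these lines (file path and Identifier are placeholders; VariableRefresh is an xf86-video-amdgpu option):

```
# /etc/X11/xorg.conf.d/20-amdgpu.conf (path is an example)
Section "Device"
    Identifier "AMD Graphics"
    Driver     "amdgpu"
    Option     "VariableRefresh" "true"
EndSection
```

Then log out and back into the Xorg session and check whether the setting reappears.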

Help with HDD

I have a 4TB HDD that I use to store music, films, images, and text files. I have a 250GB SDD that I use to install my OS and video games. So far I didn’t have any problem with this setup, obviously it’s a bit slower when it reads the HDD but nothing too serious, but lately it’s gotten way worse, where it just lags too...

Atemu , avatar

Monitor I/O on the drive; is anything using it while your system is idle?

What’s I/O like when loading an album?
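A rough sketch of how to watch that, assuming the sysstat tools are installed:

```
# extended device stats every 2 seconds; compare %util and the await columns
# while the system is idle vs. while loading an album
iostat -x 2
```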

Atemu , avatar

This reads like a phrase from Half as Interesting.

Atemu , avatar

Set your prefix to GRUB’s directory on the boot partition using the (hd1,gpt1) syntax (use ls to find the right partition), then load the “normal” module. From then on, you should have regular GRUB again and should be able to boot your OS to properly fix GRUB.
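From the rescue shell, the sequence looks roughly like this (the partition (hd1,gpt1) is just an example; substitute whatever ls shows for your setup, and note the prefix is (hd1,gpt1)/grub instead if /boot is its own partition):

```
grub rescue> ls
grub rescue> set prefix=(hd1,gpt1)/boot/grub
grub rescue> set root=(hd1,gpt1)
grub rescue> insmod normal
grub rescue> normal
```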

Atemu , avatar

It’s too early to tell; you must investigate further.

Atemu , avatar

XZ is a slog to compress and decompress but compresses a bit smaller than zstd.

zstd is quite quick to compress, very quick to decompress, scales to many cores (vanilla xz is single-threaded), and scales a lot further at the quicker end of the compression speed <-> file size trade-off spectrum, all while using the same format.
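As a rough illustration (the compression levels are arbitrary examples; -T0 tells zstd to use all cores, -k keeps the input file):

```shell
# Create a small compressible sample, then compress it with both tools.
printf 'example data %.0s' $(seq 1 5000) > sample.txt
zstd -q -19 -T0 -k sample.txt   # writes sample.txt.zst
xz -q -9 -k sample.txt          # writes sample.txt.xz
ls -l sample.txt sample.txt.zst sample.txt.xz
```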

How the xz backdoor highlights a major flaw in Nix

The main issue is the handling of security updates within the Nixpkgs ecosystem, which relies on Nix’s CI system, Hydra, to test and build packages. Due to the extensive number of packages in the Nixpkgs repository, the process can be slow, causing delays in the release of updates. As an example, the updated xz 5.4.6 package...

Atemu , avatar

xz is necessarily in the stdenv. Patching it means rebuilding the world, no matter what you optimise.

Atemu , avatar

AFAIK, affected versions never made it to stable as there was no reason to backport it.

Atemu , avatar

This has nothing to do with “unstable” or the specific channel. It could have happened on the stable channel too, depending on the timing.

Atemu , avatar

It was not vulnerable to this particular attack because the attack didn’t specifically target Nixpkgs. It could have very well done so if they had wanted to.

Atemu , avatar

This blog post misses entirely that this has nothing to do with the unstable channel. It just happened to only affect unstable this time because it gets updates first. If we had found out about the xz backdoor two months later (totally possible; we were really lucky this time), this would have affected a stable channel in exactly the same way. (It’d be slightly worse actually because that’d be a potentially breaking change too but I digress.)

I see two ways to “fix” this:

  • Throw a shitton of money at builders. I could see this getting staging-next rebuild times down to just 1-2 days which I’d say is almost acceptable. This could even be a temporary thing to reduce cost; quickly renting an extremely large on-demand fleet from some cloud provider for a day whenever a critical world rebuild needs to be done which shouldn’t be too often.
  • Implement pure grafting for important security patches through a second overlay-like mechanism.
Atemu , avatar

This would better be done in the front-end rather than a comment bot.

Atemu , avatar

I don’t like the Piped bot at all.

What’s posted on the internet should be the canonical source of some content, not a proxy for it. If users prefer a proxy, they should configure their clients to redirect to the proxy. Piped instances come and go and the entire project is at the mercy of Google tolerating it/not taking action against it, so it could be gone tomorrow.

I use Piped myself. I have a client-side configuration which simply redirects all YouTube links to my Piped instance. No need for any bots here.

Atemu , avatar

That does not address the point made. It doesn’t matter whether it’s a complex hardware or software component in the stack; they will both fail.

Atemu , avatar

That whole situation was such an overblown idiotic mess. Kagi has always used indices from companies that do far more unethical things than committing the extreme crime of having a CEO who has stupid opinions on human rights.
I 100% agree with Vlad’s response to this whole thing and anyone who thinks otherwise should question what exactly it is they’re criticising.

I don’t like Brave (super shady IMHO) and certainly not their CEO but I didn’t sign up for a 100% ethically correct search engine, I signed up for a search engine with innovative features and good search results. The only viable alternatives are to use 100% not ethically correct search indices with meh (Google) to bad (Bing, DDG) search results. If you’re going to tell me how Google and M$ are somehow ethical, I’m going to have to laugh at you.

The whole argument amounts to whining about the status quo and bashing the one company that tries anything to change it. The only way to get away from the Google monopoly is alternative indices. Yes those alternatives may not be much more ethical than friggin Google. So what.

Atemu , avatar

Your search results look very different to mine:

Did you disable Grouped Results?

All the LLM-generated “top 10” listicles are grouped into one large block I can safely ignore. (I could hide them entirely but the visual grouping allows for easy mental filtering, so I haven’t bothered.) Your weird top10 fake site does not show up.

But yes, as the linked article says, Kagi is primarily a proxy for Google with some extra on top. This is, unfortunately, a feature as Google’s index still reigns supreme for general purpose search. It absolutely is bad and getting worse but sadly still the best you can get. Using only non-Google indices would just result in bad search results.
The Google-ness is somewhat mitigated by Kagi-exclusive features such as the LLM garbage grouping.

What Google also cannot do is highlighted in my screenshot: You can customise filtering and ranking.
The first search result is a Reddit thread with some decent discussion because I configured Kagi to prefer Reddit search results. In the case of household appliances, this doesn’t do a whole lot as I have not researched trusted/untrusted sources in this field yet but it’s very noticeable in fields like programming where I have manually ranked sites.

Kagi is not “all about” privacy. It’s a factor, sure, but ultimately you still have to trust a U.S. company. Better than “trusting” a known abuser (Google, M$) but without an external audit, I wouldn’t put too much weight into this.
The index ain’t it either as it’s mostly Google though sometimes a bit better.
What really sets it apart is the features. Customised ranking as well as blocking some sites outright (bye bye pinterest and userbenchmark) are immensely useful. So is the filtering of garbage results that Google still likes to return.

Atemu , avatar

I personally have not found Kagi’s default search results to be all that impressive

At their worst, they’re as bad as Google’s. For me however, this is a great improvement over using bing/Google proxies which would be the alternative.

maybe if I took the time to customize, I might feel differently.

That’s the killer feature IMHO.

Atemu , avatar

I think you’re underestimating how huge of an undertaking a half-decent search index is, much less a good one.

Atemu , (edited ) avatar

Whether this is bad depends on your threat model. You must also consider that other search engines are able to easily identify you without you explicitly identifying yourself; if you can’t fool them, you certainly can’t fool Google, for instance. And that’s even ignoring the immense identifying potential of user behaviour.

Billing supports OpenNode AFAICT which I guess you could funnel your Moneros through but meh.

Edit: Phrasing.

Atemu , avatar

Is “Grouped Results” disabled in settings?

Atemu , avatar

Certainly better than the U.S. in that regard but I wouldn’t consider Germany “resilient” either.

Atemu , avatar

Waitwaitwaitwaitwait, haha Intel did us dirty again. There is no performance improvement whatsoever, they just lowered the internal resolution. The 10% “performance improvement” is simply the difference between 2.0x and 2.3x upscaling. Malicious fuckers.

There may be a quality improvement but that cannot be determined by anyone affiliated with Intel as they’re clearly using every opportunity to lie about this. WTF?

Atemu , avatar

I think it could be because Google may offer them quite a bit longer hardware support. They had to go with some industrial SoC for the FP5 to get Qualcomm to offer even a half decent hardware support cycle.

Atemu , avatar

Sorry, can’t answer that as my crystal ball is broken at the moment.

Atemu , avatar

Merge is not the issue here, rebase would do the same.

Atemu , avatar

For merge you end up with this nonsense of mixed commits and merge commits like A->D->B->B’->E->F->C->C’ where the ones with the apostrophe are merge commits.

Your notation does not make sense. You’re representing a multi-dimensional thing in one dimension. Of course it’s a mess if you do that.

Your example is also missing a crucial fact required when reasoning about merges: The merge base.
Typically a branch is “branched off” from some commit M. D’s and A’s parent would be M (though there could be any amount of commits between A and M). Since A is “on the main branch”, you can conclude that D is part of a “patch branch”. It’s quite clear if you don’t omit this fact.

I also don’t understand why your example would have multiple merges.

Here’s my example of a main branch with a patch branch; in 2D because merges can’t properly be represented in one dimension:

M - A - B - C - C'
             /
    D - E - F

The final code ought to look the same, but now if you’re debugging you can’t separate the feature patch from the main path code to see which part was at fault.

If you use a feature branch workflow and your main branch is merged into, you typically want to use first-parent bisects. They’re much faster too.
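A minimal sketch of what a first-parent view buys you, on a throwaway repo (assumes git ≥ 2.28 for init -b; the commit names loosely mirror the diagram above):

```shell
# Build a tiny repo: main gets A, a feature branch gets D and E, then a merge.
dir=$(mktemp -d) && cd "$dir" && git init -q -b main
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m "M: initial"
git checkout -q -b feature
git commit -q --allow-empty -m "D: wip"
git commit -q --allow-empty -m "E: more wip"
git checkout -q main
git commit -q --allow-empty -m "A: main work"
git merge -q --no-ff -m "merge feature" feature

# First-parent view: only M, A and the merge commit show up -- one line per branch.
git log --first-parent --oneline main
```

`git bisect start --first-parent` (git ≥ 2.29) walks that same chain, which is why such bisects are much faster.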

Atemu , avatar

Because when debugging, you typically don’t care about the details of wip, some more stuff, Merge remote-tracking branch ‘origin/master’, almost working, Merge remote-tracking branch ‘origin/master’, fix some tests etc. and would rather follow logical steps being taken in order with descriptive messages such as component: refactor xyz in preparation for feature, component: add do_foo(), component: implement feature using do_foo() etc.

Atemu , avatar

…or you simply rebase the subset of commits of your branch onto the rewritten branch. That’s like 10 simple button presses in magit.
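A command-line sketch of that operation on a throwaway repo (assumes git ≥ 2.28 for init -b; branch and file names are placeholders):

```shell
# Throwaway repo: main is rewritten (amended) after feature branched off it.
dir=$(mktemp -d) && cd "$dir" && git init -q -b main
git config user.email demo@example.com && git config user.name demo
echo base > shared.txt && git add shared.txt && git commit -q -m "base"
old_base=$(git rev-parse main)
git checkout -q -b feature
echo feature > feature.txt && git add feature.txt && git commit -q -m "feature work"
git checkout -q main
git commit -q --amend -m "base (rewritten)"   # history rewrite on main

# Replay only feature's own commits (everything after old_base) onto rewritten main.
git rebase -q --onto main "$old_base" feature
```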

Atemu , avatar

You should IMO always do this when putting your work on a shared branch

No. You should never squash as a rule unless your entire team can’t be bothered to use git correctly and in that case it’s a workaround for that problem, not a generally good policy.

Automatic squashes make it impossible to split your work into logical units; they reduce every feature branch to a single commit, which is quite stupid.
If you ever needed to look at a list of feature branch changes with one feature branch per line for some reason, the correct tool to use is a first-parent log. In a proper git history, that will show you all the merge commits on the main branch; one per feature branch; as if you had squashed.

Rebase “merges” are similarly stupid: You lose the entire notion of what happened together as a unit of work; what was part of the same feature branch and what wasn’t. Merge commits denote the end of a feature branch and together with the merge base you can always determine what was committed as part of which feature branch.
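That determination is a one-liner, sketched here on a throwaway repo (assumes git ≥ 2.28 for init -b): the feature branch’s commits are exactly what’s reachable from the merge’s second parent but not its first.

```shell
# Tiny repo: feature branch with D and E merged into main.
dir=$(mktemp -d) && cd "$dir" && git init -q -b main
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m "M"
git checkout -q -b feature
git commit -q --allow-empty -m "D"
git commit -q --allow-empty -m "E"
git checkout -q main
git merge -q --no-ff -m "merge feature" feature

# Everything that came in via the feature branch:
git log --oneline 'HEAD^1..HEAD^2'
```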

Atemu , avatar

The only difference between a rebase-merge and a rebase is whether main is reset to it or not. If you kept the main branch label on D and added a feature branch label on G’, that would be what @andrew meant.

Atemu , avatar

you also lose the merge-commits, which convey no valuable information of their own.

In a feature branch workflow, I do not agree. The merge commit denotes the end of a feature branch. Without it, you lose all notion of what was and wasn’t part of the same feature branch.

Atemu , avatar

Note that I didn’t say that you should never squash commits. You should do that but with the intention of producing a clearer history, not as a general rule eliminating any possibly useful history.

Atemu , avatar

The thing is, you can get your cake and eat it too. Rebase your feature branches while in development and then merge them to the main branch when they’re done.

Atemu , avatar

They were mentioned because a file they are the code owner of was modified in the PR.

The modifications came from another branch which you accidentally(?) merged into yours. The problem is that those commits weren’t in master yet, so GH considers them to be part of the changeset of your branch. If they were in master already, GH would only consider the merge commit itself part of the change set and it does not contain any changes itself (unless you resolved a conflict).

If you had rebased atop of the other branch, you would have still had the commits of the other branch in your changeset; it’d be as if you tried to merge the other branch into master + your changes.

Atemu , avatar

I am not. Read the context mate.

Atemu , avatar

That article is interesting and important but it does not show any causal links between lockdowns and the disappearance.

It is, for example, also possible that it was merely displaced by SARS-CoV2.

Atemu , avatar

I consider those measures to be included in “lockdown” but it’s beside the point: the paper contains no evidence that those measures made it disappear, just that it disappeared.

Atemu , avatar

No, they’ve got the same information as us. That’s why they explicitly say:

when Covid pandemic lockdowns and social distancing appeared to have halted circulation

It is still speculation, not data.

I’d tend to agree with the speculation but it’s still speculation.

Will antivirus be more significant on Linux desktop after this xz-util backdoor?

I understand that no Operating System is 100% safe. Although this backdoor likely only affects certain Linux desktop users, particularly those running unstable Debian or testing builds of Fedora (like versions 40 or 41), could this be a sign that antivirus software should be more widely used on Linux desktops? ( I know...

Atemu , avatar


You still need to trust a full Linux kernel and x86 hardware system.

Atemu , avatar

Pretty much any?

Headless distros won’t really differ in RAM usage. The only generic OS property that I could realistically see saving significant resources in this regard would be 32-bit, but that’s… eh.

What’s more important is how you utilize the limited resources. If you have to resort to containers for everything and run 50 instances of postgres, redis etc. because the distro doesn’t ship the software you want to run natively, that won’t work.

For NAS purposes and a few web services though, even containers would likely work just fine.

Atemu , avatar

Just a hunch but I’d look into rtkit. A bad process with realtime priority could starve out others.

Temporarily disable rtkit and log out.

Atemu , avatar

The only important instance I know of would be your audio server (pipewire, pulse) which could also explain why audio continues to work.

how do I disable rtkit? It seems to just start up regardless of what I do.

Masking the service should do it.
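Assuming the systemd unit is named rtkit-daemon.service (as it is on most distros), that would be something like:

```
sudo systemctl mask --now rtkit-daemon.service
```

`--now` also stops the running instance; `unmask` reverses it later.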

How do you manage your headphone cables?

I recently switched from wireless to wired headphones (Samson SR-850, probably the best for the very reasonable price) and my chair’s wheels instantly started eating its cable. Right now I’m using a small plastic hook that came with a face mask to keep it off the floor, but I’d like to hear other solutions.

Atemu , avatar

Link already died?

Atemu , avatar

Ah, indeed. No idea why it didn’t work yesterday.
