
Interested in Linux, FOSS, data storage systems, unfucking our society and a bit of gaming.

I help maintain Nixpkgs.

github.com/Atemu
reddit.com/u/Atemu12 (Probably won’t be active much anymore.)


How the xz backdoor highlights a major flaw in Nix (shadeyg56.vercel.app)

The main issue is the handling of security updates within the Nixpkgs ecosystem, which relies on Nix’s CI system, Hydra, to test and build packages. Due to the extensive number of packages in the Nixpkgs repository, the process can be slow, causing delays in the release of updates. As an example, the updated xz 5.4.6 package...

Atemu ,
@Atemu@lemmy.ml avatar

No.

Atemu ,
@Atemu@lemmy.ml avatar

xz is necessarily in the stdenv. Patching it means rebuilding the world, no matter what you optimise.
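If you want to verify that yourself, something like this should show the dependency chain (a sketch, assuming a flakes-enabled Nix; exact output depends on your nixpkgs revision):

    # Show how stdenv pulls in xz at build time:
    nix why-depends --derivation nixpkgs#stdenv nixpkgs#xz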

Atemu ,
@Atemu@lemmy.ml avatar

AFAIK, affected versions never made it to stable as there was no reason to backport it.

Atemu ,
@Atemu@lemmy.ml avatar

This has nothing to do with “unstable” or the specific channel. It could have happened on the stable channel too, depending on the timing.

Atemu ,
@Atemu@lemmy.ml avatar

That does not address the point made. It doesn’t matter whether it’s a complex hardware or software component in the stack; they will both fail.

Atemu ,
@Atemu@lemmy.ml avatar

That whole situation was such an overblown idiotic mess. Kagi has always used indices from companies that do far more unethical things than committing the extreme crime of having a CEO who has stupid opinions on human rights.
I 100% agree with Vlad’s response to this whole thing and anyone who thinks otherwise should question what exactly it is they’re criticising.

I don’t like Brave (super shady IMHO) and certainly not their CEO but I didn’t sign up for a 100% ethically correct search engine, I signed up for a search engine with innovative features and good search results. The only viable alternatives are to use 100% not ethically correct search indices with meh (Google) to bad (Bing, DDG) search results. If you’re going to tell me how Google and M$ are somehow ethical, I’m going to have to laugh at you.

The whole argument amounts to whining about the status quo and bashing the one company that tries anything to change it. The only way to get away from the Google monopoly is alternative indices. Yes those alternatives may not be much more ethical than friggin Google. So what.

Atemu ,
@Atemu@lemmy.ml avatar

Your search results look very different to mine:

https://lemmy.ml/pictrs/image/01eae1b8-2367-4533-a739-a59b944b4946.png

Did you disable Grouped Results?

All the LLM-generated “top 10” listicles are grouped into one large block I can safely ignore. (I could hide them entirely but the visual grouping allows for easy mental filtering, so I haven’t bothered.) Your weird top10 fake site does not show up.

But yes, as the linked article says, Kagi is primarily a proxy for Google with some extra on top. This is, unfortunately, a feature as Google’s index still reigns supreme for general purpose search. It absolutely is bad and getting worse but sadly still the best you can get. Using only non-Google indices would just result in bad search results.
The Google-ness is somewhat mitigated by Kagi-exclusive features such as the LLM garbage grouping.

What Google also cannot do is highlighted in my screenshot: You can customise filtering and ranking.
The first search result is a Reddit thread with some decent discussion because I configured Kagi to prefer Reddit search results. In the case of household appliances, this doesn’t do a whole lot as I have not researched trusted/untrusted sources in this field yet but it’s very noticeable in fields like programming where I have manually ranked sites.

Kagi is not “all about” privacy. It’s a factor, sure, but ultimately you still have to trust a U.S. company. Better than “trusting” a known abuser (Google, M$), but without an external audit, I wouldn’t put too much weight on this.
The index ain’t it either as it’s mostly Google, though sometimes a bit better.
What really sets it apart is the features. Customised ranking as well as blocking some sites outright (bye bye pinterest and userbenchmark) are immensely useful. So is the filtering of garbage results that Google still likes to return.

Atemu ,
@Atemu@lemmy.ml avatar

I personally have not found Kagi’s default search results to be all that impressive

At their worst, they’re as bad as Google’s. For me however, this is a great improvement over using Bing/Google proxies, which would be the alternative.

maybe if I took the time to customize, I might feel differently.

That’s the killer feature IMHO.

Atemu ,
@Atemu@lemmy.ml avatar

I think you’re underestimating how huge of an undertaking a half-decent search index is, much less a good one.

Atemu , (edited )
@Atemu@lemmy.ml avatar

Whether this is bad depends on your threat model. Additionally, you must also consider that other search engines are able to easily identify you without you explicitly identifying yourself. If you can’t fool abrahamjuliot.github.io/creepjs/, you certainly can’t fool Google for instance. And that’s even ignoring the immense identifying potential of user behaviour.

Billing supports OpenNode AFAICT which I guess you could funnel your Moneros through but meh.

Edit: Phrasing.

Atemu ,
@Atemu@lemmy.ml avatar

Is “Grouped Results” disabled in settings?

Atemu ,
@Atemu@lemmy.ml avatar

Certainly better than the U.S. in that regard but I wouldn’t consider Germany “resilient” either.

Atemu ,
@Atemu@lemmy.ml avatar

Waitwaitwaitwaitwait, haha Intel did us dirty again. There is no performance improvement whatsoever, they just lowered the internal resolution. The 10% “performance improvement” is simply the difference between 2.0x and 2.3x upscaling. Malicious fuckers.

There may be a quality improvement but that cannot be determined by anyone affiliated with Intel as they’re clearly using every opportunity to lie about this. WTF?

Atemu ,
@Atemu@lemmy.ml avatar

I think it could be because Google may offer them a considerably longer hardware support window. They had to go with some industrial SoC for the FP5 to get Qualcomm to offer even a half-decent hardware support cycle.

Atemu ,
@Atemu@lemmy.ml avatar

Sorry, can’t answer that as my crystal ball is broken at the moment.

Atemu ,
@Atemu@lemmy.ml avatar

Merge is not the issue here, rebase would do the same.

Atemu ,
@Atemu@lemmy.ml avatar

For merge you end up with this nonsense of mixed commits and merge commits like A->D->B->B’->E->F->C->C’ where the ones with the apostrophe are merge commits.

Your notation does not make sense. You’re representing a multi-dimensional thing in one dimension. Of course it’s a mess if you do that.

Your example is also missing a crucial fact required when reasoning about merges: the merge base.
Typically a branch is “branched off” from some commit M. D’s and A’s parent would be M (though there could be any number of commits between A and M). Since A is “on the main branch”, you can conclude that D is part of a “patch branch”. It’s quite clear if you don’t omit this fact.
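For reference, git will happily compute the merge base for you (branch names here are hypothetical):

    # Print the common ancestor of the two branches, i.e. commit M above:
    git merge-base main patch-branch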

I also don’t understand why your example would have multiple merges.

Here’s my example of a main branch with a patch branch; in 2D because merges can’t properly be represented in one dimension:


    M - A - B - C - C'
                 /
        D - E - F

The final code ought to look the same, but now if you’re debugging you can’t separate the feature patch from the main path code to see which part was at fault.

If you use a feature-branch workflow where branches are merged into main, you typically want to use first-parent bisects. They’re much faster too.
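A minimal sketch of such a bisect (requires git ≥ 2.29; the known-good ref is hypothetical):

    # Only step along main's first-parent chain, one candidate per merged branch:
    git bisect start --first-parent
    git bisect bad HEAD
    git bisect good v1.0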

Atemu ,
@Atemu@lemmy.ml avatar

Because when debugging, you typically don’t care about the details of “wip”, “some more stuff”, “Merge remote-tracking branch ‘origin/master’”, “almost working”, “Merge remote-tracking branch ‘origin/master’”, “fix some tests” etc. and would rather follow logical steps being taken in order, with descriptive messages such as “component: refactor xyz in preparation for feature”, “component: add do_foo()”, “component: implement feature using do_foo()” etc.

Atemu ,
@Atemu@lemmy.ml avatar

…or you simply rebase the subset of commits of your branch onto the rewritten branch. That’s like 10 simple button presses in magit.
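In plain git, that amounts to something like this (a sketch; all branch names are hypothetical):

    # Replay only the commits unique to 'feature' (those after 'old-base')
    # onto the rewritten branch:
    git rebase --onto rewritten-branch old-base feature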

Atemu ,
@Atemu@lemmy.ml avatar

You should IMO always do this when putting your work on a shared branch

No. You should never squash as a rule unless your entire team can’t be bothered to use git correctly, and in that case it’s a workaround for that problem, not a generally good policy.

Automatic squashes make it impossible to split a branch into logical units of work. It reduces every feature branch to a single commit, which is quite stupid.
If you ever needed to look at a list of feature-branch changes with one feature branch per line for some reason, the correct tool to use is a first-parent log. In a proper git history, that will show you all the merge commits on the main branch, one per feature branch, as if you had squashed.
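For example (assuming the main branch is literally named main):

    # One line per merged feature branch, as if each had been squashed:
    git log --first-parent --oneline main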

Rebase “merges” are similarly stupid: you lose the entire notion of what happened together as a unit of work, of what was part of the same feature branch and what wasn’t. Merge commits denote the end of a feature branch, and together with the merge base you can always determine what was committed as part of which feature branch.

Atemu ,
@Atemu@lemmy.ml avatar

The only difference between a rebase-merge and a rebase is whether main is reset to it or not. If you kept the main branch label on D and added a feature branch label on G’, that would be what @andrew meant.

Atemu ,
@Atemu@lemmy.ml avatar

you also lose the merge-commits, which convey no valuable information of their own.

In a feature branch workflow, I do not agree. The merge commit denotes the end of a feature branch. Without it, you lose all notion of what was and wasn’t part of the same feature branch.

Atemu ,
@Atemu@lemmy.ml avatar

Note that I didn’t say that you should never squash commits. You should do that but with the intention of producing a clearer history, not as a general rule eliminating any possibly useful history.

Atemu ,
@Atemu@lemmy.ml avatar

The thing is, you can get your cake and eat it too. Rebase your feature branches while in development and then merge them to the main branch when they’re done.

Atemu ,
@Atemu@lemmy.ml avatar

They were mentioned because a file they are the code owner of was modified in the PR.

The modifications came from another branch which you accidentally(?) merged into yours. The problem is that those commits weren’t in master yet, so GH considers them to be part of the changeset of your branch. If they were in master already, GH would only consider the merge commit itself part of the changeset, and it does not contain any changes itself (unless you resolved a conflict).

If you had rebased atop of the other branch, you would have still had the commits of the other branch in your changeset; it’d be as if you tried to merge the other branch into master + your changes.
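You can preview that changeset locally (a sketch; remote and branch names assumed):

    # Commits reachable from your branch but not from master,
    # i.e. what GH will treat as the PR's changes:
    git log --oneline origin/master..HEAD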

Atemu ,
@Atemu@lemmy.ml avatar

I am not. Read the context mate.

Atemu ,
@Atemu@lemmy.ml avatar

That article is interesting and important but it does not show any causal links between lockdowns and the disappearance.

It is, for example, also possible that it was merely displaced by SARS-CoV-2.

Atemu ,
@Atemu@lemmy.ml avatar

I consider those measures to be included in “lockdown” but it’s beside the point: The paper contains no evidence that those measures made it disappear, just that it disappeared.

Atemu ,
@Atemu@lemmy.ml avatar

No, they’ve got the same information as us. That’s why they explicitly say:

when Covid pandemic lockdowns and social distancing appeared to have halted circulation

It is still speculation, not data.

I’d tend to agree with the speculation but it’s still speculation.

Will antivirus be more significant on Linux desktop after this xz-util backdoor?

I understand that no Operating System is 100% safe. Although this backdoor likely only affects certain Linux desktop users, particularly those running unstable Debian or testing builds of Fedora (like versions 40 or 41), could this be a sign that antivirus software should be more widely used on Linux desktops? (I know...

Atemu ,
@Atemu@lemmy.ml avatar

Sorta.

You still need to trust a full Linux kernel and x86 hardware system.

Atemu ,
@Atemu@lemmy.ml avatar

Pretty much any?

Headless distros won’t really differ in RAM usage. The only generic OS property that I could realistically see saving significant resources in this regard would be 32-bit but that’s… eh.

What’s more important is how you utilize the limited resources. If you have to resort to containers for everything and run 50 instances of postgres, redis etc. because the distro doesn’t ship the software you want to run natively, that won’t work.

For NAS purposes and a few web services though, even containers would likely work just fine.

Atemu ,
@Atemu@lemmy.ml avatar

Just a hunch but I’d look into rtkit. A bad process with realtime priority could starve out others.

Temporarily disable rtkit and log out.

Atemu ,
@Atemu@lemmy.ml avatar

The only important instance I know of would be your audio server (pipewire, pulse) which could also explain why audio continues to work.

how do I disable rtkit? It seems to just start up regardless of what I do.

Masking the service should do it.
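On a systemd system that would be (assuming the service name rtkit-daemon, which is what rtkit ships):

    # Stop rtkit now and prevent anything from starting it again:
    sudo systemctl mask --now rtkit-daemon.service
    # Revert once you're done testing:
    sudo systemctl unmask rtkit-daemon.service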

How do you manage your headphone cables?

I recently switched from wireless to wired headphones (Samson SR-850, probably the best for the very reasonable price) and my chair’s wheels instantly started eating its cable. Right now I’m using a small plastic hook that came with a face mask to keep it off the floor, but I’d like to hear other solutions.

Atemu OP , (edited )
@Atemu@lemmy.ml avatar

Arch is on 5.6.1 as of now: archlinux.org/packages/core/x86_64/xz/

We at Nixpkgs narrowly avoided having it reach a channel used by users, and we don’t seem to be affected by the backdoor.

Atemu OP ,
@Atemu@lemmy.ml avatar

We know that sshd is targeted but we don’t know the full extent of the attack yet.

Atemu ,
@Atemu@lemmy.ml avatar

Security knowledge and ethical concerns are two separate things. Whether we like it or not, we pay online creators through private data we must give to entities who will use it against our best interests.

Atemu ,
@Atemu@lemmy.ml avatar

What a great argument! You didn’t even read the first sentence…

It isn’t an ethical concern and hasn’t been since the 90s.

You’ll have to explain to me how not compensating someone for their work has been ethical since the 90s.

Atemu ,
@Atemu@lemmy.ml avatar

Cool story bro but you clearly still didn’t even read the first sentence of what I wrote.

Atemu ,
@Atemu@lemmy.ml avatar

Yes and that’s precisely the point. You can make the decision not to pay and there are good reasons to do so (I do so too) but you must recognise that someone is still not getting paid for their work.

Atemu ,
@Atemu@lemmy.ml avatar

To the person receiving the money, it is worth it. Else they wouldn’t be doing it.

Atemu ,
@Atemu@lemmy.ml avatar

Size of diff between btrfs subvolume and snapshot is 11GiB

WDYM by “diff”?

Also, I forgot to mention: if you want to know what’s taking up how much space on your btrfs, try btdu. It uses a sampling-based approach and will therefore never be 100% accurate, but it gets quite accurate after running for a little while.
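Usage is about as simple as it gets (a sketch; the mount point is hypothetical, and ideally it’s a mount of the top-level subvolume so btdu can see everything):

    # Randomly sample extents and attribute them to paths, interactively:
    sudo btdu /mnt/btrfs-root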

Atemu ,
@Atemu@lemmy.ml avatar

Note that the diff does not necessarily correlate with the amount of data that changed, nor with how much additional space the snapshot takes.
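If you want to know what a snapshot actually costs, exclusive vs. shared usage is the more useful number (a sketch, assuming a reasonably recent btrfs-progs; the path is hypothetical):

    # 'Exclusive' is data referenced only by this snapshot,
    # roughly what deleting it would free:
    sudo btrfs filesystem du -s /mnt/snapshots/root-2024-04-01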

Atemu ,
@Atemu@lemmy.ml avatar

engineers don’t like reinventing the wheel.

Engineers aren’t the ones to decide whether to reinvent the wheel or not.

Atemu ,
@Atemu@lemmy.ml avatar

I don’t see how undervolting would result in power savings on modern CPUs if you’re not up against clock limits, as the CPU would simply boost higher.

Atemu ,
@Atemu@lemmy.ml avatar

This is the way. You need to check whether the CPU cores and the package are mostly in the deepest C-states they can reach. If not, you’ve got a task or an IO device causing a lot of wasted power.
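Two common ways to check that (both tools are widely packaged):

    # Per-core and package C-state residency, printed periodically:
    sudo turbostat
    # Interactive overview, including an "Idle stats" tab:
    sudo powertop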
