
Just some Internet guy

He/him/them 🏳️‍🌈


Max_P ,
@Max_P@lemmy.max-p.me avatar

I think it can also get weird when you call other makefiles, like if you go make -j64 at the top level and that thing goes on to call make on subprojects, that can be a looooot of threads if that -j gets passed down. So even on that 64 core machine, now you have possibly 4096 jobs going, and it surfaces bugs that might not have been a problem when we had 2-4 cores (oh no, make is running 16 jobs at once, the horror).
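For what it’s worth, the usual mitigation is GNU make’s jobserver: it’s only shared down the tree when sub-makes are invoked through the $(MAKE) variable, so the -j64 budget stays global instead of multiplying per level. A sketch, with made-up subproject names:

```make
# The jobserver is inherited only via $(MAKE); `make -j64` then means
# 64 jobs total across the whole tree, not 64 per recursion level.
subprojects:
	$(MAKE) -C libfoo
	$(MAKE) -C libbar

# Hard-coding something like `cd libfoo && make -j8` here would instead
# spawn an independent job pool at every level, which is how job counts explode.
```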

Max_P ,

Easiest for this might be NextCloud. Import all the files into it, then you can get the NextCloud client to download or cache the files you plan on needing with you.

Max_P ,

I’d say mostly because the client is fairly good and works about the way people expect it to work.

It sounds very much like a DropBox/Google Drive kind of use case and from a user perspective it does exactly that, and it’s not Linux-specific either. I use mine to share my KeePass database among other things. The app is available on just about any platform as well.

Yeah NextCloud is a joke in how complex it is, but you can hide it all away using their all in one Docker/Podman container. Still much easier than getting into bcachefs over usbip and other things I’ve seen in this thread.

Ultimately I don’t think there are many tools that can handle caching, downloads, going offline, reconcile differences when back online, in a friendly package. I looked and there’s a page on Oracle’s website about a CacheFS but that might be enterprise only, there’s catfs in Rust but it’s alpha, and can’t work without the backing filesystem for metadata.

Max_P ,

Paywalled medium article? I’ll pass.

Fuck employers that steal from their employees’ paychecks though.

Max_P ,

The page just deletes itself for me when using that. It loads and half a second later it just goes blank. They really don’t want people to bypass it.

Max_P ,

You guys still use fstab? It’s systemd/Linux, you use mount units.

Max_P ,

Yeah that’s what it does, that was a shitpost if it wasn’t obvious :p

Though I do use ZFS, where you configure the mountpoints in the filesystem itself. But it also ultimately generates systemd mount units under the hood. So I really only need one unit, for /boot.
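For reference, a /boot mount unit is short; this is a sketch with a placeholder UUID (the unit file name has to be the systemd-escaped mount path, so /boot becomes boot.mount):

```ini
# /etc/systemd/system/boot.mount (sketch, device path is made up)
[Unit]
Description=EFI system partition

[Mount]
What=/dev/disk/by-uuid/ABCD-1234
Where=/boot
Type=vfat
Options=defaults,noatime

[Install]
WantedBy=local-fs.target
```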

Max_P ,

I forgot about that, I should try it on my new laptop.

Did I just solve the packaging problem? (please feel free to tell me why I'm wrong)

You know what I just realised? These “universal formats” were created to make it easier for developers to package software for Linux, and there just so happens to be this thing called the Open Build Service by OpenSUSE, which allows you to package for Debian and Ubuntu (deb), Fedora and RHEL (rpm) and SUSE and OpenSUSE (also...

Max_P ,

The problem is that you can’t just convert a deb to rpm or whatever. Well you can and it usually does work, but not always. Tools for that have existed for a long time, and there’s plenty of packages in the AUR that just repack a deb, usually proprietary software, sometimes with bundled hacks to make it run.
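For the curious, the repacking is mechanical because a .deb is just an ar archive wrapping a couple of tarballs; here’s a stub built by hand to peek at the structure (alien is the classic converter that automates the real thing, e.g. `alien --to-rpm pkg.deb`):

```shell
# A .deb is an ar(1) archive with three members: a version marker and
# two tarballs (control metadata and the actual file payload).
echo 2.0 > debian-binary
tar czf control.tar.gz -T /dev/null   # empty stand-ins for illustration
tar czf data.tar.gz -T /dev/null
ar rc demo.deb debian-binary control.tar.gz data.tar.gz
ar t demo.deb                         # lists the three members
```

Repackers basically unpack those tarballs and re-wrap the payload in the target distro’s format, which is exactly why it breaks when the libraries underneath differ.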

There’s no guarantee that the libraries of a given distro are at all compatible with the ones of another. For example, Alpine and Void use musl while most others use glibc. These are not binary compatible at all. That deb will never run on Alpine, you need to recompile the whole thing against musl.

What makes a distro a distro is their choice of package manager, the way of handling dependencies, compile flags, package splitting, enabled feature sets, and so on. If everyone used the same binaries for compatibility we wouldn’t have distros, we would have a single distro, like Windows but open-source, and heaven forbid anyone dares switch the compiler flags so it runs 0.5% faster on their brand new CPU.

The Flatpak approach is really more like “fine, we’ll just ship a whole Fedora-lite base system with the apps”. Snaps are similar but they use Ubuntu bases instead (obviously). It’s solving a UX problem, using a particular solution, but it’s not the solution. It’s a nice tool to have so developers can ship a reference environment in which the software is known to run well, and users that just want it to work can use those. But the demand for native packages will never go away, and people will still do it for fun. That’s the nature of open-source. It’s what makes distros like NixOS, Void, Alpine, Gentoo possible: everyone can try a different way of doing things, for different usecases.

If we can even call it a “problem”. It’s my distro’s job to package the software, not the developer’s. That’s how distros work, that’s what they signed up for by making a distro. To take Alpine again for example, they compile all their packages against musl instead of glibc, and it works great for them. That shouldn’t become the developer’s problem to care what kind of libc their software is compiled against. Using a Flatpak in this case just bypasses Alpine and musl entirely because it’s gonna use glibc from the Fedora base system layer. Are you really running Alpine and musl at that point?

And this is without even touching the different architectures. Some distros were faster to adopt ARM than others for example. Some people run desktop apps on PowerPC like old Macs. Fine you add those to the builds and now someone wants a RISC-V build, and a MIPS build.

There are just way too many possibilities to ever end up with a universal platform that fits everyone’s needs. And that’s fine, that’s precisely why developers ship source code, not binaries.

Max_P ,

My experience with AI is it sucks and never gives the right answer, so no, good ol’ regular web search for me.

When half your searches only give you like 2-3 pages of results on Google, AI doesn’t have nearly enough training material to be any good.

Max_P ,

If you want FRP, why not just install FRP? From the looks of it, it even has a LuCI app to control it.

OpenWRT page showing the availability of FRP as an app

NGINX is also available, at a mere 1kb in size for the slim version; the full version is also available, as well as HAProxy. Those will have you more than covered, and support SSL.

Looks like there’s also acme.sh support, with a matching LuCI app that can handle your SSL certificate situation as well.
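As a sketch of the reverse-proxy side (hostnames, ports and cert paths are all placeholders; on OpenWrt the TLS-capable package is nginx-ssl, I believe, and acme.sh would keep the cert files renewed):

```nginx
# Minimal TLS reverse proxy in front of an internal service (placeholders).
server {
    listen 443 ssl;
    server_name home.example.com;

    ssl_certificate     /etc/acme/home.example.com/fullchain.pem;
    ssl_certificate_key /etc/acme/home.example.com/key.pem;

    location / {
        proxy_pass http://192.168.1.10:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```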

How much does it matter what type of harddisk i buy for my server?

Hello, I’m relatively new to self-hosting and recently started using Unraid, which I find fantastic! I’m now considering upgrading my storage capacity by purchasing either an 8TB or 10TB hard drive. I’m exploring both new and used options to find the best deal. However, I’ve noticed that prices vary based on the specific...

Max_P ,

The concern for the specific disk technology is usually around the use case. For example, surveillance drives you expect to be able to continuously write to 24/7, but not at crazy high speeds, and maybe you can expect slow seek times or whatever. Gaming drives I would assume are disposable and just good value for storage size, as you can just redownload your Steam games. A NAS drive will be a little bit more expensive because it’s assumed to be for backups and data storage.

That said in all cases if you use them with proper redundancy like RAIDZ or RAID1 (bleh) it’s kind of whatever, you just replace them as they die. They’ll all do the same, just not with quite the same performance profile.

Things you can check are seek times / latency, throughput both on sequential and random access, and estimated lifespan.

I keep hearing good things about decommissioned HGST enterprise drives on eBay, they’re really cheap.

Max_P ,

I mean, OPs distro choice didn’t help here:

EndeavourOS is an Arch-based distro that provides an Arch experience without the hassle of installing it manually for x86_64 machines. After installation, you’re provided with a lightweight and almost bare-bones environment ready to be explored with your terminal, along with our home-built Welcome App as a powerful guide to help you along.

If you want Arch with actual training wheels you probably want Manjaro or at least a SteamOS fork like Chimera/HoloISO.

It probably would have been much smoother with an actual beginner friendly distro like Nobara and Bazzite, or possibly Mint/Pop for a more classic desktop experience.

It’s not perfect and still has woes, but OP fell for Arch with a fancy graphical installer; it still comes with the expectation of the user being able to maintain an Arch install.

Max_P ,

EndeavourOS isn’t a gaming distro, it’s just an Arch installer with some defaults. It’s still Arch and comes with Arch’s woes. It’s not a beginner-friendly, just-works kind of distro.

Coming from Kinoite, you’d probably want Bazzite if you want a gaming distro: it’s also Fedora Atomic with all the gaming stuff added.

Max_P ,

It would be nice if they’d make “web” search the good old keyword search we used to have that made Google good, now that normies will just use the AI search and it doesn’t have to care about natural language anymore.

Max_P ,

I must be lucky, works just fine for me with SDDM configured for Wayland only, autologin to a Wayland session.


```
max-p@media ~ % cat /etc/sddm.conf
[Autologin]
User=max-p
Session=plasma
#Session=plasma-bigscreen
Relogin=true

[General]
DisplayServer=wayland
```
Max_P ,

Arch. That leads me to believe it’s possibly a configuration issue. Mine is pretty barebones, it’s literally just that one file.

AFAIK the ones in sddm.conf.d are useful because the GUI can focus on just one file without nuking the rest of the user’s configuration. But they all get loaded so it shouldn’t matter.

The linked bug report seems to blame PAM modules, kwallet in particular which I don’t think I’ve got configured for unlock at login since there’s no password to that account in the first place.

Max_P ,

Kbin is an example. But just due to the nature of the protocol, it has to be stored somewhere, and Lemmy also just lets admins view all the individual votes directly in the UI.

Max_P ,

Still report as well, it sends emails to the mods and the admins. Just make sure it’s identifiable at a glance, like just type “CSAM” or whatever 1-2 words makes sense. You can add details after to explain but it needs to be obvious at a glance, and also mods/admins can send those to a special priority inbox to address it as fast as possible. Having those reports show up directly in Lemmy makes it quicker to action or do bulk actions when there’s a lot of spam.

It’s also good to report it directly into the Lemmy admin chat on Matrix as well afterwards, because in case of CSAM, everyone wants to delete it from their instance ASAP in case it takes time for the originating instance to delete it.

Max_P ,

That’s fine to do once you’ve reported it: you’ve done your part, there’s no value in still seeing the post, it’s gonna get removed anyway.

If the IBM PC used an ARM (or related) CPU instead of the Intel 8088, would smartphones ultimately have sucked less?

Developers still continue to shaft anyone that isn’t using an IBM PC compatible. But if the IBM PC was more closely related to the latest Nexus/Pixel device, then would the gaming experience on smartphones be any good?

Max_P , (edited )

Why do you keep comparing phones and PCs? They’re not comparable and never will be. My PC can draw probably close to 1000W when running full bore. Mobile chips have a TDP of like 10-20W. My PC can throw 50-100x more power at the problem than your phone can. In the absolute worst case, it would have a dozen or two of those power efficient ARM chips because it can. And PC games would make use of all of them and you circle back to PC superiority. My netbook is within the same range and crappier than my phone in many aspects, around 5-10W. My new Framework 16 has a TDP of 45W, already like 2-4x more than a high end phone has.

Even looking at Apple, the M2 has a TDP of 20W because it was spun off their iPad chips, and primarily targets mobile devices like MacBooks. So while the performance is impressive in the efficiency department, I could build an ARM server with 10x the core count and have a 10x more powerful computer than the top of the line M3 iMac.

PCs running ARM would have no effect on the mobile ecosystem whatsoever. Android runs Linux, and Linux runs on a lot of CPU architectures. You can run Android on RISC-V today if you want to spend the time building it. Or MIPS. Or PowerPC. There’s literally nothing stopping you from doing that.

The gaming experience on mobile sucks because gaming on mobile sucks. If you ran your phone at full power to game and have the best graphics, it would probably be dead in 1-2 hours. Nobody would play games that murder their battery. And most people that do play games on mobile want like 10 minute games to play while sitting on the toilet, or on a bus or train or whatever. Thus, battery life is an important factor in making a game: you don’t want your game to chew through battery, because then people start rationing their gameplay to make it to the end of the day or the next charger.

PCs are better not because of IBM, or even the x86 architecture, not even because of Windows. They’re better because PCs can be built with any part you want, and you can throw as many CPUs and GPUs and NPUs and FPGAs at the problem as you want. Heck there’s even SBC PCs on PCI/PCIe cards so you can have multiple PCs in your PC.

Whatever you can come up with that fits in a mobile device, I can make a 10-20x more powerful PC if anything by throwing 10-20 phones in it and split the load across all of them.

PC games are ambitious and make use of as much hardware as they can deal with. If you want to show off your 3D tech you don’t limit yourself to mobile, you target dual RTX 4090 Ti graphics cards. There are great games made for lower end hardware, and consoles like the Switch run ARM, like the Zelda games. The Switch is vastly inferior to modern phones, and Yuzu can run those games better than the Switch can. My PC will happily run BotW and TotK at 4K 240Hz HDR if I ask it to. But they were designed for the Switch and they’re pretty darn good games. So the limitation clearly isn’t that PCs exist, it’s what developers write their games for. CPU architecture isn’t a problem, we have emulators, we have Rosetta, we have Box64, we have FEX.

If PCs didn’t exist, something else would have taken its place a long time ago, and we’d circle back to the exact same problem/question. Heck there’s routers and firewalls that run games better than your phone.

Max_P ,

The quality of what the community is doing vs what they shipped with NSO especially on launch is laughable.

Native OoT and MM on the switch would have been really sick. Instead they went with 90s level of emulator quality.

Max_P ,

“Trust me bro” from the developer pretty much.

I think it makes sense, they’re a small developer and it’s all stuff I’d expect from the ad networks so if you get premium you also kill the ads and therefore the data collection.

Max_P ,

Saw Boost mentioned already, but also I think Tesseract deserves a shoutout for a clean and modern web experience.

Self-hosted website for posting web novel/fiction

Hey hello, self-hosting noob here. I just want to know if anyone would know a good way to host my writing. Something akin to those webcomic sites, except for writing. Multiple stories with their own “sections” (?) and a chapter selection for each. Maybe a home page or profile page to just briefly detail myself or whatever, I...

Max_P ,

Wordpress or some of its alternatives would probably work well for this. Another alternative would be static site generators, where you pretty much just write the content in Markdown.

It’s also a pretty simple project, it would be a great project to learn basic web development as well.
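To illustrate the static-site idea with a toy sketch (the layout here is entirely made up; a real generator like Hugo, Jekyll or Zola also renders the Markdown to HTML and handles themes):

```shell
# Each story is a folder of Markdown chapters; generate an index page
# linking every chapter. This is the "static site" concept in miniature:
# plain files in, plain HTML out, any web server can host the result.
mkdir -p stories/my-novel
printf '# Chapter 1\n\nIt was a dark and stormy night.\n' > stories/my-novel/ch1.md

printf '<ul>\n' > index.html
for f in stories/*/*.md; do
  printf '  <li><a href="%s">%s</a></li>\n' "$f" "$f" >> index.html
done
printf '</ul>\n' >> index.html
```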

Max_P ,

The 10 year old PC has a much much bigger power budget than a phone. It wasn’t until really recently that ARM got anywhere close to x86 performance.

While the phone technically possibly could be better, it would also drain in an hour or two if it was maxed out. And most people have crappy phones that can barely hold 60fps doing nothing so mobile games usually target the lower end devices to maximize the amount of potential players, while also remaining battery conscious.

There’s also just not that much demand. Nobody has space on their phones for a 120GB game, and nobody wants to play a AAA game on their phones because gaming on a phone sucks ass and if you’re going to dock the phone you might as well get a console.

Max_P ,

To be fair you don’t really have to use filters for this. Cameras are much better at capturing the colors of the aurora, while in person it looks like a faint white glow in the sky. Possibly some white-balance thing where it way overcompensates.

Cameras also need relatively long exposures to capture those, so it’ll also appear much brighter and more vivid than we see with our own eyes, possibly because in low-light conditions we rely on our rods more than our cones, and the rods don’t see color.

Max_P ,

That’s the eternal cycle of social media. It starts nice and then it gets flooded by MAGA extremists until it becomes a cesspool of hate and disinformation.

See: Facebook, Reddit, Twitter; TikTok is well on that path as well.

Max_P ,

Fairly new to ham, what’s nice to listen to during an aurora? Just funny noise bursts? Any antenna precautions so I don’t fry my SDR?

Max_P ,

Nothing hotter than a giant electric fleshlight whirring away as you get off.

I saw one in a sex shop, it looks like such a chore to get going and clean up afterwards. It’s fucking huge too. Hands are so much easier to clean, and readily available anywhere anytime.

Max_P ,

I can definitely see the improvement, even just between my desktop monitor (27in 1440p) and the same resolution at 16 inches on my laptop. Text is very nice and sharp. I’m definitely looking at 4K or even 5K for my next monitor upgrade cycle.

But the improvement is far from how much of an upgrade 480p to 1080p was, or moving away from CRTs to flat screens. 1080p was a huge thing when I was in high school as CRT TVs were being phased out in favor of those new flat panels.

For media I think 1080p is good enough. I’ve never gone “shit, I only downloaded the 1080p version”. I like 4K when I can have it like on YouTube and Netflix, but 1080p is still a quite respectable resolution otherwise. The main reason to go higher resolutions for me is text. I’m happy with FSR to upscale the games from 1080p to 1440p for slightly better FPS.

HDR is interesting and might be what convinces people to upgrade from 1080p. On a good TV it feels like more of an upgrade than 4K does.

Max_P ,

I’ve actually run into some of those problems. If you run sudo su --login someuser, it’s still part of your user’s process group and session. With run0 that would actually give you a shell equivalent to as if you logged in locally, and manage user units, all the PAM modules.

systemd-run can do a lot of stuff, basically anything you can possibly do in a systemd unit, which is basically every property you can set on a process. Processor affinity, memory limits, cgroups, capabilities, NUMA node binding, namespaces, everything.
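Those knobs are the same ones a unit file takes; a sketch of what `systemd-run -p Property=Value` (or a unit’s [Service] section) can express, with arbitrary example values:

```ini
# [Service] properties systemd-run can set per-invocation with -p
[Service]
MemoryMax=512M                              # hard memory cap via cgroups
CPUAffinity=0-3                             # pin to the first four cores
NUMAPolicy=bind
NUMAMask=0                                  # bind allocations to NUMA node 0
CapabilityBoundingSet=CAP_NET_BIND_SERVICE  # drop all other capabilities
PrivateTmp=true                             # namespaced /tmp
```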

I’m not sure I would adopt run0 as my goto, since if D-Bus is hosed you’re really locked out and stuck. But it’s got its uses, and it’s just a symlink, it’s basically free, so its existence is kBs of bloat at most. There’s always good ol’ su when you’re really stuck.

Max_P ,

Basically, the SUID bit makes a program get the permissions of the owner when executed. If you set /bin/bash as SUID, suddenly every bash shell would be a root shell, kind of. Processes on Linux have a real user ID, an effective user ID, and also a saved user ID that can be used to temporarily drop privileges and gain them back again later.
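You can see two of those IDs from a normal shell with id(1); they only diverge while a SUID binary is running (in a plain session both print your own UID):

```shell
# -ru prints the real UID, -u the effective UID. Inside a SUID-root
# program the effective UID would be 0 while the real UID stays yours;
# in a normal shell the two match.
real=$(id -ru)
eff=$(id -u)
echo "real=$real effective=$eff"
```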

So tools like sudo and doas use this mechanism to temporarily become root, then run checks to make sure you’re allowed to use sudo, then run your command. But that process is still in your user’s session and process group, and you’re still its real user ID. If anything goes wrong between sudo being root and checking permissions, that can lead to a root shell when you weren’t supposed to, and you have a root exploit. Sudo is entirely responsible for cleaning the environment before launching the child process so that it’s safe.

Run0/systemd-run acts more like an API client. The client, running as your user, asks systemd to create a process and give you its inputs and outputs, which then creates it on your behalf in a clean process tree completely separate from your user session’s process tree and group. The client never ever gets permissions, never has to check for the permissions; it’s systemd that does, over D-Bus through PolKit, which are both isolated and unprivileged services. So there’s no dangerous code running anywhere to exploit to gain privileges. And it makes run0 very non-special and boring in the process, it really does practically nothing. Want to make your own in Python? You can, safely and quite easily. Any app can easily integrate sudo functionality fairly safely, and it’ll even trigger the DE’s elevated permission prompt, which is a separate process, so you can grant sudo access to an app without it being able to know about your password.

Run0 takes care of interpreting what you want to do, D-Bus passes the message around, PolKit adds its stamp of approval to it, systemd takes care of spawning of the process and only the spawning of the process. Every bit does its job in isolation from the others so it’s hard to exploit.

Max_P ,

I haven’t had D-Bus problems in quite a while but actually run0 should help with some of those issues. Like, systemctl --user will actually work when used with run0, or at least systemd-run can.

Haven’t used it yet so it’s all theoretical, but it makes sense to me especially at work. I’ve used systemd-run to run processes in very precise contexts, it’s worth using even if just to smush together schedtool, numactl, nice, taskset and sudo in one command and one syntax. Anything a systemd unit can do, systemd-run and run0 can do as well.

I’m definitely going to keep su around just in case, because I will break it the same way I’ve broken sudo a few times, but I might give it a shot and see if it’s any good just for funsies.

Just trying to explain what it does and what it can do as accurately as possible, because out of context “systemd adds sudo clone” people immediately jump to conclusions. It might not be the best idea in the end but it’s also worth exploring.

Max_P ,

Some executables are special. When you run them, they automagically run as root instead! But if sudo isn’t very, very careful, you can trick it into letting you run things as root that you shouldn’t be able to.

Run0 DM’s systemd asking it to go fork a process as root for you, and serves as the middleman between you and the other process.

Max_P ,

If you dig deeper into systemd, it’s not all that far off the Unix philosophy either. Some people seem to think the entirety of systemd runs as PID1, but it really only spawns and tracks processes. Most systemd components are separate processes that focus on their own thing, like journald and log management. It’s kinda nice that they all work very similarly, it makes for a nice clean integrated experience.

Because it all lives in one repo doesn’t mean it makes one big fat binary that runs as PID1 and does everything.

Max_P ,

The same is on the way in the US with how hard conservatives are fighting to keep graduates dumb and uneducated. Educated people don’t lean towards wars.

Max_P ,

Yeah, even Asahi has better OpenGL support than real macOS. They make damn sure you have to use Metal to get the most out of it, just like eventually you get caught up in DirectX on Windows whether you want it or not. You can use Vulkan and OpenGL, but the OS really wants to work with Metal/DirectX buffers in the end.

I appreciate that the devs care enough to make it really good from the start, because that sets the benchmark. Now the Linux version has to have a similar enough polish to it.

In comparison, Atom and VSCode both worked fine on Linux just about day one thanks to Electron, but it was also widely disliked for the poor performance. It’s a part of what Zed competes on, performance compared to VSCode.

Max_P ,

That’s why half decent VPN apps also add firewall rules to prevent leakage. Although nothing can beat Linux and shoving the real interface in a namespace so it’s plainly not available to anything except the VPN process.
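The namespace trick, roughly, for anyone curious (needs root; interface and file names are assumptions, and handing the tunnel interface back out is elided):

```shell
# Hide the physical NIC in its own namespace. Only processes started
# inside that namespace (i.e. the VPN daemon) can reach eth0; everything
# else on the system only ever sees the tunnel interface, so there is
# no route to leak through even if an app misbehaves.
ip netns add physical
ip link set eth0 netns physical
ip -n physical link set eth0 up
ip netns exec physical dhclient eth0
ip netns exec physical openvpn --config /etc/openvpn/client.conf
```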

Max_P ,

Some providers have managed to make split tunnelling work fine, so I suspect those are not affected because they override the routing at the driver level. It’s really only the kinda lame OpenVPN wrappers that would be affected. When you have the custom driver, you can affect the routing. It’s been a while since I’ve tested this stuff on Windows since obviously I haven’t been paid to do that for 6 years, but yeah, I don’t even buy that all providers are affected and that it’s unfixable. We had workarounds for that when I joined PIA already, so it’s probably been a known thing for at least a decade.

The issue we had is that sometimes you could get the client to forget to remove the firewall rules or to add back the routes, and it would break people’s internet entirely. Not great, but a good problem to have in context.

Max_P ,

Most VPN providers don’t use DHCP. OpenVPN emulates and hooks DHCP requests client-side to hand the OS the IP it got over the OpenVPN protocol in a more standard way (unless you use Layer 2 tunnels which VPN providers don’t because it’s useless for that use case). WireGuard doesn’t support DHCP at all and it always comes from configuration.
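For example, a wg-quick style config (keys and addresses are placeholders): the tunnel address is a static Address= line, so there’s simply no DHCP exchange to hijack:

```ini
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/32     # assigned statically by the provider, not leased
DNS = 10.8.0.1

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0    # route everything through the tunnel
```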

Max_P ,

The attack vector here seems to be public WiFi like coffee shops, airports, hotels and whatnot. The places you kinda do want to use a VPN.

On those, if they’re not configured well such as coffee shops using consumer grade WiFi routers, an attacker on the same WiFi can respond to the DHCP request faster than the router or do an ARP spoof attack. The attacker can proxy the DHCP request to make sure you get a valid IP but add extra routes on top.

Max_P ,

Adding routes for other things on the network that the clients can reach directly, removing some load from the router. For example, reaching another office location through a tunnel: you can add a route to 10.2.0.0/16 via 10.1.0.4 and the clients will direct the traffic directly at the appropriate gateway.

Arguably one should design the network such that this is not necessary but it’s useful.
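On the DHCP side that’s classless static routes, option 121; e.g. with dnsmasq, reusing the addresses above:

```
# dnsmasq: push 10.2.0.0/16 via gateway 10.1.0.4 to every client
dhcp-option=121,10.2.0.0/16,10.1.0.4
```

Which is also exactly the option the attack described here abuses to inject routes of its own.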

Max_P ,

And it’s NVIDIA so it’s still gonna be a flickery mess until explicit sync is all done and rolled out.

Max_P , (edited )

On my computer that’d unmount my home directory, my external storage, my scratch space and my backup storage, and my NAS.

It would also unmount /sys and /proc and /tmp and /run. Things can get weird fast without those, for example that’s where the Xorg/Wayland socket is located.

If all you have is home and root on the same partition I guess it’s not too bad because it’s guaranteed to be in use so it won’t let you, but still, I wouldn’t do that to save like 5 keystrokes in a terminal.

Max_P ,

Fair enough, TIL. I’ve used mount -a a fair bit, but unmounting the world is not something that crossed my mind to even attempt. It would still unmount a good dozen ZFS datasets for me.

Good example with the Snaps! Corrected my post.

Max_P ,

And using loads of sensitive permissions to pull it off, like accessibility to read the screen. It’s not stealing the auth cookies from the app nor throwing exploits at Android to escape the sandbox.

Headline definitely makes it sound like it’s a drive-by exploit, but no it’s just the usual social engineering everyone is familiar with.

Max_P ,

Masquerading a normal looking link for another one, usually phishing, malware, clones loaded with ads.

Like, lets say I post something like

https://www.google.com

And also have my instance intercept it to provide Google’s embed preview image, and it federates that with other instances.

Now, for everyone it would look like a Google link, but you get Microsoft Google instead.

I could also actually post a genuine Google link but make the preview go somewhere else completely, so people see the link goes where they expect even when putting the mouse over it, but then they end up clicking the preview for whatever reason. Bam, wrong site. Could also be a YouTube link and embed, but the embed shows a completely different preview image; you click on it and get some gore or porn instead. Fake headlines, whatever way you can think of to abuse this, using the Cyrillic alphabet, whatever.

People trust those previews in a way, so if you post a shortened link but it previews like a news article you want to go to, you might click the image or headline but end up on a phony clone of the site loaded with malware. Currently, if you trust your instance you can actually trust the embed because it’s generated by your instance.
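In Lemmy’s Markdown, the text/target mismatch takes one line (both URLs here are placeholders):

```
[https://www.google.com](https://definitely-not-google.example/phish)
```

The link text renders as a Google URL, while the actual destination, and under this proposal the preview too, would be whatever the poster controls.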

On iMessage, it used to be that the sender would send the embed metadata, so it was used for a zero-click exploit: an embed of a real site, but with an attachment that exploited the codec it would be rendered with.
