
IsoKiero

@[email protected]


IsoKiero ,

What's your end goal here? Are you trying to keep files only on that one medium, with no way to make copies of them? Or to maintain an image where files are locked to their directories? And against what kind of scenarios?

ACLs and SELinux aren't useful, as they can simply be bypassed by booting another installation and overriding them as root; the same goes for copying. The only thing I can think of, up to a degree, is immutable media like a CD-R, where it's physically impossible to move files once they're in place, and even that doesn't prevent copying anything.

IsoKiero ,

I want to prevent myself from reinstalling my system.

No even remotely normal file on disk stops that, regardless of encryption, privileges, attributes or anything else your running OS could do to the drive. If you erase the partition table, your 'safety' file is gone too, no questions asked, because at that point the installer doesn't care about (nor even see) individual files on the medium. And that's exactly what the 'use this drive automatically for installation' option does on pretty much every installer I've seen.

Protecting myself from myself.

That's what backups are for. If you want to block any random USB-stick installer from running, you could set the boot options in the BIOS to exclude external media and set a BIOS password, but that only limits whether you can 'accidentally' reinstall the system from external media.

And neither of those has anything to do with read/copy protection for the files. If they contain sensitive enough data they should be encrypted (and backed up), but that's a whole other problem from protecting the drive against an accidental wipe. Any software-based limitation on your files falls apart immediately (apart from reading the data if it's encrypted) when you boot another system from external media or another hard drive, because whatever solution you're using to protect them is no longer running.

Unless you hand system management over to someone else (root passwords, BIOS password and settings…) who can keep you from shooting yourself in the foot, there's nothing that can get you what you want. Maybe some cloud-based filesystem from Amazon with immutable copies could achieve it, but it's not really practical on any level, financially very much included. And even with that (if it's even possible in the first place, I'm not sure), if you're the one holding all the keys and passwords, the whole system is at your mercy anyway.

So the real solution is to back up your files, verify regularly that backups work and learn not to break your things.

IsoKiero ,

Geeqie is a quick one for going through photos, and it groups RAW+JPG as a single item in the preview, so even a few hundred photos can be run through quickly with just the keyboard. I'm not sure how well it manages tags, as I don't use it for tagging, but it's most likely in your distro's repository, so testing it out is quick.

IsoKiero ,

Then do sudo apt install xfce4 and sudo apt purge cinnamon* muffin* nemo*.

It's been a while since I installed xfce4 on anything, but if things haven't changed I think the metapackage doesn't include xfce4-goodies and some other packages, so if you're missing something it's likely that you just need to 'apt install xfce4-whatever'. Additionally, you can keep Cinnamon around as long as you like as a kind of backup; just change lightdm (or whatever login manager LMDE uses) to use Xfce as the default. And there are even lighter desktops than Xfce, like LXDE, which is also easy to install via apt and try out to see if it works for you.
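
If it helps, on a Debian-based system the rough sequence would look something like this (just a sketch; exact package and session names may differ on LMDE):

  sudo apt install xfce4 xfce4-goodies
  sudo apt purge cinnamon* muffin* nemo*
  sudo apt autoremove
  # if apt asks which display manager to keep, lightdm is fine; then pick the Xfce session on the login screen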

IsoKiero ,

I understand the mindset you have, but trust me, you'll learn (sooner or later) the habit of pausing to check your command before hitting enter. For some it takes a bit longer and it'll bite you in the butt a few times (so have backups), but everyone has gone down that path and everyone has had to fix their mistakes now and then. If you want a hard (and fast) way to learn to confirm your commands, use dd a lot ;)

One way to make it a bit less scary is to 'mv <thing you want removed> /tmp' and, once you've confirmed that nothing extra got moved, 'cd /tmp; rm -rf <thing>', but that still includes the 'rm -rf' part.
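
As a concrete sketch (the path is just an example):

  mv ~/projects/old-build /tmp/
  ls -la /tmp/old-build    # make sure only the intended thing moved
  rm -rf /tmp/old-build    # or just leave it and let /tmp get cleaned up later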

Linux on old School Machines?

Hi all, the private school I work at has a tonne of old windows 7/8 era desktops in a student library. The place really needs upgrades but they never seem to prioritise replacing these machines. Ive installed Linux on some older laptops of mine and was wondering if you all think it would be worth throwing a light Linux distro on...

IsoKiero ,

Absolutely. Maybe leave Gnome/KDE out and use a lighter WM, but they'll be just fine, especially if they have 8GB or more RAM. I suppose those have at least dual-core processors, so that won't be a (huge) bottleneck either. You can do a ton of stuff with those beyond just web browsing, like programming, text editing, spreadsheets and so on. I'd guess that available RAM is the biggest limit on what they can do, especially if you like to open a ton of tabs in your browser.

IsoKiero ,

Make sure you have the alsa-utils package installed and try running alsamixer. That'll show all the audio devices your system detects. Maybe you're lucky and it's just that some volume control is muted, and if not, it'll at least give you some info to work with. The majority of audio devices don't need any additional firmware and almost always work out of the box just fine. What's the hardware you're running? Maybe it's something exotic whose driver isn't installed by default (which I doubt).

And additionally, what are you trying to play audio from? For example, MP3s need non-free codecs to be installed, and without them your experience is 'a bit' limited on the audio side of things.
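
A rough checklist, assuming a Debian-ish system:

  sudo apt install alsa-utils
  aplay -l                  # list the playback devices the kernel actually sees
  alsamixer                 # check that nothing shows MM (muted) and the levels are up
  speaker-test -c 2 -t wav  # quick test tone on the default device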

IsoKiero ,

They both use the upstream version number (as in the number the software developer gave to the release). They might additionally have some kind of revision number related to packaging, or a patch number, but as a rule of thumb, yes, the bigger number is the more recent. Whether you should use that as the only criterion when deciding which to install is however another discussion. Sometimes the dpkg/apt version is preferred over the snap regardless of version differences, for example to save a bit of disk space, but that depends on a ton of different things.

IsoKiero ,

Mullvad (apparently; first time I've heard of the service) uses DNS over TLS, and I don't think the current GUI version has an option to enable it. Here's a quickly googled howto from Fedora on how to enable it on your system. If that doesn't help, search for 'NetworkManager DoT' or 'DNS over TLS'.
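
If the system uses systemd-resolved, the manual route is roughly this (just a sketch; the address and hostname are examples, check the provider's docs for the current ones):

  # /etc/systemd/resolved.conf
  [Resolve]
  DNS=194.242.2.2#dns.mullvad.net
  DNSOverTLS=yes

  sudo systemctl restart systemd-resolved
  resolvectl status    # the link should now show DNS over TLS as enabled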

IsoKiero ,

I don’t know where the Debian project is based.

The trademark and at least some of the copyright for the project are owned by an entity in New York, and Ian Murdock, who started the project, was a US citizen. But calling the whole project USA-based is wrong; it's based 'on the internet', as even the core team is spread across the globe.

IsoKiero ,

It's been a while (a few years actually) since I even tried, but Bluetooth headsets just won't play nicely. You either get bottom-of-the-barrel audio quality or somewhat decent quality without the microphone. And the right profile isn't selected automatically, the headset randomly disconnects, and nothing really works the way it does with my cellphone or Windows machines.

YMMV, but that's been my experience with my headsets. I've understood that there's some proprietary stuff going on with the audio codecs, but it's just so frustrating.

IsoKiero ,

I'm tempted to say the systemd ecosystem. Sure, it has its advantages and it's the standard way of doing things now, but I still don't like it. journalctl is a sad and poor replacement for standard log files, it has swallowed a ton of things that used to be their own separate little tools (resolved, journald, crontab…), making it a pretty monolithic thing, and at least for me it fixed a problem that wasn't there.

Snapcraft (and Flatpak to some extent) also attempts to fix a non-existent problem, and at least for me they have caused more issues than benefits.

IsoKiero ,

The command in question recursively changes file ownership to the user 'user' and group 'user' for every file and folder on the system. On Linux, many processes run as root or as various other accounts (like apache or www-data for a web server, mysql for a MySQL database, and so on), and after that command none of those services can access the files they need to function. And as the whole system is broken on a very fundamental level, changing everything back would be a huge pain in the rear.

On this Ubuntu system I'm using right now I have 53 separate user accounts for various things. Some are obsolete and not in use, but the majority are used for something, and 15 of them are in active use for different services. Different systems have somewhat different numbers, but you'd basically need to track down the millions of files on your computer and fix each one's ownership by hand. It can be done, and if you have a similar system to copy ownership from you could write a script to fix most of it (rough sketch below), but in the vast majority of cases it's easier to just wipe the drive and reinstall.
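
Something along these lines, assuming you have root on both machines and a healthy reference system that's laid out the same (the hostname is made up, and this won't catch files that only exist on the broken box):

  # run as root on the broken system; -xdev stays on one filesystem
  ssh refhost 'find / -xdev -printf "%u:%g %p\0"' | \
  while IFS= read -r -d '' entry; do
      owner=${entry%% *}
      path=${entry#* }
      chown -h "$owner" "$path" 2>/dev/null
  done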

IsoKiero ,

I've run into that with one shitty vendor (I won't/can't give any details beyond this) lately. They 'support' deb-based distributions, but their postinst scripts in particular don't do any kind of testing or verification of the environment they're running in, and they seem to find new and exciting ways to break every now and then. I'm experienced (or old) enough with Linux/Debian that I can work around the loopholes they've left behind, but in our company there aren't too many others with sufficient knowledge of how deb packages work.

And they're either dumb or playing dumb when they claim that their packages work as advertised, even after I sent them the postinst scripts from their own package, including explanations of why this and that breaks on a system that doesn't have a graphical environment installed (among other things).

But that's absolutely the vendor's fault, not Debian's or Linux's. It happens, though.

IsoKiero ,

I don't know about Home Assistant, but there's plenty of open source software for talking to OBD2, at least on Linux. With some tinkering it should be possible to have a Bluetooth-enabled OBD2 adapter from which you can dump even raw data and feed it to some other system of your choice, Home Assistant included.

If you want live data while driving you of course need some kind of recording device with you (a Raspberry Pi comes to mind), but if you're happy just to log whatever is available when parking the car, you could set up a computer with Bluetooth near the parking spot in your yard and pull the data from that. It may require keeping the car powered on for a while after arrival to keep the bus active, but some cars give out at least some data via OBD even without the key in the ignition.
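
For a very rough idea of the manual route with a Bluetooth ELM327-style adapter (the MAC address and baud rate are just examples, and the classic rfcomm tool may be deprecated on newer bluez versions):

  # pair the adapter first (e.g. with bluetoothctl), then bind it to a serial device
  sudo rfcomm bind 0 AA:BB:CC:DD:EE:FF
  screen /dev/rfcomm0 38400
  # inside the session: ATZ resets the adapter, 010C asks for engine RPM (a standard OBD2 PID)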

IsoKiero ,

Why would a billboard system have sane installed? I don't think Debian or its derivatives install it by default. vnstat is also a bit odd, but maybe that's just me. I assume they have multiple of these displays around, and for them it would make more sense to use something more centralized, like Zabbix, to monitor the whole network (obviously they could be doing that too).

IsoKiero ,

Wait a second. They used AMPRNet to manage these things? Around here this kind of thing is either hardwired to the internet or uses a 3/4/5G uplink, and while it's of course technically possible to breach the system either way, it's a bit more difficult to find the right IPs and everything.

Once upon a time I had a task to plan a scalable system to display stuff on billboards and even replace printed ads in stores with monitors. The whole thing fell through as we couldn't secure funding for it, but I made a POC setup where each display had a Linux host running and managing it, with (if memory serves) a plain X.org session and mplayer (or something similar, it was about 20 years ago) running full screen, a torrent network to deliver new content to the displays, and a web-based frontend to manage what was shown at which site. Back then it would've been stupidly expensive to have the hardware and bandwidth at a single point to serve potentially a few thousand clients, so distributing the load was the sensible solution. I think it would still be a neat solution for the task today, but no one has put up the money to actually make it happen.

IsoKiero ,

5.2 for me. I got it as a gift, in an official retail box. I think the box with the manuals is still around somewhere, but I'm not sure where.

IsoKiero ,

It's not a problem to consume that amount of ammunition, you just need 'a few' barrels and men to operate them, but I'm pretty sure they didn't produce 700 000 shells per hour.

But yeah, the manufacturing capability of the whole of Europe is a poor joke right now.

IsoKiero ,

I'm pretty sure the cameras around here don't use OCR at all, or even if they do, they only recognize the plate format from a thing shaped like a plate. So if you're driving like an ass with the drop-tables "plate", that is pretty relevant.

The Bobby Tables one I'm quite sure would work on at least some systems, if they let you enter your kid's name yourself into some sort of digital form. Or at least I would be pretty surprised if every school system on earth were patched against simple SQL injections.

IsoKiero ,

Most, but not all, do. So it might be as simple as setting a static address, or it may overlap in the future.

You could ask the ISP (or try it out yourself) whether you can use some addresses outside the DHCP pool; my ISP router had a /24 subnet with .0.1 as the gateway, but the DHCP pool started from .0.101, so there would have been plenty of addresses to use. Mine had an 'end user' account too, from where I could have changed the LAN IPs, SSID and other basic stuff, but I replaced the whole thing with my own.

Questions about Linux-Linux dualboot

So I’ve had enough from partitioning my HDD between Linux and Windows, and I want to go full Linux, my laptop is low end and I tend to keep some development services alive when I work on stuff (like MariaDB’s) so I decided to split my HDD into three partitions, a distro (Arch) for my dev stuff, a distro (Pop OS) for gaming,...

IsoKiero ,

With two Linux installs that's a bit more complicated to actually dual-boot. When booting Windows, GRUB just throws the ball to the Windows bootloader, which manages things from there on, but with two Linux distros you'd need two separate GRUB installations on different partitions so that changes made in Arch don't mess things up for Pop!_OS (and the other way around). It's very much doable, but I suspect (without any experience of a setup like that) that if you just go with the default options it'll break something sooner or later, and you need to pay attention to the GRUB configs on both sides at all times, so it requires some knowledge. Basically you'd need one GRUB installed to (as an example) /dev/sda for the system to boot from the BIOS, and another GRUB instance at /dev/sda5 (or whatever you have) for the second distro. They'd both have independent /boot directories, GRUB configs and all that jazz. It's doable, but as both systems can access either configuration, you really need to pay attention to what's happening and where.
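
Very roughly, and only as a sketch (this assumes both distros use GRUB and legacy BIOS/MBR booting; device names are examples, and note that Pop!_OS actually ships with systemd-boot by default, so check what your installs really use):

  # in the primary distro: own the MBR and generate its menu
  sudo grub-install /dev/sda
  sudo grub-mkconfig -o /boot/grub/grub.cfg

  # in the secondary distro: keep its GRUB on its own root partition (--force is needed for a partition target)
  sudo grub-install --force /dev/sda5
  sudo grub-mkconfig -o /boot/grub/grub.cfg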

Sharing a home directory between different distros can create issues, as they have slightly different versions of software, and their underlying philosophy, especially when mixing different package managers, differs a bit, so the configurations might not be compatible with each other. Personally I would avoid it, but your mileage may vary wildly in how it actually plays out.

For the partitioning, you can safely delete all the partitions, but you’ll of course lose the data on the drive while doing it.

If I needed such a system I might build a virtual machine to run all the dev stuff and just connect to it from a "real" desktop environment, essentially mimicking two separate systems where you have a "server" for the dev things and a "desktop" to connect to it. Or if you want a clear separation between the two, it's possible to run a different window manager for each task and just log out and back in to switch between them, and with some scripting/tweaks you can even start/stop services as required when you switch "modes". Depending on your needs it might be enough to just run the development environment in VirtualBox, start/stop it as needed and adjust the actual desktop experience accordingly.

IsoKiero ,

If I remember correctly, the 1000Base-T standard requires devices to negotiate the pinout on the fly, no matter which pin is connected to which. Obviously just randomly wiring a cable up has other problems, like signal-to-noise ratio, but in theory it should work even if you make a cable that's as non-standard as you can make it.

IsoKiero ,

I’d first recommend that you think about what you need.

This is absolutely the right approach. I've set up way too many things without a use case and lost interest shortly after. If you have a real-world use case for your project, even if it's just for yourself, you'll have the incentive to keep it going. If you're just setting things up for the sake of it, the hobby loses its appeal pretty quickly. Of course you'll learn a thing or two along the way, but without a real-world use case the things you set up will either become a burden to keep up with or eventually just get deleted.

Personally, tinkering with things that were removed again after a while gave me skills that landed me my current job, but it's affected me enough that I don't enjoy setting things up just for the sake of it anymore. Of course time plays a part in this, I've been doing this long enough that when I started a basic LAMP server was a pretty neat thing to have around, so take this with a grain of oldtimer salt, but my experience is that setting up things that are actually useful in the long term is way more rewarding than spinning up something that gets deleted in a month, and it'll keep the spark going for much longer.

IsoKiero ,

Logging depends on the instance. Many admins choose not to log any data that could be used to identify an individual, but verifying their claims (beyond doubt) as a single user is pretty much impossible, and there's nothing stopping an instance admin from gathering all the data they want to.

Like are they protected or encrypted so the hackers can’t use them ?

Passwords are encrypted, but in the case of a security breach on an instance they are still vulnerable, like with any other password leak. The majority of systems today use one-way encryption for their passwords, but millions and millions of user accounts still get leaked almost daily.

Also what is stoping the instance owners from abusing or selling these behind our back ?

Nothing.

or running a modded version of lemmy are they detectable ?

If done properly, no, you can’t detect them.

But that’s not any different from any of the services around the net. Companies like Meta and Google make their money by selling user data, advertisers track you and all the other things you’re most likely already aware of.

The administrator of my instance said that they don't gather IP addresses or any other data they don't need to keep the servers running, and I trust them on that, but your mileage may vary. And then there are different legal systems around the world where an admin might be forced to give out information about an individual user, but where I live that's not a thing.

IsoKiero ,

Hashing is one-way encryption. So, while you're technically correct that they're not encrypted in the traditional sense (encryption is reversible), for many it's easier to understand the concept of encryption than hashing, and the terms are often used interchangeably.
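
As a quick illustration of the one-way part (mkpasswd comes from the whois package on Debian-ish systems; openssl passwd -6 does roughly the same thing):

  mkpasswd -m sha-512 'hunter2'   # prints a salted hash like $6$...; there's no command to turn it back into 'hunter2'
  # a login simply re-hashes the entered password with the stored salt and compares the results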

Self hosted Wetransfer?

Hello, i am looking for a self hosted application for sharing files like with wetransfer. I have tried the discontinued Firefox Send which has nice features like link expiry and works great in general but lacks authentication (only offers simple password protection). I also want the option to share with registered users. Is...

IsoKiero ,

Seafile. I've used it for years, but I'm moving over to Nextcloud as I could use the other features it provides. Seafile has paid options too, but unless you need LDAP or something more sophisticated for user management, the community edition works just fine.

IsoKiero ,

I personally like MikroTik routers. They have all the features you could wish for and then some, and they're relatively cheap for what they can do. I have an RB4011iGS+ (I don't think that exact model is available anymore) and it's been rock solid. As I have fiber, I just pulled the SFP module from the ISP's box and plugged it into my own hardware, so the router the ISP provided is just gathering dust right now.

But it depends on what you're really after. If you just need basic firewall/NAT/DHCP functionality and your connection speed is below 1Gbit, pretty much any router will do. If you have a fast connection and/or a need for totally separate networks/VLANs/something else, it's a whole other matter.

IsoKiero ,

The generic answer to this is to get a refurbished corporate laptop. At least here we have several companies that buy previously leased computers, give them a refurb (new hard drive, good cleaning, things like that) and sell them for a pretty good price.

W, T or X series ThinkPads are pretty safe options; my T495 was 300€(ish) on sale. The L series and the X1 Carbon are something I'd avoid: the L series (at least a few years back) wasn't built as well as the T series, and the X1 Carbon doesn't have options to expand or swap out the RAM.

How can i rsync over my network without using ssh?

I have 6 devices that i rsync to a central location to back them up. Ive been using ssh as the -e option. Problem is i use public key with passphrases, meaning to backup all six i need to go to each device and run the backup script. Since i typically backup /etc, /home, and /root this means entering sudo and the ssh passphrase...

IsoKiero ,

You can run rsyncd as a service on the host you want to back up and connect to it from your central point directly, without ssh. The traffic is unencrypted and I wouldn't trust it over a public network, but you can bind rsyncd to localhost, open an ssh tunnel for each host (or even write a small script to keep the tunnels open automatically) and then just run rsync over that. That's how I back up my things, just with BackupPC in the mix (I've got scripts in the BackupPC configuration to open/close the ssh tunnels). VPN tunnels are also an option for encrypting the traffic, but depending on your use case that might be a bit overkill.
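
A rough sketch of that setup (module name, hostname and paths are just examples):

  # on the host to be backed up: /etc/rsyncd.conf, daemon bound to localhost only
  address = 127.0.0.1
  [etc]
      path = /etc
      read only = yes

  # start the daemon on that host: 'rsync --daemon' or your distro's rsync service

  # on the backup server: tunnel the rsyncd port over ssh and pull through it
  ssh -f -N -L 8873:127.0.0.1:873 backup-target
  rsync -a rsync://127.0.0.1:8873/etc/ /backups/backup-target/etc/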

Or if you’re not tied to rsync you could use something like BorgBackup or other tools which manage the whole jazz for you out of the box.

IsoKiero ,

Bing Chat (so ChatGPT) wrote an opening scene (or maybe more like a trailer; the prompt was "opening scene") too:

The opening scene of “Server Down” begins with a shot of a high-tech skyscraper. The camera then zooms into the basement of the building, where we see Jack Bauer, a nerdy and obsessive system administrator, working on his servers. He is surrounded by racks of blinking machines, tangled cables, and humming fans. Jack is wearing a t-shirt with a witty tech joke printed on it, and he is sipping a cup of coffee from a “World’s Best Sysadmin” mug.

Suddenly, the lights flicker, and the servers go offline. Jack’s face turns pale, and he drops his coffee mug. He frantically tries to reboot the servers, but nothing works. He checks the network connections, the power supply, and the cooling system, but everything seems fine. He then realizes that the problem is not with his servers, but with the building’s infrastructure. He rushes to the security room, where he finds out that the building has been taken over by terrorists.

The camera then cuts to the terrorists, who are holding the employees hostage and demanding a ransom from the government. They have cut off all communication lines, disabled the security cameras, and planted a bomb in the building. They are confident that they can get away with their plan, but they didn’t count on one thing: Jack Bauer.

The scene ends with Jack staring at the security monitors, his eyes burning with determination. He knows that he has to act fast to save his servers, his colleagues, and himself. He takes a deep breath, puts on his backpack, and heads towards the ventilation ducts. The adventure begins!

IsoKiero ,

Firewall kinda-sorta plays with this idea, but all the tech in it is pretty stupid (and the main focus, if memory serves, isn't even on the technology). It's not a bad movie as Hollywood dime-a-dozen action goes, but it's a far cry from the "Server Down" we have here.

IsoKiero ,

And I don't look even the slightest bit like Humphrey Bogart, no matter what kind of fedora I wear.

IsoKiero ,

have seen some people have networking issues with them.

I've been a happy Hetzner customer for almost a decade and I haven't had any issues with their networking. If you're running virtualization you need to take care of your MAC addresses or they won't allow the traffic and will eventually kick you off their platform (and they have a good reason to do so). As long as you play by their rules on their hardware it's rock solid, especially for the price.

IsoKiero ,

I used to have an old ThinkStation as a home server. Even older ones like the S20 (I have a couple lying around) are still pretty capable systems (I'm typing this on one), and as they were CAD workstations and the like when they were new, many already have 12+ GB of RAM. I got mine for free through a work contact, but they should be available via eBay or (preferably) your local version of it for pretty cheap.

Then you just need new drives, and their prices have dropped too. 100€ is a bit of a stretch, but if you can get a whole computer from someone in the industry it should be possible. I have a few systems lying around that I could get rid of for a case of beer or something, but shipping alone from here would eat up the majority of your budget (if anyone is interested in an x3550 M3, throw me a message; I'm located in Finland, and I might be remembering the model wrong, but that's roughly the ballpark).

Other than ThinkStations, I'd say you'll want a Xeon CPU with at least 4 hyperthreaded cores, 16 GB of RAM and all the drives your budget has room for. An SSD for the boot drive(s) is nice to have, but spinning rust will get you there eventually.

Many rack-mounted servers only accept SAS drives, which are a bit more expensive. Towers generally use SATA, so you can just throw in whatever you have lying around. The main concern is the amount of RAM available. For older systems it might be a bit difficult to find suitable components, so the more you already have in place the better. For a VM server I think 16 GB or more is fine for learning, and it might be possible to shoehorn most of the stuff in even with 8 GB. Performance will definitely take a hit with less RAM, but with that budget some compromises are necessary.

So, in short, with that budget it might be possible if you have a friend with access to discarded workstations or you happen to stumble on a good deal with local companies. It'll require some compromises and/or actively hunting for parts, and with old hardware there's always the possibility of failure, so plan accordingly.

IsoKiero ,

While I agree with @rglullis that this isn't strictly speaking on-topic for this community, that kind of knee-jerk response is very much off-topic as well. The first community rule is to be civil, and in general I would, perhaps optimistically, like conversation across the fediverse to be civil, or at least well argued, a bit like it used to be (more or less, YMMV) back in the Usenet days.

And on the topic of self-hosting, that's a line drawn in water. I run various things myself (Postfix+Dovecot, LAMP, Bitwarden, Seafile, Nextcloud…) on rented servers running Linux+KVM. And I earn money by doing that, it's very much a business case, so I'm a bit reluctant to ask questions about my setup here, as I think it wouldn't be fair to ask hobbyists for advice on a project where money is directly involved. But for me personally that setup checks both boxes: I earn money from it, and at the same time I personally get to stay out of walled gardens like M365 or Gsuite.

TL;DR: There’s no need to be rude, you can choose to politely point people in the right direction.

IsoKiero ,

Not necessarily. A VPN can be used for that, but I'd bet the more common use case is to access networks that are otherwise firewalled off from the public internet, like a corporate LAN.

IsoKiero ,

While I think you could technically spoof your originating IP at the VPN server to match your client's IP, it wouldn't do anything useful; that's not how IP routing works. What are you trying to achieve with a setup like that?

IsoKiero ,

I do wonder about when VPNs started being used as proxies…

At about the same time operators in the US noticed they could profit from profiling users' behaviour. Here that's very much an illegal thing to do, and the most common use case for a VPN is to connect to a corporate network. VPNs are of course useful for protecting you from MITM attacks on open wifi networks and things like that, but hiding your behaviour from your ISP is very much a US thing.

IsoKiero ,

Not specifically a tool to put on a USB stick, but Ventoy is worth checking out. I've had somewhat mixed results with it on older hardware, but when it works it's pretty easy to manage your carry-on tools.

IsoKiero ,

dd. It writes to the disk at block level and doesn't care whether there's any kind of filesystem or RAID configuration in place; it just writes zeroes (or whatever you ask it to write) to the drive and that's it. Depending on how tight your tin foil hat is, you might want to do a couple of runs from /dev/zero and /dev/urandom before handing the disks over, but in general a single full run from /dev/zero over the device makes it pretty much impossible for any Joe Average to get anything out of it.

And if you're concerned that some three-letter agency is interested in your data, you can use DBAN, which does pretty much the same thing as dd but automates the process and (afaik) does some extra magic to erase the data more thoroughly. But if you're worried enough about that scenario, I'd suggest using an arc furnace and literally melting the drives into an exciting new alloy.
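
For reference, a full wipe is roughly this (double-check the device name with lsblk first; /dev/sdX is a placeholder and this is destructive):

  sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress
  # for the tighter tin foil hat, add a pass of random data as well
  sudo dd if=/dev/urandom of=/dev/sdX bs=1M status=progress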

IsoKiero ,

And if you're concerned about data written to sectors that have since been reallocated, you should physically destroy the whole drive anyway. With SSDs this is even more complicated, but I like to keep it pretty simple: if the data stored on the drive at any point in its life was under any kind of NDA or other highly valuable contract, it gets physically destroyed. If the drive spent its life storing my family photos, a single run of zeroes with dd is enough.

In the end the question is whether at any point the drive held bits of anything worth even remotely near the cost of a new drive. If it did, it's hammer time; if it didn't, most likely just wiping the partition table is enough. I've given away old drives after just 'dd if=/dev/zero of=/dev/sdx bs=100M count=1'. On any system that appears as a blank drive, and while it's possible to recover the files from it, that's good enough for donated drives. Everything else is either drilled through multiple times or otherwise physically destroyed.

Can one recover from an accidental rm -rf of system directories by copying those files back in from a backup?

Well I’ve joined the “accidentally trashing your system with rm -rf” club! Luckily I didn’t delete my home directory with all the things I care about, but I did delete /boot and /usr, and maybe /var (long story, boils down to me trying to delete non-system directories named those but reflexively adding the slash in front...

IsoKiero ,

That can be done, but as others mentioned, if the backup doesn't preserve permissions and other attributes for the files it's going to be a real PITA to get everything working. If I had to do it, I'd just copy over the files, chown everything to root and then use the package manager to reinstall everything, but even that will most likely need manual fixes, and figuring out what to change and to which value will take quite a bit of time; the complexity depends heavily on what you had running on the host, especially things under /var.
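
The reinstall step could look roughly like this on a Debian-based system (a sketch only; it resets the files that packages own, but not the configuration and state under /etc and /var):

  # force-reinstall every installed package so the package manager rewrites its files
  dpkg --get-selections | awk '$2 == "install" {print $1}' | \
      xargs sudo apt-get install --reinstall -y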

IsoKiero ,

According to my Spotify Wrapped I listened to about 2500 different artists. The yearly subscription is 143,88€, so if Spotify took 30% and the rest were split equally between every artist, they'd each get a nice 0,0578€ from me. For your $26 that'd mean, by similar math, that you'd need ~450 listeners, so it's at least in the ballpark if you have 1000 streams on there.

I obviously omitted things like VAT and other taxes, payment processor fees and the complexity of the revenue streams in general, like how long I listened to each artist, to keep it simple.

I'm not saying whether that's fair or not, I just did quick and rough math with the data I had easily available. All I know is that for those few cents per artist I'm not providing anything to anyone, but I receive quite a lot every day.

For more detailed info you can check Spotify's own report.

IsoKiero ,

With some models it can be done, but they are delicate things and going over the whole keyboard will most likely result in a couple of broken mechanisms and/or missing hooks on keycaps.

IsoKiero ,

You can't configure a DNS server by name on anything, so you'd need some kind of script/automation to query the current IP address of your Pi-hole from Google/your DDNS provider/whoever and update it on your parents' router, which can be a bit tricky or straight-up impossible depending on the hardware.

A VPN would solve both 1 and 2 from your list, as your Pi-hole would be reachable at a static address from both locations. You can't authenticate on a DNS server by MAC, as you don't receive the originating MAC at all. Another solution would be to get a static IP address from some provider and tunnel the traffic so that your Pi-hole could be reached through that static address.
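
As a rough idea of the VPN option with WireGuard (keys, addresses and names here are made up; the point is just that the Pi-hole gets a fixed tunnel address both sites can use as their DNS server):

  # on the Pi-hole end: /etc/wireguard/wg0.conf
  [Interface]
  Address = 10.10.0.1/24
  ListenPort = 51820
  PrivateKey = <pihole-private-key>

  [Peer]
  # the other site's router/endpoint
  PublicKey = <peer-public-key>
  AllowedIPs = 10.10.0.2/32

  # bring it up, then point the other site's DNS setting at 10.10.0.1
  sudo wg-quick up wg0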
