
You just finished setting up all your services and it works fine - how do you now prepare for eventual drive failure?

I know that for data storage the best bet is a NAS with RAID1 or something in that vein, but what about all the Docker containers you're running, the carefully configured services on your RPi, the *arr services installed on your PC, etc.?

Do you have a simple way to automate backups and re-installs of these as well, or are you just resigned to eventually reconfiguring them all when the SD card fails, your OS needs a reinstall, or the disk dies?

CarbonatedPastaSauce ,

I actually run everything in VMs and have two hypervisors that sync everything to each other constantly, so I have hot failover capability. They also back up their live VMs to each other every day or week depending on the criticality of the VM. That way I also have some protection against OS issues or a wonky update.

Probably overkill for a self hosted setup but I’d rather spend money than time fixing shit because I’m lazy.

surewhynotlem ,

HA is redundancy, not backup. It may protect you from a drive failure, but it completely ignores data corruption issues.

I learned this the hard way when my Cryptomator vault corrupted some of my files; I noticed, but didn't have backups.

CarbonatedPastaSauce ,

That’s why I also do backups, as I mentioned.

rentar42 ,

Yeah, there's a bunch of lessons that tend to only be learned the hard way, despite most guides mentioning them.

Similar to how RAID should not be treated as a backup.

guitarsarereal , (edited )

The most useful philosophy I’ve come across is “make the OS instance disposable.” That means an almost backups-first approach: everything of importance is thoroughly backed up, so once the main box goes kaput I just pull the most recent copy of the dataset and provision it on a new OS, maybe new hardware if needed. These days, that’s not difficult. Docker makes scripting backups easy as pie: write your docker-compose so all config and program state lives in a single directory, back up that directory, and all you need to get up and running again with your services is access to Docker Hub to fetch the application code.

There are some downsides to this approach (Docker’s security model sorta assumes you can secure/segment your home network better than most people actually can), but honestly, for throwing up a small local service quickly it’s kind of fantastic. Also, if you ever decide to move away from Docker, the experience will give you insight into what amounts to program state for the applications you use, which makes doing the same thing without Docker that much easier.
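That “everything in one directory” compose habit makes the backup itself a one-liner. A minimal sketch, using temp directories to stand in for a real stack path like ~/stacks/myapp and a backup destination (both paths are made up for the demo):

```shell
#!/bin/sh
set -eu

# Stand-ins for a real compose stack dir and a backup destination
stack=$(mktemp -d)
backups=$(mktemp -d)
mkdir -p "$stack/data"
printf 'services:\n  app:\n    image: nginx\n' > "$stack/docker-compose.yml"
echo "app state" > "$stack/data/state.txt"

# One dated tarball captures the compose file and all program state together
tar -czf "$backups/stack-$(date +%F).tar.gz" -C "$stack" .
ls "$backups"
```

Restoring is the reverse: extract the tarball on the new box and `docker compose up -d`.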

HeartyBeast ,
@HeartyBeast@kbin.social avatar

carefully configured services on your rpi

I have a backup on a second SD card waiting for the day the live SD card fails. Slot it in and reboot.
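Preparing that spare is a block-level clone. On a real Pi you'd dd from the card device (e.g. /dev/mmcblk0 — the device name is an assumption, check yours) to the spare card; the same idea demonstrated on throwaway file images so nothing real is touched:

```shell
#!/bin/sh
set -eu

# File-backed stand-ins for the live card and the spare
live=$(mktemp)
spare=$(mktemp)
dd if=/dev/urandom of="$live" bs=1024 count=64 2>/dev/null

# Block-level clone, same as: dd if=/dev/mmcblk0 of=/dev/sdX bs=4M on hardware
dd if="$live" of="$spare" bs=1024 2>/dev/null

cmp "$live" "$spare" && echo "clone verified"
```

Clone from a shut-down (or at least quiesced) system so the filesystem on the copy is consistent.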

desentizised ,

I recently “upgraded” one of my Raspberry Pi’s SD cards to an industrial-grade one. Those seem to be a lot slower, but for that particular use case it doesn’t matter to me. What matters is that the card doesn’t die. It runs noticeably cooler when lots of data is being written to it, so I feel like I must be onto something there.

ShellMonkey , (edited )
@ShellMonkey@lemmy.socdojo.com avatar

Routine backups of the VMs, and RAID disks for the hypervisor running them. If the box hosting the backups went screwy there’d be a problem, but with something like 20 TB of space used, off-box copies are a bit cumbersome. To that end I just manually copy the irreplaceable stuff to separate external storage and wish the movies and stuff good luck.

To fully lose the VMs, though, I’d have to lose both disks on the hypervisor and, if that happened, several disks on the NAS (12 disks in a ZFS pool, with each vdev being a mirror pair), or the whole pool would have to get screwed up. Depending on the day I might lose up to a week of VM state, though, since they only do a full copy once a week.

tetris11 ,
@tetris11@lemmy.ml avatar

Radical suggestion:

  • Once a year, buy a hard drive that can hold all of your data.
  • rsync everything to it.
  • Unplug it and put it back into cold storage.

atzanteol ,

Once a… year? A lot can change in a year. Cloud storage can be pretty cheap these days. Back up to something like Backblaze, S3, or Glacier nightly instead.

Appoxo ,
@Appoxo@lemmy.dbzer0.com avatar

You can sync to it periodically, like once a month, but keep one drive as a yearly backup.

Decronym Bot , (edited )

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

Fewer Letters More Letters
Git Popular version control system, primarily for code
HA Home Assistant automation software
~ High Availability
LXC Linux Containers
NAS Network-Attached Storage
Plex Brand of media server package
RAID Redundant Array of Independent Disks for mass storage
RPi Raspberry Pi brand of SBC
SBC Single-Board Computer
SSD Solid State Drive mass storage

8 acronyms in this thread; the most compressed thread commented on today has 6 acronyms.

[Thread for this sub, first seen 18th Nov 2023, 10:35] [FAQ] [Full list] [Contact] [Source code]

Skies5394 ,

On my main server, ZFS snapshots on an SSD RAID1 mirror cover my container appdata, VM virtual disks, and Docker images. That data is also backed up in full once per night to the RAID10 array, then rsynced to the backup server, which uploads it to the cloud.

The data on the RAID array is backups, repos, or media that I’ve deposited there for an extra copy and for serving via Plex/Jellyfin. I have extra copies of that data, so if I were to lose the array entirely I wouldn’t be pleased, but my personal pictures/videos wouldn’t be in danger.

I run two backup servers, which both upload to the cloud. One takes bare-metal images of all my computers (sans the servers’ bulk drives); the other takes live folders.

This is more for convenience, so that I can pull a bare-metal image to restore a device, or easily go find a versioned file online if necessary, on either account.

As a wise man said, you can never have too many backups.

namelivia ,

I have all my configuration as Ansible and Terraform code, so everything can be destroyed and recreated with no effort.

When it comes to the data, I made a bash script that copies, compresses, encrypts, and uploads it. Not sure if this is the best approach, but it’s how I’m dealing with it right now.
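The copy/compress/encrypt steps can look something like the following. This is just one way to do it, not namelivia's actual script: it uses openssl for symmetric encryption, temp paths standing in for the real data, and a hard-coded passphrase purely for the demo (use a real secret store). The upload step would go at the end.

```shell
#!/bin/sh
set -eu

# Stand-ins for the data directory and a passphrase (demo only!)
data=$(mktemp -d)
work=$(mktemp -d)
echo "important" > "$data/notes.txt"
pass="correct-horse-battery-staple"

# Compress, then encrypt; the .enc file is what you'd upload off-site
tar -czf "$work/backup.tar.gz" -C "$data" .
openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$pass" \
  -in "$work/backup.tar.gz" -out "$work/backup.tar.gz.enc"

# Round-trip check: decrypt and confirm the archive is intact
openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$pass" \
  -in "$work/backup.tar.gz.enc" | tar -tz | grep -q notes.txt && echo "ok"
```

Always do that round-trip check: an encrypted backup you can't decrypt is no backup at all.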

mhzawadi ,
@mhzawadi@lemmy.horwood.cloud avatar

That sounds a lot like how I keep my stuff safe; I use Backblaze for my off-site backup.

rentar42 ,

I've got a similar setup, but I use Kopia for backups, which does everything you describe and also handles deduplication very well.

For example, I've added older, less structured backups to my "good" backup now, and since there is a lot of duplication between a 4-year-old backup and a 5-year-old backup, it barely increased storage usage.

ehrenschwan ,

I use Duplicati for Docker containers. You host it in Docker itself and attach the persistent volumes from all the other containers to it; then you can set up backup jobs for each.

ad_on_is ,
@ad_on_is@lemmy.world avatar

Most of my Docker services use mounted folders/files, which I usually store under the user’s home folder, e.g. /home/username/Docker/servicename.

Now, my personal habit of choice is to keep user folders on a separate drive and mount them into /home/username. Additionally, one can mount /var/lib/docker the same way. I also spin up all of these services with Portainer. The benefit is that if the system breaks, I don’t care much, since everything is on a separate drive. If I ever need to set everything up again, I just spin up Portainer, which does the rest.

However, this is not a backup, which should be done separately in one way or the other. But it’s for sure safer than putting all the trust into one drive/sdcard etc.

eskuero ,
@eskuero@lemmy.fromshado.ws avatar

My docker containers are all configured via docker compose so I just tar the .yml files and the outside data volumes and backup that to an external drive.

For configs living in /etc you can also back them all up, but I guess it’s harder to remember what you modified and where, which is why you should document your setup step by step.

Something nice and easy I use for personal documentation is mdBook.

Kaldo OP , (edited )
@Kaldo@kbin.social avatar

Ahh, so the best Docker practice is to always use outside data volumes and back those up separately; seems kinda obvious in retrospect. What about mounting them directly from the NAS (or even running Docker on the NAS)? For local networks the performance is probably good enough. That way I wouldn't have to schedule regular syncs and transfers between "local" device storage and the NAS. Dunno if it would have a negative effect on drive longevity compared to just running a daily backup.

adam ,

If you’ve got a good network path NFS mounts work great. Don’t forget to also back up your compose files. Then bringing a machine back up is just a case of running them.
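For reference, an NFS-backed volume can be declared right in the compose file, so it lives alongside the service definitions it belongs to. A hypothetical fragment — the NAS address, export path, and NFS options are all placeholders that depend on your server:

```yaml
volumes:
  appdata:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.50,rw,nfsvers=4"
      device: ":/export/appdata"

services:
  app:
    image: nginx
    volumes:
      - appdata:/data
```

With this, the compose file in git plus the NFS export on the NAS is everything needed to bring the service back up on a fresh machine.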

Appoxo ,
@Appoxo@lemmy.dbzer0.com avatar

My whole environment is in docker-compose, which is backed up to GitHub.
My config/system drive is backed up with Veeam to one drive.
That backup is rsynced to another drive every week.

But: I only have a 1-drive NAS, because I don’t have the space for a proper PC with drive caddies, and commercial NASes (Synology, QNAP) are not my jam: I’d need a transcoding-capable GPU, and those models are overpriced for what I need.
And with plain Debian I get unlimited system updates (per distro release) and learn Linux along the way.

drkt ,

My configs are backed up, so I can spin up a new container in minutes; I just accept the manual labor. It’s probably a good thing to clean out the spiders and skeletons every now and then.

ftbd ,

By using NixOS and tracking the config files with git
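Tracking the config in git is just an ordinary repo over /etc/nixos. A sketch, using a temp directory in place of /etc/nixos and a made-up one-line configuration:

```shell
#!/bin/sh
set -eu

# Temp directory standing in for /etc/nixos
cfg=$(mktemp -d)
cd "$cfg"
git init -q
printf '{ services.openssh.enable = true; }\n' > configuration.nix
git add configuration.nix
git -c user.email=you@example.com -c user.name=you \
  commit -qm "initial system config"
git log --oneline
```

Since NixOS rebuilds the whole system from that file, cloning the repo onto a fresh install and running nixos-rebuild gets you back to the same state.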

Haystack ,

For real, saves so much space that would be used for VM backups.

Aside from that, I have anything important backed up to my NAS, and Duplicati backs up from there to Backblaze B2.

CameronDev ,

I rsync my root and everything under it to a NAS, which will hopefully save my data. I wrote some scripts manually to do that.

I think the next best thing is to document your setup as much as possible, either with typed-up notes or Ansible/Packer/whatever; any documentation is better than nothing if you have to rebuild.

foggy ,

I have a 16tb USB HDD that syncs to my NAS whenever my workstation is idle for 20 minutes.

darvocet ,

I run history and then clean it up, so I have a guide to follow on the next setup. It’s not even so much for drive failure as for moving to newer OS versions when available.

The ‘data’ is backed up by scripts that tar folders up and scp them off to another server.
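Cleaning up history for a rebuild guide can be as simple as filtering the history file for the commands worth replaying. A sketch against a fake history file; the filter patterns are just examples, adjust for your own tooling:

```shell
#!/bin/sh
set -eu

# Fake ~/.bash_history standing in for the real one
hist=$(mktemp)
cat > "$hist" <<'EOF'
ls -la
sudo apt install docker.io
cd /tmp
docker compose up -d
vim notes.txt
EOF

# Keep only install/service commands worth replaying on a fresh install
grep -E '^(sudo apt|docker|systemctl)' "$hist" > setup-guide.txt
cat setup-guide.txt
```

The output (here the apt install and docker compose lines) is a rough first draft of the rebuild guide; it still needs a manual pass to drop dead ends and failed attempts.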
