Backups for an SD card disk image that don't take up tonnes of space and can be rolled back?

I would like to make manual backups of an SD card as a disk image so that the card can be easily recreated when needed. I’d like to keep a few versions so that, if there is a problem I didn’t know about, I can roll back.

How can I do this incrementally, or with de-duplication, so that I don’t have to keep full copies of the complete SD card? The card is very big, but most of its content won’t change much.

It’s for a MiyooCFW image, which is on a FAT32-formatted microSD card.

Thanks for your help! Also let me know if I am going about the problem in a wrong way.

tkk13909 ,

If it’s a filesystem, it can be backed up using BorgBackup. There are a few different clients but I personally use Vorta on Linux.

dan ,

+1, BorgBackup is great, and its deduplication works very well. Vorta works well, and there’s also GNOME’s Pika Backup, which has a very simple UI. For servers, I use Borgmatic.

RedSquadCampFollower OP ,

@tkk13909 @dan

Borg has insane deduping. The first time I used it, I thought it was broken because of how much smaller the backup was compared to the original. I used it with the Vorta GUI.

I am not sure how to combine making a disk image with backing it up via Borg, though, either on the command line or via one of the GUIs.

chameleon ,

Easiest way would be to use `borg create --read-special --chunker-params fixed,4194304 '/home/user/sdcardbackup::{now}' /dev/sdX` (which I copied from the examples in the documentation). I'm not sure if Vorta has a way to activate `--read-special`, but I suspect not; you can most likely still use it to make the repo and manage archives inside of it, though.

Backing up from a command/stdin might also be relevant as an alternative, since that lets you back up more or less anything.
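To put the pieces together, here is a hedged end-to-end sketch based on that command; the repository path and `/dev/sdX` are placeholders, the device needs root access, and the `--read-special`/`extract --stdout` flags are the ones from the Borg documentation:

```shell
# one-time: create the repository (encryption mode is your choice)
borg init --encryption=repokey /home/user/sdcardbackup

# back up the raw device; the fixed 4 MiB chunker keeps dedup aligned
# across runs, so unchanged regions of the card cost almost nothing
sudo borg create --read-special --chunker-params fixed,4194304 \
    '/home/user/sdcardbackup::{now}' /dev/sdX

# list the archives you've accumulated
borg list /home/user/sdcardbackup

# roll back: stream an archive straight back onto the card
sudo borg extract --stdout '/home/user/sdcardbackup::ARCHIVE-NAME' | \
    sudo dd of=/dev/sdX bs=4M
```

Each run stores a full logical image, but only the chunks that changed since earlier archives take up new space in the repo.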

Shdwdrgn ,

I’m not sure about anything that does rolling backups of full disks, but I have used rdiff-backup for years for rolling backups of individual files. The format for the backup is similar to (and based on) rsync, so it’s fairly easy to script. For complete servers I just keep a copy of the install image on hand; in a catastrophic drive failure I can do a new installation to a new drive (creating the partitions, grub setup, etc.), then restore the latest backup. An alternative might be to use dd and create a full drive image file to use as your starting point in a full recovery.

One thing to keep in mind, though, is that the backups should NOT contain any system folders like dev or proc that get generated at boot. If possible, when making a starting image with dd, you want the drive to be separate and not part of the running OS, because some folders like dev and var have a basic set of files in place needed for the boot process which may be different from the final version you see after the OS is up and running. That’s why I find it easier to just plan around a clean install to new drives when needed.
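The dd starting-image idea can be sketched like this; a toy file stands in for the real device so this is safe to run as-is, and for a real card you would swap in `/dev/sdX` (as root) for the input path:

```shell
# toy 4 MiB "device" standing in for /dev/sdX (hypothetical paths)
dd if=/dev/zero of=/tmp/toy-device.img bs=1M count=4 status=none

# full image backup, compressed so unused zeroed space costs little
dd if=/tmp/toy-device.img bs=4M status=none | gzip > /tmp/full-backup.img.gz

# verify the backup round-trips byte-for-byte before relying on it
gunzip -c /tmp/full-backup.img.gz | cmp - /tmp/toy-device.img && echo "backup verified"
```

Restoring is the reverse pipe: `gunzip -c full-backup.img.gz | dd of=/dev/sdX bs=4M`. Note this is a full copy each time, which is exactly the space problem the deduplicating tools above avoid.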

RedSquadCampFollower OP ,

Thanks, I will look at rdiff-backup. I am not sure if rsync is able to “see inside” the *.img files to discern the individual files. If it can, that would be helpful, because I could just re-write the same file over and over again and keep backups using rsync or any of the various rsync-derived tools.

The filesystem will be cold at backup time, because I will need to shut the console down, remove the card, and put it into my computer’s reader, so no worries about that.
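For what it’s worth, the rsync delta algorithm works at the byte level, so rdiff-backup doesn’t need to see the individual files inside the image: overwriting the same `.img` and backing it up again stores only the changed byte ranges as reverse increments. A hedged sketch with hypothetical paths (the `-r`/`--list-increments` options are from the rdiff-backup manual):

```shell
# back up the directory holding the image; each run adds an increment
rdiff-backup /home/user/images/ /mnt/backup/sdcard-history/

# see which versions are available to roll back to
rdiff-backup --list-increments /mnt/backup/sdcard-history/

# restore the image as it was 7 days ago
rdiff-backup -r 7D /mnt/backup/sdcard-history/sdcard.img /tmp/restored.img
```

The latest version stays as a plain file in the destination, so the common case (restore the newest image) is just a copy.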

lurch ,

Clonezilla comes with multiple tools to copy entire disks to images while only copying used blocks.

RedSquadCampFollower OP ,

The Clonezilla website (clonezilla.org) says:

Differential/incremental backup is not implemented yet.

NeoNachtwaechter ,

I haven’t tried such a thing, but I remember ZFS has an option for block deduplication.

So you would set up ZFS with block deduplication (and probably without compression; try it both ways), and then make your backup images with the dd tool and the correct block size.

You then always make full copies and keep them as normal files, but they only take up the disk space of the differences.
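The ZFS approach could look like the sketch below; the pool name `tank` and dataset name are hypothetical, and it requires ZFS installed plus root. Dedup matches whole records, so identical data that hasn’t shifted position between images dedups, while a full filesystem-level rewrite of the card would not:

```shell
# dataset dedicated to the images, with block dedup on
zfs create -o dedup=on -o compression=off tank/sdcard-images

# each backup is a full dd copy, but records identical to an earlier
# image are stored only once
sudo dd if=/dev/sdX of=/tank/sdcard-images/backup-$(date +%F).img bs=128k

# check how much space dedup is actually saving
zpool get dedupratio tank
```

One caveat worth knowing: ZFS dedup keeps its table in RAM, so it can be memory-hungry on large pools.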

let me know if I am going about the problem in a wrong way.

I would not say “wrong way”. It’s fun to think about such things and try them out.

On the other hand, a FAT32 volume is usually limited to 32 GB by formatting tools (the filesystem itself supports up to 2 TB). I would not mind having many of them lying around on my home NAS, which has 12 TB on RAID :-)

RedSquadCampFollower OP ,

hmmm I think this is a bit beyond me; at this point I don’t want to create an additional side project. I might learn about the more modern spiffy file systems in a few years.

NeoNachtwaechter ,

in a few years.

Never give up :-)
