tell me your experience using zfs/btrfs

cross-posted from: programming.dev/post/9319044

Hey,

I am planning to implement authenticated boot, inspired by Pid Eins’ blog. I’ll be using pam_mount for /home/user. I need to check the integrity of all partitions.

I have been using luks+ext4 till now. I am hesitant to switch to zfs/btrfs, afraid I might fuck up. A while back I accidentally purged ‘/’ trying out Timeshift, which was my fault.

Should I use zfs/btrfs for /home/user? As for root, I’m considering luks+(zfs/btrfs) so that it can be restored to a blank state.
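
Roughly what I have in mind for root (a sketch only; device names and subvolume names are placeholders, and I haven’t tested this exact sequence):

    # LUKS2 + btrfs, with '/' on a subvolume so it can be rolled back to a blank state
    cryptsetup luksFormat --type luks2 /dev/nvme0n1p2
    cryptsetup open /dev/nvme0n1p2 cryptroot
    mkfs.btrfs /dev/mapper/cryptroot
    mount /dev/mapper/cryptroot /mnt
    btrfs subvolume create /mnt/@
    btrfs subvolume snapshot -r /mnt/@ /mnt/@blank   # pristine snapshot to restore from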

Duckytoast ,

Luks+btrfs with Arch as daily driver for 3 years now, mostly coding and browsing. Not a single problem so far :D

unhinge OP ,

that sounds good.

Have you used the LUKS integrity feature? Though it’s marked experimental in the man page.
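
(For reference, this is the feature I mean; a rough sketch based on the cryptsetup man page, untested, with a placeholder device:)

    # LUKS2 with dm-integrity underneath; --integrity is the experimental part
    cryptsetup luksFormat --type luks2 \
        --cipher aes-xts-plain64 --integrity hmac-sha256 /dev/sdX2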

uiiiq ,

I have the same use-case as @Duckytoast. I didn’t test the integrity feature because it is my work machine and I am not fond of doing experimental stuff on it.

possiblylinux127 ,

If you want to be sure you don’t lose data, make sure you back up your keys.

rtxn ,

My experience with btrfs is “oh shit I forgot to set up subvolumes”. Other than that, it just works. No issues whatsoever.

unhinge OP ,

oh shit I forgot to set up subvolumes

lol

I’m also planning on using its subvolume and snapshot features. Since zfs also supports native encryption, it’ll be easier to manage subvolumes for backups.
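
Something like this is what I’m picturing (pool and dataset names are made up):

    # encrypted dataset for /home; snapshots can be sent raw, still encrypted
    zfs create -o encryption=on -o keyformat=passphrase rpool/home
    zfs snapshot rpool/home@2024-01-01
    zfs send -w rpool/home@2024-01-01 | zfs recv backup/home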

rhys ,
@rhys@rhys.wtf avatar

@unhinge I run a simple 48TiB zpool, and I found it easier to set up than many suggest and trivial to work with. I don't do anything funky with it though, outside of some playing with snapshots and send/receive when I first built it.
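
For illustration, day-to-day it amounts to little more than this (pool and dataset names here are examples):

    zpool status tank                 # health check
    zfs snapshot tank/data@weekly
    zfs send tank/data@weekly | zfs recv backuppool/data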

I think I recall reading about some nuance around using LUKS vs ZFS's own encryption back then. Might be worth having a read around comparing them for your use case.

unhinge OP ,

afaik openzfs provides authenticated encryption, while luks integrity is marked experimental (as of now, in the man page).

openzfs also doesn’t re-encrypt dedup blocks if dedup is enabled (see Tom Caputi’s talk), but dedup can just be disabled.
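
(i.e., something like this, with a placeholder pool name:)

    zfs set dedup=off rpool   # leaving dedup off avoids the re-encryption caveat entirely
    zfs get dedup rpool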

unhinge OP ,

if you happen to find the comparison, could you link it here?

skullgiver , (edited )
@skullgiver@popplesburger.hilciferous.nl avatar

deleted_by_author

    superbirra ,

    […] there were rumours some French guy got arrested and had his LUKS encryption fail on him, so you never know.

    xkcd.com/538

    skullgiver , (edited )
    @skullgiver@popplesburger.hilciferous.nl avatar

    deleted_by_author

    possiblylinux127 ,

    Or it’s possible that he reused passwords.

    pastermil ,

    Fucking French, man…

    pastermil ,

    Can’t vouch for ZFS, but btrfs is great!

    You can mount root, log, and home on different subvolumes; they’d practically be on different partitions while still sharing the size limit.

    You can also take system snapshots with one command while the system is still running. No need to exclude the home or log directories, nor the pseudo filesystems (e.g. proc, sys, tmp, dev).
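
    For example (the subvolume names are just a common convention):

        # separate subvolumes that share one filesystem's space
        btrfs subvolume create /mnt/@
        btrfs subvolume create /mnt/@home
        btrfs subvolume create /mnt/@log
        # one-command, read-only snapshot of the running system
        btrfs subvolume snapshot -r / /.snapshots/root-$(date +%F)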

    Penguincoder ,

    As a home user I’d recommend btrfs. It has mainline kernel support and is way easier to get operational than zfs. If you don’t need zfs’s more advanced RAID types or deduplication, btrfs can do everything you want. Btrfs is also a lot more resource-friendly; zfs, especially with deduplication, takes a ton of RAM.

    SeeJayEmm ,
    @SeeJayEmm@lemmy.procrastinati.org avatar

    My only complaint with btrfs, from when I used to run it, is that KVM disk performance was abysmal on it. Otherwise I had no issues with the fs.

    Bitrot ,
    @Bitrot@lemmy.sdf.org avatar

    Most of the tools should now be setting nocow for virtual drives; performance these days isn’t bad.
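
    (For example, the attribute can be set manually like this; the path is just an example:)

        # No_COW only takes effect on new/empty files, so set it on the directory
        chattr +C /var/lib/libvirt/images
        lsattr -d /var/lib/libvirt/images   # shows the 'C' attribute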

    Atemu ,
    @Atemu@lemmy.ml avatar

    nodatacow is a hack and will disable any and all consistency mechanisms for that file’s contents. Tools should not be setting nodatacow for virtual drives, certainly not by default.

    Bitrot , (edited )
    @Bitrot@lemmy.sdf.org avatar

    Default libvirt behavior since 2020. Pretty sure some database tools turn it on too.

    Atemu ,
    @Atemu@lemmy.ml avatar

    Yikes.

    drwho ,
    @drwho@beehaw.org avatar

    They do. Otherwise they run like Oracle when auditd is configured and running.

    possiblylinux127 ,

    Really? Were the virtual disks running ext4?

    SeeJayEmm ,
    @SeeJayEmm@lemmy.procrastinati.org avatar

    Yes.

    XTL ,

    My experiences:

    ZFS: never even tried because it’s not integrated (license).

    Btrfs: iirc I’ve tried it three times, several years ago now. On at least two of those tries, after maybe a month or so of daily driving, the fs suddenly went totally unresponsive and, because it was the entire system, I could only reboot. FS corrupted and wouldn’t recover. There is no fsck. There is no recovery. Total data loss. Start again from the last backup. Hadn’t seen that since reiserfs around 2000. Found lots of posts with similar error messages. Took btrfs off the list of things I’ll be using in production.

    I like both from a distance, but still use ext*. Never had total data loss that wasn’t a completely electrically dead drive with any version I’ve used since 1995.

    BCsven ,

    There is btrfs check --repair to fix corruption.

    Chewy7324 ,

    www.suse.com/support/kb/doc/?id=000018769

    WARNING: Using ‘--repair’ can further damage a filesystem instead of helping if it can’t fix your particular issue.

    Edit:

    It is extremely important that you ensure a backup has been created before invoking ‘--repair’.

    BCsven , (edited )

    That is a caveat with OS disk tools in general. Even partition resizing gives this warning, as does Windows chkdsk: unnecessary disk checks should be avoided, as they can create issues where none might have existed, so only run them when you suspect a problem.

    But as lemann pointed out in this thread, btrfs scrub is less risky.

    lemann ,

    Ouch, that must have been a pain to recover from…

    Funnily enough, I’ve had almost the opposite experience to yours. Several years ago my HDDs would drop out at random during heavy write loads; after a while I narrowed down the cause to some dodgy SATA power cables, which sadly I could not replace at the time. Due to the hardware issue I could not scrub the filesystem successfully either. However, I managed to recover all my data to a separate BTRFS filesystem using the “restore” utility mentioned in the docs, and to the best of my knowledge all the recovered data was intact.
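
    (If anyone needs it, the utility is invoked roughly like this; the device and target path are examples:)

        # copies files off an unmountable btrfs without writing to the damaged device
        btrfs restore -v /dev/sdX /mnt/recovery-target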

    While that past error required a separate filesystem to perform the recovery, my most recent hardware issue with drives dropping out didn’t need any recovery at all - after resolving the hardware issue (a loose power connection) BTRFS pretty much fixed itself during a scheduled scrub and spat out all the repairs in dmesg.

    I would suggest enabling some kind of monitoring on BTRFS’s counters if you haven’t, because the fs will do whatever it can to prevent interruption to operations. In my previous two cases, performance was pretty much unaffected, and I only noticed the hardware problems due to the scheduled scrub & balance taking longer or failing.

    Don’t run a fsck - BTRFS essentially does this to itself during filesystem operations, such as a scrub or a file read. The provided btrfs check tool (fsck) is for the internal B-tree structure specifically, AFAIK, and with --repair it irreversibly modifies the filesystem in a way that can cause unrecoverable data loss if the user does not know what they are doing. Instead of running fsck, run a scrub - it’s an online operation that can be done while the filesystem is still mounted.
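
    A minimal version of what I do (the mount point is just an example):

        btrfs scrub start /mnt/data     # online; the fs stays mounted and usable
        btrfs scrub status /mnt/data
        btrfs device stats /mnt/data    # the error counters worth monitoring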

    possiblylinux127 ,

    DO NOT RUN A SCRUB IF YOU SUSPECT HARDWARE FAILURE.

    No, seriously. If you are having hardware issues, a scrub could make the corruption much worse. You should first make a complete copy of your data and then run btrfs check. Sorry for shouting, but it is really important that you don’t scrub a bad disk.

    waigl ,

    Several years ago now. On at least two of those tries, after maybe a month or so of daily driving, the fs suddenly went totally unresponsive and, because it was the entire system, I could only reboot. FS corrupted and wouldn’t recover. There is no fsck. There is no recovery. Total data loss.

    Could you narrow it down to just how long ago? BTRFS took a very long time to stabilise, so that could possibly make a difference here. Also, do you remember if you were using any special features, especially RAID, and if RAID, which level?

    XTL ,

    I could see if there’s notes somewhere. Very plain desktop and laptop. Probably encrypted LVM. At least one was doing a lot of software builds with big system image trees and snapshots.

    possiblylinux127 ,

    Btrfs has come a long way in the last few years. I have been using it for a little over 5 years and it’s rock solid. It now powers all my bare-metal machines, and I use RAID 1 on my servers.

    There was one time a disk unexpectedly went bad (it started returning bad data on read), which led to the system going read-only. It took me about 5 minutes to swap disks, and it was fine. Needless to say, I was impressed that no data was lost.

    Btrfs normally won’t get corrupted unless you have a hardware issue. It uses COW, so writes can never be half completed. If you do manage to get corruption, you can use btrfs check.

    TCB13 ,
    @TCB13@lemmy.world avatar

    Btrfs normally won’t get corrupted unless you have a hardware issue. It uses COW, so writes can never be half completed. If you do manage to get corruption, you can use btrfs check.

    From my experience, BTRFS is way more reliable against hardware failure than ext4 ever was. Ext* filesystems tend to go corrupt at the first and smallest power loss or hardware failure.

    savvywolf ,
    @savvywolf@pawb.social avatar

    Many, many years ago I set up btrfs on the disks I write my backups to, in a RAID 1 config. Unfortunately one of those disks went bad and ended up corrupting the whole array. Makes me wonder if I set it up correctly or not.

    Nowadays, I have the following disks in my system set up as btrfs:

    • My backups disk because of compression.
    • My OS drive because of Timeshift.
    • My home folder because it feels safer. COW feels like it’ll handle power failures better, whilst there’s also checksumming so I can identify corrupted files.
    • My SSD Steam library over two drives because life is short and I cba managing the two ssds independently.

    It’s going fine, but it feels like I need to manually run a balance every once in a while when the disk fills up.
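
    (Something like this; the usage threshold is picked fairly arbitrarily:)

        # compact half-empty data chunks to free allocatable space
        btrfs balance start -dusage=50 /
        btrfs balance status /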

    I also like btrfs-assistant for managing the devices.

    Out of interest, since I’ve not used the “recommended partition setup” for any install for a while now, is ext4 still the default on most distros?

    waigl ,

    Out of interest, since I’ve not used the “recommended partition setup” for any install for a while now, is ext4 still the default on most distros?

    I recently installed Nobara Linux on an additional drive because, after 20 years, I wanted to give Linux gaming another shot (it works a lot better than I had hoped for, btw), and it defaulted to btrfs. I assume Fedora does too, because I cannot imagine Nobara changed that part of the Fedora base for gaming purposes.

    Bitrot ,
    @Bitrot@lemmy.sdf.org avatar

    Fedora does, with compression enabled. It’s one of the largest divergences from Red Hat since Red Hat doesn’t support it at all. openSUSE does also.

    Quazatron ,
    @Quazatron@lemmy.world avatar

    My SSD Steam library over two drives because life is short and I cba managing the two ssds independently.

    You do know that Steam handles multiple libraries transparently, even on removable drives?

    savvywolf ,
    @savvywolf@pawb.social avatar

    I know they all show up in the same interface and I can move games between drives in the storage interface.

    But I don’t want to deal with having to shuffle things around to install a 40GiB game where both drives only have 30GiB free. Or having to remember which of the two drives has a specific game on when I want to find their files.

    It also gives a possibly-insignificant speed boost and extra cool points.

    Quazatron ,
    @Quazatron@lemmy.world avatar

    Can’t argue with cool points.

    drwho ,
    @drwho@beehaw.org avatar

    Just out of curiosity, did you RAID-1 the metadata as well?

    savvywolf ,
    @savvywolf@pawb.social avatar

    This was ages ago, so I can’t really remember, I’m afraid. I think maybe the files themselves were corrupted, not the folder structure, so perhaps? Although I can see that as a thing I’d forget to do.

    BCsven ,

    Btrfs is the default on openSUSE; it has worked great for me for 7 years. No issues.

    Sureito ,

    Same here, but for only 1 year on my main machine and 6 years on my laptop. I looove snapper. It saved my ass so many times.

    BCsven ,

    Yes, it is great. For me, snapper rollback was an awesome onboarding experience to Linux. Being eager to try things I read online, for tweaks and general exploration, it brought me back to a working system after some custom kernel compiling went awry, or after deleting the wrong file, etc.

    sxan ,
    @sxan@midwest.social avatar

    I’ve been on btrfs for so many years, with nightly backups via restic, so I’ve been dragging my feet on snapper. I finally installed it a couple weeks ago, and while I opened the config, I don’t think I changed anything. It’s worked so well, and the Arch package was so well done, that I’d forgotten I had it installed until, a few days later, I noticed it was taking a snapshot every time before I installed something. It’s shockingly good, and I don’t understand why btrfs+snapper(+grub-btrfs) isn’t the default on installs now.

    floofloof ,

    I haven’t used them professionally but I’ve been using ZFS on my home router (OPNsense) and NAS (TrueNAS with RAID-Z2) for many years without problem. I’ve used Btrfs on laptops and desktops with OpenSUSE Tumbleweed for the past year and a bit, also without problem. Btrfs snapshots have saved me a couple of times when I messed something up. Both seem like solid filesystems for everyday use.

    possiblylinux127 ,

    Why ZFS on a router?

    floofloof ,

    The two options are UFS and ZFS, and their documentation says ZFS is more reliable. I had UFS before, and after a power outage the router wouldn’t boot, so I switched to ZFS. That was two or three years ago, and the router has stayed up since then (except one time when an SSD died, but that was a hardware failure).

    possiblylinux127 ,

    Honestly I’m surprised UFS is still a thing. I guess it’s useful for read-only flash.

    deliriousn0mad ,

    After 4 years on btrfs I haven’t had a single issue, I never think about it really. Granted, I have a very basic setup. Snapper snapshots have saved me a couple of times, that aspect of it is really useful.

    hellvolution ,
    @hellvolution@lemmygrad.ml avatar

    Stick with ext3/ext4

    possiblylinux127 ,

    Ext4 is bad for data integrity and has slow performance. ext3 is just dated.

    hellvolution ,
    @hellvolution@lemmygrad.ml avatar

    I’ve been running ext2/ext3/ext4 since 2002(?)… Never had a problem! But I’ve lost lots of data using reiser4 and xfs, especially when a blackout happens. If you don’t have a UPS (no-break) and aren’t using a notebook, and you have data that’s important to you, I’d stick with ext4. I actually didn’t notice thaaaat much of a performance boost between any of these formats when using fast HDDs, SSDs & NVMe!

    possiblylinux127 ,

    Like you say, ext4 is absolutely ancient at this point. I still use it for VMs, as it has low overhead and no compression, but for bare metal ext4 feels old.

    XFS can’t really be compared to btrfs or ZFS, as it is closer to ext4. If you’re curious, Wikipedia has a table of filesystems and the features they provide. As far as XFS’s reliability goes, I can’t really say, as I just use ext4, btrfs, or ZFS.

    Ramin_HAL9001 ,

    Linux does not support ZFS as well as operating systems like FreeBSD or OpenIndiana do, but I do use it on my Ubuntu box for my backup array. It is not the best setup: RAID-Z over USB is not at all guaranteed to keep your data safe, but it was the most economical thing I was able to build myself, and it gets the job done well enough, with regular scrubbing, to give me peace of mind about at least having one other reliable copy of my data. And I can write files to it quickly, and take snapshots of the state of the filesystem if need be.

    I used to use Btrfs on my laptop and it worked just fine, but I did have trouble once when I ran out of disk space. A Btrfs filesystem puts itself into read-only mode when that happens, and that makes it tough to delete files to free up space. There is a magic incantation that can restore read-write functionality, but I never learned what it was; I just decided to stop using it, because Btrfs is pretty clearly not for home PC use. Freezing the filesystem in read-only mode makes sense in a data-center scenario, but not for a home user who might want to erase data so they can keep using the machine normally. I might consider using Btrfs in place of ZFS on a file server, though ZFS does seem to provide more features and seems to be somewhat better tested and hardened.
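
    (For anyone who hits the same wall: the incantation usually suggested is a zero-usage balance, which frees fully empty data chunks; I can’t say for certain it’s the one I needed:)

        # commonly suggested recovery when btrfs goes read-only from lack of space
        btrfs balance start -dusage=0 /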

    There is also bcachefs now as an alternative to Btrfs, but it is still fairly new and not widely supported by default installations. I don’t know how stable it is or how well it compares to Btrfs, but I thought I would mention it.

    possiblylinux127 ,

    Btrfs is good for small systems with 1-2 disks. ZFS is good for many disks and benefits heavily from RAM. ZFS also has special devices (dedicated fast vdevs for metadata).
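
    (e.g., a special metadata vdev; the pool and device names are examples:)

        # fast mirror that holds pool metadata (and optionally small blocks)
        zpool add tank special mirror nvme0n1 nvme1n1
        zfs set special_small_blocks=32K tank/data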

    FrederikNJS ,

    BTRFS is running just fine for my 8 disk home server.

    possiblylinux127 ,

    That is not a recommended setup. RAID 5 is not stable yet.

    FrederikNJS ,

    I never said anything about RAID5. I’m running RAID1.

    possiblylinux127 ,

    For 8 disks?

    FrederikNJS ,

    Yep

    possiblylinux127 ,

    Interesting

    FrederikNJS ,

    Oh, I misremembered… It’s only 7 disks in BTRFS RAID1.

    I have:

    • 12 TB
    • 8 TB
    • 6 TB
    • 6 TB
    • 3 TB
    • 3 TB
    • 2 TB

    For a combined total of 40 TB raw storage, which in RAID1 turns into 20 TB usable.
