
Which filesystem should I use for stable storage?

Hello everyone. I’m going to build a new PC soon and I’m trying to maximize its reliability as much as I can. I’m using Debian Bookworm. I have a 1 TB M.2 SSD to boot from and a 4 TB SATA SSD for storage. My goal is for the computer to last at least 10 years. It’s for personal use and work: playing games, making games, programming, drawing, 3D modelling, etc.

I’ve been reading up on filesystems, and it seems like the best ones for preserving data through corruption, accidental loss, or a power outage are BTRFS and ZFS. However, I’ve also read they have stability issues, unlike Ext4. It seems like a tradeoff, then?

I’ve read that most of BTRFS’s stability issues come from trying to do RAID5/6 on it, which I’ll never do. Is everything else good enough? ZFS’s stability issues seem to mostly come from it having out-of-tree kernel modules, but how much of a problem is this in real-life use?

So far I’ve been thinking of using BTRFS for the boot drive and ZFS for the storage drive. But maybe it’s better to use BTRFS for both? I’ll of course keep backups, but I would still like to minimize how much I have to deal with things breaking.

Thank you in advance for the advice.

hornedfiend ,

I’ve been using ext4/btrfs for a long time, but recently I decided to give XFS a try, and it feels like a pretty solid all-rounder FS.

I know it’s a very old and very well-supported FS, developed by Silicon Graphics, and it has been getting constant improvements over time, with various performance improvements and checksumming. TBH, for my use cases anything would work, but BTRFS snapshots were killing my storage and I got bored with the maintenance tasks.

The Arch Wiki has amazing documentation for all filesystems, so it might be worth a look.

sibloure ,

Been using BTRFS for several years and have never once had any sort of issue. I just chose BTRFS at system setup and never think about it again. I like that when I copy a file it is INSTANT and makes my computer feel super fast, whereas Ext4 can take several minutes to copy large files. This is with similar use to what you describe. No RAID.

Andy3153 ,
@Andy3153@lemmy.ml avatar

It’s gonna be a hard decision to make. I know that because I read about Btrfs for about a whole week before deciding to switch to it. But, I’m a happy Btrfs user now for about 8 months, and I’ll be honest with you, in my opinion, if your application does not mainly involve small random writes that’ll make Btrfs inevitably fragment a ton, it is most likely good for any situation. I don’t know much about the other modern/advanced filesystems like ZFS or XFS to tell you anything about them though

ryannathans ,

ZFS with raidz/mirror and increased copies

It’s self-healing and can’t corrupt on power loss
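A rough sketch of what that looks like (the pool and device names here are made up; adapt them to your disks):

```shell
# Create a mirrored pool from two whole disks (hypothetical devices):
zpool create tank mirror /dev/sda /dev/sdb

# Optionally keep two copies of every block on top of the mirror redundancy:
zfs set copies=2 tank

# Run a scrub periodically so ZFS can detect and self-heal bad blocks:
zpool scrub tank
```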

JWBananas , (edited )
@JWBananas@startrek.website avatar

This might be controversial here. But if reliability is your biggest concern, you really can’t go wrong with:

  • A proper hardware RAID controller

You want something with patrol read, supercapacitor- or battery-backed cache/NVRAM, and a fast enough chipset/memory to keep up with the underlying drives.

  • LVM with snapshots
  • Ext4 or XFS
  • A basic UPS that you can monitor with NUT to safely shut down your system during an outage.
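For the NUT piece, a minimal standalone setup is only a few lines (the UPS name and driver here are assumptions; check NUT’s hardware compatibility list for your model):

```shell
# /etc/nut/ups.conf -- define the UPS (usbhid-ups covers most USB models):
#   [myups]
#       driver = usbhid-ups
#       port = auto
#
# /etc/nut/nut.conf:
#   MODE=standalone
#
# upsmon (configured in upsmon.conf) then watches the battery state and
# triggers a clean shutdown when it runs low. Query the status by hand with:
upsc myups@localhost ups.status    # "OL" = on line power, "OB" = on battery
```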

I would probably stick with ext4 for boot and XFS for data. They are both super reliable, and both are usually close to tied for general-purpose performance on modern kernels.

That’s what we do in enterprise land. Keep it simple. Use discrete hardware/software components that do one thing and do it well.

I had decade-old servers with similar setups that were installed with Ubuntu 8.04 and upgraded all the way through 18.04 with minimal issues (the GRUB2 migration being one of the bigger pains). Granted, they went through plenty of hard drives. But some even got increased capacity along the way (you just replace them one at a time and let the RAID resilver in-between).

Edit to add: The only gotcha you really have to worry about is properly aligning the filesystem to the underlying RAID geometry (if the RAID controller doesn’t expose it to the OS for you). But that matters more with striping.
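As a sketch of that alignment step (the geometry numbers are hypothetical; substitute your controller’s actual stripe unit and data-disk count):

```shell
# Say the array is RAID 6 with 8 data disks and a 64 KiB stripe unit.
# XFS takes the stripe unit (su) and stripe width in units (sw) directly:
mkfs.xfs -d su=64k,sw=8 /dev/sdb1

# ext4 wants the same geometry in filesystem blocks (64 KiB / 4 KiB = 16):
mkfs.ext4 -E stride=16,stripe-width=128 /dev/sdb1
```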

ryannathans ,

Oh great, another single point of failure. Seriously, don’t use RAID cards. With ZFS, there’s no corruption on power loss. It’s also self-healing.

JWBananas ,
@JWBananas@startrek.website avatar

How many hardware RAID controllers have you had fail? I have had zero of 800 fail. And even if one did, the RAID metadata is stored on the last block of each drive. Pop in new card, select import, done.

ryannathans ,

1/1. Irrecoverable array, as that particular card was no longer available at the time of failure. Problems that don’t exist with ZFS.

JWBananas ,
@JWBananas@startrek.website avatar

I am sorry that you had to personally experience data loss from one specific hardware failure. I will amend the post to indicate that a proper hardware RAID controller should use the SNIA Common RAID DDF. Even mdadm can read it in the event of a controller failure.

Any mid- to high-tier MegaRAID card should support it. I have successfully pulled disks directly from a PERC 5 and imported them to a PERC 8 without issues due to the standardized format.

ZFS is great too if you have the knowledge and know-how to maintain it properly. It’s extremely flexible and extremely powerful. But like most technologies, it comes with its own set of tradeoffs. It isn’t the most performant out-of-the-box, and it has a lot of knobs to turn. And no filesystem, regardless of how resilient it is, will ever be as resilient to power failures as a battery/supercapacitor-backed path to NVRAM.

To put it simply, ZFS is sufficiently complex to be much more prone to operator error.

For someone with the limited background knowledge that the OP seems to have on filesystem choices, it definitely wouldn’t be the easiest or fastest choice for putting together a reliable and performant system.

If it works for you personally, there’s nothing wrong with that.

Or if you want to trade anecdotes, the only volume I’ve ever lost was on a TrueNAS appliance after power failure, and even iXsystems paid support was unable to assist. Ended up having to rebuild and copy from an off-site snapshot.

TCB13 ,
@TCB13@lemmy.world avatar

  • BTRFS - easy, fast, reliable, snapshots, compression, usable RAID, CoW, online resizing…
  • ZFS - hard to get into, reliable, snapshots, compression, state-of-the-art RAID, CoW…

Everything else, particularly Ext4, should be avoided. Your life will be a lot easier once you discover snapshotting, and also how much more robust and reliable BTRFS and ZFS are. I got into BTRFS a few years ago in order to survive power losses, as I regularly had issues with Ext3 and Ext4 because of that. My experience with Ext4 disks was always: if something goes slightly wrong, your data is… poof, gone.
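To give a flavor of the day-to-day btrfs commands (the device and mount point are examples):

```shell
# Mount with transparent zstd compression:
mount -o compress=zstd:3 /dev/sdb1 /mnt/data

# Take an instant read-only snapshot of the current contents:
btrfs subvolume snapshot -r /mnt/data /mnt/data/.snapshots/before-upgrade

# Verify all data against its checksums (self-repair additionally needs
# redundant copies, e.g. the DUP or RAID1 profiles):
btrfs scrub start /mnt/data
```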

cybersandwich ,

I’d go with your distro default, which I think is ext4 for most distros, and do proper backups/data management (which might include a NAS running ZFS, so you get the best of both worlds).

Depending on your data, it might be small and not need a NAS necessarily. Things like code could go up on GitHub or GitLab. Games themselves can always be redownloaded, etc. If your data is small enough, cloud storage isn’t too pricey.

One of the best things you can do for a PC is to get a solid true-sine-wave battery backup that will let you weather electricity fluctuations, surges, and brownouts, and give you time to shut down properly during an outage.

kogasa ,
@kogasa@programming.dev avatar

Distro defaults are chosen for general-purpose use and stability. For the OP’s specific requirement, ZFS, XFS, and btrfs are all definitely better. For the boot drive, I can understand going for the default since you just want it to be stable, but having some snapshotting and fault protection isn’t a bad thing.

fmstrat ,

A lot of these responses seem… dated. There’s a reason TrueNAS and such use ZFS now.

I would recommend ZFS 100%. The copy-on-write (allowing you to recover almost anything), simple snapshots, native encryption, and the ability not only to check the filesystem but to tell you exactly which file has an issue if there is an error, make it an easy choice even if it’s a one-disk system.

Personally I use datetimes for my snapshot names, and delete old ones as time goes on. It’s fabulous for backups.
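Something like this (the dataset name is made up, and the zfs commands obviously need a real pool, so they’re shown commented out):

```shell
# Build a timestamped snapshot name:
SNAP="tank/data@$(date +%Y-%m-%d_%H%M)"
echo "$SNAP"    # e.g. tank/data@2024-05-03_1412

# Then, with a real pool:
# zfs snapshot "$SNAP"
# zfs destroy tank/data@2024-01-01_0000   # prune an old one by name
```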

JWBananas ,
@JWBananas@startrek.website avatar

There’s a reason TrueNAS and such use ZFS now.

Do you mean for the boot drive?

happyhippo ,

Isn’t zfs out of tree?

moist_towelettes ,

Yes, its license (the CDDL) is not GPL-compatible.

sugar_in_your_tea ,

Well yeah, ZFS is absolutely fantastic for a NAS, but it’s complete overkill for a desktop. That’s why I recommend BTRFS for a desktop like this, and still recommend ZFS if you’re building a NAS (mine also uses BTRFS, but that’s because I don’t need the features and would rather only deal with one FS).

sudotstar ,

I recommend using whatever is the "least hands-on" option for your boot drive, a.k.a your distro default (ext4 for Debian). In my admittedly incompetent experience, the most likely cause for filesystem corruption is trying to mess with things, like resizing partitions. If you use your distro installer to set up your boot drive and then don't mess with it, I think you'll be fine with whatever the default is. You should still take backups through whatever medium(s) and format(s) make sense for your use case, as random mishaps are still a thing no matter what filesystem you use.

Are you planning on dualbooting Windows for games? I use https://github.com/maharmstone/btrfs to mount a shared BTRFS drive that contains my Proton-based Steam library in case I need to run one of those games on Windows for whatever reason. I've personally experienced BTRFS corruption a few times due to the aforementioned incompetence, but I try to avoid keeping anything important on my games drive to limit the fallout when that does occur. Additionally if you're looking to keep non-game content on the storage drive (likely if you're doing 3D modeling work) this may not be as safe.

mimichuu_ OP ,

I don’t plan on installing Windows at all. The only thing I’d do on my boot drive is have a separate home partition; I won’t really do anything else though. Did the corruption you experienced happen just on its own? Or was it something you did?

sudotstar ,

For me it's always been after I tried to resize a partition.

ninekeysdown ,
@ninekeysdown@lemmy.world avatar

I’m assuming you don’t want to tinker with things? I’m also assuming you do not have experience with things like ZFS. So….

Unless you’re running multiple drives (or special options), ZFS and btrfs aren’t going to give you much. For instance, btrfs (unless it’s set to DUP) isn’t going to protect from bitrot or other data corruption. Same goes for ZFS. It will throw an error when something doesn’t match the checksum, though.
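The single-disk DUP case is one mkfs flag, at the cost of half your usable space for data (the device and mount point here are examples):

```shell
# Keep two copies of both metadata (-m) and data (-d) on one disk,
# so a failed checksum can be repaired from the duplicate:
mkfs.btrfs -m dup -d dup /dev/sdb1

# A scrub then verifies everything and rewrites bad copies:
btrfs scrub start /mnt/storage
```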

Your best option is to use either ext4 or XFS for your 4 TB storage. If you’re working with a lot of large files, XFS has some advantages, but overall you’re not going to notice that much of a difference for your uses.

For your SSD, btrfs has the advantage over ext4 and XFS, although so does F2FS. In practical terms, for what you’re describing it’s not going to make that much of a difference.

Unless you have a specific reason to use something other than ext4, just stick with that. It’s simple and just works. Make sure you’re keeping backups (e.g. restic, borg, rsync, duplicity, etc.) and follow the 3-2-1 rule where possible, and you’ll be fine.
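A restic sketch of that workflow, with made-up paths (borg and the others look similar):

```shell
# One-time setup (use a real password manager for the password):
export RESTIC_REPOSITORY=/mnt/backup/restic-repo
export RESTIC_PASSWORD='change-me'
restic init

# Routine use: incremental, deduplicated snapshots plus rotation:
restic backup /home/me/projects
restic snapshots
restic forget --keep-daily 7 --keep-weekly 4 --prune
```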

If it were me setting up that system, I’d mirror the drives and use btrfs, which is pretty much what I did on my PC. But that doubles the cost of storage.

The only place (at home) I use ZFS is on my NAS. I have Rocky 8 set up, and all it does is handle storage. I use mirrored pairs for my important data and Z1 for everything else. But that’s a topic for another post.

If you REALLY want some of the features of ZFS or btrfs, e.g. snapshots, I’d lean on your backup software for that, but you can also use LVM to take snapshots in a similar fashion. See Stratis for another example too. However, that’s beyond the scope of this post.
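The LVM version, for reference (the volume group and LV names are hypothetical, and the snapshot needs free extents in the VG):

```shell
# Point-in-time snapshot of an existing logical volume:
lvcreate --size 10G --snapshot --name data_snap /dev/vg0/data

# Mount it read-only to copy files out, then drop it when done:
mount -o ro /dev/vg0/data_snap /mnt/snap
umount /mnt/snap
lvremove -y /dev/vg0/data_snap
```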

mimichuu_ OP ,

Thanks for the help. Both of my drives are SSDs; the boot drive is M.2 and the storage is SATA. I’ve heard filesystems that support compression would be better for their health and lifespan, as they’d have to write less. But yes, no matter what, I will keep constant backups. Snapshots would be appreciated, but since I’ll run Debian I don’t think they’d be that necessary if having them means a lot of extra problems to deal with in exchange.

ninekeysdown ,
@ninekeysdown@lemmy.world avatar

Since both drives are SSDs, there’s nothing really stopping you from using BTRFS. You are correct that BTRFS’s features are better for the long-term health of your SSDs, and if you feel comfortable with it then you should 100% use it. That being said, with today’s SSDs the lifespan-extending features of BTRFS, F2FS, etc. are going to be minimal at best. So don’t stress too much over running it or not. Just use whatever you’re most comfortable with.

Reborn2966 ,

btrfs, and you get snapshots, the ability to send subvolumes around, compression, and a ton of other stuff.

Be aware that to configure a good layout in btrfs you will need to do it manually; follow the Arch Wiki and you will be OK.
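For example, the flat layout the Arch Wiki describes looks roughly like this (the @ names are just a convention, and the device is an example):

```shell
# Create the subvolumes on the top-level volume:
mount /dev/sdb1 /mnt
btrfs subvolume create /mnt/@          # becomes /
btrfs subvolume create /mnt/@home      # becomes /home
btrfs subvolume create /mnt/@snapshots
umount /mnt

# Then mount each one where it belongs, e.g. in /etc/fstab:
#   /dev/sdb1  /      btrfs  subvol=@,compress=zstd      0 0
#   /dev/sdb1  /home  btrfs  subvol=@home,compress=zstd  0 0
```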

BitingChaos ,
@BitingChaos@lemmy.world avatar

I’ve been running ZFS for around 12 years (on FreeBSD and Linux) and I’m not sure why anyone would ever use anything else.

It’s got the age, community, and wide acceptance that makes it a proven safe & reliable option.

gablank ,

I agree that ZFS is a very solid choice. The only thing that can be slightly painful is the out of tree modules.

mojo ,

ext4 is the tried-and-true filesystem. I use that for reliability. Btrfs is nice with a ton of modern features, but I have had some issues with it in the past, though they are pretty rare.

worfamerryman ,

I use ext4 for my boot drive as that’s what Linux Mint defaults to.

I do not do raids and use btrfs on my other drives.

You can turn on compression-on-write with btrfs, which may reduce the amount of data being written to your drive and could help extend its lifespan.

But you shouldn’t expect the drives to last 10 years.

They might, but don’t expect it and have a backup of whatever is important. Ideally you should have a local backup and a cloud based backup or at least an offsite backup somewhere else.

mimichuu_ OP ,

Yeah I’ll always do backups. When I have the money I probably will buy another drive and try to do RAID1 on the two, just to be sure. But I do want them to last as much as possible.

worfamerryman ,

Don’t use RAID for backing up; use a backup program instead. I’d recommend Vorta or Kopia.

mimichuu_ OP ,

It wouldn’t be for backing up, just for the storage to last longer if one drive fails.

worfamerryman ,

I would still not recommend it. If the drive fails and data is lost or corrupted, it could also be lost or corrupted on the other drive.

It would really be better to use backup software to save your data. Also depending on how the drive is used, it may put less wear on the second drive if you use a backup application.

PM_me_your_doggo ,
@PM_me_your_doggo@lemmy.world avatar

Ten years is a long time. In ten years, 4 TB of storage will be less than a crappy thumb drive.

For resilient storage I personally would get two HDDs for the price of one SSD, slap software RAID1 with ext4 on them, and forget about them until mdadm alerts.
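If anyone wants the recipe, it’s short (device names are hypothetical):

```shell
# Two-disk RAID1 with ext4 on top:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0

# The alerts come from mdadm's monitor mode, e.g. via mail:
mdadm --monitor --scan --daemonise --mail=admin@example.com
```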

PotatoesFall ,

Moore’s Law is not really in effect anymore. At least the growth is no longer at an exponential rate.
