
non-Euclidean filesystem

I noticed today that I only had 5 GiB of free space left. After quickly deleting some cached files, I tried to figure out what was eating the space, but the numbers don’t add up: every tool reports a different amount of remaining storage. System Monitor says I’m using 892.2 GiB/2.8 TiB (I don’t even have 2.8 TiB of storage though???). Filelight shows 32.4 GiB in total when scanning root, but 594.9 GiB when scanning my home folder.

https://lemmy.world/pictrs/image/6fcbcd91-5b00-42f5-bd64-5b3a4d3f4150.png

Meanwhile, ncdu (another tool to view disk usage) shows 2.1 TiB with an apparent size of 130 TiB of disk space!


<span style="color:#323232;">    1.3 TiB [#############################################] /.snapshots
</span><span style="color:#323232;">  578.8 GiB [####################                         ] /home
</span><span style="color:#323232;">  204.0 GiB [#######                                      ] /var
</span><span style="color:#323232;">   42.5 GiB [#                                            ] /usr
</span><span style="color:#323232;">   14.1 GiB [                                             ] /nix
</span><span style="color:#323232;">    1.3 GiB [                                             ] /opt
</span><span style="color:#323232;">. 434.6 MiB [                                             ] /tmp
</span><span style="color:#323232;">  350.4 MiB [                                             ] /boot
</span><span style="color:#323232;">   80.8 MiB [                                             ] /root
</span><span style="color:#323232;">   23.3 MiB [                                             ] /etc
</span><span style="color:#323232;">.   5.5 MiB [                                             ] /run
</span><span style="color:#323232;">   88.0 KiB [                                             ] /dev
</span><span style="color:#323232;">@   4.0 KiB [                                             ]  lib64
</span><span style="color:#323232;">@   4.0 KiB [                                             ]  sbin
</span><span style="color:#323232;">@   4.0 KiB [                                             ]  lib
</span><span style="color:#323232;">@   4.0 KiB [                                             ]  bin
</span><span style="color:#323232;">.   0.0   B [                                             ] /proc
</span><span style="color:#323232;">    0.0   B [                                             ] /sys
</span><span style="color:#323232;">    0.0   B [                                             ] /srv
</span><span style="color:#323232;">    0.0   B [                                             ] /mnt
</span>

I assume the /.snapshots folder isn’t really that big and the tools are just counting it wrong. However, I’m wondering whether this could cause issues with other programs thinking they don’t have enough storage space. Steam also seems to follow the inflated amount and refuses to install any games.

I haven’t encountered this issue before; I still had about 100 GiB of free space the last time I booted my system. Does anyone know what could cause this issue and how to resolve it?

EDIT 2024-04-06:

snapper ls only shows 12 snapshots, 10 of them taken in the past 2 days before and after zypper transactions. There aren’t any older snapshots, so I assume they get cleaned up automatically. It seems like snapshots aren’t the culprit.

I also ran btrfs balance start --full-balance --bg / and that netted me an additional 30 GiB of free space, and it’s only at 25% so far.

EDIT 2024-04-07: It seems like Docker is the problem. https://lemmy.world/pictrs/image/ed61f280-9f14-4f4b-9c63-4251955a5add.png

I ran the docker system prune command and it reclaimed 167 GB! https://lemmy.world/pictrs/image/aeb2cda7-b1e6-4bcb-a3e0-955c4ac3ac4b.png
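In case it helps anyone else: you can see what Docker is holding on to before pruning anything. A minimal sketch (guarded so it’s a no-op where Docker isn’t installed):

```shell
# Summarise Docker's disk usage: images, containers, local volumes and
# build cache. This only reads; nothing is deleted.
if command -v docker >/dev/null 2>&1; then
    docker system df
else
    echo "docker not installed"
fi
```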

mst241 ,

Well, maybe you just have a non-measurable set as filesystem 😅

Brickardo ,

I confess I’m a big fan of the post title

digdilem ,

This is a common thing one needs to do. Not all Linux GUI tools are perfect, and some calculate numbers differently (1000 vs. 1024 soon mounts up to big differences). Also, if you’re running as a regular user, you’re not going to see all the files.

Here’s how I do it as a sysadmin:

As root, run:

du /* -shc |sort -h

“disk usage for all files in root, displaying a summary instead of listing all sub-files, and human-readable numbers, with a total. Then sort the results so that the largest are at the bottom”

It takes a while to run on most systems (many minutes, up to hours or days if you have slow disks, many files, or remote filesystems), and there’s no output until it finishes because it’s piping to sort. You can speed it up by omitting the “|sort -h” bit, and you’ll get summaries as each top-level dir is checked, but you won’t have a nice sorted output.
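If you want to skip the virtual and remote filesystems entirely, a variant of the same idea (assuming GNU coreutils) is:

```shell
# -x / --one-file-system keeps du from crossing into other mounts such
# as /proc, /sys or network filesystems; 2>/dev/null hides the
# permission-denied noise when you're not root.
du -shx /* 2>/dev/null | sort -h | tail -n 5
```

tail -n 5 just trims it to the five biggest entries; drop it for the full list.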

You’ll probably get some permission errors when it goes through /proc or /dev.

You can be more targeted by picking some of the common places, like /var. Here’s mine from a Debian system; it takes a couple of seconds. I’ll often start with /var, as it’s a common place for systems to start filling up, along with /home.


<span style="color:#323232;">root@scrofula:~# du /var/* -shc |sort -h
</span><span style="color:#323232;">0       /var/lock
</span><span style="color:#323232;">0       /var/run
</span><span style="color:#323232;">4.0K    /var/local
</span><span style="color:#323232;">4.0K    /var/mail
</span><span style="color:#323232;">4.0K    /var/opt
</span><span style="color:#323232;">168K    /var/tmp
</span><span style="color:#323232;">4.1M    /var/spool
</span><span style="color:#323232;">5.5M    /var/backups
</span><span style="color:#323232;">781M    /var/log
</span><span style="color:#323232;">787M    /var/cache
</span><span style="color:#323232;">8.3G    /var/www
</span><span style="color:#323232;">36G     /var/lib
</span><span style="color:#323232;">46G     total
</span>

Here we can see /var/lib has a lot of stuff in it, so we can look into that with du /var/lib/* -shc|sort -h - it turns out mine has some big databases in /var/lib/mysql and a bunch of docker stuff in /var/lib/docker, not surprising.

Sometimes you just won’t be able to tally what you’re seeing with what you’re using. Often that’s because a deleted or truncated file is still held open by a process, which prevents the OS from reclaiming the space. That generally sorts itself out as things time out, but you can try to find the culprit with lsof, or, if the machine isn’t doing much, a quick reboot.
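If you suspect that’s happening, this (Linux-only; run as root for full coverage) lists descriptors still pointing at deleted files without needing lsof:

```shell
# Each symlink under /proc/<pid>/fd names a file that process has open;
# the kernel appends " (deleted)" once the file is unlinked. Space held
# this way is freed only when the process closes it or exits.
find /proc/[0-9]*/fd -lname '*(deleted)' 2>/dev/null | head
```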

deadbeef79000 ,

I tend to use du -hxd1 / rather than -hs so that it stays on one filesystem (usually I’m looking at the usage of only one filesystem) and only descends one directory level.

digdilem ,

Good thinking. That would speed things up on some systems for sure.

t0m5k1 ,

This is the way.

rotopenguin ,

compsize will give you an honest overview of what’s going on with btrfs.

lurch ,

When summing up totals, Docker containers and snaps are likely counted twice by some programs: they have volume files that are counted once, and then those volumes are mounted as filesystems whose contents get counted again at the mount point.
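One way to sidestep that double counting (assuming GNU df) is to exclude the stacked and pseudo filesystem types, so each real device shows up once:

```shell
# overlay mounts (Docker), tmpfs and devtmpfs all re-report space that
# either isn't disk-backed or belongs to the filesystem underneath.
df -h -x tmpfs -x devtmpfs -x overlay -x squashfs
```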

jaybone ,

Use df and du

qaz OP ,

Those don’t work properly due to BTRFS snapshots and compression.

rImITywR ,

From the btrfs page on the archwiki

General linux userspace tools such as df(1) will inaccurately report free space on a Btrfs partition. It is recommended to use btrfs filesystem usage to query Btrfs partitions.
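Something like this, for example (guarded so it’s a no-op on non-btrfs systems; the path / is an assumption, substitute your own mount point):

```shell
# `btrfs filesystem usage` accounts for metadata, RAID profiles and
# unallocated space, which df cannot see.
if command -v btrfs >/dev/null 2>&1 && [ "$(stat -f -c %T / 2>/dev/null)" = "btrfs" ]; then
    sudo btrfs filesystem usage /
else
    echo "/ is not btrfs (or btrfs-progs missing); skipping"
fi
```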

savvywolf ,

Just a heads up: I’ve noticed that Steam tends to require a bunch of spare space beyond the size the game takes up.

rollingflower ,

Look at this:

gitlab.com/TheEvilSkeleton/flatpak-dedup-checker

I think that has some BTRFS stuff in it to display actual size with deduplication

BTRFS support in Filelight/kio is pretty important.

dataprolet ,

If you’re using compression, try compsize.

maiskanzler ,

Your btrfs snapshots are possibly counted separately by all the regular tools. They simply go into every directory they can find and add up the size of the files they see. They do not care if they are looking at an identical snapshot of the folder next to them, they simply add it all up.

Use sudo btrfs filesystem show (and maybe add a path behind it, I am not sure). That will give you the true usage.

qaz OP ,

sudo btrfs filesystem show seems to display a reasonable amount.


<span style="color:#323232;">Label: none  uuid: af5f864d-2de9-48a9-b521-5923dc08c9e3
</span><span style="color:#323232;">        Total devices 1 FS bytes used 867.13GiB
</span><span style="color:#323232;">        devid    1 size 922.12GiB used 921.12GiB path /dev/mapper/system-root
</span>
friend_of_satan , (edited )

This makes sense. When you use a copy-on-write block device, it is doing things below the level of the filesystem, so you have to use cow-aware tools to get an accurate view of your used disk space. For example, if you have two files that are 100% deduplicated at the cow-block level, they would show up as different inodes on the filesystem and would appear as using twice the space in the filesystem as they do on the block device. Same would go for snapshots and compressed blocks.

See also: www.ctrl.blog/entry/file-cloning.html
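You can see reflinks in action with a quick sketch like this (--reflink=auto falls back to a plain copy on filesystems without CoW support, such as ext4):

```shell
# Make a file and a reflink clone of it. On btrfs/XFS the clone shares
# the original's extents, so the pair occupies the space of one file
# while appearing as two independent inodes.
tmp=$(mktemp -d)
head -c 1M /dev/urandom > "$tmp/original"
cp --reflink=auto "$tmp/original" "$tmp/clone"
cmp -s "$tmp/original" "$tmp/clone" && echo "contents identical"
rm -r "$tmp"
```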

bionicjoey ,

It could be that hardlinked files are being double-counted. What software manages that snapshot folder?

qaz OP ,

I’m using BTRFS with snapper.

Strit ,

Maybe it’s time to clean out some old snapshots in Snapper.

qaz OP ,

sudo snapper list shows 1 snapshot without a date, 1 old one, and 10 taken in the past couple of days before and after zypper transactions. It seems like they get cleaned up automatically.

rotopenguin ,

There’s hardlink, and then below that there’s the COW/dedupe version called “reflink”. Two files can point to the same chunks of data (extents), and altering one does not alter the other. Two files can point to just some of the same chunks of data, too. I don’t think there is much indicator for when this is happening, besides the free space vs used space accounting looking crazy. If you “compsize” two reflinked files at once, it’ll show you the difference.

HumanPerson ,

Sorry I don’t have an answer but I like the title.

Diplomjodler ,

Something… something… Schrödinger

Hjalamanger ,

A typical quantum entangled hyperbolic non-linear file system, or QEHNLFS for short. This was first described in Einstein’s fourth relativity theory. It states the following:

In any QEHNLFS the perceived storage space (used and unused) may vary depending on the reference frame. All reference frames are equally valid and therefore the absolute storage space of the QEHNLFS is not well defined. QEHNLFSs generally appear around a central supermassive black hole, typically located at /dev/null in the QEHNLFS

NeoNachtwaechter ,

Sorry, but Maxwell’s equations look wayyy better.

Cyber ,

Probably due to too much coffee in his house.

rah ,

Use df to show disk usage. df -h is most useful.

I’d guess the odd usage numbers are due to sparse files. wiki.archlinux.org/title/Sparse_file
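You can see the sparse-file effect for yourself (assuming GNU coreutils):

```shell
# A sparse file has a large apparent size but allocates almost no
# blocks. Plain du reports allocated blocks; --apparent-size reports
# the file length, which is where huge "apparent size" figures come from.
f=$(mktemp)
truncate -s 1G "$f"            # one gigabyte of hole, no data written
du -h "$f"                     # on-disk size: a few KiB at most
du -h --apparent-size "$f"     # apparent size: 1.0G
rm "$f"
```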

qaz OP ,

I’m using BTRFS with compression, so that might also explain the numbers to some extent.

I ran df -h but I’m not exactly sure how to interpret this. There are multiple file systems which seem to use all the space on the disk.


<span style="color:#323232;">Filesystem               Size  Used Avail Use% Mounted on
</span><span style="color:#323232;">/dev/mapper/system-root  923G  875G   29G  97% /
</span><span style="color:#323232;">devtmpfs                 4.0M  8.0K  4.0M   1% /dev
</span><span style="color:#323232;">tmpfs                     16G   86M   16G   1% /dev/shm
</span><span style="color:#323232;">efivarfs                 128K   46K   78K  37% /sys/firmware/efi/efivars
</span><span style="color:#323232;">tmpfs                    6.3G  3.0M  6.3G   1% /run
</span><span style="color:#323232;">tmpfs                     16G  442M   16G   3% /tmp
</span><span style="color:#323232;">/dev/mapper/system-root  923G  875G   29G  97% /.snapshots
</span><span style="color:#323232;">/dev/mapper/system-root  923G  875G   29G  97% /boot/grub2/i386-pc
</span><span style="color:#323232;">/dev/mapper/system-root  923G  875G   29G  97% /boot/grub2/x86_64-efi
</span><span style="color:#323232;">/dev/mapper/system-root  923G  875G   29G  97% /home
</span><span style="color:#323232;">/dev/mapper/system-root  923G  875G   29G  97% /opt
</span><span style="color:#323232;">/dev/mapper/system-root  923G  875G   29G  97% /srv
</span><span style="color:#323232;">/dev/mapper/system-root  923G  875G   29G  97% /root
</span><span style="color:#323232;">/dev/mapper/system-root  923G  875G   29G  97% /var
</span><span style="color:#323232;">/dev/mapper/system-root  923G  875G   29G  97% /usr/local
</span><span style="color:#323232;">/dev/nvme1n1p1           511M  226M  286M  45% /boot/efi
</span><span style="color:#323232;">overlay                  923G  875G   29G  97% /var/lib/docker/overlay2/f307539e15a1a33ca416c757e267c389450275eec9e7f945ef0d8680d162eac2/merged
</span><span style="color:#323232;">overlay                  923G  875G   29G  97% /var/lib/docker/overlay2/8e4898a8e32696e94dd6bb5c00d02893c0b629efda7f4a8c37da2d213fe1ffab/merged
</span><span style="color:#323232;">overlay                  923G  875G   29G  97% /var/lib/docker/overlay2/db20cdcf8192f6a6597a3ad8330273f0435db9d4acfa8e20ad65524ab075697f/merged
</span><span style="color:#323232;">overlay                  923G  875G   29G  97% /var/lib/docker/overlay2/92ce05516bde97ae9ff6d3c6b079e7c49b6691ebcfc60b850637cab20a921ebe/merged
</span><span style="color:#323232;">tmpfs                    3.2G   17M  3.2G   1% /run/user/1000
</span><span style="color:#323232;">overlay                  923G  875G   29G  97% /var/lib/docker/overlay2/5a00d8c61b23c26c87fcb3be721bc1224db7de3c9a53ae4f9bc2b922ebe40c83/merged
</span><span style="color:#323232;">overlay                  923G  875G   29G  97% /var/lib/docker/overlay2/4f20dcdebc64c2603b5b5f6ad71e116b52e8e20af2a3fe53f9ca653421f871db/merged
</span>
bjoern_tantau ,

Unless you have multiple partitions or disks, just concentrate on the one for /. So you have 29 GiB available.

Everything else is sharing the same drive for different purposes.

The beauty of BTRFS is that you can partition your disk into different parts but still actually use the whole disk for every “partition”. That makes management of snapshots easier. I think it would even enable you to combine multiple physical disks into one.

bitfucker ,

… combine multiple physical disks into one.

Isn’t that RAID 0 and generally a bad idea? Since one disk failure can bring down the whole system.

bjoern_tantau ,

Probably. I never looked into how it actually works with BTRFS.

EinfachUnersetzlich ,

You can set the metadata and data independently as RAID0, RAID1 or other levels depending on the number of disks and your desired level of data loss risk.

rotopenguin ,

You can do “zfs style raid things” with btrfs, but there are way too many reports of it ending badly for my tastes. Something-something about “write hole”.

Corngood ,

Try using btdu. I’m not sure how it works with compression, but it at least understands snapshots, as long as they are named in a sane way.

qaz OP , (edited )

Thanks for the suggestion. The repository says it is able to deal with BTRFS compression.

I do have some issues using the application, though. The instructions say to run it with the filesystem you want to check as an argument. However, I get an error when using it with the root filesystem from df -h --output=source,target. Running sudo btdu /dev/mapper/system-root gives the following error: Fatal error: /dev/mapper/system-root is not a btrfs filesystem. /etc/fstab shows /dev/system/root as being mounted on /, but it gives the same error.

Do you happen to know which path I should be using (or how I can find out)?


EinfachUnersetzlich ,

You need to point it at a directory that has the btrfs root subvolume mounted on it (subvolid=5), although I thought it gave a different error if that was your problem.

Corngood ,

As the other user suggested, you probably just need to mount the root subvolume somewhere and run it on that.
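Roughly like this (a sketch; the device path is taken from your df output, the mount point is arbitrary, and it’s guarded so it does nothing if the device doesn’t exist):

```shell
# Mount the top-level subvolume (subvolid=5) read-only at a temporary
# location and point btdu at that, so it can see every subvolume and
# snapshot at once.
dev=/dev/mapper/system-root
if [ -e "$dev" ]; then
    sudo mkdir -p /mnt/btrfs-root
    sudo mount -o ro,subvolid=5 "$dev" /mnt/btrfs-root
    sudo btdu /mnt/btrfs-root
else
    echo "$dev not present; skipping"
fi
```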
