
linuxPIPEpower

@[email protected]


linuxPIPEpower ,

I agree. Chromebooks are a viable choice for those who want a web terminal. I used one for about a year. Got the job done.

linuxPIPEpower OP ,

I’ve used WebDAV here and there. I found some aspects of setup frustrating, so I tend to keep away from it except for smaller, short-term use cases.

Does it do the caching thing or is it more of an alternative to SSH/SFTP?

If it’s an alternative, what is the benefit?

IIRC WebDAV can be set up from inside certain file managers (like Nautilus with an extension installed), by using a web server like Apache, or by using smaller standalone services.
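For reference, the client side of the standalone/file-manager routes looks roughly like this (an untested sketch; the server URL and mount paths are made up):

```
# mount a WebDAV share with davfs2 (URL and mountpoint are placeholders)
sudo mount -t davfs https://desktop.lan/webdav /mnt/webdav

# or, via GVFS from a file manager session, without root
gio mount dav://desktop.lan/webdav
```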

linuxPIPEpower OP ,

A few weeks ago I put some serious time/brainpower into the network and got it waaaay smoother and faster than before. Finally implemented some upgraded hardware that has been sitting on a shelf for too long.

I tried iperf. Actually iperf3 because that’s the first tutorial I found. Do you have any opinion on iPerf vs iperf3? On Desktop I ran:


```
iperf3 -s -p 7673
```

On Laptop I am currently doing some stuff I didn’t want to quit so this may not be a totally fair test. I’ll try re running it later. That said I ran:


```
iperf3 -c desktop.lan -p 7673 --bidir
```

And what looks like a summary at the bottom:


```
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   102 MBytes  86.0 Mbits/sec  152             sender
[  5]   0.00-10.00  sec   102 MBytes  85.6 Mbits/sec                  receiver
```

I actually have AnotherDesktop on the LAN also connected via ethernet. Going from Laptop —> AnotherDesktop gets similar to the above.

However going AnotherDesktop —> Desktop gets 10x better results:


```
[  5]   0.00-10.00  sec  1.09 GBytes   936 Mbits/sec    0             sender
[  5]   0.00-10.00  sec  1.09 GBytes   933 Mbits/sec                  receiver
```

Laptop has an Intel Dual Band Wireless-AC 8260, whose max speed is 867 Mbps, so it probably isn’t the bottleneck. Although with the distro running at the moment (Fedora) I have a LOT of problems with everything, so possibly things aren’t set up ideally here.

I still didn’t upgrade the actual wireless access point for the network; I don’t recall the max speed of the current WAP, but it could be around 100 Mbps.

So this is an interesting path to optimize. However I am still interested in solving the original problem, because even when I am directly using Desktop, things are slow. I do not really want to upgrade it if I can get away with a software solution. There are many items on my list of projects and purchases that I’d rather concentrate on.

linuxPIPEpower OP ,

hmm, interesting idea. I don’t get the impression that Nextcloud is reliably “easy”, as it’s kind of a joke how complex it can be.

Someone else suggested WebDAV which I believe is the filesharing Nextcloud uses. Does Nextcloud add anything relevant above what’s available from just WebDAV?

linuxPIPEpower OP ,

if you delete a file on your laptop it will also be deleted on your desktop on the next sync

This is my fear! I have done it before… Forgetting something is synced and deleting what I thought was “an extra copy” only to realize later that it propagated to the original.

linuxPIPEpower OP ,

I don’t know what that means

linuxPIPEpower OP ,

What would be the role of Zerotier? It seems like some sort of VPN-type application. What do I need that for?

rclone is cool and I used it before. I was never able to get it to work really consistently so I always gave up. But that’s probably user error.

That said, I can mount network drives and access them from within the file system. I think GVFS is doing the lifting for that. There are a couple different ways I’ve tried, including with rclone; none seemed superior performance-wise. I should say the Desktop computer is just old and slow; there is only so much improvement possible if the files reside there. I would much prefer to work on my Laptop directly and move them back to Desktop for safekeeping when done.

“vfs cache” is certainly an intriguing term

Looks like maybe the main documentation is https://rclone.org/commands/rclone_mount/#vfs-file-caching and specifically https://rclone.org/commands/rclone_mount/#vfs-cache-mode-full

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

I’m not totally sure what this would be doing, if it is exactly what I want, or close enough? I am remembering now one reason I didn’t stick with rclone which is I find the documentation difficult to understand. This is a really useful lead though.
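If I'm reading those docs right, the sort of invocation involved would be something like this (just a sketch based on the pages above; the remote name, path and cache limits are hypothetical):

```
# mount the Desktop remote with full VFS file caching
# (remote name, paths and cache sizes are placeholders)
rclone mount desktop:/data ~/mnt/desktop \
  --vfs-cache-mode full \
  --vfs-cache-max-size 20G \
  --vfs-cache-max-age 72h \
  --daemon
```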

linuxPIPEpower OP ,

What would be the role of Zerotier? It seems like some sort of VPN-type application. I don’t understand what it’s needed for though. Someone else also suggested it albeit in a different configuration.

Just doing some reading on NFS, it certainly seems promising. Naturally the ArchWiki has a fairly clear instruction document. But I am having a hard time seeing what it is exactly. Why is it faster than SSHFS?

Using the Cache with NFS > Cache Limitations with NFS:

Opening a file from a shared file system for direct I/O automatically bypasses the cache. This is because this type of access must be direct to the server.

Which raises the question: what is “direct I/O” and is it something I use? This page calls direct I/O “an alternative caching policy”, and the limited amount I can understand elsewhere leads me to infer I don’t need to worry about this. Does anyone know otherwise?
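For what it's worth, the only place I knowingly run into direct I/O is something like dd with iflag=direct, which explicitly bypasses the page cache (just an illustration, not something my normal apps do):

```
# normal read: goes through the page cache (and so can benefit from caching)
dd if=bigfile.bin of=/dev/null bs=1M

# direct I/O read: bypasses the cache entirely, as the limitation above describes
dd if=bigfile.bin of=/dev/null bs=1M iflag=direct
```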

The issue with syncing is usually needing to sync everything.

yes this is why syncthing proved difficult when I last tried it for this purpose.

Beyond the actual files, it would be really handy if some lower-level stuff could be cached/synced between devices, like thumbnails and other metadata. To my mind, remotely perusing the Desktop filesystem from Laptop should be just as fast as looking through local files. I wouldn’t mind having a reasonable chunk of local storage dedicated to keeping this available.

linuxPIPEpower OP ,

Maybe Syncthing is the way forward. I have used it for years and am reasonably comfortable with it. When it works, it works. Problem is that when it doesn’t work, it’s hard to solve or even to know about. For the present use case it would involve making a lot of shares and manually toggling them on and off all the time. And it would need some kind of error-checking system to avoid deleting unsynced files.

Others have also suggested NFS but I am having a difficult time finding basic info about what it is and what I can expect. How is it different from an SSHFS mount? Assuming I continue limping along on my existing hardware, do you think it can do any of the local caching type stuff I was hoping for?
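From the little reading I've done so far, the basic setup would look something like this (untested sketch; the exported path and subnet are made up):

```
# on Desktop: /etc/exports (path and subnet are placeholders)
/srv/data  192.168.1.0/24(rw,sync,no_subtree_check)

# apply the export table on Desktop
sudo exportfs -ra

# on Laptop: mount it over the LAN
sudo mount -t nfs desktop.lan:/srv/data /mnt/desktop
```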

Re the hardware, thanks for the feedback! I am only recently learning about this side of computing. Am not a gamer and usually have had laptops, so never got too much into the hardware.

I have actually 2 desktops, both 10+ years old. One is a Mac Mini, so there is no chance of getting the storage properly installed. I believe its CPU is better and it has more RAM because it was upgraded when it was my main machine. The other is a “small” tower (about 14") picked up cheaply to learn about PCs. It has not been upgraded at all other than an SSD for the system drive. Both are running Debian now.

In another comment I ran iperf3 Laptop (wifi) -> Desktop (ethernet), which was about 80-90 Mbits/s. Whereas Desktop -> OtherDesktop was in the 900-950 Mbits/s range. So I think I can say the networking is fine enough when it’s all ethernet.

One thing I wasn’t expecting from the tower is that it only supports 2x internal HDDs. I was hoping to get all the loose USB devices inside the box, like you suggest. It didn’t occur to me that I could only get the system drive + one extra. I don’t know if that’s common? Or if there is some way to expand the capacity? There isn’t too much room inside the box but if there was a way to add trays, most of them could fit inside with a bit of air between them.

This is the kind of pitfall I wanted to learn about when I bought this machine so I guess it’s doing its job. :)

Efforts to research what I would like to have instead have led me to be quite overwhelmed. I find a lot of people online who have way more time and resources to devote than I do, who want really high performance. I always just want “good enough”. If I followed the advice I found online I would end up with a PC costing more than everything else I own in the world put together.

As far as I can tell, the solution for the miniPC type device is to buy an external drive holder rack. Do you agree? They are sooo expensive though, like $200-300 for basically a box. I don’t understand why they cost so much.

linuxPIPEpower OP ,
  1. In another comment I ran iperf3 Laptop (wifi) -> Desktop (ethernet), which was about 80-90 Mbits/s. Whereas Desktop -> OtherDesktop was in the 900-950 Mbits/s range. So I think I can say the networking is fine enough when it’s all ethernet. Is there some other kind of benchmarking to do?
  2. Just posted a more detailed description of the desktops in this comment (4th paragraph). It’s not ideal but for now it’s what I have. I did actually take the time (gnome-disks benchmarking) to test different cables, ports, etc. to find the best possible configuration. While there is an upper limit, if you are forced to use USB, this makes a big difference.
  3. Other people suggested ZeroTier or VPNs generally. I don’t really understand the role this component would be playing? I have a LAN and I really only want local access. Why the VPN?
  4. Ya, I have tried using syncthing for this before and it involves deleting stuff all the time then re-syncing it when you need it again. And you need to be careful not to accidentally delete while synced, which could destroy all files.
  5. Resilio I used it a long time ago. Didn’t realize it was still around! IIRC it was somewhat based on bittorrent with the idea of peers providing data to one another.

linuxPIPEpower OP ,

Thanks!

I elaborated on why I’m using USB HDDs in this comment. I have been a bit stuck knowing how to proceed to avoid these problems. I am willing to get a new desktop at some point but not sure what is needed and don’t have unlimited resources. If I buy a new device, I’ll have to live with it for a long time. I have about 6 or 8 external HDDs in total. Will probably eventually consolidate the smaller ones into a larger drive which would bring it down. Several are 2-4TB, could replace with 1x 12TB. But I will probably keep using the existing ones for backup if at all possible.

Re the VPN, people keep mentioning this. I am not understanding what it would do though? I mostly need to access my files from within the LAN. Certainly not enough to justify the security risk of a dummy like me running a public service. I’d rather just copy files to an encrypted disk for those occasions and feel safe with my ports closed to outsiders.

Is there some reason to consider a VPN for inside the LAN?

linuxPIPEpower OP ,

sounds sweet! Perfectly what I am looking for.

It’s so rare to be jealous of windows users!!

I do find this repo: jstaf/onedriver: A native Linux filesystem for Microsoft OneDrive. So I guess in theory it would be possible in linux? If you could apply it to a different back end…

linuxPIPEpower OP ,

Thanks! I have gone to look at TrueNAS or FreeNAS a few times over the years. I am dissuaded because hardware-wise they seem expensive. Then on the other hand, they are limited in what they can do.

Comprehension check. Is the below accurate?

  1. TrueNAS is an OS, it would replace Debian.
  2. Main purpose of TrueNAS is to maintain the filesystem
  3. There are some packages available for TrueNAS, like someone mentioned Syncthing supports it
  4. But basically if I run TrueNAS, I will likely need a second computer to run services

Also for comprehension check:

  • The reason many people are recommending NAS (or WebDAV, NFS, VPN etc) is because with better storage and network infrastructure I would no longer be interested in this caching idea.
  • Better would be to have solid enough file sharing within the LAN that accessing files located on Desktop from Laptop would work.
  • The above would be completely plausible

How’m I doing?

linuxPIPEpower OP ,

Do you mean take the board out of this case and put it in another, bigger one?

I actually do have a larger, older tower that I fished out of the trash. Came with a 56k modem! But I don’t know if they would fit together. I also don’t see anywhere in it particularly suitable for holding a bunch of storage; I guess I would have to buy (or make?) some pieces.

Here is the board configuration for the Small Form Factor:

https://discuss.tchncs.de/pictrs/image/9c6e33fc-d07c-40fa-8a9b-6d207fb97846.png

I did try using #9 for storage and I seem to recall it kind of worked but didn’t totally work; I'm not sure of the details. But hey, at least I can use a CD drive and a floppy drive at the same time!

linuxPIPEpower OP ,

Forget NFS, SSHFS and syncthing as those are too complex and overkill at the moment. SMB is dead simple in a lot of ways and is hard to mess up.

OTOH, SSHFS and syncthing are already humming along and I’m familiar with them. Is SMB so easy, or does it have other benefits, that it would be better even though I have to start from scratch? It looks like it (and/or NFS) can be administered from the Cockpit web interface, which is cool.
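If I went the Samba route, I gather the minimal pieces would be roughly these (a sketch; the share name, path and username are placeholders, and the service name can differ by distro):

```
# /etc/samba/smb.conf on Desktop (share name and path are placeholders)
[data]
   path = /srv/data
   read only = no
   valid users = me

# give the Linux user a Samba password, then restart the service
sudo smbpasswd -a me
sudo systemctl restart smbd

# mount from Laptop
sudo mount -t cifs //desktop.lan/data /mnt/desktop -o user=me
```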

Now that I look around, I think I actually have a bit of RAM I could put in the PC: the Mac Mini’s original RAM, which is DDR3L, but I read you can put that in a device that wants DDR3. So I will do that next time it’s powered off.

Thanks for letting me know I could use an expansion card. I was wondering about that but the service manual didn’t mention it at all and I had a hard time finding information online.

Is this the sort of thing I am looking for: SATA Card 4 Port with 4 SATA Cables, 6 Gbps SATA 3.0 Controller PCI Express Expansion Card with Low Profile Bracket, Supports 4 SATA 3.0 Devices ($23 USD)? I don’t find anything cheaper than that, but there are various higher price points; I'm assuming none of those would be worthwhile on a crummy old computer like mine. Is there any specific RAID support I should look for?

I have only the most cursory knowledge of RAID but can tell it becomes important at some point.

But am I correct in my understanding that putting storage devices in RAID decreases the total capacity? For example if I have 2x6TB in RAID, I have 6 TB of storage, right?

Honestly, more than half my data is stuff I don’t care too much about keeping. If I lose all the TV shows I don’t cry over it. Only some of it is stuff I would care enough to buy extra hardware to back up. Those tend to be the smaller files (like documents) whereas the items taking up a lot of space (media files) are more disposable. For these ones “good enough” is “good enough”.

I really appreciate your time already and anything further. But I am still wondering, to what extent is all this helping me solve my original question, which is that I want to be able to edit remote files on Desktop as easily as if they were local on Laptop? Assuming I got it all configured correctly, is GIMP going to be just as happy with a giant file with lots of layers, undos, etc., on the Desktop as it would be with the same file on Laptop?

linuxPIPEpower OP ,

Thanks this comment is v helpful. A persuasive argument for NFS and against sshfs!

linuxPIPEpower OP ,

thanks I appreciate it. I’ve been around the block enough times to expect maximalist advice in places like this. people who are motivated to be hanging around in a forum just waiting for someone to ask a question about hard drives are coming from a certain perspective. Honestly, it’s not my perspective. But the information is helpful in totality even though I’m unlikely to end up doing what any one person suggests.

RAID is something I’ve seen mentioned over and over again. Every year or two I go reading about them more intentionally and never get the impression it’s for me. Too elaborate to solve problems I don’t have.

linuxPIPEpower OP ,

TBD

I’ve been struggling with syncthing for a few weeks… It runs super hot on every device. Need to figure out how to chill it out a bit.

Other than that I’ll look at both NFS and WebDAV some more. Then will come back to this page to re read the more intricate suggestions.

linuxPIPEpower OP ,

thanks for all the details! I’ve fairly recently done an FS migration that entailed moving all data, reformatting, and moving it all back. Mega pain in the ass. I know more now than I did at the start of that project, so it wouldn’t be as bad, but I'm not getting into something like that lightly.

Though it might be the excuse I need to buy another 12 tb hdd…

linuxPIPEpower ,

Careful where you point that thing. I unintentionally disrupted someone’s life by introducing them to ventoy. Now they have been distrohopping like crazy because of how easy it is.

linuxPIPEpower ,

Use the website alternativeto.net to locate Linux versions of Windows or Mac programs. Also if you find something on Linux but it's not quite right, you can find similar apps listed there.

It has quite extensive coverage of GUI apps. Less so CLI. Certain niche areas are more comprehensive than others.

linuxPIPEpower ,

Idk what specific image was shown. But anything described as “anime girl” could have strong csam vibes assuming this grad school student is older than 11 themselves.

For some reason it’s normalized in some parts of the Linux community to have sexualized images of children.

Instagram Advertises Nonconsensual AI Nude Apps (www.404media.co)

Instagram is profiting from several ads that invite people to create nonconsensual nude images with AI image generation apps, once again showing that some of the most harmful applications of AI tools are not hidden on the dark corners of the internet, but are actively promoted to users by social media companies unable or...

linuxPIPEpower ,

the above comment was written by a person whose lack of understanding of consent suggests they are almost certainly guilty of sex crimes.

linuxPIPEpower OP ,

That’s what I’m thinking!

I am asking a really basic question here. How do I find out about the drivers in the distro?
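The only thing I know to do so far is poke at the running system, along these lines (not sure these are even the right tools to be reaching for):

```
# which kernel driver/module is bound to each PCI device (wifi, ethernet, GPU, audio...)
lspci -k

# same idea for USB devices (touchpads, dongles, etc.)
lsusb -t
```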

linuxPIPEpower OP ,

I think maybe if there are license issues the distros have different policies? You might need to do some kind of extra step to include certain drivers.

linuxPIPEpower OP ,

is there a way to find out for a given component? where to look?

The filesystem, release notes, repositories? Will a terminal tool give me some clues?

linuxPIPEpower OP ,

try to find what kernel version support was added.

how to do this?

There’s exceptions however, like proprietary drivers. While those drivers are becoming exceedingly rare, some distros will only ship with FOSS software.

don’t expect debian to ever work out of the box with nvidia

good news is I don’t think I have ever in my life owned anything nvidia.

You didn’t mention your component specifically, but if your hardware doesn’t have mainline kernel support, it's a pretty good assumption that it’s proprietary and will need to be handled separately with something like dkms. Check the distro's documentation for their recommended approach.

thanks, I never heard of dkms before. I read the arch wiki, wikipedia, and made an attempt at the github repo (very long and over my head). The arch wiki only mentions nvidia. Is this something I need if I am certain nvidia is not the problem? Or is it a general thing?

Off the top of my head some components I’ve had problems with: touchpads, touch screens, wifi, ethernet, bluetooth, audio in, audio out, media keys. I have suspected others also like (onboard intel) GPUs but it’s a little harder for me to even pin those problems down to the hardware.
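In the meantime, the closest I've gotten to answering "is there an in-tree driver for this" is poking around the installed kernel, something like this (the module name is only an example):

```
# does the distro's kernel ship a module for this hardware?
find /lib/modules/$(uname -r) -name 'iwlwifi*'

# details (version, firmware files, parameters) for a module that is present
modinfo iwlwifi
```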

linuxPIPEpower OP ,
  • distros can have different kernel parameters
  • unloaded kernel modules
  • different kernel parameters
  • older kernel/packages
  • missing packages

how do I find out about these?

Are they specific to my system? Some kind of decision the installer makes? So I would investigate locally on the device?

Or will it be a general distro thing? Am I looking on their website to find out?
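In case it helps anyone else wondering the same thing, the local side of this seems checkable with a few standard commands (probably incomplete):

```
# kernel command-line parameters currently in effect
cat /proc/cmdline

# kernel modules currently loaded
lsmod

# kernel version, to compare one distro against another
uname -r
```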

linuxPIPEpower OP ,

I’ve had the issue on laptops and desktops but I have more experience with laptops. Also you are correct that arch-based distros tend to work pretty well. But I don’t want to run Arch on some devices because I do not plan to update them regularly enough. I want a longer-term support distro. So in many cases I want to see what Arch is doing that another isn’t.

Only noting to be fair: in some cases arch-type does worse. I have an old HP desktop where Arch couldn’t see the ethernet connection; I could only use a USB-to-ethernet converter, and the PC doesn’t even have wifi. But then I installed Debian and the ethernet works fine through the card. I do not need to solve this specifically as I plan to keep Debian. Just one of the many mysteries.

I could find a specific issue that I do want to solve, but it’s such an ongoing thing that I am hoping to learn the general principles rather than being spoon-fed the answer. I’ll only be back next week with another one.

linuxPIPEpower OP ,

No to wayland.

I have used arch-based distros. They tend towards better support but not universally.

linuxPIPEpower OP ,

Linux distros I have tried include: ubuntus, debians, fedoras, opensuse, manjaro, endeavour, mint. No slackware, redhat, centos, gentoo, nix, kali, steam.

Every device I currently own is a refurb originally manufactured 5-15 years ago. It’s based on some combination of cheapness and hoping that things will be supported by the time I get my hands on them. I don’t have any requirement for blazing hardware.

Some of them are unsurprisingly annoying, like netbooks I picked up only because they were cheap and were reported by people online to have had linux successfully installed. With these things, it seems that most of the features work, just not all at the same time. I can choose between a smoothly-functioning trackpad in one distribution and bluetooth in another. But why? How do I compare them?

linuxPIPEpower OP ,

license issues of proprietary drivers,

kernel or modules being slightly older and the driver is only in the newest kernel / modules bundle that didn’t make it into all distros yet

how do I find out about both of these?
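For the proprietary/firmware side, the best local check I can think of is something like this (Debian-family package names; other distros differ):

```
# are any non-free firmware packages installed at all?
dpkg -l | grep -i firmware

# did the running kernel complain about missing firmware at boot?
sudo dmesg | grep -i firmware
```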

linuxPIPEpower OP ,

But where do you start to look? Most distros have their config published in two places: /boot/config-<kernel version>, for any installed kernel, or /proc/config.gz (cat /proc/config.gz | gunzip to read), for your running kernel.

Thanks for understanding the question and providing a concrete answer of a place to look! I will do this. :)
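Concretely I guess that means something like this (the driver name is just an example):

```
# is a given driver built in (=y), built as a module (=m), or absent?
zcat /proc/config.gz | grep -i IWLWIFI

# or, on distros that ship the config under /boot
grep -i IWLWIFI /boot/config-$(uname -r)
```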

linuxPIPEpower OP , (edited )

aside from the subject of the post: the ones I miss when it’s not available are git status/ignoring, icons, tree, excellent color coding.

Here I cloned the eza repo and made some random changes.


```
eza --long -h --no-user --no-time --almost-all --git --sort=date --reverse --icons
```

https://discuss.tchncs.de/pictrs/image/da20a584-30c5-4d4f-9c9b-98dce13b2756.png

Made some more changes and then combine git and tree, something I find is super helpful for overview:


```
eza --long -h --no-user --no-time --git --sort=date --reverse --icons --tree --level=2 --git-ignore --no-permissions --no-filesize
```

https://discuss.tchncs.de/pictrs/image/faa9f734-3e34-40a5-9f38-7cdc5f92e673.png

(weird icons are my fault for not setting up fonts properly in the terminal.)

Colors all over the place are an innovation that has enabled me to use the terminal really at all. I truly struggle when I need to use b&w or less colorful environments. I will almost always install eza on any device even something that needs to be lean. It’s not just pretty and splashy but it helps me correctly comprehend the information.

I’d never want to get rid of ls and I don’t personally alias it to eza because I always want to have unimpeded access to the standard tooling. But I appreciate having a few options to do the same task in slightly different ways. And it’s so nice to have all the options together in one application rather than needing a bunch of scripts and aliases and configurations. I don’t think it does anything that’s otherwise impossible, but to get on with life it is helpful.

linuxPIPEpower OP ,

well I guess a way to test would be to create a new directory and copy or create some files into it rather than using a working directory where there are unknown complexities. IIRC dd can create files according to parameters.

Start with a single file in a normal location and see how to get it to output the correct info and complicate things until you can find out where it breaks.

That’s what I would do, but maybe a dev would have a more sophisticated method. Might be worthwhile to read the PR where the feature was introduced.
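Something like this is what I had in mind (untested; file sizes are arbitrary):

```
# make a clean playground with files of known size
mkdir /tmp/eza-test && cd /tmp/eza-test
dd if=/dev/zero of=a.bin bs=1M count=100
dd if=/dev/zero of=b.bin bs=1M count=50

# does the reported total come out to ~150M?
eza --long -h --total-size
```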

Also kind of a shot in the dark but do you have an ext4 filesystem? I have been dabbling with btrfs lately and it leads to some strange behaviors. Like some problems with rsync. Ideally this tool would be working properly for all use cases but it’s new so perhaps the testing would be helpful. I also noticed that this feature is unix only. I didn’t read about why.

it would be that du AND the Dolphin file manager would ignore those files, and eza would not. Which is hard for me to believe.

Although only 1 of various potential causes, I don’t think it is implausible on its face. du probably doesn’t know about git at all, right? And if Nautilus has a VCS extension installed, I doubt it would specifically ignore files for the purposes of calculating size.

I have found a lot of these rust alternatives ignore .git and other files a little too aggressively for my taste. Both fd (find), and ag (grep) require 1-2 arguments to include dotfiles, git-ignored and other files. There are other defaults that I suppose make lots of sense in certain contexts. Often I can’t find something I know is there and eventually it turns out it’s being ignored somehow.

linuxPIPEpower OP ,

For my part I think all this troublefinding and troublesolving is a great use of a thread. :D Especially if it gets turned into a bug report and eventually PR. I had a quick look in the repo and I don’t see anything relevant but it could be hidden where I can’t see it. Since you’ve already gone and found the problem it would be a shame to leave it here where it’ll never be found or seen. Hope you will send to them.

I can also reproduce the bug by moving an ISO file into a directory then hardlinking it in the same dir. Each file is counted individually and the dir is 2x the size it should be! I can’t find any way to fix it.

The best I can come up with is to show the links but it only works when you look at the linked file itself:


```
$ eza --long -h --total-size --sort=oldest --no-permissions --no-user --no-time --tree --links LinuxISOs
Links Size Name
    1 3.1G LinuxISOs
    2 1.5G ├── linux.iso
    2 1.5G └── morelinux.iso
```

If you look further up the filetree you could never guess. (I will say again that my distro is not up to date with the latest release and it is possible this is already fixed.)

This should be an option. In https://github.com/Byron/dua-cli, another of the rust terminal tools I love, you can choose:


```
$ dua  LinuxISOs
      0   B morelinux.iso
   1.43 GiB linux.iso
   1.43 GiB total

$ dua --count-hard-links LinuxISOs
   1.43 GiB linux.iso
   1.43 GiB morelinux.iso
   2.86 GiB total
```

linuxPIPEpower OP ,

ooops you commented similar/same twice. I think this one was a draft. :)

linuxPIPEpower OP ,

Some of the distros actually just included an alias from exa to eza when the project forked. I didn’t even realize I was using eza for a long time!

linuxPIPEpower OP ,

I am inclined to agree with you. See my comment in cross post of this thread.

I’m just a home admin of my own local systems and while I try to avoid doing stuff that’s too wacky, in the context I don’t mind playing a bit fast n loose. If I screw it up, the consequences are my own.

At work, I am an end user of systems with much higher grade of importance to lots of people. I would not be impressed to learn there was a bunch of novel bleeding edge stuff running on those systems. Administering them has a higher burden of care and responsibility and I expect the people in charge to apply more scrutiny. If it’s screwed up, the consequences are on a lot of people with no agency over the situation.

Just like other things done at small vs large scale. Most people with long hair don’t wear a hairnet when cooking at home, although it is a requirement in some industrial food prep situations. Most home fridges don’t have strict rules about how to store different kinds of foods to avoid cross contamination, nor do they have a thermometer which is checked regularly and logged to show the food is being stored appropriately. Although this needs to be done in a professional context. Pressures, risks and consequences are different.

To summarize: I certainly hope sysadmins aren’t on here installing every doohicky some dumbass like me suggests on their production systems. :D

linuxPIPEpower OP ,

Nice! I’m sure they will appreciate your thorough report.

I wonder if they also plan to make an option about crossing filesystem boundaries. I have seen it commonly in this sort of use case.

Maybe all this complexity is the reason why total dir size has not previously been integrated into this kind of tool. (Notable exception: lsd, if you are interested.) I really hope the development persists though, because being able to easily manipulate so many different kinds of information about the filesystem without spending hours/days/weeks/years creating bespoke shell scripts is super handy.

linuxPIPEpower OP , (edited )

Thanks! I always appreciate another tool for this. I tried to run it but have dep issues.

What is gwc? I can’t find a package by that name nor is it included that I can see.

Websearch finds GeoWebCache, Gnome Wave Cleaner, GtkWaveCleaner, several IT companies… nothing that looks relevant.

edit: also stumped looking for gsort. it seems to be associated with something called STATA which is statistical analysis software. Is that something you are involved with maybe running some special stuff on your system?

PS you missed a newline at the end before closing the code block which is why the image was showing up as markdown instead of displaying properly.

Change:


````
    }```
````

to:


````
    }
    ```
````
linuxPIPEpower OP ,

oh of course there are abbreviated forms. I just used the long versions so that people who aren’t familiar can follow what I am doing without having to spend 10 mins cross-referencing the man page.

Likewise in the examples I used options that created a fairly simple screenshot to clearly illustrate an answer to the question of what eza does that ls doesn’t.

I tend to use eza via a couple of aliases with sets of common preferences. Like in a git dir I want to sort by date, usually don’t need to see the user column, the size or permissions (except when I do), and I do want to see the dotfiles. So I have an appropriate alias, e.g. eg (eza git). A great companion to gs.
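Roughly like this (the exact flag set shifts around; this is just the shape of it):

```
# "eza git": long listing tuned for poking around a repo
alias eg='eza --long -h --almost-all --git --sort=date --reverse --icons --no-user'
```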

linuxPIPEpower OP ,

why not just add the options to it?

If you are asking me, personally, it’s because making any contributions to ls is far beyond my capacities and will remain that way for the foreseeable future.

Personal deficiencies aside, would it even be a good idea to modify ls in this way? It seems to me that stability and predictability is a feature, not a bug. Basically you know how ls will work on every linux system. Adding all these features would turn it into something else and potentially introduce chaos. ls is tested on >millions of systems in every context; a known quantity. A feature set which is limited to the necessities avoids introducing bugs, flaws, security issues etc.

And once added, a feature probably shouldn’t be removed. In 2024 I love having git status optionally integrated into my ls-type tool. But in 2034 will git still be as ubiquitous? What about 2054? ls is for the ages. eza is for right now.
