Entire drive/array backups will probably be by far the largest file transfer anyone ever does. The biggest I’ve done was a measly 20TB over the internet which took forever.
Outside of that the largest “file” I’ve copied was just over 1TB which was a SQL file backup for our main databases at work.
From an order of magnitude perspective, the max is terabytes. No “normal” users are dealing with petabytes. And if you are dealing with petabytes, you’re not using some random poster’s program from reddit.
I don’t remember how many files, but these geophysical recordings typically clock in at 10-30 GB each. What I do remember is the total transfer size: 4TB. It was mostly a bunch of .segd files, stored on a server cluster mounted in a shipping container for easy transport and lifting onboard survey ships. Some geophysics processors needed the data on the other side of the world. Nobody was physically heading in the same direction as the transfer, so we figured it would just be easier to rsync it over 4G. It took a little over a week to transfer.
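Nothing fancy about the command itself. Something along these lines (a rough sketch with made-up paths and hostname, not the exact invocation we used) is enough to keep it resumable across 4G dropouts:

```
# -a preserves permissions/timestamps, --partial keeps half-transferred files around,
# --append-verify lets a multi-GB .segd resume where it left off instead of restarting
rsync -av --partial --append-verify --progress /data/segd/ user@shore-server:/incoming/segd/
```

Stick that in a loop that retries on a non-zero exit and it shrugs off the link dropping every few hours.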
Normally when we have transfers of a substantial size going far, we ship them on LTO tape. For short-distance transfers we usually run a fiber, and I have no idea how big the largest transfer job has been that way. Must be in the hundreds of TB. The entire cluster is 1.2PB, but I can’t recall ever having to transfer everything in one go, as the receiving end usually has a lot less space.
At the rates I’m paying for 4G data, there are very few places in the world where it wouldn’t be cheaper for me to get on a plane and sneakernet that much data.
~15TB over the internet via a 30Mbps uplink, without any special considerations. Syncthing handled any and all network and power interruptions. I did a few power cable pulls myself.
How long did that take? A month or two? I’ve backfilled my NAS with about 40 TB before over a 1 gig fiber pipe in about a week or so of 24/7 downloading.
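Back of the envelope (assuming the pipe was more or less saturated, which it never quite is):

```
# 40 TB over 1 Gbit/s, converted to days
echo "40 * 10^12 * 8 / 10^9 / 86400" | bc
# => 3   (roughly 3.7 days at line rate, so ~a week with real-world overhead checks out)
```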
Yeah, I also moved from 30Mb upload to 700Mb recently and it’s just insane. It’s also insane thinking I had a symmetric gigabit connection in Eastern Europe in the 2000s for fairly cheap. It was Ethernet though, not fiber. Patch cables and switches all the way to the central office. 🫠
Most people in Canada today get 50Mb upload at most, even on the most expensive connection tiers - on DOCSIS 3.x. Only over the last few years has fiber started becoming more common, but it’s still fairly uncommon, as it’s the most expensive tier where it’s available at all.
We might pay for some of the most expensive internet in the world here in Canada, but at least we can’t fault them for providing an unstable or underperforming service. Downloading llama models is where 1Gbps really shines: you see a 7GB model? It’s done before you’re even back from the toilet. Crazy times.
I’ve got symmetrical gigabit in my apartment, with the option to upgrade to 5 or 8. I’d have to upgrade my equipment to use those speeds, but it’s nice to know I have the option.
Not that big by today's standards, but I once downloaded the Windows 98 beta CD from a friend over dialup, 33.6k at best. Took about a week as I recall.
I remember downloading the scene from American Pie where Shannon Elizabeth strips naked over our 33.6 link, and it took like an hour, at an amazing resolution of like 240p for a two-minute clip 😂
In similar fashion, I downloaded Dude, Where’s My Car? over dialup using what was at the time the latest tech: a download manager that would split the file into 2MB chunks and download them in order.
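Those download managers were basically just doing HTTP range requests. The same trick looks roughly like this with curl today (URL and byte offsets are made up for illustration):

```
# fetch the first two 2 MB chunks as separate range requests, then stitch them together
curl -s -r 0-2097151       -o part.000 "http://example.com/dude.avi"
curl -s -r 2097152-4194303 -o part.001 "http://example.com/dude.avi"
cat part.000 part.001 > dude.avi
```

The whole point back then was that a dropped connection only cost you the current 2MB chunk instead of the entire file.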
I’ve transferred tens of ~300 GB files via manual rsyncs. It was a lot of binary astrophysical data, most of which was noise. Eventually this was replaced by an automated service that bypassed local firewalls with internet-based transfers and AWS stuff.
Largest one I ever did was around 4.something TB. New off-site backup server at a friend’s place. Took me 4 months due to data limits and an upload speed that maxed out at 3MB/s.
I routinely do 1-4TB images of SSDs before making major changes to the disk. I run fstrim on all partitions and pipe the dd output through zstd before writing to disk, and the images shrink to roughly the actually-used size or a bit smaller. Largest ever backup was probably ~20TB cloned from one array to another over 40/56GbE; the deltas after that were tiny by comparison.
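In case anyone wants to copy the trick, the whole thing is roughly this (device and file names are placeholders):

```
# discard unused blocks on all mounted filesystems so they compress away to almost nothing
fstrim -av
# image the whole SSD, compressing on the fly with every core
dd if=/dev/nvme0n1 bs=4M status=progress | zstd -T0 -o nvme0n1-$(date +%F).img.zst
# restore later with:
#   zstdcat nvme0n1-YYYY-MM-DD.img.zst | dd of=/dev/nvme0n1 bs=4M status=progress
```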
In grad school I worked with MRI data (hence the username). I had to upload ~500GB to our supercomputing cluster: somewhere around 100,000 MRI images, and I wrote 20 or so different machine learning algorithms to process them. All said and done, I ended up with about 2.5TB on the supercomputer. About 500MB ended up being useful and made it into my thesis.
As the Anaconda installer of Fedora Atomic is broken (yes, ironic), I keep one system originally meant for tweaking as my “zygote” and just clone, resize, balance and rebase it for new systems (roughly the steps sketched below).
Remote? 10GB MicroWin 11 LTSC IOT ISO, the least garbage that OS can get.
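For the curious, the clone-and-rebase flow mentioned above is roughly this (assuming a btrfs-backed Atomic install; device names and the release ref are placeholders):

```
# clone the zygote disk onto the new machine's drive
dd if=/dev/sdX of=/dev/sdY bs=4M status=progress
# grow the btrfs filesystem into the larger drive and rebalance it
btrfs filesystem resize max /
btrfs balance start -dusage=75 /
# point the new install at whichever Fedora Atomic variant it should actually run
rpm-ostree rebase fedora:fedora/41/x86_64/kinoite
```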