
What is the largest file transfer you have ever done?

I’m writing a program that wraps around dd to try to warn you if you’re about to do something stupid. I’ve thus been giving the man page a good read, and while doing so I noticed that dd supports size suffixes all the way up to quettabytes, a unit orders of magnitude larger than all the data on the entire internet.

This got me wondering: what’s the largest storage operation you guys have ever done? I’ve taken a couple of images of hard drives that were a single terabyte each, but I was wondering if the sysadmins among you have had to do something with, e.g., a giant RAID 10 array.
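
To give a concrete idea of the kind of warning I mean, here’s a rough sketch (the wrapper name and the mounted-target check are just illustrative examples of mine, not dd features):

    #!/usr/bin/env bash
    # safedd: hypothetical wrapper that sanity-checks the of= target before running dd.
    target=""
    for arg in "$@"; do
        case "$arg" in
            of=*) target="${arg#of=}" ;;
        esac
    done
    # Refuse to write to a device that is currently mounted without confirmation.
    if [ -n "$target" ] && findmnt --source "$target" >/dev/null 2>&1; then
        echo "warning: $target is mounted; writing to it is probably stupid" >&2
        read -r -p "continue anyway? [y/N] " ans
        [ "$ans" = "y" ] || exit 1
    fi
    exec dd "$@"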

nobleshift

Professionally, Boeing near Seattle moves mind-boggling amounts of data. Personally, 1500+ terabytes in a single 6-day jag, aboard a 29’ sailboat running off an inverter and 13-year-old Dell hardware, completely off-grid at anchor. It was a massive rescue operation; I did what I could with what I had, where I was.

I didn’t sleep well.

MajorHavoc

I’ll let you know… If it finishes.

avidamoeba

~15TB over the internet via a 30Mbps uplink, without any special considerations. Syncthing handled any and all network and power interruptions. I did a few power cable pulls myself.

pete_the_cat

How long did that take? A month or two? I’ve backfilled my NAS with about 40 TB over a 1-gig fiber pipe before, in about a week or so of 24/7 downloading.
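
Napkin math, assuming full line rate (reality is always slower):

    # seconds = bytes * 8 / bits-per-second; bash integer division floors the result
    echo "15 TB @ 30 Mbps: $(( 15 * 10**12 * 8 / (30 * 10**6) / 86400 )) days"
    echo "40 TB @ 1 Gbps:  $(( 40 * 10**12 * 8 / 10**9 / 86400 )) days"

That’s 46 days at line rate for your 15 TB, so a month or two with real-world overhead checks out; my 40 TB comes to between 3 and 4 days of pure transfer, which a week of 24/7 downloading covers comfortably.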

avidamoeba

Yeah, something like that. I verified it with rsync after that, no errors.

d00phy

I’ve migrated petabytes from one GPFS file system to another. More than once, in fact. I’ve also migrated about 600TB of data from D3 tape format to 9940.

Laborer3652

I have to copy ~8 TiB of backup files twice a year.

Krafting

Rsynced 4.2TB of data from one server to another, though that was spread across multiple files rather than a single big one.

Davel23

Not that big by today's standards, but I once downloaded the Windows 98 beta CD from a friend over dialup, 33.6k at best. Took about a week as I recall.

absGeekNZ

Yep, downloaded XP over a 33.6k modem, but I’m in NZ so 33.6 was more advertising than reality; it took weeks.

pete_the_cat

I remember downloading the scene from American Pie where Shannon Elizabeth strips naked over our 33.6 link, and it took like an hour for a two-minute clip at an amazing resolution of like 240p 😂

pete_the_cat (edited)

I’m currently in the process of transferring about 50 TB from one zpool to another (locally), so I can destroy and recreate it.
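
For anyone curious, a local pool-to-pool move like this is basically a recursive snapshot piped from zfs send into zfs recv, something like the following (pool and dataset names are made up):

    # snapshot everything, then replicate the full dataset tree to the new pool
    zfs snapshot -r oldpool/data@migrate
    zfs send -R oldpool/data@migrate | zfs recv -u newpool/data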

I’ve downloaded a few torrents that were around 5 TB each, they’re PS4 and Xbox 360 game collections.

tedvdb

Today I migrated my data from my old ZFS pool to a new, bigger one; the rsync of 13.5TiB took roughly 18 hours. It’s slow spinning-disk storage, so that’s fine.

The second and third runs of the same rsync took like 5 seconds, blazing fast.
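
That’s rsync’s delta logic doing its job: by default it skips anything whose size and modification time already match on the destination, so a re-run only has to walk the file list. The whole migration is essentially one command along these lines (paths are placeholders):

    # -a preserves times/owners/perms; -H/-A/-X keep hard links, ACLs and xattrs
    rsync -aHAX --info=progress2 /mnt/oldpool/ /mnt/newpool/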

Neuromancer49

In grad school I worked with MRI data (hence the username). I had to upload ~500GB to our supercomputing cluster: somewhere around 100,000 MRI images. I wrote 20 or so different machine learning algorithms to process them, and all said and done I ended up with about 2.5TB on the supercomputer. About 500MB of that ended up being useful and made it into my thesis.

Don’t stay in school, kids.

mEEGal

You should have said no to math, it’s a helluva drug

boredsquirrel

Local file transfer?

I cloned a 1TB+ system a couple of times.

As the Anaconda installer of Fedora Atomic is broken (yes, ironic), I keep one system, originally meant for tweaking, as my “zygote” and just clone, resize, balance, and rebase it for new systems.
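
A heavily simplified outline of that workflow might look like this (device names, mountpoint, and the ostree ref are all placeholders, not my exact setup):

    dd if=/dev/sdX of=/dev/sdY bs=4M status=progress        # clone the "zygote" disk
    btrfs filesystem resize max /mnt/newroot                 # grow the FS to the new disk
    btrfs balance start /mnt/newroot                         # rebalance after resizing
    rpm-ostree rebase fedora:fedora/41/x86_64/silverblue     # rebase to the target image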

Remote? A 10GB MicroWin 11 LTSC IoT ISO, the least garbage that OS can get.

Also, some leaked stuff, 50GB over BitTorrent.

seaQueue

I routinely take 1-4TB images of SSDs before making major changes to the disk. I run fstrim on all partitions and pipe the dd output through zstd before writing it to disk, and the images shrink to roughly the actually-used size or a bit smaller. The largest backup ever was probably ~20TB cloned from one array to another over 40/56GbE; the deltas after that were tiny by comparison.
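
In practice that’s a short pipeline, roughly like this (device and file names are placeholders; it relies on trimmed blocks reading back as zeroes, which most SSDs do):

    fstrim -av    # trim free space on every mounted filesystem first
    dd if=/dev/nvme0n1 bs=1M status=progress | zstd -T0 -o nvme0n1.img.zst
    # restore later with: zstdcat nvme0n1.img.zst | dd of=/dev/nvme0n1 bs=1M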

fuckwit_mcbumcrumble

Entire drive/array backups are probably by far the largest file transfers anyone ever does. The biggest I’ve done was a measly 20TB over the internet, which took forever.

Outside of that, the largest single “file” I’ve copied was just over 1TB: a SQL backup of our main databases at work.

cbarrick

+1

From an order-of-magnitude perspective, the max is terabytes. No “normal” users are dealing with petabytes, and if you are dealing with petabytes, you’re not using some random poster’s program from Reddit.

For a concrete cap, I’d say 256 tebibytes…

ShittyBeatlesFCPres

You should ping CERN or Fermilab about this. Or maybe the Event Horizon Telescope team, but I think they used sneakernet to image the M87 black hole.

Anyway, my answer is probably just a SQL backup like everyone else.

narc0tic_bird

Around 15 TB migrating to a new NAS.
