
What is the largest file transfer you have ever done?

I’m writing a program that wraps around dd to try and warn you if you’re about to do anything stupid, so I’ve been giving the man page a good read. While doing this, I noticed that dd supports size suffixes all the way up to quettabytes, a unit orders of magnitude larger than all the data on the entire internet.

This got me wondering what the largest storage operation you guys have ever done is. I’ve taken a couple of images of hard drives that were a terabyte each, but I was wondering if the sysadmins among you have had to do something with, e.g., a giant RAID 10 array.
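To give an idea of the shape of the thing, here’s a minimal sketch, not the real program: the suffix table only covers the binary suffixes from the man page, and the 100 TiB threshold and the raw-device check are placeholder examples.

```python
#!/usr/bin/env python3
"""Minimal sketch of a sanity-check wrapper around dd (illustrative only)."""

import sys

# Binary multiplicative suffixes listed in the GNU dd man page
# (K = 1024, ..., Q = 1024**10, i.e. a quebibyte).
SUFFIXES = {"c": 1, "w": 2, "b": 512,
            "K": 1024**1, "M": 1024**2, "G": 1024**3, "T": 1024**4,
            "P": 1024**5, "E": 1024**6, "Z": 1024**7, "Y": 1024**8,
            "R": 1024**9, "Q": 1024**10}

def parse_size(value: str) -> int:
    """Turn a dd-style size like '4M' or '1G' into a number of bytes."""
    for suffix, factor in SUFFIXES.items():
        if value.endswith(suffix):
            return int(value[:-len(suffix)]) * factor
    return int(value)  # bare number of bytes

def main(argv: list[str]) -> None:
    # dd operands all look like key=value (if=, of=, bs=, count=, ...).
    operands = dict(arg.split("=", 1) for arg in argv if "=" in arg)

    bs = parse_size(operands.get("bs", "512"))       # dd's default block size
    count = operands.get("count")
    if count is None:
        print("warning: no count= given; dd will copy until input runs out")
    else:
        total = bs * parse_size(count)
        print(f"this would copy about {total / 1024**3:.1f} GiB")
        if total > 100 * 1024**4:                    # arbitrary example threshold
            print("warning: that is a suspiciously large transfer")

    target = operands.get("of", "")
    if target.startswith(("/dev/sd", "/dev/nvme")):  # crude raw-device heuristic
        print(f"warning: {target} looks like a raw block device, double-check it")

if __name__ == "__main__":
    main(sys.argv[1:])
```

The idea is you’d run it with the same operands you were about to hand dd, and it prints its complaints before you commit.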

bulwark ,

I mean dd claims it can handle a quettabyte, but how can we be sure?

psmgx ,

Currently pushing about 3-5 TB of images to AI/ML scanning per day. Max we’ve seen through the system is about 8 TB.

Individual file? Probably 660 GB of backups before a migration at a previous job.

avidamoeba ,

~15TB over the internet via 30Mbps uplink without any special considerations. Syncthing handled any and all network and power interruptions. I did a few power cable pulls myself.
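Back-of-the-envelope, that works out to about a month and a half at line rate. A quick lower-bound estimate, ignoring protocol overhead and the interruptions mentioned above:

```python
# Lower-bound estimate for ~15 TB over a 30 Mbps uplink
# (ignores protocol overhead, retries, and downtime).
size_bits = 15e12 * 8          # ~15 TB in bits
rate_bps = 30e6                # 30 Mbps uplink
print(f"{size_bits / rate_bps / 86400:.0f} days")   # ~46 days
```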

pete_the_cat ,

How long did that take? A month or two? I’ve backfilled my NAS with about 40 TB before, over a 1 gig fiber pipe, in about a week or so of 24/7 downloading.

avidamoeba ,

Yeah, something like that. I verified it with rsync after that, no errors.

ryannathans ,

We have DBs in the dozens of TB at work, so probably one of them.

neidu2 , (edited)

I don’t remember how many files, but typically these geophysical recordings clock in at 10-30 GB each. What I do remember, though, was the total transfer size: 4 TB. It was a bunch of .segd files (geophysics stuff) stored in a server cluster mounted in a shipping container, and some geophysics processors needed the data on the other side of the world. There was nobody physically heading in the same direction as the transfer, so we figured it would just be easier to rsync it over 4G. It took a little over a week to transfer.

Normally, when we have transfers of a substantial size going far, we ship them on LTO. For short-distance transfers we usually run a fiber, and I have no idea how big the largest transfer job has been that way; it must be in the hundreds of TB. The entire cluster is 1.2 PB, but I can’t recall ever having to transfer everything in one go, as the receiving end usually has a lot less space.
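As a rough sanity check on the week of rsync-over-4G above, the implied sustained throughput is well within what a decent 4G link can do:

```python
# ~4 TB in "a little over a week" implies roughly this average throughput.
size_bits = 4e12 * 8           # ~4 TB in bits
seconds = 8 * 86400            # call it eight days
print(f"{size_bits / seconds / 1e6:.0f} Mbps")      # ~46 Mbps sustained
```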

data1701d OP ,
neidu2 ,

The alternative was 5 Mbit/s VSAT. 4G was a luxury at that time.

fuckwit_mcbumcrumble ,

Entire drive/array backups will probably be by far the largest file transfer anyone ever does. The biggest I’ve done was a measly 20TB over the internet which took forever.

Outside of that, the largest “file” I’ve copied was just over 1 TB, which was a SQL backup of our main databases at work.

cbarrick ,

+1

From an order of magnitude perspective, the max is terabytes. No “normal” users are dealing with petabytes. And if you are dealing with petabytes, you’re not using some random poster’s program from reddit.

For a concrete cap, I’d say 256 tebibytes…
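For what it’s worth, 256 TiB is a tidy power of two, and wiring it into something like the OP’s wrapper as a warning threshold would be a one-liner (the function below is purely a hypothetical illustration):

```python
# 256 TiB is exactly 2**48 bytes.
CAP_BYTES = 256 * 1024**4      # 281,474,976,710,656
assert CAP_BYTES == 2**48

def over_cap(requested_bytes: int) -> bool:
    """True if a requested transfer blows past the 256 TiB sanity cap."""
    return requested_bytes > CAP_BYTES

print(over_cap(20 * 1024**4))  # 20 TiB array image -> False, let it through
print(over_cap(1024**5))       # a full pebibyte    -> True, warn the user
```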

Larvitz ,

@data1701d Downloading Forza Horizon 5 on Steam, at around 120 GB, is the largest web download I can remember. On the LAN, I’ve migrated my old FreeBSD NAS to my new one, which was roughly a 35 TB transfer over NFS.

data1701d OP ,

How long did that 35TB take? 12 hours or so?
