This is a mirror of the official site: http://jasper-net.blogspot.com/

How to: Copy very large files across a slow or unreliable network

Tuesday, May 11, 2010
To prepare for the DevDiv TFS2010 upgrade we had to copy 8TB of SQL backups about 100 miles across a WAN link so that we could restore them on our test system.  The link speed was reasonably good and the latency fairly low (5ms), but when you're dealing with files this big, the odds are against you and sneakernet can be a good option. In our case it wasn't an option, so we had to find the next best solution.  In the end we were able to copy all 8TB over 7 days without having to resume or restart once.

The 8TB of backups was spanned across 32 files of 250GB each, which makes them a little easier to deal with.  The first problem you'll encounter when using a normal Windows file copy, XCopy, RoboCopy or TeraCopy to copy files this large is that available memory on the source server will start to drop and eventually run out. The next problem you'll encounter is that the connection will break at some point and you'll have to restart or resume the transfer.

Fortunately, the EPS Windows Server Performance Team has a blog post on the issue and a great recommendation: Ask the Performance Team : Slow Large File Copy Issues

The problem lies in the way in which the copy is performed - specifically Buffered vs. Unbuffered Input/Output (I/O).

Buffered I/O describes the process by which the file system buffers reads and writes to and from the disk in the file system cache.  Buffered I/O is intended to speed up future reads and writes to the same file, but it carries an associated overhead cost.  It is effective for speeding up access to files that change periodically or get accessed frequently.  There are two buffered I/O functions commonly used in Windows applications such as Explorer, Copy, Robocopy or XCopy:

CopyFile() - Copies an existing file to a new file
CopyFileEx() - Also copies an existing file to a new file, but it can additionally call a specified callback function each time a portion of the copy operation is completed, notifying the application of its progress.  A CopyFileEx operation can also be canceled mid-copy.  A sketch of this follows below.
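As a rough illustration, here is a minimal sketch (in C, against the Win32 API) of how an application might drive CopyFileEx() with a progress callback.  The file paths are placeholders for the example, not anything from our actual setup:

    #include <windows.h>
    #include <stdio.h>

    /* Progress callback: CopyFileEx() invokes this as chunks complete. */
    static DWORD CALLBACK CopyProgress(
        LARGE_INTEGER TotalFileSize, LARGE_INTEGER TotalBytesTransferred,
        LARGE_INTEGER StreamSize, LARGE_INTEGER StreamBytesTransferred,
        DWORD dwStreamNumber, DWORD dwCallbackReason,
        HANDLE hSourceFile, HANDLE hDestinationFile, LPVOID lpData)
    {
        if (TotalFileSize.QuadPart > 0)
            printf("\r%3d%% copied",
                   (int)(TotalBytesTransferred.QuadPart * 100
                         / TotalFileSize.QuadPart));
        return PROGRESS_CONTINUE;   /* return PROGRESS_CANCEL to abort */
    }

    int main(void)
    {
        BOOL cancel = FALSE;  /* set to TRUE from another thread to cancel */
        if (!CopyFileExW(L"\\\\source\\backups\\db.bak",  /* placeholder paths */
                         L"D:\\restore\\db.bak",
                         CopyProgress, NULL, &cancel, 0)) {
            fprintf(stderr, "CopyFileEx failed: %lu\n", GetLastError());
            return 1;
        }
        return 0;
    }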
So looking at the definition of buffered I/O above, we can see where the perceived performance problems lie - in the file system cache overhead.  Unbuffered I/O (or a raw file copy) is preferred when attempting to copy a large file from one location to another when we do not intend to access the source file after the copy is complete.  This will avoid the file system cache overhead and prevent the file system cache from being effectively flushed by the large file data.  Many applications accomplish this by calling CreateFile() to create an empty destination file, then using the ReadFile() and WriteFile() functions to transfer the data.
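To make that concrete, here is a minimal unbuffered-copy sketch in C.  A couple of assumptions: the source is opened with FILE_FLAG_NO_BUFFERING, which requires sector-aligned buffers and transfer sizes (a page-aligned VirtualAlloc buffer and a power-of-two chunk size satisfy this on typical volumes), while the destination is written with FILE_FLAG_WRITE_THROUGH rather than unbuffered, which sidesteps the end-of-file alignment rules for unbuffered writes:

    #include <windows.h>
    #include <stdio.h>

    #define CHUNK (4 * 1024 * 1024)  /* 4 MB; a multiple of common sector sizes */

    int wmain(int argc, wchar_t **argv)
    {
        if (argc != 3) { fwprintf(stderr, L"usage: rawcopy src dst\n"); return 1; }

        /* Read the source unbuffered so large files never pollute the cache. */
        HANDLE src = CreateFileW(argv[1], GENERIC_READ, FILE_SHARE_READ, NULL,
                                 OPEN_EXISTING,
                                 FILE_FLAG_NO_BUFFERING | FILE_FLAG_SEQUENTIAL_SCAN,
                                 NULL);
        /* Write through to disk on the destination side. */
        HANDLE dst = CreateFileW(argv[2], GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
                                 FILE_FLAG_WRITE_THROUGH, NULL);
        if (src == INVALID_HANDLE_VALUE || dst == INVALID_HANDLE_VALUE) {
            fwprintf(stderr, L"open failed: %lu\n", GetLastError());
            return 1;
        }

        /* VirtualAlloc returns page-aligned memory, which satisfies the
           sector-alignment requirement of FILE_FLAG_NO_BUFFERING. */
        BYTE *buf = VirtualAlloc(NULL, CHUNK, MEM_COMMIT | MEM_RESERVE,
                                 PAGE_READWRITE);
        if (!buf) return 1;

        DWORD got, put;
        while (ReadFile(src, buf, CHUNK, &got, NULL) && got > 0) {
            if (!WriteFile(dst, buf, got, &put, NULL) || put != got) {
                fwprintf(stderr, L"write failed: %lu\n", GetLastError());
                return 1;
            }
        }

        VirtualFree(buf, 0, MEM_RELEASE);
        CloseHandle(src);
        CloseHandle(dst);
        return 0;
    }

As an aside, newer versions of RoboCopy (the Windows 7 / Server 2008 R2 era builds) expose the same technique via the /J switch for unbuffered I/O, which is worth trying before writing any code yourself.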

Read more: Grant Holliday's Blog
