Thanks very much for the replies. It does seem that DFS should be
beneficial to us for these large files but at the moment we're
definitely not seeing the benefits - wondering if we're doing something
wrong with the setup, so here are the details:
"branch" server and "core" server, both running R2 enterprise edition.
Connected by a reliable 2Mb link. DFS replication group set up with a
target folder on each server. Staging quota for both folders set to
15GB. (Rob, we did start with the standard 4GB quota and it did cause
problems, you're right!)
Created a 6 GB "recovery point" backup file using Symantec LiveState on the branch server and placed it in the DFS target folder. We let this replicate to the core in its own time (it took several hours, as expected, since it was the first file of its type to replicate).
Modified 500 MB of data on the branch server and ran another "recovery point" backup, creating a second 6 GB file, which we also placed in the DFS target folder.
At this point, we expected to see the benefits of cross-file RDC, on
the assumption that the two backup files would be similar at the block
level, and only the changes would need to replicate. However, the
second file took as long to replicate as the first. Either we're
misunderstanding the way cross-file RDC works, OR the Symantec LiveState
backups are actually completely different at the block level.
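One way we could test the second possibility, before blaming DFS, is to compare the two backup images directly. The sketch below hashes both files in fixed 64 KB blocks and reports how many blocks of the second file also appear in the first. This is cruder than RDC, which chunks on a rolling hash and so can match data even after insertions shift the offsets, so treat a low score here as suggestive rather than conclusive; the file names in the usage comment are placeholders for the two recovery-point images.

```python
import hashlib

BLOCK = 64 * 1024  # 64 KB fixed blocks (illustrative; RDC chunks on a rolling hash)

def block_hashes(path):
    """Return the set of hashes of fixed-size blocks in the file."""
    hashes = set()
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK)
            if not block:
                break
            hashes.add(hashlib.sha1(block).digest())
    return hashes

def block_overlap(path_a, path_b):
    """Fraction of path_b's blocks that also appear somewhere in path_a."""
    a, b = block_hashes(path_a), block_hashes(path_b)
    return len(a & b) / max(len(b), 1)

# Example (placeholder names for the two recovery-point files):
# print(f"{block_overlap('rp1.v2i', 'rp2.v2i'):.0%} of file 2's blocks match file 1")
```

If this reports overlap close to zero, the imaging software is effectively rewriting the whole file (e.g. compressing or encrypting each backup differently), and no amount of RDC tuning will help.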
Does this make sense? Anyone have any thoughts?