Discussion:
R2 DFS and large file replication
nicky_mcc
2006-02-28 15:59:25 UTC
Permalink
We're trialling DFSR to replicate content (office files etc., generally
2 MB and under) back from branch offices over 2 Mb links, and it looks
great - much better than "old" DFS!

However, we also tried to use it to replicate large Symantec LiveState
backup files (5-6 GB) and it really struggled with this - it required
huge staging areas and took a lot longer than simply robocopying the
file would have done.

Am I right in thinking that RDC replication isn't really designed
for large files like this? If I'm wrong, is there some way to tune it
for large files?

Thanks.
Jill Zoeller [MSFT]
2006-02-28 18:16:49 UTC
Permalink
Actually, RDC works really well for large files. However, unless you
prestage that big file on the target server, the whole thing will need to
replicate the first time. But if the file ever changes, then only the
changes will be replicated. And if at least one of the servers (in a pair of
replicating servers) is running Enterprise Edition, a feature called
cross-file RDC will be used if similar files exist, which means that even
for initial replication, the amount of data replicated will be reduced if
similar files already exist on the target.
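The per-file RDC behaviour described here can be sketched in miniature: split a file into chunks, hash each chunk, and send only the chunks whose signatures the target doesn't already hold. This toy uses fixed-size chunks (real RDC uses recursive, variable-size chunking and its own signature scheme), so it only illustrates the idea, not the actual protocol:

```python
import hashlib

CHUNK = 4096  # fixed-size chunks for illustration; real RDC cuts variable-size chunks

def signatures(data: bytes) -> set:
    """Hash each chunk so the servers can compare cheap signatures instead of data."""
    return {hashlib.sha256(data[i:i + CHUNK]).digest()
            for i in range(0, len(data), CHUNK)}

def bytes_to_transfer(old: bytes, new: bytes) -> int:
    """Count the bytes of 'new' whose chunk signature is absent from 'old'."""
    have = signatures(old)
    need = 0
    for i in range(0, len(new), CHUNK):
        chunk = new[i:i + CHUNK]
        if hashlib.sha256(chunk).digest() not in have:
            need += len(chunk)
    return need

old = bytes(1024 * 1024)          # 1 MB already present on the target server
new = bytearray(old)
new[0:CHUNK] = b"x" * CHUNK       # modify a single chunk
print(bytes_to_transfer(old, bytes(new)))  # -> 4096: only the changed chunk travels
```

This is also why a prestaged or similar file on the target cuts even the "initial" transfer: any chunk signature already present doesn't need to cross the wire.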

You might want to check out our blog at
http://blogs.technet.com/filecab/default.aspx for additional guidance.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

Want to learn more about Windows Server file and storage technologies? Visit
our team blog at http://blogs.technet.com/filecab/default.aspx.
Rob
2006-03-01 13:41:01 UTC
Permalink
Did you increase your staging area size? I believe it defaults to 4 GB,
which would cause problems with your 5-6 GB files. After you increase
the staging quota, and it does one full copy for the initial replication,
it should only copy the block-level changes in the files, which should be
awesome for you.
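For reference, the staging quota can be raised from the command line as well as from the DFS Management snap-in. The group, folder, and server names below are placeholders, and the exact dfsradmin flag names should be double-checked against "dfsradmin membership set /?" before use:

```shell
REM Raise the staging quota for one membership to 15 GB (value is in MB).
REM All names here are placeholders for your own replication group setup.
dfsradmin membership set /rgname:BranchBackups /rfname:Backups ^
    /memname:BRANCH01 /stagingsize:15360
```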

-Rob
nicky_mcc
2006-03-02 11:29:03 UTC
Permalink
Thanks very much for the replies. It does seem that DFS should be
beneficial to us for these large files but at the moment we're
definitely not seeing the benefits - wondering if we're doing something
wrong with the setup, so here are the details:

"branch" server and "core" server, both running R2 enterprise edition.
Connected by a reliable 2Mb link. DFS replication group set up with a
target folder on each server. Staging quota for both folders set to
15GB. (Rob, we did start with the standard 4GB quota and it did cause
problems, you're right!)

We created a 6 GB "recovery point" backup file using Symantec LiveState
on the branch server and placed it in the DFS target folder. We allowed
it to replicate to the core in its own time (this took several hours,
as expected, since it was the first file of its type to replicate).

We then modified 500 MB of data on the branch server and ran another
"recovery point" backup, creating another 6 GB file, which we also
placed in the DFS target folder.

At this point, we expected to see the benefits of cross-file RDC, on
the assumption that the two backup files would be similar at the block
level, and only the changes would need to replicate. However, the
second file took as long to replicate as the first. Either we're
misunderstanding the way cross-file RDC works, or the Symantec LiveState
backups are actually completely different at the block level.

Does this make sense? Anyone have any thoughts?

Thanks again!

Nicky
Jill Zoeller [MSFT]
2006-03-02 16:54:24 UTC
Permalink
Can you confirm that either your branch or core server is running Enterprise
Edition?
nicky_mcc
2006-03-03 10:13:38 UTC
Permalink
Hi -

Yes, both the branch and the core server are running Windows Server 2003
R2 Enterprise Edition in our test lab.

Thanks,

Nicky
Jill Zoeller [MSFT]
2006-03-03 17:00:37 UTC
Permalink
Let me ask around. Like you, I would assume that cross-file replication
would help reduce what's replicated.
Jill Zoeller [MSFT]
2006-03-03 17:04:20 UTC
Permalink
One other thing--have you run a health report after the second file
replicated? The health report will show you bandwidth savings.
Jill Zoeller [MSFT]
2006-03-03 17:25:23 UTC
Permalink
OK, I think I might have an answer for you. According to one of our
developers, those files might already be compressed. RDC does not work well
with compressed files.
Jill Zoeller [MSFT]
2006-03-03 20:17:03 UTC
Permalink
Some additional follow-up for you from our program manager.

A few things:

(1) Assume he has set staging to be over 4 GB - it should be around 12 GB.
(2) Are the LiveState backup files compressed? If so, he can turn off
compression by renaming the files to *.zip before replicating. He should
also disable RDC on the connection and see if it improves things. He needs
to determine where the delay is - in staging on the upstream, in
transmission, or in staging at the downstream. These can be traced by
looking at the debug log files (this will need a PSS case).
(3) It would also be good to understand the application's interaction with
the file - does it close the file, or does it keep a handle open, etc.?
This might be a reason, but it will need to be debugged by PSS.
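A first look at where the delay occurs can be taken before opening a PSS case. The debug-log path below is the default location on an R2 install; the WMI namespace and class name for checking per-connection RDC state are from memory rather than verified, so treat them as an assumption:

```shell
REM DFSR writes its debug logs here by default (Dfsr00001.log and up, with
REM older logs compressed). Timestamps in these show whether time is spent
REM staging on the upstream, in transmission, or staging on the downstream.
dir %windir%\debug\Dfsr*.log

REM List connections and their RDC setting via WMI. The namespace and class
REM name are unverified assumptions; confirm them before relying on this.
wmic /namespace:\\root\microsoftdfs path DfsrConnectionConfig get ConnectionGuid,RdcEnabled
```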
nicky_mcc
2006-03-06 14:36:41 UTC
Permalink
Hi, thanks for the responses. Taking each point in turn:

1. I didn't run a health check and the servers have now been rebooted,
but I'll try it again and do this.
2. We wondered if the compression might cause a problem, so we
deliberately created the backup files using the "uncompressed" option.
3. Staging area at both ends is set to 15GB.
4. The application doesn't keep a handle on the file once the backup
is created.

I will try the replication again with RDC enabled and disabled, and
look at the health checks afterwards.

One more thought: The "servers" are just workstations (Pentium 4 3GHz
processors, 512MB RAM) attached to an iSCSI SAN. I understand that RDC
can be heavy on I/O and CPU: could this be causing problems?
Jill Zoeller [MSFT]
2006-03-07 19:00:55 UTC
Permalink
It's hard to say whether CPU/disk I/O is the bottleneck here. If the files
were uncompressed, we would expect to see some savings via cross-file RDC
shown in the Health report. If you're continuing to see issues, I recommend
opening a support case for more in-depth troubleshooting.
nicky_mcc
2006-03-08 16:22:04 UTC
Permalink
Hi Jill - we seem to have resolved the issue.

Still not sure what the problem was in the first test but our best
guess is that there wasn't enough physical space free on the server for
DFS to make best use of the staging areas, even though the staging
quota was set appropriately.

I'll post what we did just for completeness (obviously these are just
our ad hoc test figures and shouldn't be treated as any kind of proper
performance data).
1. Cleared out all the unnecessary files from the target folders and
manually removed obsolete folders from the staging areas (though I
think this would have happened automatically).
2. Deleted all previous backup files from both of the servers.
3. Placed the first uncompressed backup file (6.5 GB) on the branch
server and allowed it to replicate in its own time - this took around
7 hours.
4. Placed the second uncompressed backup file, which was a backup of
the same server with around 400 MB of changes, on the branch server.
This replicated in under 3 hours, and the DFS health check showed a 44%
replication efficiency saving.
5. We deleted all the backup files again and repeated the test using
compressed backup files - even then, the health check showed a 28%
replication efficiency saving.
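The health-check percentage translates directly into bytes that stayed off the wire. A small sketch of the arithmetic, using the 6.5 GB file size and the 44% saving quoted above (treating "GB" as GiB, which is an assumption):

```python
GB = 1024 ** 3

def rdc_savings(file_bytes: float, transferred_bytes: float) -> float:
    """Fraction of the file RDC avoided resending (what the health report calls the saving)."""
    return 1 - transferred_bytes / file_bytes

file_size = 6.5 * GB                 # size of the second backup file
sent = file_size * (1 - 0.44)        # the health check reported a 44% saving
print(round(sent / GB, 2))           # -> 3.64: GB that still crossed the wire
print(round(rdc_savings(file_size, sent), 2))  # -> 0.44
```

So roughly 3.6 GB of the 6.5 GB file was actually transferred for the second replication, which lines up with it finishing in well under half the time of the first.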

thanks again for the help,

Nicky
