Discussion:
DFS-R Questions
Chris
2006-07-19 22:04:03 UTC
Good day,

I'm currently investigating using DFS-R in a 'Data Collection' scenario,
where our branch offices will replicate data back to our data center so
the data can then be backed up centrally. After setting it up in our test
lab, though, I have some questions I cannot readily find answers to, and
I'm hoping someone in this newsgroup can point me in the right direction.

1) In a data-collection scenario, where a branch office would be replicating
changed data back to a central server (and for argument's sake let's say the
replication schedule is set to only replicate the data in the evening), is
there any way to determine which file(s) did not replicate during the
replication window? For example, locked files, open files, encrypted files,
etc. Can this report be run in any sort of automated fashion?

2) When using cross-site RDC, is there any way to view or show that it is
actually working? For example, using the DFS Management MMC you can run a
report that shows the benefits of remote differential compression in terms of
how much data is replicated. I assume cross-site RDC benefits are included in
that metric. Can you separate it out, though?

3) In the same scenario as question #1, if a user deletes a folder at the
branch office (and assuming it isn't available for recovery via the local
server's VSS), is there a way to force the main office's hub server to
replicate the folder back? My testing seems to indicate that unless I restore
a different version of the folder and its files, it will not replicate down to
the branch office - this is expected, though, else you would never be able to
delete a file at the branch office without it just replicating back from the
hub server. But is there a way to override the default behaviour, to in
essence mark the existing folder on the hub server as "authoritative", for
lack of a better word, so DFS-R recognizes it as the good copy and forces the
branch server to replicate it back (or retrieve it from the Conflict and
Deleted Items folder)?

Thanks,
Chris
Danny Sanders
2006-07-19 23:28:33 UTC
Post by Chris
1) In a data-collection scenario, where a branch office would be replicating
changed data back to a central server (and for argument's sake let's say the
replication schedule is set to only replicate the data in the evening), is
there any way to determine which file(s) did not replicate during the
replication window? For example, locked files, open files, encrypted files,
etc. Can this report be run in any sort of automated fashion?
I'm looking at Ultrasound to see if this will work. See:
http://www.microsoft.com/downloads/details.aspx?FamilyID=61ACB9B9-C354-4F98-A823-24CC0DA73B50&displaylang=en

MOM also has a module for DFS-R that reports to it. Not sure *what* it
reports.
Post by Chris
2) When using cross-site RDC, is there any way to view or show that it is
actually working? For example, using the DFS Management MMC you can run a
report that shows the benefits of remote differential compression in terms of
how much data is replicated. I assume cross-site RDC benefits are included in
that metric. Can you separate it out, though?
Not sure if the two above show this or not.
Post by Chris
3) In the same scenario as question #1, if a user deletes a folder at the
branch office (and assuming it isn't available for recovery via the local
server's VSS), is there a way to force the main office's hub server to
replicate the folder back? My testing seems to indicate that unless I restore
a different version of the folder and its files, it will not replicate down to
the branch office - this is expected, though, else you would never be able to
delete a file at the branch office without it just replicating back from the
hub server. But is there a way to override the default behaviour, to in
essence mark the existing folder on the hub server as "authoritative", for
lack of a better word, so DFS-R recognizes it as the good copy and forces the
branch server to replicate it back (or retrieve it from the Conflict and
Deleted Items folder)?
I have mine set up with one-way replication only (from branch to hub, not
vice versa) to prevent the accidental deletion of files off branch servers.
You might toy with Shadow Copies on the branch server so *they* can retrieve
accidentally deleted files.

Which leads to the question of whether it is better to use Data Protection
Manager, since it incorporates and takes advantage of Shadow Copies - that is
what it is designed to do. It *seems* to me MS added this to DFS because
"with a little manipulation it *could* suffice".

Sorry I probably created more questions than I answered.
Ned Pyle [MSFT]
2006-07-20 13:09:16 UTC
Good questions.
1) There's no report like that in the box. You'd have to script something or
parse the debug logs. DFSR's built-in reporting is very broad - designed
more for 'you have a big problem' than 'you have a small problem'. I have
a feeling that our partners and other third-party ISVs are going to start
supplying this sort of functionality, as it's currently an untapped
market. You can start your exploration of DFSR backlog WMI scripting by
checking out
http://msdn.microsoft.com/library/en-us/stgmgmt/fs/dfsr_wmi_classes.asp
and
http://msdn.microsoft.com/library/en-us/stgmgmt/fs/getoutboundbacklogfileidrecords_dfsrreplicatedfolderinfo.asp .
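
As a very rough starting point - and this is only a sketch against the
DfsrReplicatedFolderInfo class documented at those links, not a finished
tool - you can at least list the replicated folders, their GUIDs and their
state straight from a command prompt:

wmic /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo get replicatedfoldername,replicatedfolderguid,state

Getting an actual per-file backlog list means calling GetVersionVector on
the downstream member and feeding the result to GetOutboundBacklogFileIdRecords
on the upstream member (per the class docs above), which is easier to drive
from a script than directly from wmic.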
2) Again, it would be tracked in the debug logs. There's no separation in
the health diagnostic report though.
3) Absolutely. You can use WMIC to fence a folder/file(s) as authoritative
in order to push them back out. Every file and folder in the DFSR database
has a Fence value as part of its ID record. Fence values are set using the
Fence method of the DfsrReplicatedFolderInfo class in the
\root\MicrosoftDfs WMI namespace.
wmic /namespace:\\root\microsoftdfs path
  dfsrreplicatedfolderinfo.ReplicatedFolderGuid='3b38ddc2-ffbf-428c-9853-71d2d2d65351'
  call fence mode=2 isrecursive=true filepath='c:\sales'
(that's all one command, just wrapped here to fit)

where the replicated folder GUID can be obtained with something like -
dfsradmin rf list /rgname:salesrg /attr:rfguid

and where possible fence types are:

1 - Initial Sync - Initial fence value for non-primary member
2 - Initial Primary - Initial fence value for primary member
3 - Default - Default fencing value
4 - Fence - Fence with current time stamp

http://msdn.microsoft.com/library/en-us/stgmgmt/fs/dfsrreplicatedfolderinfo.asp

Sorta complicated, but doable.
--
Ned Pyle
Microsoft Enterprise Platforms Support

All postings on this newsgroup are provided "AS IS" with no warranties, and
confer no rights.


Dan Boldo [MSFT]
2006-07-20 16:08:15 UTC
1. You can see if there is a backlog with the health report, but it doesn't
list out which files are still pending. If there are sharing violations, or
if you try to replicate files encrypted with EFS (don't do that :-)), then
the health report will list those as warnings. An easy way to see which
files are out of sync is to do a dir /s on both sides and then use windiff
to see the differences.
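
For example (D:\Data below is just a placeholder for wherever the replicated
folder lives on each server):

rem on the hub server:
dir /s /b D:\Data > hub.txt
rem on the branch server:
dir /s /b D:\Data > branch.txt
rem then compare the two listings (copy them to one machine first):
windiff hub.txt branch.txt

Anything that shows up on only one side of the comparison hasn't replicated
yet.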

2. No. We look at all savings, which include RDC, RDC Similarity, and basic
compression.

3. There is a way to do this by "fencing" files, but I need to check whether
we exposed it with good, actionable documentation or whether it is still being
worked on for the technical reference guide. As for restoring from the
conflict and deleted items folder, you don't have to. The idea was to have the
files somewhere on the volume so RDC can kick in and avoid transferring the
data over the WAN.

Thank you,

Dan Boldo
Microsoft Branch Office PM
This posting is provided "AS IS" with no warranties, and confers no rights.
Chris
2006-07-24 17:01:02 UTC
Thank you for replying.

My initial tests may have been wrong. As I do more controlled testing, I'm
noticing that performing a restore from NT Backup of files on the hub server
always seems to mark those restored files as authoritative, and they replicate
out to the branch office server, even if the file on the branch office server
is newer or had previously been deleted. This is the behavior I expected.

Is this correct? I notice in the event logs that the DFS-R service is stopped
and started when doing a restore, so is there any documentation detailing how
DFS-R handles restores from NT Backup (or comparable backup programs)?

Thanks,
Chris
Brian Collins [MSFT]
2006-07-28 23:03:32 UTC
Hi Chris,

The replication activity you're describing is not what we would expect to
see following an ntbackup-based restore. Here is what you should be seeing
(again, using ntbackup, not a third-party app):

1) If you restore an entire volume that contains one or more replicated
folders, such that the checkmark for the volume is blue (indicating all child
items are selected), ntbackup should stop and restart DFSR, and all replicated
folders on the volume should undergo a non-authoritative restore. DFSR will
log event 1110 fairly quickly after restart, indicating it processed the list
of restored volumes, followed by the non-authoritative recovery sync (and the
posting of the relevant messages in the event log). If the restored version of
a file conflicts with the existing version on the branch office server, the
restored version should be overwritten; likewise, if a restored file does not
exist on the branch office server, it should be deleted.

2) If you don't restore an entire volume, but instead simply restore the
replicated folder or some files within it (such that the checkmark for the
volume is gray, not blue), ntbackup will not stop/start DFSR nor mark the
volume as restored. As a result, DFSR will not undergo a non-authoritative
restore. Instead DFSR will treat the restored files as updates, meaning they
will replicate out to the branch server. In cases of conflict, the restored
versions should win; files that don't exist on the branch office server will
not be deleted, but should also replicate over.

This is the intended behavior and it is the observed behavior in our test
environment. If this is not actually occurring, we would be interested in
finding out some more information so we can determine why this is the case.
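
One quick way to tell which of the two paths you actually hit is to check for
the 1110 event right after the restore. DFSR writes its events to the "DFS
Replication" log, so something along these lines from a command prompt should
surface it:

wmic ntevent where "Logfile='DFS Replication' and EventCode=1110" get TimeGenerated

If event 1110 shows up with a timestamp matching the restore, DFSR went down
the volume-restore / non-authoritative path (scenario 1 above); if it doesn't,
the restore was treated as ordinary file updates (scenario 2).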

The rules for third-party applications are a bit different because they
involve our VSS writer. By default, all writer-based restores are
non-authoritative, though we expose the ability to mark them as
authoritative. It is up to the creator of the application to take advantage
of this functionality (and I would expect most commercial apps do).

Thanks,
Brian Collins [MSFT]
--
This posting is provided "AS IS" with no warranties, and confers no rights.