2007-08-11 10:27:31 UTC
We have recently set up a DFS topology between two sites (soon to be three).
Everything is working as expected and replication of the data sets between the
sites is good.
The problem, from the customer's perspective, is that quite often the Head
Office (SITE1) and the Remote Site (SITE2) are working on the same file, each
from their own replica (the nearest server). What then happens is that updates
made to these files are not always written back correctly: edits made on one
replica are detected as conflicts and moved to the Conflict and Deleted
folder.
As I understand DFS, this is by design: when two users at two sites are
working on the same file against different server replicas, DFS makes the last
saved file the winner and moves the earlier saved file to the Conflict and
Deleted folder.
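As a stop-gap while we look for a proper answer, one idea we have considered is scripting a report of what DFSR has moved aside, since the losing copies are recorded in a manifest in the replicated folder's DfsrPrivate area. The sketch below is only illustrative, assuming a simplified manifest layout with Path, NewName, Time and Type elements; the real file's schema may differ between server versions, and the sample data, GUID placeholder and function name are all made up for the example.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified sample of a DFSR conflict manifest. The real file
# lives under the replicated folder's DfsrPrivate\ConflictAndDeleted area and
# its exact element names may differ; this sample only illustrates the idea.
SAMPLE_MANIFEST = """\
<ConflictAndDeletedManifest>
  <Resource>
    <Path>\\\\.\\D:\\Data\\Budget.xls</Path>
    <NewName>ConflictAndDeleted\\Budget-{GUID}-v42.xls</NewName>
    <Time>GMT 2007:8:10-16:02:11</Time>
    <Type>Conflict</Type>
  </Resource>
  <Resource>
    <Path>\\\\.\\D:\\Data\\Old.doc</Path>
    <NewName>ConflictAndDeleted\\Old-{GUID}-v7.doc</NewName>
    <Time>GMT 2007:8:09-09:15:30</Time>
    <Type>Deleted</Type>
  </Resource>
</ConflictAndDeletedManifest>
"""

def list_conflict_losers(manifest_xml):
    """Return (original_path, renamed_copy, time) for each conflict entry,
    skipping plain deletions, so users can see which edits lost out."""
    root = ET.fromstring(manifest_xml)
    losers = []
    for res in root.iter("Resource"):
        if res.findtext("Type") == "Conflict":
            losers.append((res.findtext("Path"),
                           res.findtext("NewName"),
                           res.findtext("Time")))
    return losers

if __name__ == "__main__":
    for path, renamed, when in list_conflict_losers(SAMPLE_MANIFEST):
        print(f"{when}: {path} -> {renamed}")
```

A daily run of something like this, emailed to the office managers, would at least tell the users which files lost a conflict and where the losing copy was renamed to, so they could merge changes by hand rather than lose them silently.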
The customer is not able to manage these conflicts themselves, and it will be
hard for us, as their external IT provider, to decide on their behalf which
files should be kept and which should be removed.
Are there any fairly simple, user-level tools that would let them monitor this
and manage the versions of the files in a better way? Or is there perhaps
something I am missing completely, and a better way to deal with this type of
scenario?
This is a very good client of ours, and we would hate for them, after spending
many thousands of pounds on this system, to start effectively losing valuable
data because of the way DFS resolves conflicts.
Your input would be much appreciated at this stage, along with any
recommendations on the best way forward for them.
Please let me know if you need any further explanation of the problem. The
main reason for choosing DFS was so that people would always have access to
the data even if the internet links between the sites were down; this ruled
out other options such as Terminal Services.