Discussion:
DFS - Dealing with file conflicts
TimeTraveller
2007-08-11 10:27:31 UTC
Hi All,

We have recently setup a DFS topology between 2 sites (soon to be 3).

Everything is working as expected and replication of data sets between the
sites is good.

The problem, from the customer's perspective, is that quite often the Head
Office (SITE1) and the Remote Site (SITE2) are working on the same file,
albeit each against their own replica of the data set (the nearest server).

What then happens, of course, is that updates made to these files are not
always written back as expected: edits made on one replica are detected as
conflicts and moved to the Conflict and Deleted folder.

As I understand DFS, this is by design: when two users at two sites work on
the same file against different servers' replicas, DFS makes the last saved
file the winner and stores the earlier saved file in the Conflict and
Deleted folder.
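
For reference, a rough sketch of how the displaced copies could be listed for
review is below. It assumes the default DfsrPrivate\ConflictAndDeletedManifest.xml
location and element names (Path, NewName, Time, Type) seen on typical Windows
Server 2003 R2 / 2008 replicas; both are assumptions and may need adjusting for
your servers.

import re
import xml.etree.ElementTree as ET
from pathlib import Path

# Assumption: the root of the replicated folder on the local member.
REPLICATED_FOLDER = Path(r"D:\Data\Shared")
MANIFEST = REPLICATED_FOLDER / "DfsrPrivate" / "ConflictAndDeletedManifest.xml"

def load_manifest(path):
    """Parse the manifest, tolerating files that are bare XML fragments."""
    raw = path.read_bytes()
    try:
        return ET.fromstring(raw)
    except ET.ParseError:
        # Some manifests have no single root element; decode, drop any XML
        # declaration and wrap the fragment so ElementTree will accept it.
        enc = "utf-16" if raw[:2] in (b"\xff\xfe", b"\xfe\xff") else "utf-8"
        text = re.sub(r"<\?xml[^>]*\?>", "", raw.decode(enc, errors="replace"))
        return ET.fromstring("<root>" + text + "</root>")

def report(root):
    for res in root.iter("Resource"):
        # Element names are assumptions; print whatever is present.
        original = res.findtext("Path", default="?")
        renamed = res.findtext("NewName", default="?")
        when = res.findtext("Time", default="?")
        why = res.findtext("Type", default="?")
        print(f"{when}  {why}  {original}  ->  ConflictAndDeleted\\{renamed}")

if __name__ == "__main__":
    report(load_manifest(MANIFEST))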

Obviously the customer is not really able to manage the conflicts themselves,
and it will be hard for us, as their external IT, to decide on their behalf
which files should be kept and which should be discarded.

Are there any fairly simple user-level tools that would enable them to
monitor this and manage the versions of the files in a better way? Or is
there perhaps something I am missing completely here, and a better way
exists to deal with this type of scenario?

They are a very good client of ours, and we would hate for them, after
spending many thousands of pounds on this system, to start effectively
losing valuable data because of the way DFS has been deployed.

Your input would be much appreciated at this stage along with any
recommendations as to the best ways forward for them.

Please let me know if you need any further explanation of the problem. The
main reason for choosing DFS was so that people would always have access to
the data even if the internet links between the sites were down; this ruled
out other options such as Terminal Services.

Regards

TT
Anthony
2007-08-11 11:21:49 UTC
DFS-R is great for lots of things, but it can't solve the problem that the
same data can't be in two places at once.
If people at two sites really are working on the same data at the same time,
you need to have the data at one site. You can still use DFS to provide a
single namespace, but keep the data at the site where it primarily belongs.
Then you need to use Terminal Services if the access is too slow.
A file versioning procedure would help if people really are doing multiple
authoring.
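As a sketch of what a lightweight versioning step could look like (the
save_versioned helper and the .versions folder name are made up for
illustration, not an existing tool):

import shutil
import time
from pathlib import Path

def save_versioned(path):
    """Copy an existing file into a .versions subfolder with a timestamp suffix."""
    path = Path(path)
    if not path.exists():
        return None
    history = path.parent / ".versions"
    history.mkdir(exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    backup = history / f"{path.stem}-{stamp}{path.suffix}"
    shutil.copy2(path, backup)  # copy2 keeps timestamps, which helps when comparing copies
    return backup

# e.g. call save_versioned(r"\\domain\dfsroot\projects\budget.xls")
# just before a document is written back to the share.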
There are a lot of other solutions, e.g. SharePoint and WAFS, but you said
you need to get the DFS system to work.
Hope that helps,
Anthony -
http://www.airdesk.co.uk
TimeTraveller
2007-08-11 12:26:36 UTC
Hello Anthony,

Thanks for your input here. I have pretty much arrived at the same
conclusions as you, in that the two data sets being edited independently are
the root of the issue. Another problem that adds to this is that people
often have a document open and only close it later in the day, which causes
changes others have made earlier to be discarded in favour of the last saved
copy.

The main reasons for this setup were:

1. Data should be available at each site in the event that the internet
links were down, so each site could continue working.
2. Data access over the VPN link was too slow.

My thoughts at the time were:
1. Terminal Services would only be useful if the links were up.
2. Accessing data over the link was too slow and, again, not highly
available if a link went down.

This led me to DFS as the best option; however, the way it is being used
also adds to the problem.

Look forward to any other observations and ways forward from you good guys.

Thanks

TT
Anthony
2007-08-11 15:52:18 UTC
It sounds like there is a bit of a procedural problem too. After all, even
if the files were in the same place, if multiple people open and edit them
and leave them open, there is no way to merge the changes later. Depending
on the files, there may be a procedural or a system solution to that.
A SharePoint document library has versioning features and check-out/check-in.
Maybe some sort of document management system is needed.
WAN links going down is really not very common. Plenty of people run
Terminal Services to their finance systems over a VPN, and the accounts
staff can't do any work if the line goes down, but that is rare. If the
business considers it sufficiently important, they can pay for additional
redundant links. If, as a business, you want people at different sites to
work on the same data at the same time, you have to accept one of the
available solutions.
Anthony -
http://www.airdesk.co.uk
unknown
2007-11-14 12:03:00 UTC
Hi there,

Our problem is quite similar.

We're wondering if there's a way to lock a file while it's in use, i.e. make
it read-only for other users?

If a file is in use by one user, the next user to open it (while it is still
open by the first user) will still be able to write to it - he won't be
notified.
Is there a way to avoid this conflict?

A.

Hope
Anthony
2007-11-14 13:02:56 UTC
No, that's just the way it works: DFS-R does not replicate file locks
between members, so a user opening the file on one replica isn't told that
someone at the other site already has it open on theirs. You need to look
for a different solution if you need that. Most people use Terminal Services
when people in different places need to work on the same data.
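If an interim safeguard is wanted on the existing shares, one idea sometimes
used is an advisory lock-file convention, sketched roughly below. The
claim/release helpers and the .lock naming are invented for illustration;
this is not a DFS feature, and the lock file itself replicates with the same
delay as any other file, so it narrows the window for conflicts rather than
closing it.

import getpass
import os
from pathlib import Path

def claim(path):
    """Try to create <file>.lock; True if we took the lock, False if someone else holds it."""
    lock = Path(str(path) + ".lock")
    try:
        with open(lock, "x") as handle:  # mode 'x' fails if the lock file already exists
            handle.write(f"{getpass.getuser()}@{os.environ.get('COMPUTERNAME', 'unknown')}")
        return True
    except FileExistsError:
        print(f"{path} appears to be in use by {lock.read_text(errors='replace')}")
        return False

def release(path):
    Path(str(path) + ".lock").unlink(missing_ok=True)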
Anthony, http://www.airdesk.co.uk