Discussion: Staging file filter
Mike
2007-03-20 08:36:00 UTC
I have an issue with DFS on our file server. We have 1.2 TB of data with some
very large files. Server performance has tanked since DFS started replicating,
and I have tracked the issue to the staging of some very large files.

Mostly these are the PSTs that we keep on the server. The files are accessed
by clients, and I believe DFS sees them change every 25 minutes or so and
stages them. This constant staging is causing all users to complain. I see
that I can filter a file to prevent it from replicating.

Is there a way to prevent the PSTs from staging in the first place?
Ned Pyle [MSFT]
2007-03-20 16:12:25 UTC
You can block replication of PST files with the filter you mentioned, but
there's no control over staging other than increasing the staging quota to
avoid backlogs caused by quota exhaustion (do you see DFSR event log messages
4202 or 4208?).
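
For example, you could add *.pst to the replicated folder's file filter with
DfsrAdmin.exe. This is a minimal sketch assuming the 2003 R2 DfsrAdmin syntax;
"Data" and "UserShare" are placeholders for your own replication group and
replicated folder names:

  REM Exclude PSTs, in addition to the default exclusions, from replication.
  dfsradmin rf set /rgname:Data /rfname:UserShare /filefilter:~*,*.bak,*.tmp,*.pst

  REM Confirm the new filter took effect.
  dfsradmin rf list /rgname:Data /attr:rfname,filefilter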

It is possible to turn off RDC, which will certainly lower CPU utilization (I
need more details on what "tanking" means :-) ). If the servers are all on a
fast LAN (100/1000 Mb) then, depending on the percentage of benefit you are
seeing from RDC, turning it off is sometimes actually faster. But this needs
to be tested, not just deployed into production: with very large files
receiving small changes, it may not be the case.
--
Ned Pyle
Microsoft Enterprise Platform Support
This posting is provided "AS IS" with no warranties, and confers no rights.
Please read http://www.microsoft.com/info/cpyright.htm for more information.
Mike
2007-03-20 20:13:15 UTC
Hello Ned,

I do see those events. The issue is that the volume is 1.2 TB and the share
takes up about 800 GB. What is the recommended staging size for a share of
this size? I increased the quota to 30 GB but am still having the performance
issues.

Also, CPU usage is not the problem. I just see the disk utilization go through
the roof. The CPU is fine, unless Task Manager isn't showing me something.

Mike
Ned Pyle [MSFT]
2007-03-20 23:58:44 UTC
Make the staging quota as big as you can. At a minimum, we recommend that the
9 largest files take up no more than 10% of the staging space quota. If you
can increase staging to, say, 100 GB for a day or two as a test, that might
help us find the sweet spot.

Keep in mind that staging space is not preallocated (i.e. you don't lose
100 GB of available disk by setting the quota to 100 GB); it's just a
threshold the DFSR service uses to decide when to start pruning older files.
The longer you can keep files in staging, the less disk thrashing you will
see. DFSR is designed primarily around relatively static data, so if really
huge files (like PSTs) are being opened, changed, and saved constantly (like
PSTs ;-) ), using DFSR to hold that data may not scale well enough on your
current disk system.
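
To put numbers on that rule of thumb: if your 9 largest files total about
10 GB, staging should be at least 100 GB. A minimal sketch of the quota change
with DfsrAdmin.exe (names are placeholders, and /stagingsize is assumed to
take megabytes):

  REM Raise the staging quota on member FS1 to 100 GB (102400 MB).
  dfsradmin membership set /rgname:Data /rfname:UserShare /memname:FS1 /stagingsize:102400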
--
Ned Pyle
Microsoft Enterprise Platform Support
This posting is provided "AS IS" with no warranties, and confers no rights.
Please read http://www.microsoft.com/info/cpyright.htm for more information.
Mike
2007-03-21 04:08:16 UTC
Do you know if DPM would work better for us, or would I run into the same
issue? At the moment I have removed the entire user share from DFS. I can't
remove the PSTs without setting aside a project to do it.
Ned Pyle [MSFT]
2007-03-21 14:35:21 UTC
DPM might be it, if you are primarily using DFSR as a backup centralization
product rather than a high-availability product. I'm not an expert on it, but
we have people who are. :) You could also just use something like ROBOCOPY
/MOT /MON to keep that particular thrashy folder in sync with a central box.
Robocopy is great for a two-server setup.
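
A minimal sketch of that robocopy approach (server names, paths, and intervals
are placeholders to adjust):

  REM Mirror PSTs to a central box; rerun once at least 1 change is seen
  REM (/MON:1) and at least 60 minutes have passed (/MOT:60).
  robocopy \\FS1\Users \\CENTRAL\PSTBackup *.pst /E /COPY:DAT /R:2 /W:5 /MON:1 /MOT:60 /LOG+:C:\Logs\pst-sync.log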
--
Ned Pyle
Microsoft Enterprise Platform Support
This posting is provided "AS IS" with no warranties, and confers no rights.
Please read http://www.microsoft.com/info/cpyright.htm for more information.