The next article deals with the limitations of DFSR, NTFS, etc. Suffice it to say: no, I don’t have it all on one partition. It’s a single RAID array (due to cost limitations), but not one file system. DFSR won’t handle 60M files in a single replication group; not even close.
For many people, simply replicating everything from a single volume would be fine; they don’t have 60M files and 10TB of data. For myself, I have a folder under E:\ called “Shares.” On smaller sites, where I only have 4M or fewer files to worry about, this is a single volume and a single replication group. Those volumes are usually under about 2.5TB and work just fine.
On the main site, where there are 60M files and around 10TB to replicate, each share gets a replication group of its very own, and each is actually a separate partition mounted into the “Shares” folder. A little overly complex, perhaps… but easy enough to administer as a single entity. (The backup crawlers, for example, simply see the whole lot as descendants of E:\.)
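For anyone wanting to reproduce that layout, the per-share setup looks roughly like the sketch below. This assumes the DFSR and Storage PowerShell modules from newer Windows Server releases; the disk number, share name, and server names are placeholders for illustration, not my actual config.

    # Mount the share’s dedicated partition into the E:\Shares tree, so the
    # backup crawlers still see one file system rooted at E:\.
    Get-Partition -DiskNumber 5 -PartitionNumber 2 |
        Add-PartitionAccessPath -AccessPath "E:\Shares\Engineering"

    # Give that one share a replication group of its own.
    New-DfsReplicationGroup -GroupName "RG-Engineering"
    New-DfsReplicatedFolder -GroupName "RG-Engineering" -FolderName "Engineering"
    Add-DfsrMember -GroupName "RG-Engineering" -ComputerName "FS01","FS02"
    Add-DfsrConnection -GroupName "RG-Engineering" -SourceComputerName "FS01" -DestinationComputerName "FS02"

    # Point each member at the mounted path; FS01 seeds the initial sync.
    Set-DfsrMembership -GroupName "RG-Engineering" -FolderName "Engineering" -ComputerName "FS01" -ContentPath "E:\Shares\Engineering" -PrimaryMember $true
    Set-DfsrMembership -GroupName "RG-Engineering" -FolderName "Engineering" -ComputerName "FS02" -ContentPath "E:\Shares\Engineering"

Repeat per share and you get per-share replication groups instead of a 60M-file monolith, while the whole lot still administers as one tree.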
As to the Jet database, it’s not the only issue. Remember that the DB behaves like an absolute turd if it can’t stuff itself entirely into RAM; additionally, Bad Things Can Happen if DFSR is in the middle of replication, the Jet DB *has* managed to fit itself entirely into RAM, and the power goes out. You are correct as regards the file locking issue: it’s a problem.
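If you want to know whether that dirty-shutdown scenario has actually bitten a server, newer DFSR builds log it. A quick sketch; the event IDs below are the DFSR dirty-shutdown events as documented by Microsoft, to the best of my knowledge:

    # Look for Jet database dirty-shutdown events in the DFSR event log:
    #   2212 = dirty shutdown detected, auto-recovery starting
    #   2213 = replication paused, waiting for a manual resume
    #   2214 = database successfully recovered
    Get-WinEvent -LogName "DFS Replication" -MaxEvents 500 |
        Where-Object { $_.Id -in 2212, 2213, 2214 } |
        Format-Table TimeCreated, Id -AutoSize

    # On builds that log 2213, replication stays stopped until you resume it,
    # e.g. (the volume GUID is a placeholder):
    # wmic /namespace:\\root\microsoftdfs path dfsrVolumeConfig where volumeGuid="<GUID>" call ResumeReplication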
The above are the reasons I view DFSR as “hobo high availability.” The /proper/ way to do HA for file storage is to use two SANs that replicate at the block level. No question about that. Not everyone has that luxury; however, SANs are rapidly dropping in price. For the moment, buying 10 NASes to cover my 5 logical sites is still nearly an order of magnitude more expensive than my current “hobo high availability” solution.
If it makes you less concerned, my partner in crime is a strictly by-the-book systems administrator. His job is largely to restrain me. I MacGyver a solution to whatever problem is a burning fire on my desk at that particular moment. Where something might take hours, days, or weeks to get operational or repaired “by the book,” my job is to make that happen in minutes. My fellow sysadmin then frown-cannons me and sets about poking holes in all of my theories and experiments.
We poke away at them and eventually arrive at workable, reproducible configurations. They are rarely best practice, because we simply don’t have the resources to set up our network that way. They work, though, and they work reliably, tested through various failures both in the lab and in practice. DFS namespaces, for example, were one of the things that didn’t survive that testing. (See the post further down the thread on that.)
It’s why I write what I write: theory versus practice. In theory, there is one way to do everything, all the time, and it works for everyone. In practice… I find that real-world requirements and resource availability are different almost every single time.