Windows NFS was so terrible
Maybe it's better now, but I'm not holding my breath. Several years ago I tried to deploy a pair of "appliances" from HPE that ran Windows Storage Server 2012, specifically to serve NFSv3 to Linux clients (and maybe 1% SMB). I chose this because it was supported by HPE, it could leverage our existing back-end SAN for storage, and it was highly available. On top of that, I needed at most maybe 3TB of space, so buying a full-blown enterprise NAS was serious overkill, and nobody made systems that small. SMB-market NAS units (Synology etc.) I didn't deem to have acceptable levels of high availability.
I figured my use case was super simple (file storage, not transactional storage), my data set was very small, my I/O workload was pretty tiny, and it was all fully supported, so how hard could it be for Windows to fill this role?
Anyway, aside from the 9-hour phone support call I had to sit on while HPE support tried to get through their own quickstart guide (I was expecting a ~10 minute self-setup process) due to bugs or whatever (they ended up adding manual registry entries to get the two nodes, which were connected directly via ethernet cable, to see each other), the system had endless problems from a software perspective. 99% of the software was Microsoft; the only HPE stuff was some basic utilities, which for the most part I don't think I even used outside of the initial setup.
I filed so many support issues that at one point HPE got Microsoft to make a custom patch for me for one of them. I never deployed the patch, purely out of fear that I was the only one in the world to have the issue in question (that, and it wasn't the most pressing of all the problems I was having). I should have returned the systems right away, but I was confident I could work through the problems given time. I had no idea what I was in for when I made that decision. I was also in periodic contact with HPE back-end engineering for this product, though their resources were limited since the software was all Microsoft.
The systems never made it past testing: maybe 6 months of usage with many problems, support cases, workarounds, and blah blah. I designed the file system layout similar to our past NAS offerings from Nexenta (VM based) and FreeNAS (also VM based). There were 5 different drive letters: one for production, one for staging, one for nonproduction, one for backups (with dedupe enabled), and one for admin/operations data. The idea is that a problem on one of them doesn't impact the others.
The nail in the coffin for this solution was that at one point the backups volume (which had dedupe enabled) gave an error saying it was out of space (when it had plenty of space according to the drive letter and according to the SAN). When this happened, the entire cluster went down, including the other 4 drive letters! The system tried to fail over, but the same condition existed on the other node. I had expected just that one drive letter/volume to fail; that would have been fine, let the others continue to operate. But that's not what happened: all data went offline. I worked around the issue by expanding the volume on the SAN, which cured it for a while, until it happened again, all volumes down because one got full. WTF.
I tried to figure out a way to configure the cluster so that it would continue to operate the other 4 drive letters while this one was down. I could not figure it out (and didn't want to/couldn't wait hours for support to try to figure it out). So I nuked the cluster and went single node; from that point on, if that drive letter failed (if I recall right, anyway, this was years ago), the other drives remained online. I assume the source of the "out of space" error was some bug or integration issue with thin provisioning on the SAN, but I was never able to pin it down. I do recall getting a similar error from our FreeNAS on the same array: there was plenty of space, but FreeNAS said there was not (the filesystem was 50% full). This issue ONLY appeared on volumes with dedupe enabled (Windows-side dedupe, and FreeNAS-side dedupe). I've never seen that error before or since. It was an old array, so maybe it was the array's fault. I don't mind that particular volume going offline (it only stored backups), but the final straw was that the volume being offline should not have taken the rest of the system down with it (in the case of the Windows cluster).
But I had so many other annoying issues with NFS on Windows. I'm a Linux person, so I didn't have high expectations for awesome NFS from Windows, but again, my use case was really light and trivial (the biggest requirement was high availability).
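For context, "light and trivial" on the client side amounted to little more than a handful of plain NFSv3 mounts in fstab, something like this (the hostname and export paths here are made up for illustration, not the real ones):

    # /etc/fstab on a Linux client: plain NFSv3 mounts against the NFS head
    # (hostname and export paths are hypothetical)
    nfshead1:/production  /mnt/production  nfs  vers=3,hard,timeo=600,retrans=2  0 0
    nfshead1:/backups     /mnt/backups     nfs  vers=3,hard,timeo=600,retrans=2  0 0

Nothing exotic: hard mounts, default-ish timeouts, one mount per volume. That's the workload Windows couldn't handle reliably.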
In the end I migrated off that Windows NFS back to FreeNAS again (which had replication but no HA, so I never applied any OS updates to the system while it was in use; fortunately there were no failures either), before later migrating to Isilon. It makes me feel strange having 8U of Isilon equipment for ~12TB of written data (probably closer to 7TB after Isilon overhead; more data than originally because years have passed and we have since grown). But I was unable (at the time) to find a viable, supportable HA NAS head-unit offering that could leverage shared back-end storage. I was planning on going with NetApp V-Series before I realized the cost of Isilon was a lot less than I expected (and much less than V-Series for me at the time, especially if you consider NetApp licensing for both NFS and SMB).
(When I first started out we used Nexenta's VM-based system, and they officially supported high availability running on VMs. Initially this worked fine; however, in production it was a disaster, as the Nexenta systems went split-brain on several occasions, corrupting data in ZFS (support's only response was "restore from backups"). Things were fine after destroying the cluster and going single node, but of course there was no HA anymore.)