Well, that was pretty weak, but at least you started with your prejudicial assumption: "Well it's a file server for starters ...".
So let's try to have a proper go.
NetApp was designed from day one to be a storage device - go and look at all the components that are storage oriented versus those that are server oriented - and I will remind you that originally NAS meant Network Attached STORAGE, not server. So what was required -
RAID protected disks
battery backed up cache
simple controller code without the ability to run external applications
management that covered only allocating space and access, not use of the data itself.
The big design difference was that NetApp chose to relocate the metadata responsibility of the file system from the servers to the storage. And no, it was not designed to be low end - even today it outperforms server-based file systems by a big margin. Scale-out clusters are, however, another area, and we were contrasting EMC, which does not do that either.
Secondly, the systems have enjoyed 18 years of development, and Dave Hitz and others are on record about how well the basic design of the system lent itself to further development, often to their delight and surprise. There are no penalties for serendipity.
No, it is not the best at everything it does, and there are a multitude of edge cases where a particular product is better in some respect. But there is no other single product which covers such a large section of the market's requirements with the same level of performance, protection and reliability, let alone versatility.
Are there weaknesses? Of course - but funnily enough some of those you mention are exactly that - and they are not relevant in all cases, or are easily managed.
"it was originally designed to be a low end file server, which I will accept it is very good at" - as you must, since the highest-demand users, from WETA/Industrial Light and Magic through Yahoo, choose them first.
"WAFL is highly optimised for writes" - correct, so it is weaker on sequential read (but only if the file has been repeatedly modified and fragmented), excellent on random read (wide stripes plus a greater time allocation to reading), standard on sequential write and excellent on random write.
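For anyone wondering why a write-optimised layout trades away sequential read in the first place, here is a toy sketch - my own illustration, not NetApp's actual WAFL allocator - of a write-anywhere scheme. Every overwrite goes to a fresh physical block instead of in place, so a once-contiguous file gradually fragments and a sequential read has to cross more and more physical extents:

```python
import random

# Toy model (NOT the real WAFL allocator): a write-anywhere layout never
# overwrites a block in place; each logical overwrite is redirected to
# the next free physical location instead.

FILE_BLOCKS = 1000          # logical blocks in the file
next_free = FILE_BLOCKS     # physical blocks 0..999 hold the initial layout

# Initially the file is contiguous: logical block i -> physical block i.
layout = list(range(FILE_BLOCKS))

def overwrite(logical_block):
    """Redirect a logical block to a fresh physical location."""
    global next_free
    layout[logical_block] = next_free
    next_free += 1

def contiguous_runs():
    """Count the contiguous physical extents a sequential read crosses."""
    runs = 1
    for prev, cur in zip(layout, layout[1:]):
        if cur != prev + 1:
            runs += 1
    return runs

random.seed(1)
print("extents before overwrites:", contiguous_runs())   # 1: fully contiguous
for _ in range(300):
    overwrite(random.randrange(FILE_BLOCKS))
print("extents after 300 random overwrites:", contiguous_runs())
```

The random-read side of the trade is visible here too: the scattered layout costs a sequential reader extra seeks, but a random reader was never going to get contiguity anyway.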
"But it genuinely isn't so well suited to read heavy workloads. It does slow down as it fills up" - that depends on what you are referring to - the sequential read case is agreed above - the point being that it starts at a faster level than the competition - see http://blogs.netapp.com/shadeofblue/2008/10/finding-a-pair.html and really look at the graph.
I agree the clustering is not that clever - and for a SAN implementation it is rather clunky.
"The dedupe is free for a reason" is simply dismissive, without any credibility - granted, it works wonders for some applications and does little for others, but getting it at no extra cost is a bad thing?
So what has emerged each time is repetitive banging on about how bad they are and how they must be avoided because of some particular behaviour or feature which is not as perfect as you would like it to be - so would I - but you do not offer a better single solution.
So this is a call-out - put something forward that you will defend as better all round.