Windows Server 2012 R2 is not Windows NT 3.1
And the Linux 4.x kernel is not the Linux 1.x kernel.
And Solaris 2.x was not SunOS 4.x.
And Mac OS X is not Mac OS 9.
For most established software, be it server operating systems or storage array software, there is significant turnover of the code. Entire sections and modules are regularly rewritten.
ONTAP was completely rearchitected to incorporate Spinnaker OS features. Constructs like the "D-Blade" and "N-Blade" in Data ONTAP C-Mode come from the Spinnaker OS, not from legacy Data ONTAP.
WAFL? If Data ONTAP C-Mode WAFL were unchanged from Data ONTAP 7-Mode WAFL, you could simply mount a 7G aggregate in C-Mode and serve data from it, but you can't. Something is different. Something changed. Something was rewritten.
What about SnapMirror in C-Mode vs. SnapMirror in 7-Mode? If it is the same thing, if it has not been rewritten, why is C-Mode so different? And if NetApp is just repackaging the same thing over and over, where did qtree SnapMirror go? And what is this new version of SnapMirror I hear about in ONTAP 8.3?
I have heard from multiple software houses (OS, application, and storage software) that if you go back three major releases of a software platform, there is almost no common code; in other words, every three iterations amounts to an effectively complete rewrite, net-new code. It is also common for certain modules to be rewritten in one release while others are simply maintained, only to be rewritten in the next major release. If the three-release rule holds true for NetApp and C-Mode, then C-Mode 8.3 has been effectively rewritten relative to C-Mode 8.0. That is probably correct. The same probably holds for Windows 2012 and Windows 2000.
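The three-release rule is really just geometric decay. Here is a toy model that makes it concrete; the per-release rewrite fraction is an assumption for illustration, not a measured figure from any vendor:

```python
# Toy model of code turnover across major releases.
# ASSUMPTION: each major release rewrites a fixed fraction of the
# code base; the 0.55 figure is illustrative, not a measured value.

def surviving_fraction(rewrite_per_release: float, releases: int) -> float:
    """Fraction of the original code still present after N major releases."""
    return (1.0 - rewrite_per_release) ** releases

for n in range(1, 4):
    print(f"after {n} release(s): "
          f"{surviving_fraction(0.55, n):.1%} of the original code remains")

# after 1 release(s): 45.0% of the original code remains
# after 2 release(s): 20.2% of the original code remains
# after 3 release(s): 9.1% of the original code remains
```

At anything above roughly a 50% rewrite rate per release, three releases leave only a sliver of common code, which lines up with what those software houses described.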
The reason legacy systems are legacy systems is that they require legacy compatibility. They make conscious decisions not to incorporate certain things that would break compatibility. It is not that their developers are not smart enough to come up with new ideas, or that their company's engineering leadership is uninterested in new ideas. How many of us have had to build a custom Windows 2003 ISO image just to install on a late-model Intel server? Yeah, compatibility matters.
Second movers have the second-mover advantage. A startup, with no legacy to stay compatible with, can make clean-sheet choices at the start, informed by the errors of others. But the luxury of no legacy is ephemeral.
As for WAFL: it is effectively a log-structured file system (LSFS). It came into being in the early 1990s, at the same time the major academic research into LSFSs was going on. NetApp then had the second-mover advantage over Auspex, which was limited in part by its Berkeley UNIX file system.
Today, all of the major AFAs and hybrid arrays use LSFSs: Nimble, Tintri, Pure, XtremIO, Kaminario, etc. An LSFS is the only way to effectively manage wear on NAND flash, to the point that most FTLs on SSDs are themselves log-structured. Are these later LSFS implementations slightly better? Sure. Do they provide significant differentiation and customer value? Doubtful.
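If you have not looked at an LSFS since the Rosenblum and Ousterhout days, the core idea fits in a few lines. This is a minimal sketch, not any vendor's design: nothing is ever overwritten in place; every write appends to the tail of a log, and a map redirects each logical block to its newest location. That append-only pattern is exactly what NAND flash wants, which is why the design transplanted so naturally from disk to SSD.

```python
# Minimal log-structured "file system" sketch: no overwrites in place.
# Every write appends to the log; a map tracks the latest location of
# each logical block. Purely illustrative: real LSFSs add segments,
# cleaning (garbage collection), and crash-consistent checkpoints.

class TinyLSFS:
    def __init__(self):
        self.log = []          # append-only storage
        self.block_map = {}    # logical block address -> index in log

    def write(self, lba: int, data: bytes) -> None:
        self.block_map[lba] = len(self.log)  # redirect to new location
        self.log.append(data)                # old copy becomes garbage

    def read(self, lba: int) -> bytes:
        return self.log[self.block_map[lba]]

fs = TinyLSFS()
fs.write(7, b"v1")
fs.write(7, b"v2")          # an overwrite is a new append
assert fs.read(7) == b"v2"  # the map points at the newest copy
assert fs.log[0] == b"v1"   # the stale copy awaits a cleaner
```

Because stale copies pile up, every LSFS needs a segment cleaner, and the quality of the cleaner under sustained load, not the log itself, is where the implementations actually differ.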
For these AFAs, it really comes down to single-container vs. multiple-container architectures, global vs. non-global deduplication, and supported protocols. And NetApp and HPE figured out that they both have pretty good hybrid storage platforms that can be competitive with net-new startup AFAs. And some of those net-new designs are already hitting architectural walls; breaking through those walls may require significant change, and disruption to their own legacy. XtremIO already encountered this with its 3.0 software release.
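To make the global vs. non-global deduplication distinction concrete, here is a toy sketch; the class and names are mine, not any vendor's, and real arrays add strong-hash verification, reference counting, and fingerprint stores sized for DRAM or SSD. Global dedup keeps one content-hash index across the whole array, so identical blocks written to different volumes collapse to one physical copy; per-volume dedup keeps an index per container, so the same block landing in two volumes is stored twice.

```python
# Toy contrast of global vs. per-volume deduplication.
# Illustrative only; not any shipping array's data path.

import hashlib

class DedupStore:
    def __init__(self):
        self.index = {}    # content hash -> physical block id
        self.blocks = []   # physical block storage

    def put(self, data: bytes) -> int:
        key = hashlib.sha256(data).hexdigest()
        if key not in self.index:           # new content: store it
            self.index[key] = len(self.blocks)
            self.blocks.append(data)
        return self.index[key]              # duplicate: share the block

# Global: one store spans the whole array.
array = DedupStore()
a = array.put(b"same block")   # written via "volume A"
b = array.put(b"same block")   # written via "volume B"
assert a == b and len(array.blocks) == 1    # one physical copy

# Non-global: one store per volume (container).
vol_a, vol_b = DedupStore(), DedupStore()
vol_a.put(b"same block")
vol_b.put(b"same block")
assert len(vol_a.blocks) + len(vol_b.blocks) == 2  # stored twice
```

The single- vs. multiple-container question is the same trade-off one level up: one index scales dedup ratios, many indexes scale failure domains.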
There is very little radical improvement left in storage. Every vendor offers sub-millisecond response times. Every vendor offers some level of storage efficiency. Every vendor has good NAND wear leveling. The differentiation that existed two years ago has been commoditized.