Flawed analysis
Ammaross Danan is correct. But I would go further.
This single 10G Ethernet port won't replace sixteen 10G NICs. It'll replace sixteen 1G NICs. If a server has been upgraded to a 10G NIC in the first place, chances are it *needs* that kind of bandwidth. So this isn't nearly as painful to the NIC/switch vendors as the article suggests.
Putting DRAM on the far side of a PCIe link is totally boneheaded. CPU vendors are now putting memory interfaces on-die whenever performance is required. Even the new Atom has an on-socket DRAM interface, though this is for power savings (eliminating the power-hungry 945 northbridge) rather than performance. DRAM should be considered effectively part of the CPU in system design.
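To put rough numbers on why that's a problem (these are order-of-magnitude assumptions, not measurements of this box): a local DDR access is on the order of ~100 ns, while a PCIe read round trip is typically closer to a microsecond, so every cache miss served by remote DRAM costs something like 10x a local one. A quick sketch:

```python
# Back-of-the-envelope: cache miss served by local DRAM vs. DRAM
# on the far side of a PCIe link.
# Both latency figures are rough order-of-magnitude assumptions.

LOCAL_DRAM_NS = 100        # assumed load-to-use latency for local DDR
PCIE_ROUND_TRIP_NS = 1000  # assumed PCIe read round trip, ~1 us

slowdown = PCIE_ROUND_TRIP_NS / LOCAL_DRAM_NS
print(f"Remote DRAM miss penalty: ~{slowdown:.0f}x a local miss")
```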
The only part of this that makes any sense is the shared disk storage. I wonder how wide the PCIe link to each server is? I hope it's at least x4, since only then will it compete effectively with 10G Ethernet and iSCSI. This assumes there are enough disks and enough bandwidth demand to use that kind of performance, and that the application can make effective use of a shared storage pool. And that the storage management of this new box is up to the standards set by existing storage vendors - storage reliability and disaster recovery are critical.
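The x4 threshold is just arithmetic, assuming PCIe 2.0 signaling (5 GT/s per lane with 8b/10b encoding, so 4 Gbit/s usable per lane) and ignoring protocol overhead on both sides:

```python
# Rough usable-bandwidth comparison: PCIe 2.0 x4 vs. 10G Ethernet.
# Assumes PCIe 2.0 lanes (5 GT/s raw, 8b/10b line coding) and ignores
# transaction-layer and TCP/iSCSI overhead on the respective sides.

GT_PER_LANE = 5.0            # PCIe 2.0 raw signaling rate, GT/s
ENCODING_EFFICIENCY = 8 / 10 # 8b/10b line coding
LANES = 4

pcie_gbit = GT_PER_LANE * ENCODING_EFFICIENCY * LANES  # usable Gbit/s
ethernet_gbit = 10.0                                   # 10GbE line rate

print(f"PCIe 2.0 x{LANES}: ~{pcie_gbit:.0f} Gbit/s usable")
print(f"10G Ethernet:  ~{ethernet_gbit:.0f} Gbit/s line rate")
```

At x4 you get roughly 16 Gbit/s usable, comfortably ahead of a 10GbE/iSCSI path; anything narrower and the advantage evaporates.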
Can it be retrofitted to existing servers? If not, I would dismiss it out of hand unless I were building something from scratch, and even then I would want a thorough demonstration, including disaster scenarios.