
AMD is already ahead of Intel on cost-effective/efficient servers
Xeon is not what most enterprises want, and that's why it's dying.
The high-performance networking market just got a whole lot more interesting, with Intel shelling out $125m to acquire the InfiniBand switch and adapter product lines from upstart QLogic. Intel has made no secret that it wants to bolster its Data Center and Connected Systems business by getting network equipment providers to …
Intel can cite many reasons why it would be interested in owning its own Infiniband business. The most important of them is that it can reduce the overall power footprint of compute blades based on Xeon or MIC tech by integrating the Infiniband cores into its own chipsets or even directly into its processors.
Also, as with Intel's Ethernet controllers... which are for the most part insanely far ahead of most other vendors in the cool tech integrated into them (though sadly there are no Windows or Linux drivers for much of it)... Intel sells the chips and cards to anyone who wants them, on any platform.
The only logical reason to purchase this tech is to integrate it into their existing products. By either running it in parallel with QPI or making it a QPI device directly, they can shave precious microseconds off the Infiniband latency and give it a direct DMA route into the CPU-based memory controllers. By making it a parallel function to QPI, effectively adding it directly to the Xeon processor itself, they can bypass most of the bus logic altogether, eliminate the latency of re-encoding packets, and even perform cache-coherent memory operations by interfacing directly with the Xeon ring bus as an equal citizen to a processor core. This would allow ccNUMA-style bus interconnection over distances.
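For context on where that latency actually lives today, here's roughly what the software side of an Infiniband transfer looks like with the standard libibverbs API. The adapter already DMAs straight out of registered host memory, so the microseconds Intel could shave are mostly in the PCIe hop and the protocol processing, not in data copies. This is just a minimal sketch: queue-pair setup and the out-of-band exchange of remote_addr/rkey are omitted, and post_rdma_write is only an illustrative helper name.

    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    /* Post a one-sided RDMA WRITE.  The HCA DMAs the payload out of the
     * pre-registered local buffer and into the peer's memory; the remote
     * CPU never touches the data.  The buffer must have been registered
     * earlier with ibv_reg_mr(), and the peer's remote_addr/rkey learned
     * over some out-of-band channel. */
    static int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                               void *buf, uint32_t len,
                               uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)buf,     /* local, registered buffer */
            .length = len,
            .lkey   = mr->lkey,
        };

        struct ibv_send_wr wr, *bad_wr = NULL;
        memset(&wr, 0, sizeof(wr));
        wr.opcode              = IBV_WR_RDMA_WRITE;   /* one-sided write      */
        wr.sg_list             = &sge;
        wr.num_sge             = 1;
        wr.send_flags          = IBV_SEND_SIGNALED;   /* completion on our CQ */
        wr.wr.rdma.remote_addr = remote_addr;         /* peer's buffer addr   */
        wr.wr.rdma.rkey        = rkey;                /* peer's MR key        */

        return ibv_post_send(qp, &wr, &bad_wr);
    }

Everything between posting that work request and the completion showing up on the CQ is adapter and fabric time, which is exactly the part a QPI (or on-die) attachment would attack.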
There are some serious problems with this plan, though. First of all, Infiniband switches are just too damn expensive because of the cost of the silicon. Intel can fix this by effectively chopping the cost of the fabric silicon. Oracle, Cisco and others charge a king's ransom for their Infiniband switches, and it's because the Infiniband switching fabric chips are extremely overpriced. There is simply no reason Infiniband should cost more than $100 a port on a switch. Desktop switches for developers should be available for a few hundred bucks. Instead, Infiniband runs about $300 a port on the low end of the scale, just for QDR.
It would be absolutely dazzling if Intel released an 8-port switch for $200-$400 for 4x QDR. Of course, Oracle has Infiniband switches based on Mellanox silicon with 13,000 ports... they design them about 2 km from where I'm sitting right now. While I'm sure Intel could design those without a problem... they're most likely better off just selling the silicon to the switch makers and then integrating the bus itself into their chips.
PCIe and Infiniband are two totally different technologies. While companies like Dolphin Interconnect make their living building Infiniband alternatives out of PCIe, it's kind of a square peg in a round hole.
There is much noise to be made by employing PCIe switching fabric ICs to handle a multi-host environment, but it just doesn't work very well with general-purpose operating systems. Also, the way those switches handle memory allocation is really quite messy. They're incredibly fast, but they generally depend on a shared-memory scenario that is 32-bit friendly and usually limits transactions to paged windows of around 1 gigabyte in size (there's a sketch of what that looks like from software a little further down). These systems are, however, substantially faster than Infiniband, and unless Intel integrates the Infiniband tech directly onto QPI or even closer (such as attached to the Xeon ring bus within the CPU), they will remain so.

Infiniband is also a crap solution for peripheral interconnect, as the signalling is far too complex to implement properly in a small peripheral. ASIC developers would shoot themselves before implementing a protocol even heavier than PCIe. The #1 reason cool new PCIe toys haven't been coming around as quickly as their PCI counterparts did is that small shops just can't design things based on it. It's just too complex, and even with Xilinx and Altera doing everything imaginable to resolve that, just routing a PCB for PCIe requires far too much understanding of analog radio theory for the small guy to make it work properly.
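To make that shared-memory point concrete: with the PCIe-fabric approach, the "network" is literally a chunk of somebody else's memory mapped through a BAR window, and software pokes it with plain loads and stores. A minimal Linux userspace sketch, assuming a hypothetical fabric card at 0000:03:00.0 that exposes its window as BAR0 with a 1 GB aperture:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Hypothetical device path: a PCIe fabric card exposing its shared
     * window as BAR0.  The 1 GB size matches the aperture limit above. */
    #define BAR_PATH    "/sys/bus/pci/devices/0000:03:00.0/resource0"
    #define WINDOW_SIZE (1UL << 30)

    int main(void)
    {
        int fd = open(BAR_PATH, O_RDWR | O_SYNC);
        if (fd < 0) { perror("open"); return 1; }

        /* Map the BAR: every load/store below becomes a PCIe transaction
         * routed through the switch fabric into the peer's memory. */
        volatile uint32_t *win = mmap(NULL, WINDOW_SIZE, PROT_READ | PROT_WRITE,
                                      MAP_SHARED, fd, 0);
        if (win == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        win[0] = 0xdeadbeef;   /* posted write into the shared window */
        uint32_t v = win[1];   /* non-posted read: the core stalls until the
                                  completion comes back across the fabric */
        printf("peer word: 0x%08x\n", v);

        munmap((void *)win, WINDOW_SIZE);
        close(fd);
        return 0;
    }

Fast, yes, but it's a fixed aperture of device memory that a general-purpose OS has no clean way to manage, which is exactly the messiness described above.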
Infiniband is substantially worse. Just for a laugh... figure out what the difference in length between the conductors in your $0.50 SATA-2 cable is. I think you'll find that the precision of those cables is mind-boggling, and the fact that they cost so little is a miracle. Infiniband makes those look like toys.
InfiniBand originated from the 1999 merger of two competing designs:
- Future I/O, developed by Compaq, IBM, and Hewlett-Packard
- Next Generation I/O (ngio), developed by Intel, Microsoft, and Sun
When Compaq took over Tandem in 1997, ServerNet was one of the gems it acquired, and that's how Infiniband directly inherited many ServerNet features.
Tandem Computers developed the original ServerNet architecture and protocols for use in its own proprietary computer systems starting in 1992, and released the first ServerNet systems in 1995. (http://tinyurl.com/899d9v5)
Nearly 20 years on, systems based on the ServerNet architecture still ship today, in both MPP and SMP configurations.
http://h20223.www2.hp.com/NonStopComputing/cache/77408-0-0-225-121.html