Clarifications
Great article and excellent comments from everyone. Let me shed some more light on some of the points mentioned.
In reference to redundancy (all of you touched on this subject)
In current deployments, a single Ethernet or FC top-of-rack switch is a single point of failure, which is why multiple Ethernet and FC switches are installed to provide redundancy and independent network connectivity paths. Virtensys fully supports these redundant deployment models, as the VIO-4000 switches replace the traditional Ethernet and FC switches. Two or more VIO-4000 switches can be deployed in either active-active or active-passive configurations to provide servers with fully redundant paths and eliminate any single point of failure. The VIO-4000 switches also provide independent network and storage connectivity paths: each adapter within a VIO-4000 switch is dual-ported, and each of the two ports can be placed on an independent network path. Consequently, a single server can be given multiple dual-port virtual I/O adapters that can be configured for greater Ethernet or Fibre Channel resiliency.
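To make the two policies concrete, here is a minimal Python sketch of how a server might treat its two paths (one virtual adapter port presented through each VIO-4000 switch). It is purely illustrative: the class and function names are hypothetical, and this is not Virtensys code.

```python
# Conceptual sketch only (not Virtensys software); all names are hypothetical.
from dataclasses import dataclass

@dataclass
class VirtualAdapterPort:
    switch: str          # which VIO-4000 switch presents this port
    healthy: bool = True

def select_paths(ports, mode):
    """Return the ports that carry traffic under the given redundancy policy."""
    up = [p for p in ports if p.healthy]
    if not up:
        raise RuntimeError("no healthy I/O path: both switches unreachable")
    if mode == "active-active":
        return up            # traffic is spread across every healthy path
    return up[:1]            # active-passive: one path carries traffic,
                             # the other stays as a hot standby

server_ports = [VirtualAdapterPort("VIO-4000-A"), VirtualAdapterPort("VIO-4000-B")]
print(select_paths(server_ports, "active-active"))    # both paths in use
server_ports[0].healthy = False                       # switch A fails
print(select_paths(server_ports, "active-passive"))   # traffic fails over to B
```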
In reference to Gordon’s question regarding latency and overhead
The VIO-4000 switches perform the virtualization of the I/O adapters in hardware, using switching and virtualization silicon designed by Virtensys and architected to support 16 servers simultaneously accessing the I/O adapters. The switching silicon is non-blocking and supports wire-speed transfers, with more than 1.5x the bandwidth required to support the 16 servers. The latency is also extremely low, lower than that of an Ethernet or InfiniBand switch. Doing the virtualization in hardware also enables the VIO-4000 switches to sustain the line rate of the I/O adapters with negligible overhead.
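As a back-of-the-envelope check on that headroom claim, here is the arithmetic using the 20 Gbps per-server PCIe figure quoted further down in this post; the exact switching bandwidth of the silicon is not stated here, so only the lower bound is shown.

```python
# Rough check of the ">1.5x" headroom claim against 16 servers, each with a
# 20 Gbps PCIe link to the switch (figure quoted later in this post).
servers = 16
per_server_gbps = 20
aggregate_demand = servers * per_server_gbps       # 320 Gbps
fabric_min = 1.5 * aggregate_demand                # lower bound implied by ">1.5x"

print(f"Aggregate server demand: {aggregate_demand} Gbps")
print(f"Implied switching bandwidth: > {fabric_min:.0f} Gbps")   # > 480 Gbps
```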
In reference to Ammaross’s and Jonathan’s point regarding the external memory
As you mentioned, the external memory is initially intended to be used as extra/L3 cache memory for the servers’ CPUs. The memory density supported can be quite large.
In reference to Jonathan’s points
Even when servers are directly populated with 10G NICs, the bandwidth to the corporate network is limited by the top-of-rack Ethernet switch’s uplink capability, which is usually 2 or 4 10GE links. That results in an uplink bandwidth of 20-40 Gbps divided among all the servers, so servers don’t really get 10G of bandwidth to the network. The Virtensys VIO-4000 switches populated with two dual-port 10GE adapters will provide the same uplink bandwidth at a fraction of the cost. The switches will also provide 20 Gbps PCIe links to each server (double the bandwidth that a 10GE link provides).
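Here is the per-server arithmetic behind that comparison; the rack size of 16 servers is an assumption that matches the VIO-4000’s stated server count, and a different rack density scales the per-server share proportionally.

```python
# Per-server share of a shared top-of-rack uplink vs. a dedicated PCIe link.
# Assumes 16 servers per rack (the VIO-4000's stated server count).
servers = 16
pcie_link_gbps = 20

for uplink_gbps in (2 * 10, 4 * 10):               # 2 or 4 x 10GE uplinks
    share = uplink_gbps / servers
    print(f"{uplink_gbps} Gbps uplink / {servers} servers = "
          f"{share:.2f} Gbps per server vs. {pcie_link_gbps} Gbps dedicated PCIe")
# 1.25 Gbps (2 uplinks) or 2.5 Gbps (4 uplinks) per server
```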
The PCIe links between each server and the VIO-4000 switches are x4 lanes and support both Gen 1 (2.5 Gbps/lane) and Gen 2 (5 Gbps/lane) PCIe speeds, providing the servers with up to 20 Gbps of bandwidth to the I/O virtualization switch.
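For completeness, the raw lane math, with the usual PCIe Gen 1/Gen 2 8b/10b encoding overhead noted; the encoding overhead is a general PCIe property, not something specific to the VIO-4000.

```python
# Raw rate of an x4 PCIe link at Gen 1 and Gen 2 signalling speeds.
# Gen 1/Gen 2 use 8b/10b encoding, so usable bandwidth is roughly 80% of raw.
lanes = 4
for gen, rate_gbps in (("Gen 1", 2.5), ("Gen 2", 5.0)):
    raw = lanes * rate_gbps
    usable = raw * 8 / 10
    print(f"{gen}: x{lanes} = {raw:.0f} Gbps raw, ~{usable:.0f} Gbps usable")
# Gen 1: 10 Gbps raw (~8 usable); Gen 2: 20 Gbps raw (~16 usable)
```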
The VIO-4008 switches use LSI MegaRAID HBAs and follow the storage management standards and capabilities established by LSI without any modifications.
The VIO-4000 series works with existing servers without requiring any modification to the server OS, I/O device drivers, or applications. A “dummy” PCIe adapter is needed inside the server to convert the PCIe edge-card connector to a PCIe cable connector. The same applies when using the VIO-4000 switches in new server deployments.
Bob