Still behind Itanium.
HP Superdome scales to 16 cells and 64 sockets (that's 128 cores with the current dual-core CPUs, and 256 cores when Tukwila hits next year), without the need for fiddly optical interconnects, just a high-speed, wide-bandwidth crossbar switch backplane. So Itanium scales further, offers far better performance, and uses PCIe or PCI-X card cages for greater interconnect options, including InfiniBand and 10GbE.
HP VSE under HP-UX can already host virtualised Windows Server images as separate OS instances (alongside Linux, VMS and other HP-UX instances). Or you can use hard partitions for completely electrically isolated OS instances in the same frame, and cluster either inside the frame or between frames, which gives better resilience than software virtualisation.
The HP Integrity server range scales from a 2-socket blade through 4-, 8-, 16- and 32-socket systems up to 64 sockets, which is far more choice.
HP Integrity comes with five-nines availability, and if you want to go further there is Integrity NonStop (also Itanium-based), on which some people I know claim they have never had any downtime.
Even the FSC Itanium range has more to offer! This is just a cheap alternative to low-end Itanium, and existing large Xeon servers such as the 8-socket ProLiant DL765 would seem a much cheaper and better-tested way to host a large virtualised Windows instance, with two such servers offering better redundancy too. What happens when Dell, HP or IBM start putting six-core Xeons into their large x86 servers? Bang goes more than half of the NEC/Unisys market. I know NEC and Unisys used to have a rep in the datacenter, but how long ago was that?