RE: "IVM does not scale compared to PowerVM" - LOL!
"....IVM has a limit of 8 threads per virtual machine. Thanks for posting the link to confirm this:
===> "Support of larger VMs (max: 8 virtual processor core)....." And how exactly is that an issue? Unlike IBM and AIX, hp has a carefully designed and integrated virtualisation and partitioning range, referred to as the Partitioning Continuum. The Continuum part is because it caters for everything from single, sub-CPU instances right up to full SMP instances, across the Itanium range. Seeing as you obviously don't have a clue how IVM works, I'll try to explain how it works and how it plugs into the Continuum.
The hp approach is to look at what customers do with their servers - usually consolidation, or partitioning a system to allow more than one software stack to run without interfering with another. In the hp Integrity range, the smallest servers are single-, dual- and quad-socket designs that use single motherboards (blades or racked servers), and from there on up they use four-socket cellboards linked with switching backplanes (from the 8-socket rx7xx0 range right up to the 64-socket Superdome SD64). IVM is the sub-CPU virtualisation layer of that Continuum.
Listen carefully now, the next bit is important. As the single-motherboard systems currently scale to eight cores, having IVM also scale to eight virtual CPUs is a perfect match. Above eight cores, hp has a couple of options. The first is based around hardware partitioning, with a cellboard being the unit. Essentially, you can plug one or more cellboards into one server instance in electrical isolation from the other cellboards in a server. In the Superdome, that means you can create up to sixteen four-socket instances, each running its own, completely separate OS instance (and you can mix hp-ux, OpenVMS, Windows and Linux in the one frame in separate hardware partitions if required). Or you could lump the cellboards together in combinations to suit your project. For example, if you were consolidating four quad-socket rx6600 servers and four eight-socket rx7640s into one Superdome, then they could be arranged as four single-cellboard hardware partitions and four dual-cellboard partitions. Just in case you ever do get around to reading any hp manuals, hp refer to the hard partitions as npars (don't ask me why!).
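To make the consolidation arithmetic concrete, here's a throwaway sketch (plain Python, nothing hp-specific - the four-sockets-per-cellboard figure comes from the servers described above, and the function name is mine, not an hp tool):

```python
# Each cellboard carries four sockets, and an npar is built from one
# or more whole cellboards - so a server maps to a whole number of them.
SOCKETS_PER_CELLBOARD = 4

def cellboards_needed(sockets: int) -> int:
    """Round a server's socket count up to whole cellboards."""
    return -(-sockets // SOCKETS_PER_CELLBOARD)  # ceiling division

# Four quad-socket rx6600s and four eight-socket rx7640s:
npars = [cellboards_needed(4)] * 4 + [cellboards_needed(8)] * 4
print(npars)       # [1, 1, 1, 1, 2, 2, 2, 2]
print(sum(npars))  # 12 cellboards - fits in a 16-cellboard Superdome
```

Twelve of the Superdome's sixteen cellboards used, four spare - which is exactly why the whole-cellboard granularity of npars suits this kind of like-for-like consolidation.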
Just to make npars more granular, hp also offer the ability to turn Itanium cores on or off using TiCAP (Temporary instant Capacity). This allows you to turn on CPUs when required - called being "active" - such as when meeting the end-of-year report run, and then power them off - "deactivated" - when not needed (such as when competing against Power). So, an npar can be any number of cores from one to eight (or one to sixteen if Tukzilla cellboards stay as four-socket designs).
So, I hear you mutter slowly, what if you need a partition with more than eight cores but not a nice round number divisible by four, and you don't have a variance in demand that would make TiCAP the best answer? The answer is hp's next step on the Partitioning Continuum, called Virtual Partitioning or vpars (slightly more obvious than npars, I admit). The unit for a vpar is a single CPU core - you can then scale up to as many active CPU cores as there are in your hardware partition. If you wanted, you could split a Superdome into one vpar of a single core and make the remainder another vpar (don't ask me why you would want to do this, but then you IBMers do come up with some laughable system designs).
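The vpar sizing rule above boils down to two constraints. Here's a hedged sketch of it (my own illustration in plain Python - the function and names are mine, not part of any hp toolset):

```python
# The unit of allocation for a vpar is a single CPU core, and the vpars
# must all fit inside the cores of the enclosing hardware partition.
def valid_vpar_layout(npar_cores: int, vpar_cores: list[int]) -> bool:
    """True if each vpar has at least one core and they fit in the npar."""
    return (all(cores >= 1 for cores in vpar_cores)
            and sum(vpar_cores) <= npar_cores)

# The extreme split mentioned above: one single-core vpar plus the
# remainder of a 64-core Superdome as a second vpar.
print(valid_vpar_layout(64, [1, 63]))  # True
# But an eight-core npar can't host ten cores' worth of vpars:
print(valid_vpar_layout(8, [4, 6]))    # False
```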
So, to recap - if all you need is up to eight virtual CPUs, then go IVM, or use a single npar and turn CPUs on and off with TiCAP. If you need more than eight cores and your hardware structure suits splitting the server along one or more cellboards, then go npars. But if you want to split into odd numbers of cores, or a server size not in quads, then go vpars. As a last note, you can run npars and then IVM inside an npar, or npars and then vpars inside an npar, but you can't mix vpars and IVM in the same npar. Of course, the fun doesn't stop there, as hp tools such as Global Workload Manager allow you to run a solution across many npars or even separate servers at once. In short, the most flexible and comprehensive partitioning and virtualisation package out there.
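The recap above is really a three-way decision. A toy restatement of it (my wording and my function, not an hp sizing tool - just the logic of the paragraph in executable form):

```python
# Pick a layer of the Partitioning Continuum from the requirement, per
# the recap: <=8 cores -> IVM/TiCAP; whole quads -> npars; else vpars.
def pick_partitioning(cores_needed: int, size_in_quads: bool) -> str:
    if cores_needed <= 8:
        return "IVM (or a single npar with TiCAP)"
    if size_in_quads:
        return "npars (whole cellboards)"
    return "vpars inside an npar"

print(pick_partitioning(6, False))   # IVM (or a single npar with TiCAP)
print(pick_partitioning(16, True))   # npars (whole cellboards)
print(pick_partitioning(11, False))  # vpars inside an npar
```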
So, the point you are FUDing as hard as possible in reality has zero practical impact. I'm sure all the hp salesgrunts would like to thank you for this opportunity to show the customers how little IBMers actually understand about IVM, Integrity, or Integrity's partitioning capabilities.
/SP&L