Faster than Bare-metal?
All through the article he talks about this being for 'legacy' stuff, which is usually code for the systems no one understands but that we dare not touch because we know the company depends on them.
Often the reason for virtualising this kind of workload is to get it onto faster, more supportable hardware without changing the OS.
In this situation, the hypervisor can make better use of the new hardware than the guest OS could on its own, so we can see better performance than we would get running that guest OS natively.
For example, if you virtualise a 32-bit Windows OS, with its ~3.5GB memory limit, on a server with plenty of RAM, the host could conceivably use the extra memory as a disk cache and reduce the I/O considerably.
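As a rough sketch of what that can look like in practice (the image name, memory size, and exact flags here are my own illustration, not anything from the article): QEMU/KVM's cache=writeback mode routes guest disk I/O through the host page cache, so the host's spare RAM ends up caching blocks the 32-bit guest could never address itself.

```python
import subprocess

# Hypothetical example: boot a legacy 32-bit Windows image under QEMU/KVM.
# "legacy.qcow2" is a placeholder disk image, not a real artifact.
subprocess.run([
    "qemu-system-i386",
    "-enable-kvm",                # hardware-assisted virtualisation
    "-m", "3584",                 # the guest still sees only ~3.5GB...
    # ...but cache=writeback sends guest disk I/O through the host
    # page cache, so the host's spare RAM absorbs repeated reads.
    "-drive", "file=legacy.qcow2,if=ide,cache=writeback",
], check=True)
```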
If you have a legacy server running some old version of Windows that can't make use of modern 10Gb Ethernet or 8Gb SAN HBAs, then again a virtualisation layer can get around this: the hypervisor drives the fast hardware and presents the guest an emulated device it already has drivers for.
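Again, purely as an illustrative sketch (the bridge and device names are my assumptions): the host side bridges onto the fast NIC, while the guest just sees an emulated Intel e1000, a card old Windows versions ship drivers for.

```python
import subprocess

# Hypothetical sketch: the host-side bridge "br0" sits on the modern
# 10GbE adapter; the guest is handed an emulated e1000 NIC instead.
subprocess.run([
    "qemu-system-i386",
    "-enable-kvm",
    "-m", "3584",
    "-drive", "file=legacy.qcow2,if=ide,cache=writeback",
    "-netdev", "bridge,id=net0,br=br0",  # host side: bridge over the fast NIC
    "-device", "e1000,netdev=net0",      # guest side: a NIC the old OS knows
], check=True)
```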
The ultimate example would be an old OS that simply won't boot on new hardware, where the existing hardware is failing and can't be replaced because it's no longer available.
The performance on the existing hardware is none at all, because that hardware has died.
The performance on the new hardware is also 'none at all', because the crappy old OS can't boot on it.
The performance in a VM on the new hardware is something, and something beats nothing: better than 'bare metal'.