The analysis compares a badly set up datacenter to VMs. Wrong idea...
The analysis in the article is valid only if the server OS is running with just ACPI and standard power management. If CPU frequency scaling is being used, the math is different.
Virtualisation frameworks are not very good at supporting CPU frequency scaling, which is the biggest energy saver in a datacenter. If you virtualise, you usually have to run the host OS at full CPU frequency. If you do not virtualise, you can run with on-demand scaling (if the OS supports it). For a Xeon setup this may mean anything up to 200-300W per 1U server, or up to 100W per average blade.
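You can check whether a Linux host is actually making use of frequency scaling with something like the following (a minimal sketch assuming the standard cpufreq sysfs interface; the governor names you see, "ondemand", "performance" and so on, depend on the kernel):

import glob

# List the frequency-scaling governor in use on each CPU.
# Assumes a Linux host exposing the standard cpufreq sysfs interface.
for path in sorted(glob.glob(
        "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor")):
    cpu = path.split("/")[5]              # e.g. "cpu0"
    with open(path) as f:
        # "performance" means the CPU is pinned at full frequency,
        # which is the typical situation on a virtualisation host.
        print(cpu, f.read().strip())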
As a result, if you run fewer than 5-8 VMs per server (which is the usual number), virtualisation does not really save a lot of energy compared to a well-tuned, correctly run datacenter. Any savings come mostly from "less iron", not from "less energy".
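Rough back-of-the-envelope math, with illustrative wattages I made up to be consistent with the 200-300W figure above (the real numbers obviously depend on the hardware):

# Illustrative numbers only, chosen to match the 200-300W scaling saving above.
FULL_FREQ_W = 400   # assumed draw of a 1U Xeon host pinned at full frequency
SCALED_W    = 120   # assumed draw of a lightly loaded server with on-demand scaling

def virtualised_host_w(n_vms):
    # One consolidated host carrying n_vms, forced to run at full frequency.
    return FULL_FREQ_W

def tuned_datacenter_w(n_servers):
    # The same workloads on separate, well-tuned servers using frequency scaling.
    return n_servers * SCALED_W

for n in (3, 5, 8):
    print(n, "workloads:", virtualised_host_w(n), "W virtualised vs",
          tuned_datacenter_w(n), "W on tuned iron")

The point is that the headline consolidation savings assume each replaced box was burning full power, which a tuned datacenter does not do.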