And in other news....
An investigation comparing running a process in a container with running it directly as a user on the system found that running it directly results in even lower power consumption.
It makes sense, because adding layers of abstraction reduces computational efficiency (more CPU cycles go to the system rather than to the computation you want). It is the same reason some people forgo the OS and program the hardware directly, or even develop their own hardware (e.g. FPGAs).
The question is whether the loss in computational efficiency is worth the benefits in management and automatic scaling out of resources. If you lose 20% of a node's efficiency to virtualisation, but it then becomes trivial to scale out to multiple nodes, then for some it is worth it (generally, compute and electric power are cheaper than a person's time to manage it).
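A back-of-the-envelope version of that trade-off can be sketched in a few lines; every figure here (per-node power draw, overhead fraction, electricity price, admin hours saved, hourly rate) is a hypothetical placeholder, not a number from the research:

```python
# Illustrative break-even sketch: is a ~20% virtualisation overhead worth
# the admin time saved by easier management and scaling?
# All input figures below are hypothetical.

def extra_compute_cost(base_kwh_per_month, overhead, price_per_kwh):
    """Extra electricity cost per month attributable to the overhead."""
    return base_kwh_per_month * overhead * price_per_kwh

def admin_savings(hours_saved_per_month, hourly_rate):
    """Monthly value of sysadmin time saved by easier management."""
    return hours_saved_per_month * hourly_rate

# Hypothetical figures: 500 kWh/month per node, 20% overhead, $0.15/kWh,
# 2 admin hours saved per month at $80/hour.
cost = extra_compute_cost(500, 0.20, 0.15)
saved = admin_savings(2, 80)

print(f"extra power cost: ${cost:.2f}/month")   # $15.00/month
print(f"admin time saved: ${saved:.2f}/month")  # $160.00/month
print("worth it" if saved > cost else "not worth it")
```

With these made-up numbers the time saved dwarfs the extra power cost, which is the usual intuition; the value of the research is supplying measured figures to plug in instead of guesses.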
What the research does here is put numbers to the options, so people can actually work out whether it makes sense for them to go one way or the other. It is useful to those who have to sit down and architect large-scale infrastructure.