I guess either way
Wow, it's dead even (when I voted), over 1000 votes and it's 50/50!
I voted "agreed", but...
Virtual machines? A Linux kernel on a VM knows it's virtualized; it's tickless (so it's not generating timer interrupts and slowing down the shared system when it's not doing anything), and the "virtual server" distros are pretty light. Running stuff on VMs is not too bad. But you then have a guest kernel running, virtualized disk access going through another kernel, multiple caches (both the VM and the physical box will have a disk cache, for example), and an in-VM scheduler on top of the physical system's scheduler... there's overhead there. But the VM has its own OS, so there's no worrying over kernel differences or distro differences, or your app being for Linux when someone wants to run it on Windows. Security-wise, both the VM and the container are supposed to isolate everything, but the VM restricts the attack surface rather severely in terms of what can be done to the physical machine.
BUT... containers have nearly zero overhead: you're not having to pre-allocate RAM or disk space, and there are nice controls for disk, RAM, and CPU usage. (VMs let you change the number of CPUs, and VirtualBox at least lets you cap the % speed of the cores it exposes, but containers have nice CPU usage controls too.) With modern containerization on Linux, the containerized app has its own /proc and /sys, its own process list, and user ID mapping so you can "be root" inside a container while having no special privileges outside it. It can have its own view of the CPUs and available RAM or have access to the whole thing, etc., and that can generally be changed on the fly. Conversely to the VM, the container does have direct access to the real system kernel: you've got the real system kernel as an attack surface, instead of having to get through a VM kernel, bust through the virtualization, and then try to dick with the physical system.
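That user ID mapping bit is easy to see with plain util-linux tools; a minimal sketch, assuming `unshare` from util-linux, an unprivileged user, and a kernel with unprivileged user namespaces enabled:

```shell
# As a normal user, check your real uid on the host:
id -u
# e.g. 1000

# Enter a new user namespace, mapping your uid to root inside it:
unshare --user --map-root-user sh -c 'id -u'
# Inside the namespace this prints 0 -- you "are root" there,
# but that uid has no special privileges on the host.
```

The same namespace machinery (plus cgroups for the CPU/RAM/disk limits) is what container runtimes like Docker and LXC build on.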
In both cases you have the disadvantage of not taking advantage of your distro's package mechanism: your distro is likely to update vulnerable libraries straight away, while in a container or VM you're at the whim of whoever maintains the image to replace vulnerable libs. But there are plenty of containers and VMs you simply don't expose to the outside world, and there it won't matter whether you get those updates immediately or not.
Edit: technology maturity. VMs have been around on IBM kit since the late 1960s; of course the idea was kind of "rediscovered" in the late 1990s/early 2000s for use on PCs, UNIX servers, etc. Plenty mature technology by now. Containers can be quite flexible; UNIX has had "chroot jails" since like the 1970s, and the enhancements to give each container its own /proc, /sys, separate process list for "top", etc., and a better illusion of being on your own full system came out more like the late 90s/early 2000s too. But besides the cloud providers, you have shared web providers that run this stuff on a massive scale quite successfully; it's well-understood and mature technology too. I've used a few "shared server" setups where you update your own kernel (they're using a VM), and a few where it seemed just the same but with no kernel to update (you were in a container). They're good enough now that that was literally the only apparent difference; it seemed like I was on my own 1- or 2-core server with 512MB or 1GB of RAM (these were like $5-10/month plans).