Started doing this in early 2014 (VMware customer since 1999). While investigating ways of improving performance and cost for the org's stateless web applications, I decided on LXC on bare metal for them. My more recent LXC servers go a bit further in that they have Fibre Channel cards and boot from SAN; my original systems just used internal storage. I still only use it for stateless systems, that is, systems that can fail and I don't really care. I have read that newer versions of LXC and/or LXD allow for fancier things like live migration in some cases, but I have never looked into that.

Management of the systems is almost identical to VMs: everything is configured via Chef, and they all run the same kind of services as the VMs. You wouldn't know you were in a container unless you were really poking around. Provisioning is fairly similar as well (as is applying OS updates), mostly custom scripts written by me which have been evolving bit by bit since around 2007.

Fortunately drivers haven't been an issue at all on the systems I have. I recall it being a real PITA back in the early-mid 2000s with drivers on bare-metal Linux, especially e1000* and in some cases SATA drivers too (mainly on HP ProLiants). I spent tons of hours finding/compiling drivers and inserting them into kickstart initrds which were then PXE booted. That's the only time in my life I've used cpio.
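For anyone who never had to do that initrd surgery, the gist was roughly the following. This is just a sketch of the idea in Python, not my actual scripts (those were shell), and the paths, driver name, and gzip'd-cpio initrd layout here are assumptions for illustration:

    #!/usr/bin/env python3
    # Sketch of the "inject a driver into a kickstart initrd" trick.
    # Assumes the initrd is a gzip-compressed cpio archive; the paths and
    # driver name below are made-up examples, not real ones.
    import os
    import subprocess
    import tempfile

    INITRD = "/tftpboot/rhel/initrd.img"    # hypothetical PXE-served initrd
    DRIVER = "/root/build/e1000/e1000.ko"   # hypothetical freshly compiled driver
    DEST = "lib/modules"                    # where the installer looks for modules (varies by release)

    def sh(cmd, cwd=None):
        """Run a shell command/pipeline; raise if it exits non-zero."""
        subprocess.run(cmd, shell=True, check=True, cwd=cwd)

    with tempfile.TemporaryDirectory() as work:
        # 1. Unpack: decompress the image and explode the cpio archive into a scratch dir.
        sh(f"gzip -dc {INITRD} | cpio -idm", cwd=work)

        # 2. Drop the compiled module into the tree the installer will see.
        moddir = os.path.join(work, DEST)
        os.makedirs(moddir, exist_ok=True)
        sh(f"cp {DRIVER} {moddir}/")

        # 3. Repack: new cpio archive (newc format), recompress, swap into place.
        sh(f"find . | cpio -o -H newc | gzip -9 > {INITRD}.new", cwd=work)
        os.replace(INITRD + ".new", INITRD)

    print("initrd rebuilt; point the PXE config at it and kickstart as usual")

The real versions had more fiddly bits (the installer's module metadata inside the image differed between releases), but that unpack/copy/repack loop was the core of it.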
I adopted LXC for my main server at home back in ~2017 as well; it runs 7 containers for various things. But I still have VMware at my personal co-lo: 3 small hosts with a couple dozen VMs on local storage. Provisioning and management for the home and co-lo stuff are entirely manual, no fancy configuration management.
I also plan to migrate some legacy MSSQL Enterprise servers to physical hardware soon, since the org is opting not to renew Software Assurance and licensing costs for a VM environment will go way up (SA grants the ability to license just the CPUs of the VMs running SQL, regardless of the number of CPUs in the VM environment, but you lose that when you stop paying for SA). It's simpler to just consolidate onto a pair of physical servers in a shared-nothing cluster. I've never tried boot from SAN with Windows before, but from what I've read it should work fine (yes, I like boot from SAN; in this case each server will be connected to a different storage array).
I've never personally been interested in Docker-style stuff, so I've never gone that route (I do have to interact with Docker on occasion for a few things and it's always annoying). My previous org played with Kubernetes for a couple of years at least, and it was a practical nightmare as far as I was concerned; I'm sure it has its use cases, but for 95% of orgs it's overkill and way too much complexity.