Re: I'm not even surprised.
Real-time memory encryption in servers is generally a bad idea for a multitude of reasons.
1) It's a false sense of security. People will believe it offers some level of protection... it doesn't.
2) The memory controller would have to be issued keys from within each session. These keys are theoretically shielded from the host system. If the guest operating system implements this technology... kudos to them. It means direct attacks from VM to VM are taken care of.
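The per-session key model can be sketched as a toy: a "memory controller" holds one key per VM session and never exposes it, so the host (or another VM) sees only ciphertext in DRAM. This is purely illustrative — the class and method names are invented, and a trivial XOR stream stands in for the real hardware cipher.

```python
import os

# Toy model of per-VM memory keys. Names are illustrative, not a real API,
# and XOR stands in for the hardware cipher.
class ToyMemoryController:
    def __init__(self):
        self._keys = {}  # session (ASID) -> key, private to the controller

    def new_session(self, asid: int) -> None:
        self._keys[asid] = os.urandom(16)

    def _xor(self, key: bytes, data: bytes) -> bytes:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def write(self, asid: int, data: bytes) -> bytes:
        # Returns the ciphertext as it would appear in DRAM.
        return self._xor(self._keys[asid], data)

    def read(self, asid: int, ciphertext: bytes) -> bytes:
        return self._xor(self._keys[asid], ciphertext)

mc = ToyMemoryController()
mc.new_session(1)
mc.new_session(2)
ct = mc.write(1, b"guest secret")
pt = mc.read(1, ct)  # only session 1's key recovers the plaintext
```

The VM-to-VM isolation falls out of the key table: a read issued under a different session's key yields garbage, not the neighbor's plaintext.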
3) Drivers loaded on the guest VM will have access to the encrypted memory, since they run in kernel mode on the guest. This means virtual network, disk and graphics adapters can access memory unencrypted, or issue memory requests to the MMU to get at whatever they want. So a compromised driver can be an issue. If you read the source code for the e1000, VirtIO Ethernet or VMXNET3 drivers in the Linux kernel, you'll see they aren't exactly hardened for security. They're good device drivers, and VMXNET3 for example looks very pretty in code form, but that's partly because it isn't bogged down with silly things like bounds-checking code.
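The bounds-checking point can be shown with a toy receive path (hypothetical structure, not actual VMXNET3 code): if the driver trusts a device-supplied offset and length, a compromised virtual device can walk it right past the buffer it owns.

```python
# Toy illustration of a virtual NIC driver copying a receive descriptor.
# Layout is invented: 16 bytes of "owned" RX buffer followed by data the
# driver has no business handing to the device.
GUEST_MEMORY = bytearray(b"PACKET-DATA-----" + b"SECRET-KEY-MATERIAL")
RX_BUFFER_LEN = 16

def rx_copy_unchecked(offset: int, length: int) -> bytes:
    # No validation of the device-supplied descriptor fields.
    return bytes(GUEST_MEMORY[offset:offset + length])

def rx_copy_checked(offset: int, length: int) -> bytes:
    if offset < 0 or length < 0 or offset + length > RX_BUFFER_LEN:
        raise ValueError("descriptor out of bounds")
    return bytes(GUEST_MEMORY[offset:offset + length])

# A malicious descriptor leaks data sitting past the receive buffer:
leaked = rx_copy_unchecked(16, 19)
```

Since the driver runs in guest kernel mode, everything it touches is already on the decrypted side of the memory controller, which is the whole problem.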
4) "Bridges" used for performing remote execution on guest VMs will generally have to be available since this is how automation systems work. So, Powershell Remoting (WMI/OMI), QEMU Monitor Protocol, KVM Network Bridge, PowerCLI, etc... all offer methods of performing RPC calls on guest VMs from the host and in many circumstances, directly at the kernel level.
5) Hardware hairpinning is an option as well. PCIe (unlike PCI and older buses) operates entirely on memory-mapped I/O (MMIO), which means all communication with the system and with system memory is performed using memory reads and writes. In bare-metal hypervisors with proper hardware such as Cisco VIC adapters, NVIDIA GPUs, etc., the hardware is programmable, partitionable, and can execute code. An example would be logging into a Cisco VIC adapter via out-of-band management and running show commands for troubleshooting. The iSCSI troubleshooting commands in particular are quite powerful and would easily allow issuing memory reads and writes on the fly from a command-line interface. To honor them, the MMU in the CPU would have to decrypt the requested memory. Of course, the MMU and OS driver could mark pages appropriately to enforce access lists on individually protected pages. But that's moot when we get to point 6)
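The per-page access-list idea mentioned above can be sketched as a toy: before honoring a device-initiated read, the MMU consults metadata the OS driver attached to each page. All names here are illustrative; real IOMMUs are considerably more involved.

```python
# Toy sketch of an MMU checking per-page access lists before honoring a
# device (MMIO/DMA) read. Names are invented for illustration.
PAGE_SIZE = 4096

class ToyMMU:
    def __init__(self):
        self.protected_pages = set()  # pages the OS marked off-limits to devices

    def protect(self, page: int) -> None:
        self.protected_pages.add(page)

    def device_read_allowed(self, addr: int) -> bool:
        return (addr // PAGE_SIZE) not in self.protected_pages

mmu = ToyMMU()
mmu.protect(2)                 # OS driver marks page 2 as protected
ok = mmu.device_read_allowed(0x1000)      # page 1: allowed
blocked = mmu.device_read_allowed(0x2000) # page 2: refused
```

Even this mitigation only helps for local hardware; as point 6 shows, migration breaks the model anyway.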
6) RDMA provides a means of extending system memory from server to server. It works by mapping regions of physical memory in each server so they can be accessed by hardware in other systems, over devices like RDMA-over-Ethernet NICs or InfiniBand HCAs. High-performance systems like HPC clusters, high-performance file servers (like Windows with SMB Direct) and high-performance hypervisors like KVM and Hyper-V (ESXi is very notably not in this group, as it sacrifices performance for compatibility) perform live migration over RDMA where possible. While it is theoretically possible to move guest machines in their encrypted state, enough information would have to be carried from one server to the other during a migration to give the new host a decryption key for the VM's memory as it is moved. That means the private key would either have to be transferred in clear text or be renegotiated through a hypervisor-hosted API... providing a new key in clear text to the hypervisor, if only briefly.
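The migration dilemma can be modeled in a few lines (toy XOR cipher, invented names): the encrypted pages travel host-to-host just fine, but they are useless to the destination until the key is handed over, and that hand-off is exactly the exposure described above.

```python
import os

# Toy model of the live-migration key problem. XOR stands in for the
# hardware cipher; names are illustrative.
def xor(key: bytes, data: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

vm_key = os.urandom(16)           # held by the source host's memory controller
page = b"guest page contents!"
wire = xor(vm_key, page)          # crosses the RDMA fabric still encrypted

# Destination host, before key hand-off: the ciphertext decrypts to junk.
junk = xor(b"\x00" * 16, wire)

# The hand-off itself is the weak point: at some instant the key (or a
# renegotiated replacement) exists in clear text on the hypervisor's path.
recovered = xor(vm_key, wire)
```

Whether the key moves verbatim or is renegotiated, some software on the host sees usable key material at least briefly, which undercuts the threat model.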
The intent of encrypted memory was really, really awesome, but it was extremely poorly thought out. It could have some benefits in places like containers, where individual containers could be shielded from the host OS and don't migrate. But there would still be critical issues around where the decryption keys reside. Also, since containers generally ARE NOT bare metal, the keys would have to reside on the container host instead.
Thanks for bringing this topic up though. Make sure you tell everyone who intends to depend on encrypted memory that it's at least 10 years and several Windows, Linux, Docker and hardware generations away from being meaningful. But also tell them to bitch at their vendors to support it ASAP. It will take an entire ecosystem (security in layers) approach to make this happen.