Is it not about the BASH vuln? Xen might be a side fix:
Amazon will tomorrow begin a bloody global reboot of its Elastic Compute Cloud (EC2) compute instances after it found a security bug within the Xen virtualisation platform. The rolling minutes-long reboots would be completed by 30 September. Amazon did not name the reason for the upgrade, widely thought to be a security issue …
AWS and Azure (and, I assume, Google) cloud services are not like a well-built internal private virtualised environment. There is no vMotion or Storage vMotion (or whatever the Citrix and M$ equivalents are called). There is redundancy in networking (so we are told), but there is no separate path for storage or management.
The old AWS servers have a single redundant 1Gb connection out of the server for everything: management, storage and regular network traffic, shared by all the VMs on that host. Few people in their right mind would set up a vSphere or Hyper-V environment with these sorts of limitations. There are ways to pay for more bandwidth and IOPS with a 10Gb pipe, but those are significantly more expensive.
Lots of people make assumptions about AWS based on what their own internal virtualisation environment looks like, and they are fools to do so. It is nothing like what any responsible person would set up, unless, that is, you are trying to be the cheapest cloud vendor in the world, in which case it's whatever makes the setup cheapest possible.
They can: live migration is supported on Xen provided there's shared storage available, so there should potentially be almost zero downtime.
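For reference, on a self-managed Xen host with shared storage, the migration the comment describes is a single `xl` invocation. This is a hedged sketch only: EC2 does not expose this to customers, and the domain and host names here are hypothetical.

```shell
# Hedged sketch: the command shape for a Xen live migration between two
# hosts that share storage. "guest01" and "dest-host" are hypothetical
# names, not anything AWS-specific.

# Build the migration command for a given domain and destination host.
migrate_cmd() {
    # xl migrate moves a running domain to the destination host; with
    # shared storage only memory state has to be copied over the wire.
    echo "xl migrate $1 $2"
}

# Print what would be run (on a real host you would run it directly).
migrate_cmd guest01 dest-host
```

The point of the sketch is simply that, with shared storage in place, nothing about Xen itself forces a guest-visible reboot.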
However, the update from EC2 says this isn't possible, and until the details are released on 1st October we can't know if this is reasonable or not.
It does say that if your database is replicated correctly, it's possible to reboot ahead of time and avoid downtime.
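For anyone taking that route, the pre-emptive reboot is one API call per instance. A hedged sketch using the standard AWS CLI follows; the instance ID is a placeholder.

```shell
# Hedged sketch: rebooting an instance on your own schedule via the AWS
# CLI, so a correctly replicated database can fail over before Amazon's
# maintenance window hits it. The instance ID below is a placeholder.

# Build the reboot command for a given instance ID.
reboot_cmd() {
    echo "aws ec2 reboot-instances --instance-ids $1"
}

# Print what would be run; on a real workstation, run it directly.
reboot_cmd i-0abc1234
```

You would fail the database role over to a replica first, reboot the primary's instance, then repeat for each replica in turn.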