Dogfood
This situation suggests the Project has not eaten its own dogfood.
Hypervisors are not normal pieces of software: they are designed to run directly on physical hardware. Testing a hypervisor inside another hypervisor is not an accurate test.
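For what it's worth, a guest can usually tell it isn't on bare metal: the CPUID hypervisor-present bit shows up as a "hypervisor" flag in /proc/cpuinfo on Linux. A rough sketch, assuming an x86 Linux guest with Python to hand (purely illustrative, not anything the Project actually runs):

# Detect whether we appear to be running under a hypervisor on Linux.
# The "hypervisor" flag in /proc/cpuinfo mirrors the CPUID hypervisor-present
# bit, which bare metal normally does not set.
def under_hypervisor(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags") and "hypervisor" in line.split():
                return True
    return False

print("virtualised guest" if under_hypervisor() else "probably bare metal")

Which is exactly the problem with nested testing: the code paths a hypervisor exercises on real iron are not the ones it sees as a guest.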
The Xen Project – creator and manager of the open-source Xen hypervisor and associated tools – has warned its community of potential problems flowing from the imminent closure of the colocation facility it uses. “We regret to inform you that the Xen Project is currently experiencing unexpected changes due to the sudden …
Yeah, we know, we know. We just forgot to think it through, sadly. Although nested virtualization is a thing, you're right, that's not an accurate test.
Mea culpa. Now that we've read the documentation, we've tweaked the piece to better explain that the testing system involves a pool of bare metal equipment that can't be virtualized and easily migrated. If you spot something wrong like this, don't forget to drop corrections@theregister.com a note and we'll do our best to fix things up.
C.
If Xen is in peril, so is QubesOS. The security-oriented OS is basically Xen for the desktop. There used to be plans to support other hypervisors. High time to revive those plans!
https://forum.qubes-os.org/t/porting-qubes-to-hypervisors-other-than-xen-abstracting-the-functionality-early-stage/23478/3
I could never quite get along with Qubes.
But if you still need bare metal on desktop/laptop hardware then take a look at OpenXT - it's what became of Citrix's Xen Desktop.
It doesn't have the app granularity of Qubes, as it separates the OSes (or did, the last time I looked into it).
https://openxt.org/
I'm somewhat intrigued by the comment that "the Project is not sure its hardware would survive a move". In my time as a humble sys admin I've done quite a few moves, from full-on DC lift and shift to a rather memorable move of a (not exactly cheap at the time) Sun E10000 which had to go out via the lobby to a truck backed up so the tail lift was over some steps (these steps: https://www.google.co.uk/maps/@51.5216978,-0.0846348,3a,75y,55.3h,89.54t/data=!3m7!1e1!3m5!1sPSRfkr6z5b_ukQ71JO8Rxg!2e0!6shttps:%2F%2Fstreetviewpixels-pa.googleapis.com%2Fv1%2Fthumbnail%3Fpanoid%3DPSRfkr6z5b_ukQ71JO8Rxg%26cb_client%3Dmaps_sv.tactile.gps%26w%3D203%26h%3D100%26yaw%3D120.699005%26pitch%3D0%26thumbfov%3D100!7i16384!8i8192?coh=205409&entry=ttu)
But I digress slightly. The most recent move was last year, which included moving kit that was in some cases ~12 years old, with many interconnected dependencies, sometimes with the network kit going in at the same time, and of course a lot of prod kit which had to be working on Monday morning. It was a horrible few weeks and I don't wish to experience such a move again, but somehow we got through it. So I'd be very interested to know why they think their kit may not survive.
My thoughts precisely... I've moved quite a lot of old kit in my time and never had a problem which couldn't be solved... OK, there was one 15+ year old bit of kit which wouldn't restart, but I seem to remember a new CMOS battery did the trick.
I'm also not sure why rebuilding their test setup on some newer hardware would be such a big deal. I can see how it might be more than a few hours' work, but they seem to have until October to get the work done...
I'm also kinda wondering what is up here, since they have until October to get this done. It is possible, though, that the person leading this is making a very concerted plea for assistance from some entity with the resources to accomplish the task, after having reviewed her project's resources and determined that they just don't have the bits they need.
At least part of OSSTest is a whole lot of devboards, because one of the things you want to test fairly early on, on newly-brought-up hardware, is whether the virtualisation works.
Contracting with someone else's movers to move irreplaceable bare PCBs attached to a bit of wood with rack rails screwed to the side is not something you'd be very confident would work perfectly.
(similarly, OSSTest is entirely a bare-metal thing because it's testing hypervisors, which want to run on bare metal and for which running under qemu on different hardware is not an adequate test)
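The sort of early smoke check I mean is trivial in itself, something like the sketch below, assuming a Linux environment with Python on the newly-brought-up box (illustrative only, not what OSSTest actually does):

# Quick host-side check: does the CPU advertise hardware virtualisation at all?
# vmx = Intel VT-x, svm = AMD-V; if neither flag is present, HVM guests won't run.
def virt_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return {"vmx", "svm"} & set(line.split())
    return set()

print(virt_flags() or "no hardware virtualisation flags found")

The point, though, is that it only means anything when run on the actual boards, not in an emulator.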
The question of why all this stuff isn't literally on-premises rather than in a datacenter is an interesting one; presumably something of the not-very-big and not-very-corporate scale of the Xen Project has trouble owning a big enough bit of property.
I would have preferred that, but the new data center didn't allow it. We had to use their racks, because they were part of a system that all fit together. We also had to clean all incoming equipment in a separate room before we could bring it into the data center proper.
You missed two steps before a:
a-2. Power down.
a-1. Power up.
a. Power down.
b. ...
Always power down and then power up again in situ, to make sure your system will start before you move it. That way you start from a known baseline. Otherwise, if the system refuses to start, how can you be sure it is because of the move and not a problem that existed before? Updates may have been applied that need a reboot but were never tested because of operational pressures. If you have to shut it down, use the time wisely, and don't make life harder than it needs to be. :-)
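If you want belt and braces, a tiny pre-shutdown script helps record the baseline: how long the box has been up, and whether it is already waiting on a reboot it never got. A throwaway sketch, assuming a Linux box with Python (the reboot-required marker is a Debian/Ubuntu convention; other distros signal it differently):

import os

def uptime_days(path="/proc/uptime"):
    # The first field of /proc/uptime is seconds since boot.
    with open(path) as f:
        return float(f.read().split()[0]) / 86400

def reboot_pending():
    # Debian/Ubuntu drop this file when a package update wants a reboot.
    return os.path.exists("/var/run/reboot-required")

print(f"up {uptime_days():.1f} days")
print("reboot already pending" if reboot_pending() else "no pending reboot flag")

Record the output, power down, power up, compare. Then move the thing.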
>> "Considering that AWS built a zillion-dollar clown computing empire on Xen, they ought to host the project for life. Even if they don't use it themselves anymore."
AWS didn't just use Xen; they were also a major contributor back when there were still people who cared about Xen.
The fact that you imply that, because AWS became huge during that time, they should now be required to host what is by now an obsolete hypervisor platform no one really cares about anymore suggests you don't really understand the concept of FOSS.
Besides, the reason AWS got big in the first place might not be solely Xen, but simply the many other services they have been offering. And since switching to KVM they have only got bigger.