Meet the Proxinator: A hyperbox that puts SATA at the heart of VMware migrations

Every vendor capable of spelling "virtualization" has spent a good chunk of 2024 making a pitch for its products as a fine alternative for folks discomfited by Broadcom's takeover of VMware. Canadian outfit 45Drives has taken matters a step further by creating an entire product – the Proxinator – to lure those considering the …

  1. Ace2 Silver badge

    “It builds around Supermicro motherboards, but does all of its own thermal and mechanical design.”

    Because SuperMicro thermal designs are absolute trash.

  2. Grunchy Silver badge

    I configured my own for under $200

    It's because there are all these cast-off servers being scrapped. I picked up 3x DL380p G8s and another 3x x3550 M4s. Every chassis is rigged with at least 128 GB of RAM per socket, and that's 12 sockets. There are 8x 2.5" drives per chassis, and the same recycler blew out a whole box of 256-to-512 GB SSDs, also practically a giveaway. I also got 3x Tesla P4s off eBay and did the virtualization trick (Nvidia has hounded me ever since to start paying licensing fees, but I'm sorry guys, the virtualization trick worked!).

    I genuinely have no idea what I can do with this cluster. I stacked it all up in the spare bedroom in the basement, but just one of the DL380ps howls more than I can tolerate. I also need at least one breaker for each pair of servers, but let's see: all the basement bedrooms run off one breaker?

    So I blew another $30 and picked up an old Datto Alto 3 L3A2 (with a single 3TB drive), which I configured as a single Proxmox node running a single copy of Ubuntu Server. I set up iGPU forwarding of the Celeron 4205U for iSpy Agent, which monitors four IP security cameras with HEVC at about 35% CPU utilization. It also hosts my NGINX web servers, a bulletin board, and a couple of other things I forgot about. Oh right, that Nvidia virtualization server, for one, plus NFS + Samba, etc.
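
    For what it's worth, "iGPU forwarding" on Proxmox usually means either full PCI passthrough of the iGPU to a VM or a /dev/dri bind mount into an LXC container; the comment doesn't say which was used here. A minimal sketch of the VM passthrough route, assuming a hypothetical VM ID 101, the usual 00:02.0 address for an Intel iGPU, and IOMMU already enabled on the host:

        # Find the iGPU's PCI address on the Proxmox host (typically 00:02.0 for Intel).
        lspci -nn | grep -iE 'vga|display'

        # Hand the whole iGPU to VM 101 (hypothetical ID). The guest then needs the
        # Intel media drivers to use it for HEVC decode.
        qm set 101 --hostpci0 0000:00:02.0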

    Well anyway, I think this crusty old Datto runs well under 15 Watts.

    I'm surprised anybody needs to pay anything to put together a Proxmox cluster; meanwhile, the local recycler warehouse is positively bursting with cast-off servers.

    (My local guys are at era.ca but e-waste repurposing is blowing up all over the world.)

    1. Anonymous Coward Silver badge
      Boffin

      Re: I configured my own for under $200

      Companies wanting to run it in production tend to want new hardware that comes with warranties etc.

      Failing hardware costs time to swap out even if the hardware itself is cheap, so despite the fact that you can run it on redundant surplus hardware, it can still work out cheaper to buy new.

      In a domestic/lab environment, however, it's well worth snapping up cheap hardware that companies don't want!

      1. Sudosu Bronze badge

        Re: I configured my own for under $200

        I think of it as a Redundant Inexpensive Array of Servers.

        If one dies, the machines go elsewhere (or can be restored elsewhere) in the array; I swap out the box, reinstall Proxmox, and rejoin the cluster. It doesn't take very long to do that, and enterprise-class servers are generally pretty reliable... even old ones.
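
        For anyone following along, the Proxmox side of that swap is only a couple of commands. A rough sketch, with the node name and IP address below made up for illustration:

            # On any surviving cluster member: drop the dead box from the cluster
            # (only after the old node is permanently powered off).
            pvecm delnode pve-dead

            # On the freshly reinstalled replacement: join the existing cluster by
            # pointing it at one of the current members, then confirm quorum.
            pvecm add 192.168.1.11
            pvecm status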

        It is like a Backblaze strategy for VMs.

        I do keep my storage separate on OmniOS boxes though.

        I have not run into the AMD issue to date.

      2. Nate Amsden

        Re: I configured my own for under $200

        Hardware-wise, you can get next-business-day (NBD) on-site support for a DL380 Gen8 for under $250/year, at least in the US, from several different companies (my last price from HPE for DL380 Gen8 + ESXi Foundation Care support was $3,259/server/year in 2018, or $1,852/server/year for hardware/firmware-only support on a slightly different Gen8 config that did NOT run ESXi). Of course that doesn't get you software-level support, but the software hasn't changed on those in a while, and some of it (like iLO) is still available without a support contract, last I checked (last year). I'm 2,500 miles away from the hardware, so remote support is a requirement for me. The datacenter has remote hands, of course, but it's far easier to have third-party HW support where they know the systems and have the parts ready to go.

        If you want four-hour on-site support the cost is a bit more; I don't have an exact number for a Gen8, but I don't think it's even double.

        I use a company called Service Express, which technically charges a monthly fee, so you could do month-to-month or partial-year contracts if you wanted (in my case they quoted a two- or three-year term, but their terms say it can be cancelled at any time with 30 or 60 days' notice, I forget which). I've used three other companies in the past, and the costs were about the same with all of them; there's been a lot of consolidation in the space over the past few years.

        Aside from the slightly annoying iLO flash dying out over time (https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c04996097 – it doesn't affect anything if you don't use their fancy provisioning stuff), Gen8 and even Gen9 (which has the same iLO issue) have been super solid for me over the past decade. Gen9 suffers from storage battery failures, but none of my Gen9s have local storage, so I just ignore those failures these days.

        1. Mr.Nobody

          Re: I configured my own for under $200

          Park Place Technologies has excellent third-party hardware support, including on-site techs. If you need BIOS updates then you need a support contract with HPE for something, but for the rest just use third party. Far, far less expensive than HPE as well.

  3. Justin Clift

    Proxmox 8.x VM migrations hang with Ryzen gear

    It's interesting they went with EPYC processors.

    In my testing of Proxmox 8.x (with Ryzen 5000 series processors), VM migrations would randomly hang forever about 25% of the time.

    That's with a 5-node setup using Ceph storage on the same Proxmox nodes, and it definitely wasn't a case of the network interconnects not being fast enough, either.

    I saw a mention of an article some weeks ago where they had a similar hanging problem with Proxmox on AMD processors; apparently it's been a recurring problem over the years, but there were some patches to fix it this time around.

    I didn't investigate that solution in depth because I'd already finished the Proxmox testing (result: fail), and I didn't want to waste time setting up and testing the entire cluster again just on the hope that maybe things were actually fixed.

    Has anyone else come across that hanging issue with AMD CPUs on Proxmox?

    1. l8gravely

      Re: Proxmox 8.x VM migrations hang with Ryzen gear

      I've been playing (lightly) with Proxmox on an old UCS system with Intel CPUs and gobs of memory and network bandwidth. It seems to be working, but I haven't done any major work on it. I really should set up some big VMs, run 'stress' inside them, and then start doing migrations back and forth. So far I've been impressed, but I'm not quite wild about how it all hangs together; I wish they would split the interface up a bit more to make the VMs more central to the display of what's going on. But that's more of a 'what am I used to' kind of thing.
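
      That kind of soak test is easy to script rather than click through by hand. A minimal sketch, assuming a throwaway VM with ID 100, two hypothetical nodes pve1 and pve2 with root SSH between them, and something like 'stress --vm 2 --vm-bytes 4G' already running inside the guest to keep its memory dirty; any migration that takes more than 15 minutes is treated as hung:

          #!/bin/bash
          # Ping-pong live migrations of one VM between two Proxmox nodes and
          # flag any run that never completes (names and IDs are made up).
          set -u
          vmid=100
          src=pve1
          dst=pve2

          for i in $(seq 1 30); do
              echo "migration $i: $src -> $dst"
              if ! timeout 900 ssh "root@$src" "qm migrate $vmid $dst --online"; then
                  echo "migration $i did not complete cleanly (hang or error)" >&2
                  exit 1
              fi
              # swap source and destination for the next round trip
              tmp=$src; src=$dst; dst=$tmp
          done
          echo "all 30 migrations completed"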

      As for the AMD vMotion problems, that's news and a bit worrying; I wonder what's causing them. So what made you fail Proxmox, besides the migration problems? What did you think of it overall? And if not Proxmox or VMware, what other options have you looked at?

      I've also got another client with a TrueNAS storage cluster and... it's active/passive, and failover takes quite a chunk of time, so I'm really hesitant to do upgrades and failovers. My old NetApp boxes (which this replaced) just keep chugging along, due to some legacy CIFS volumes that just don't quite work when I try to Robocopy them off the NetApp (old 8.x in 7-mode... sigh). But I do have to say NetApp makes damn nice bulletproof gear. It just runs and runs and runs. And it shows.

      1. Justin Clift

        Re: Proxmox 8.x VM migrations hang with Ryzen gear

        > what made you fail Proxmox

        Entirely the hanging of VM migrations. Live migration is a central requirement for us, as it gives us a better way to update host systems without downtime for hosted clients.

        The testing was on three separately built clusters over two months: one 3-node cluster and two 5-node clusters, all using Ryzen 5xxx series gear (mostly 5950X).

        A hung migration never even leaves any kind of log for diagnosis. And cancelling a hung migration *kills* (as in stops, like a power-off) the VM in question rather than letting it continue running where it was.

        So using Proxmox is just not a possibility, at least until VM migration is reliable. We're sticking with the existing approach for now, which is pretty much "schedule downtime and update the hosts". Ugh.

        One possibility might be to try separate clusters for hosts vs storage, just in case it was some interaction from having both Ceph and the VMs running on the same boxes (hyperconverged style).

        I *might* put some time into testing that, just in case, as having a working solution would be really useful. :)
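
        For anyone wanting to try that split, here's a rough sketch of pointing Proxmox compute nodes at an external Ceph cluster; the storage ID, pool name, and monitor addresses are all made up for illustration:

            # Proxmox looks for /etc/pve/priv/ceph/<storage-id>.keyring when talking
            # to an external Ceph cluster, so copy the client keyring into place first.
            mkdir -p /etc/pve/priv/ceph
            scp root@ceph-admin:/etc/ceph/ceph.client.admin.keyring \
                /etc/pve/priv/ceph/ext-ceph.keyring

            # Register the external pool as VM image storage (the config is cluster-wide).
            pvesm add rbd ext-ceph \
                --pool vm-pool \
                --monhost "10.0.0.11 10.0.0.12 10.0.0.13" \
                --username admin \
                --content images

            pvesm status   # the new storage should show up as active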

      2. Justin Clift

        Re: Proxmox 8.x VM migrations hang with Ryzen gear

        Took some time today to throw together a new testing setup, and so far the VM migrations are reliable.

        This time it was 5 nodes for just Proxmox Ceph storage (no VMs hosted on those), with another 3 nodes for hosting the VMs (also in Proxmox).

        I did 30 migrations (manual UI button clicking), and they all worked without issue. No random hangs.

        That bodes well, so maybe Proxmox is an option after all. I'll need to put some time aside to test it properly again in the near-ish future. :)

  4. This post has been deleted by its author

  5. Androgynous Cow Herd

    This article is not buzzword compliant.

    In 2024, all news articles require an "AI" angle.

    "Integrator crams Open Sores code they didn't write or contribute to onto overpriced hardware to achieve egregious profit margin" would have been a good headline, though.
