Veeam bets on more VMware alternatives, including Red Hat and China's Sangfor

Backup software vendor Veeam has thrown its weight behind more alternatives to VMware. On Wednesday, the company announced both version 13 of its flagship Data Platform product and a plan to support at least 13 hypervisors. In the first half of 2026, Veeam will add support for XCP-NG, HPE’s Morpheus VM Essentials, Citrix’s …

  1. Nate Amsden Silver badge

    how much of this is real?

    Because Veeam doesn't support HPE VM Essentials directly, they instruct you to install agents on your VMs: https://www.veeam.com/kb4737

    1. Anonymous Coward

      Re: how much of this is real?

      Agent-based at the moment, but they are adding native backup for multiple hypervisors. We migrated from vSphere to XCP-NG and are using agent backup for now, but we have been keeping an eye on the progress they have been making towards native backup.


  2. Anonymous Coward

    Like Proxmox but…

    … our chaps in the engine room say that the lack of support for SAN and other gubbins is a problem. Anyone had any experience in this area?

    1. Kurgan Silver badge

      Re: Like Proxmox but…

      This is mostly a "we have always done it like this" attitude. Proxmox is not VMware; its storage layer is different and has different features. That means you cannot just do things the way you always did. Which is indeed a problem once you have spent a lot of money on a setup (servers plus a fucking big storage array) that is not the right fit for Proxmox.

      So while "the Proxmox way" (local ZFS with replication, or Ceph) works, it's often impossible to convert the old hardware to the new use.

      This is why Proxmox is scrambling to make its software compatible with "the VMware way" of doing storage, and as of today it's failing badly (file system corruption happens with its implementation of LVM over iSCSI).
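      As a concrete sketch of what "the VMware way" looks like on the Proxmox side, this is roughly the storage definition involved (an illustrative /etc/pve/storage.cfg fragment; the storage IDs, portal address, target IQN and volume group name are made-up placeholders):

      ```
      # The raw iSCSI LUN, visible to every node in the cluster:
      iscsi: san0
              portal 192.0.2.10
              target iqn.2001-05.com.example:storage.lun1
              content none

      # An LVM volume group created once on that LUN (pvcreate/vgcreate);
      # "shared 1" tells Proxmox that every node may activate volumes in
      # it, with locking coordinated cluster-wide by Proxmox itself rather
      # than by a clustered file system.
      lvm: vmstore
              vgname vg_san0
              shared 1
              content images
      ```

      Note the trade-off: each VM disk becomes a raw logical volume, so snapshots and thin provisioning are (at least traditionally) not available on shared LVM, which is part of why shops coming from VMFS find this setup limiting.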

      1. sedregj

        Re: Like Proxmox but…

        "it's impossible to convert the old hardware to the new use".

        Not always. You can whip the SSDs out of a Dell and put them into the hosts. I did this for a customer last year. They now have a three-node Ceph cluster and an empty chassis.

      2. Nate Amsden Silver badge

        Re: Like Proxmox but…

        Maybe not impossible(?), just stupid. If I were to use Proxmox in this way (I have never used Proxmox), I could export unique LUNs to each of my hosts (assuming Proxmox includes Fibre Channel drivers and hopefully MPIO?), and even though the volumes would not be shared between hosts, I could format local ZFS on each node or whatever.

        I do find it interesting/sad that it seems none of the open source hypervisors can handle shared storage the way VMware does. I just checked last night for Xen after reading an article here about that again. They technically support GFS2, but apparently no storage migrations are allowed with it. Ubuntu's LXD (their container/VM solution) does not support GFS2, Proxmox does not last I checked, and I saw a comment here recently from someone trying HPE VM Essentials, which DOES support it, and they had major issues.

        Then I remembered something I had forgotten long ago: I think it was vSphere 4.0 that introduced VAAI, which included "sub-LUN locking" (hardware accelerated by the storage array). Broadcom calls it "Atomic Test & Set (ATS), which is used during creation and locking of files on the VMFS volume". Of course that was over fifteen years ago, and the competition still doesn't have reliable locking; hopefully this race to improve the alternatives fast-tracks this for GFS2, or the creation of a new file system. Though at the time that feature was limited to Enterprise Plus licensing; unsure if that ever changed.
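        For the curious, ATS is not proprietary magic at the wire level: it maps onto the standard SCSI COMPARE AND WRITE command (opcode 0x89, defined in SBC-3), which atomically compares a block range against a verify buffer and writes new data only on a match, replacing whole-LUN reservations for lock handling. A minimal sketch of the 16-byte CDB layout in Python (the LBA and block count are arbitrary example values):

```python
import struct

def compare_and_write_cdb(lba: int, num_blocks: int) -> bytes:
    """Build a SCSI COMPARE AND WRITE CDB (opcode 0x89, per SBC-3)."""
    cdb = bytearray(16)
    cdb[0] = 0x89                        # COMPARE AND WRITE opcode
    struct.pack_into(">Q", cdb, 2, lba)  # bytes 2-9: 64-bit starting LBA
    cdb[13] = num_blocks & 0xFF          # byte 13: number of logical blocks
    return bytes(cdb)

# The initiator sends this CDB followed by the verify data and the new
# data for the block range; the target performs the compare and the
# write as one atomic operation.
cdb = compare_and_write_cdb(lba=2048, num_blocks=1)
print(cdb.hex())
```

        The array-side support has been there for ages; what's missing on the open source side is a file system (or LVM layer) that uses it for its locking.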

        Dug up an old blog post I wrote on the topic from September 2010 - http://www.techopsguys.com/2010/09/07/vsphere-vaai-only-in-enterprise/

        So perhaps some org (HPE is probably best positioned, given they sell a lot of SAN kit) could implement this feature on the open source side, since the hardware out there has supported this technique for such a long time already; it would be ideal if they could just leverage the existing support. Maybe they can't get all of VAAI, but this one part seems super important.

        I remember getting into arguments with my then manager in 2008. We were a very small VMware shop, just a few ESX 3.5 servers on standard licensing: no vCenter, no clusters, etc. He didn't want to pay the cost of VMware and wanted to use the free Xen included with our CentOS systems instead. I pushed back, saying Xen wasn't good enough IMO (I had no experience with Xen, just a "feeling"). I remember at one point he called me a "pussy" out on our open-plan office floor for not wanting to try Xen. I laughed it off and left the company not too long after. He directed my former co-workers to then deploy Xen. They tried, for a month, and gave up. The critical failure was that they could not get a 32-bit CentOS system running on the 64-bit Xen hypervisor; it would just lock up every time, and they couldn't find a fix. Our standard was 32-bit for some of our apps, as running them in 64-bit mode literally caused memory use to explode.

        Many years later I reached out to him just to say hi or something and he eventually apologized saying I was right, he was wrong, Xen(at the time anyway) was a piece of crap compared to VMware.

        But wow, that was literally 17 years ago. It's just shocking that Xen and others are still that far behind on basic things like storage. There's a reason the last version of vSphere that excited me was 4.1: I want a rock-solid foundation for my systems, and all the bells and whistles on top are far less important to me. vSphere was one of the most solid pieces of software I've ever used, with the caveat that I ran conservative configurations and stayed well behind the curve (running 4.1, 5.5, 6.5, and 6.7 at least a year past EOL before upgrading).

        1. Kurgan Silver badge

          Re: Like Proxmox but…

          I'm not a VMware expert at all, but I think the issue here is that every open source solution is based on KVM; they are all fancy UIs over KVM. And since KVM lacks a storage backend that allows the same storage to be shared between multiple hosts, no fancy UI can make it happen. If I understand it right, VMware uses a file system (VMFS) that can indeed be mounted by multiple hosts at the same time. That is its big advantage, and that is why it's easy in VMware to share a single storage array between multiple hosts.

          So now the KVM world is working on an LVM-over-iSCSI solution that should allow for multiple access (it's not "mounting", because you don't actually mount anything; it's not a file system). But it seems it's still not good enough. Anyway, once this thing works, every fancy UI will work too.

          Then it will even be possible to share a storage array between VMware and Proxmox, simply by creating two LUNs and assigning each one to a different hypervisor cluster. At that point you can actually move the machines over, then add LUNs to the LVM volume group to "eat up" the space left by the now-unused VMware LUNs.

      3. ptoscani

        Re: Like Proxmox but…

        Not exactly as you frame it. There are plenty of stories of companies that have already migrated and are extremely happy; just search on Reddit or similar. Proxmox Virtual Environment is rock solid, has been on the market for 20 years, and you can of course reuse your existing hardware, including a SAN. The open source approach is, as you described, more focused on open technologies like ZFS and Ceph, but you can run those on old hardware as well without limitations. Everything is included: HA, HCI, SDN, you name it, and all for a pittance given their pricing.

        Just try it out and you will be surprised; you can even reach out in their forum to ask questions for free. Their new datacenter manager is very ambitious as well for bigger deployments, and they even have their own backup solution, but you can just as well stick with Veeam or the like.

        Looks like the best option for a migration if you want to avoid a lock-in scenario in the future.

  3. brym

    Forgetfulness?

    I know it was mentioned here once, but I've read so many articles where people complain about VMware and the rising costs, blah blah blah (we all know the story), and I can't help noticing a trending forgetfulness in the suggestions of other potential solutions: almost everyone seems to be sleeping on KVM.

    KVM and VMM (Virtual Machine Manager) make a potent and capable combination, and are entirely free. It kind of blows my mind that more people either don't appear to know about it or don't appear to use it.
