VMware teases replacement for so-insecure-it-was-retired P2V migration tool

VMware has quietly announced a beta of vCenter Converter – a tool it withdrew earlier this year over security concerns. vCenter Converter converts physical servers into virtual machines and can automate the process so that users can convert multiple physical machines. The tool also allows hot cloning of source …

  1. Missing Semicolon Silver badge

    Used P2V once

    With ESXi 6, to virtualise an old Windows Server that nobody had time to rebuild.

    Worked first try, and the server ran as a VM for several years afterwards.

  2. botfap

    I'm surprised they are bothering with this...

    ...VMware's recent moves all seem to have been to antagonise customers, not help them. Maybe the Broadcom acquisition will be a good thing for VMware customers? Only joking.

    P2V is a very niche market nowadays, almost exclusively used to virtualise legacy systems that companies no longer have the staff or experience to rebuild. It's no good for VDI and virtualised dev workstations because the resulting image isn't compatible with the corporate base images, so huge amounts of storage are wasted. If you really need P2V then you can still use the third-party KVM and Xen P2V tools to generate images and import them into vCenter. It's not optimal, but VMware's own P2V never was either; it almost always required some cleanup on complex systems.
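
    For illustration, a rough sketch of what that conversion step can look like (a minimal sketch only: it assumes qemu-img is installed and that you already have a raw dd image of the source disk; the paths and names are hypothetical, and a real P2V still needs driver/boot cleanup afterwards):

        # Minimal sketch: turn a raw disk image captured from a physical box
        # into a VMDK that can be attached to a new VM. qemu-img and the
        # paths here are assumptions, not part of the workflow described above.
        import subprocess

        def raw_to_vmdk(raw_path: str, vmdk_path: str) -> None:
            # qemu-img convert -f raw -O vmdk <input> <output>
            subprocess.run(
                ["qemu-img", "convert", "-f", "raw", "-O", "vmdk", raw_path, vmdk_path],
                check=True,
            )

        if __name__ == "__main__":
            raw_to_vmdk("/images/legacy-server.img", "/images/legacy-server.vmdk")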

    VMware has always been a prickly, customer-hostile company to deal with (with the exception of the first few years, before the first corporate sell-off to EMC in 2004). We were VMware customers for the better part of 15 years, starting with GSX Server in 2001 and moving to ESX shortly afterwards. Every release, a core feature seemed to vanish only to be replaced with an additionally charged optional extra. The final straw for us was blocking the third-party Veeam backup software from working on the entry-level edition, because VMware wanted a slice of their income and to sell its own backup options.

    It's also way too expensive now. To get modest functionality you need the VMware vSphere Enterprise Plus subscription; the Essentials subscription is only of use for a single physical server setup (two or three at a push), with no hot migration or other basic functionality included. For a small dev and support outfit like ours with about 30 physical servers, you are looking at about £120K per year (30 x £4,200-ish) just to lease the virtualisation layer. That's just not good value for money, especially when the support is so poor. That's two decent infrastructure guys or gals who could do a great job of migrating you to open-source, free KVM-based solutions, configured exactly as your company needs. So that's what we did, and despite a couple of hurdles in the early days, the end result is much more efficient and easier to manage than VMware's offerings. We also have infrastructure people with spare time to help the devs with big deployments. Invest in your own company, peeps, not other people's.

    1. Anonymous Coward

      Re: I'm surprised they are bothering with this...

      This is a good take. Sounds like your company has its act together.

      It's interesting (read: appalling) how some companies will fork over buckets of money to some big corporation for their so-called enterprise product, but won't spend (invest) that money in their own people and expertise to do the job properly.

      1. sten2012

        Re: I'm surprised they are bothering with this...

        It's a good take and you're right, but there is absolutely a consideration that you could pretty much drag and drop any VMware-experienced engineer into the team if someone leaves, whereas a specific in-house solution, KVM or not, means fewer people who can fill the role and more time for newcomers to get up to speed.

    2. Nate Amsden

      Re: I'm surprised they are bothering with this...

      Just curious, what are the specs of those 30 physical servers? 30 physical servers could literally host thousands of VMs, so the cost really isn't that bad at all for Enterprise+, and at least at the moment the cost of Enterprise+ hasn't changed in at least the past 10 years (~$5k/socket, not even taking inflation into account). Although of course now they are limiting the license to 32 cores, so if you have a 64-core CPU you need 2 licenses. But still, that's a damn good value I think.
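
      As a back-of-the-envelope check (purely illustrative, using the ~$5k/socket figure and the 32-core cap mentioned above, not actual list pricing):

          # Rough licence arithmetic: one licence per socket per 32 cores.
          # The $5k/socket figure is the ballpark quoted above, not a real quote.
          import math

          def licences_needed(sockets: int, cores_per_socket: int, core_cap: int = 32) -> int:
              return sockets * math.ceil(cores_per_socket / core_cap)

          def licence_cost(sockets: int, cores_per_socket: int, per_licence: float = 5000.0) -> float:
              return licences_needed(sockets, cores_per_socket) * per_licence

          # e.g. a dual-socket box with 64 cores per socket: 2 * ceil(64/32) = 4 licences, roughly $20k
          print(licences_needed(2, 64), licence_cost(2, 64))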

      Now of course if you are running small systems, then there is less value. But if you're running at least 30+ cores and 300GB+ memory (I have had this config for going on 8 years now, originally with dual Opteron 6176 (24 cores) -> 6276 (32 cores); newer systems will be 64 cores and 768GB memory at least), it's a good value.

      Now if you add in the other shit beyond the basic hypervisor, that's where I lose interest: vRealize, NSX, and the ever massively increasing list of add-ons (checked VMware's site and was overwhelmed by the number of products they have that I don't care about) that I have no interest in (and so don't know the cost of). I do remember at one point pricing vRealize because my senior director was interested (I was not); for our servers at the time it was going to be $250k I think (don't recall the # of servers at that point in time, it was less than 30 though). I said I'd rather buy another half dozen VMware hosts (at ~$30k+/pop with licensing) than get that product. His main want was something that could predict future capacity needs, and he heard that product could do that. I don't know if it can/could, but I wouldn't trust it or anything else to predict that regardless, given our custom application stack; in my experience such stacks can change capacity requirements in an instant with a new version of code, so you really need solid performance testing, not some magical tool that extrapolates past performance to predict the future, because the app is ever changing.

      LogicMonitor is by far my favorite tool for vSphere monitoring. I even have it able to report real-time vCPU:pCPU ratios and CPU MHz for everything (otherwise not available out of the box), plus tons of cool custom dynamic dashboards (and it's super easy to use).
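
      (For what it's worth, the ratio itself is just allocated vCPUs over physical cores; the value of the tooling is in collecting the inputs from the vSphere API. A trivial sketch of the arithmetic only, with made-up numbers:)

          # vCPU:pCPU oversubscription ratio for a single host.
          # The inputs would come from whatever inventory/monitoring source you use;
          # the numbers below are made up for illustration.
          def vcpu_pcpu_ratio(vcpus_assigned: int, physical_cores: int) -> float:
              if physical_cores <= 0:
                  raise ValueError("host must report at least one physical core")
              return vcpus_assigned / physical_cores

          # e.g. 96 vCPUs spread across a 32-core host -> 3.0 (i.e. 3:1)
          print(vcpu_pcpu_ratio(96, 32))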

      Obviously the hypervisor market has matured a lot since, but your comment reminds me of a situation I was in at a company back in 2008. We were a very basic VMware customer: no vMotion, no vCenter, just the most basic licensing, back when you had to buy licenses in pairs because VMware didn't support single-CPU systems (and their licensing didn't really take multi-core into account). ANYWAY, my director at the time hated paying for VMware (we had licenses for maybe half a dozen 2-socket systems; it really wasn't much). We were a CentOS shop mostly, and some Fedora as well at the time. He wanted to use Xen because it was free. He hated the VMware tax. I disagreed, and we got into this mini argument on the floor (open floor plan office). I'll never forget this because it was just so weird. He said something along the lines of he didn't think I wanted to run Xen because I was a pussy (used that word exactly). I didn't know how to respond (and don't recall how I did).

      Anyway, I left the company a few months later I think (it was on its way to going out of business anyway). Right after I left he directed the rest of my team to ditch VMware and get on Xen. So they tried. After a month of trying they gave up and went back to VMware. They had an issue with Xen (on CentOS) and running 32-bit CentOS VMs: it didn't work. Don't recall the problem they had, but no matter what they tried it didn't work. We leveraged some 32-bit systems at the time just for lower memory usage. I suppose they could have ditched all 32-bit and gone everything 64-bit, but for whatever reason they didn't, and instead dropped their Xen project and went back to VMware.

      I didn't like that director for a long time after; we got into another big argument over Oracle latch contention (where I was proven right again). But we made up over email several years later. He apologized to me, and we are friends now (though not really in close contact).

      But the hypervisor is core, it's the most important bit: it has to be good quality, stable, fast, etc. I think VMware still owns that pretty well. Granted, if you stay on the bleeding edge (vSphere 7 was a shitshow I heard) you may have issues; I don't stay on the bleeding edge (still ESXi 6.5 in production baby, re-installing to 7.0U3 soon though, not going to risk an "upgrade"). Also, support is shit, that is true, though for me it doesn't matter too much: my configuration is quite conservative, and as a result I have hardly ever needed support over the past decade. Really blows my mind how well it works.

      Been using VMware since 1999, when it was a Linux-only desktop product.

      1. botfap

        Re: I'm surprised they are bothering with this...

        In simple core terms we span from 16 to 80 cores per node. We have 39 "servers" in total, 8 of them hot standby (one for each type of deployed hardware). Our oldest hardware is dual-socket, Haswell-era Xeons (E5-2675 v3 iirc), 2 x 16 cores. We have a couple of slightly newer-gen, single-socket, 20-core Xeon 6138Ps. Our latest and greatest are single-socket EPYC 7713Ps: 64 cores, very good value and performance. We also have some prior-gen 32-core EPYCs, which were our first introduction to AMD servers. Our primary build farm is made up of 12 x Threadripper 3990X 64-core, self-built boxes. Not server CPUs of course, but they provide better performance and lower cost than the equivalent EPYCs thanks to being commodity hardware with higher clock speeds (though it's a product line that seems to have been cancelled now). Our ARM build servers are SolidRun HoneyComb LX2 boxes, 2 x separate 16-core servers in a single 1U chassis. We are currently evaluating a Gigabyte Ampere Altra Q80-30 80-core ARM server, which is a huge step up in ARM performance, but I'm not convinced about the value yet.

        As you can see, it's a very mixed, non-enterprise topology hardware-wise, and we are far from enterprise level budget-wise! We try to reuse wherever possible. Our infrastructure is split into 5 main regions: internal admin, build farm (x86 + ARM), client build services, CI testing and storage. Each of these has different performance characteristic requirements.

        - For example, our entire internal admin systems (sales, support, finance and admin) and their 9 VMs can run comfortably on a single EPYC 7713P 64-core box. We spread them over 2 with automatic failover, but they can all run on a single box should there be a hardware failure

        - Our internal build systems need as many cores as possible at as high a speed as possible but don't use a lot of RAM, comparatively speaking, so Threadripper 3990Xs fit the bill perfectly here. On the ARM side the LX2 was the only commercially available option at the time

        - Client build services are a mixture of 64- and 32-core EPYCs split over 2 generations. CPU performance isn't as critical as for our internal build systems (I pay my own staff, not clients!) and the RAM requirements are much higher than you can pack into a 3XXX Threadripper. We basically over-provision the CPU cores because most of these jobs are submitted for overnight building, ready for the next day

        - CI is a hodgepodge of repurposed, old build servers from Intel and AMD of various core counts. Performance isn't critical here and at a push we can steal some cycles from the build farm

        - Storage is done on the old Haswell-era Xeons

        I have no issue with VMware from a technical point of view. It's definitely the most noob-friendly ecosystem and it's solid and reliable for the most part. It's the constant redefining of the product into multiple chargeable SKUs, and the corresponding invoices I had to pay, that pissed me off. If I'm the IT director at a bank then I'm playing it safe and I would go VMware: it's off the shelf, with an abundance of certified bods, but has a price tag to match. I don't care about the price in that situation. As a small-to-medium-sized dev house I need better value, much better value. I also need stability; I don't have the resources to constantly throw at the upgrade cycle.

  3. Mr.Nobody

    I agree with all of your points. We are a long-time VMware customer, but only pay for support to get the ability to upgrade, seeing as support is almost worthless. I say almost because I had issues moving to VDS this year and opened a ticket (hadn't opened one in three years as it was always a waste of time) and I got someone with a clue. He didn't fix my issue, but he pointed me in the right direction.

    I am waiting for the day the PHBs say they won't pay for support anymore, and frankly I don't blame them. We never use the new features. I looked at the vSphere 8 features list, and quickly concluded we would gain nothing by upgrading.

    What platform did you migrate to?

    1. botfap

      We initially went to Citrix XenServer, as I had previous good experience with it in small, single-server setups and we were already using Citrix VDI for our Windows sales, finance and admin desktops. We used a hybrid of the commercial version on internal business systems and systems hosting customer data, and the free, open-source edition for dev servers and the build farm. It wasn't terrible, but we had some reliability and compatibility problems with the commercial edition: things like hot migrate only working between servers with identical CPUs, missing support for some 10Gb Ethernet adaptors and quite a few "undocumented behaviours", as Citrix liked to call them. We ended up having to write a lot of glue code and modify a lot of Citrix core system behaviour (which technically invalidated our support contract) to pull everything together. It was just about usable, but it required a lot of maintenance and firefighting, and Citrix support made VMware look amazing. When v7 arrived in 2016 it broke a lot of our glue code and fixes, and Citrix changed the VDI licensing, making it much more expensive. Given how much support we were doing internally and how little we were getting from them, I decided to hold off on migration and start an internal project to look at a full move to open source, even on the admin, sales and finance Windows desktops.

      So 6 months and lots of brainstorming, testing, training and recruiting later, we had a plan: move to KVM on Ubuntu LTS, with Ubuntu LTS VDI desktops to match the developer ecosystem, and deploy the 3 Windows finance apps we couldn't find quality alternatives for as RemoteApps using Windows Terminal Server. We employed a new bod who was a KVM specialist and trained up two of our best support peeps to be his cover. Smart move, as he moved back to Arizona after 5 months, unable to cope with the UK winter! The actual migration was pretty smooth and trouble-free, and the whole thing was done over a weekend.

      We ended up with KVM/libvirt using virt-manager as a GUI, which is simple but capable for our purposes. We also wrote a web-based monitoring and alert system and a chunk of glue code to get things to behave exactly as we wanted. There are now open-source web-based monitoring and management tools for KVM, but ours fits us perfectly so we will stick with it until we really have to change it. More recently we moved away from Ubuntu on the desktop to Linux Mint, for both developer and admin systems, due to continually increasing problems in the way Ubuntu packages its default desktop, but that was a pretty straightforward migration.
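
      (For anyone curious what that sort of glue looks like: a minimal sketch using the libvirt Python bindings, which just polls a host and lists each domain's state, vCPUs and memory. It assumes the libvirt-python package and access to qemu:///system; real monitoring tooling obviously does a lot more than this.)

          # Minimal polling sketch: list every libvirt domain on a host and report
          # its state, vCPU count and memory. Assumes the libvirt-python bindings
          # and a local qemu:///system connection; purely illustrative.
          import libvirt

          STATE_NAMES = {
              libvirt.VIR_DOMAIN_RUNNING: "running",
              libvirt.VIR_DOMAIN_PAUSED: "paused",
              libvirt.VIR_DOMAIN_SHUTOFF: "shut off",
          }

          def poll_host(uri: str = "qemu:///system"):
              conn = libvirt.open(uri)
              try:
                  for dom in conn.listAllDomains():
                      state, _reason = dom.state()
                      # dom.info() -> [state, maxMem KiB, mem KiB, nrVirtCpu, cpuTime ns]
                      _, _, mem_kib, vcpus, _ = dom.info()
                      yield (dom.name(), STATE_NAMES.get(state, str(state)), vcpus, mem_kib // 1024)
              finally:
                  conn.close()

          if __name__ == "__main__":
              for name, state, vcpus, mem_mib in poll_host():
                  print(f"{name}: {state}, {vcpus} vCPU, {mem_mib} MiB")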

      Very, very happy with what we have now: it's rock-solid stable, easy to maintain, does exactly what we need it to, is very flexible and reduced our third-party software licensing costs from about £245K per year down to about £19K! It was a bit of a rigmarole to get here, but if we had to deploy it all again from scratch in a disaster recovery situation we could do it in under 24 hours, including restoring backups.

      1. coredump
        Pint

        Well done.

        Just curious: did you consider other open-source solutions at all, e.g. Proxmox or oVirt?

        1. botfap

          I never looked at oVirt; it didn't come up on our radar at the time, and I'm not sure it was in any usable state then. [Java rant redacted]. We did briefly look at Proxmox but ruled it out quickly due to it being a bit of a mess at the time and a lack of support on ARM hardware. We were already in the process of deploying some test ARM kit and wanted to make sure that if we deployed production ARM64 servers in the future we didn't need to change our infrastructure to deal with it. Proxmox is basically KVM (with LXC in later editions) anyway, so there were no real advantages there, only negatives. Proxmox still doesn't have ARM support, and we now deploy ARM64-based servers both internally and for clients.

          We also looked at pure open-source Xen, which was a little ahead of KVM's feature set at the time. But with KVM being accepted into the Linux kernel as an official component, the writing was on the wall as to which open-source virt layer was going to get the lion's share of quality developer time in the future, so I bet on KVM, and it seems to have been the correct play so far.
