Nutanix is lock-in simply because it's a proprietary, stove-piped solution. Your data sits in a silo, in a format inaccessible to any tools and protocols other than those the vendor allows you. They certainly won't help you get it out when - as you say is so easy - you migrate to just another "hyper-converged" vendor. And they certainly won't open the data path wide enough to make it easy for a competitor to enable a smooth migration. Your point has yet to be proven - I have not seen whitepapers or success stories around it, and I certainly haven't heard anyone share such a story at a conference. It's still way too early to tell.
Despite this, you are eventually locked in because the vendor dictates the use case and the path to grow and expand the solution. Want to use Nutanix storage for something other than virtualization? You want to repurpose all that storage you bought for something like an enterprise dropbox? You want to use it to store massive log data? You want to use it for analytics using HDFS? You want to use it to run containers? Or build a storage-as-a-service platform? Good luck with that.
What you describe as "no lock-in" is only true if you look at just a single use case - virtualization with merely two hypervisors, Hyper-V and ESXi. As far as I can see, Nutanix does not have a credible KVM story. They offer no OS certification for KVM. No one writes drivers for their KVM platform. They don't contribute to the Linux kernel either. It may be interesting as a test bed for a home-brew solution, but not for a production environment.
Standards don't prevent lock-in either. There are so many examples where this is visible that it's hard to know where to start. But take OVF as an example: a true standard meant to make moving VMs between hypervisors and deployments easy. Can you export your VM from ESXi in OVF and seamlessly import it into Hyper-V or a KVM-based stack? No. There are vendor extensions that prevent this. Have fun editing that XML stuff. Good luck relying on third-party migration tools that cover only 60% of all possible configuration permutations.
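To give a flavor of what "editing that XML stuff" means in practice, here is a minimal sketch that strips vendor-namespaced elements and attributes from an OVF descriptor. It assumes VMware's `vmw:` extension namespace (`http://www.vmware.com/schema/ovf`) is what blocks the import - an assumption for illustration; a real migration also involves disk-format conversion and rewriting the virtual hardware section, which no amount of XML surgery covers.

```python
import xml.etree.ElementTree as ET

# VMware's OVF extension namespace - a common blocker for cross-hypervisor imports.
# (Illustrative assumption: other vendors use their own namespaces.)
VMW_NS = "{http://www.vmware.com/schema/ovf}"

def strip_vendor_extensions(ovf_xml: str) -> str:
    """Remove vmw:-namespaced elements and attributes from an OVF descriptor."""
    root = ET.fromstring(ovf_xml)
    for elem in list(root.iter()):        # materialize first, then mutate safely
        for child in list(elem):          # drop vendor-extension elements
            if child.tag.startswith(VMW_NS):
                elem.remove(child)
        for attr in list(elem.attrib):    # drop vendor-extension attributes
            if attr.startswith(VMW_NS):
                del elem.attrib[attr]
    return ET.tostring(root, encoding="unicode")
```

Even this toy version shows the problem: the "standard" part of the descriptor survives, but everything the vendor hid in its own namespace - and anything the guest actually depended on - is simply lost.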
Also true: open source by itself does not prevent lock-in. The more you adopt it and the more you change the ecosystem, the more your processes, tools, and skills gravitate toward this model. That's also a sort of lock-in, right? But at least it's an ecosystem that fosters open architectures, and by definition everything needed to build an interface that does not exist yet is available to everyone.
Your example of seamlessly migrating VMs between hypervisors only demonstrates the degree of obsolescence of the architecture you are trying to improve.
This challenge has already been addressed differently. New workloads don't rely on vMotion - certainly not between different hypervisors - they rely on scale-out. You can see this with containers and OpenStack. Neither approach offers this feature (at least not initially, and certainly not as a headline feature), yet both see tremendous uptake in adoption and maturity. Your proprietary appliance, however, does not cope well with this because it's geared towards a legacy architecture. That architecture won't go away tomorrow - but the industry is moving away from it at an accelerated pace.
Here is a real example of seamless migration that makes business sense and saves actual money: an open-source storage solution can be migrated even off-premise, onto something Amazon EBS-based. Because it runs on top of a standard x86 OS that has been around forever, it relies on the interface the OS provides - which is far more ubiquitous than your storage hypervisor tied to ESXi.
It can also be scaled out onto provider platforms, enabling hybrid deployments or quickly providing burst capacity. Try doing this with your hyper-converged appliance. The convergence ends at the datacenter row.
Your understanding of open-source innovation is also incomplete. If you need a feature that does not exist yet, you are free to contribute it. Good luck begging your proprietary vendor.
If your contribution does not get accepted, however, then either your implementation does not live up to the project's quality standards, or you are trying to implement something for which a solution already exists and you just don't see it yet. There is an element of meritocracy here that ensures high-quality implementations and common sense.
It certainly has nothing to do with money (at least not primarily - although money is also what pays developers), and there are many examples where small, even tiny, companies have significantly driven open-source projects - in fact, that's the norm.
I guess what I am saying is: you are basically right. Nutanix, SimpliVity, and Maxta are certainly doing their job - in their space (on-premise, standard x86 virtualization on ESXi, with manual administration and a static IT environment with no self-service capability). This space, however, is losing relevance. And your belief that relying on it makes you more independent and flexible is only true as long as you don't leave that space.
In reality, however, you keep spending money on this - and Nutanix and the like are certainly not cheap - thereby cementing your legacy-architecture lock-in even further, when you should be spending that money on the resources and skills to adopt truly open systems and architectures. It's evolve or die for a dinosaur.
PayPal recently completed their migration to OpenStack - they are completely VMware-free. They rely entirely on software-defined storage and software-defined networking. No need for costly Enterprise Plus licenses, crude storage-appliance solutions, or SDN appliances. They have freed up enough resources doing this that they can now look at the next level: workload mobility. Containers enable that, not cross-hypervisor vMotion. They enable even more - truly agile IT operations and fast-paced feature development with much shorter turnaround.
All this was possible because of innovations built on top of an OS. I don't think it would have been enabled by tools and products from vendors that don't have an OS.