
Back in the last century
We had infrastructure made up of small numbers of large systems, each controlling its own resources, be it memory, CPU, storage, or networking, with software components optimizing resource use. It ran on hardware with enhanced RAS capabilities, and it was quite expensive. Call this Stage 1.
Since then, we've been through:
Stage 2. Multiple smaller systems, each controlling its own resources, but cheaper.
Stage 3. Rolling the storage for these multiple systems into centralized storage solutions (a SAN) to make storage more flexible.
Stage 4. De-duplicating the storage systems, so that multiple copies of the OS files (and really only those files) were not wastefully stored.
Stage 5. Virtualising all these multiple systems onto larger servers to save money and reduce wasted CPU and memory through resource sharing, and putting them on expensive systems with enhanced RAS.
Stage 6. Replacing the SAN with software-defined storage systems.
Stage 7. Moving your communication infrastructure into the virtualised environment.
Stage 8. Virtualising the software-defined storage systems into the enhanced-RAS systems.
So where are we?
We now have infrastructure made up of small numbers of large systems, each controlling its own resources, be it memory, CPU, storage, or networking, with software components optimizing resource use. It runs on hardware with enhanced RAS capabilities, and it is quite expensive.
All we appear to have done is replace the OS with a hypervisor, moving everything one rung up the ladder, and we now have the traditional OS fulfilling the same function that application runtime environments once did.
The next step will be to replace the traditional OS with a minimal runtime (hmmm, is that what containerization is all about?), and we will have reinvented the Mainframe!
I've added the joke icon to try to deflect all of those of you who will try to point out the detailed differences between mainframes and hyperconverged systems.