Yes 3 more releases
They are called maintenance releases...
XIV is indeed DEAD
IBM’s XIV storage array has three more product releases coming and is not going away, although a fourth-generation version is not being developed, the firm has claimed. Eric Herzog, CMO and VP of marketing and management for IBM storage and software-defined infrastructure, contacted us about our XIV-going-away story and …
Are we being ironic or just showing the benefits of our classical education?
"XIV is dead! Long live Spectrum Accelerate* and A9000/A9000R"
I prefer Hans Gruber's line though....
*Spectrum Accelerate runs the XIV code on x86 servers with disks (including spinning disk), so you can continue to roll your own disk-based XIV from your preferred x86 vendor's kit if you have an irrational fear of flash storage and don't want to buy the next-gen (and renamed) XIV appliance…
"This has been coming for so long now, XIV's architecture was already old when it was supposedly new."
Yeah, disagree. No one was doing the pseudo-clustered (triple mirror) type storage architecture. People were using that EMC RAID 5, 6 stuff. I guess if you worked at Google it was probably outdated when it arrived, but otherwise it was like nothing you had seen before.
They never did triple mirroring, at least not of customer data; they were running a scale-out cluster with distributed RAID 10 mirroring. The main difference versus something more SME-oriented like LeftHand Networks was that this was delivered as a pre-engineered appliance and pushed at high-end FC customers.
Remove the rack, UPS and switching and you were left with a bunch of servers running a distributed volume manager. Thinking about it, XIV is probably one of the shortest-lived yet successful enterprise arrays on the market.
IMHO Moshe's reputation and IBM's subsequent marketing and install base made it seem newfangled, when in reality it was a one-dimensional system from day one.
I don't think anything you wrote was a secret. I used to sell XIV at IBM. We very explicitly stated exactly what you stated: it is a distributed, grid-based volume manager... with 15 Intel servers and some UPSs tied together with InfiniBand. The hardware is pure commodity. It was a great story. Much better than the previous model, where there were a bunch of unnecessary custom ASICs which added cost and complexity and slowed down the hardware refresh cycle. XIV could refresh as Intel refreshed. 15 controllers are probably going to beat two controllers in nearly every case, even with some ASICs. It was one-dimensional in the sense that it always managed storage the same way: reliably, with fairly high performance, and unparalleled ease of management. Shame IBM is apparently doing away with it.
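The grid layout both comments describe (every slice of a volume mirrored on two different nodes, spread pseudo-randomly across the whole cluster) can be sketched in a few lines of Python. This is a toy model under assumed parameters: a SHA-256-based placement function, 15 nodes, and a made-up `place_partition` helper. It is not IBM's actual distribution algorithm.

```python
import hashlib
from collections import Counter

def place_partition(volume_id: str, partition_no: int, num_nodes: int = 15):
    """Pick a primary and a secondary node for one partition.

    Illustrative only: the hashing scheme is an assumption,
    not IBM's real placement logic.
    """
    digest = hashlib.sha256(f"{volume_id}:{partition_no}".encode()).digest()
    primary = digest[0] % num_nodes
    # The mirror copy must land on a *different* node, so losing
    # one node never loses both copies of a partition.
    secondary = (primary + 1 + digest[1] % (num_nodes - 1)) % num_nodes
    return primary, secondary

# Spreading many partitions shows the even, all-to-all distribution
# that makes rebuilds after a node loss a cluster-wide effort:
load = Counter()
for p in range(10_000):
    pri, sec = place_partition("vol1", p)
    load[pri] += 1
    load[sec] += 1
```

After the loop, every one of the 15 nodes holds roughly 2 × 10,000 / 15 copies, which is why a single node failure leaves exactly one surviving copy of each affected partition, scattered across all the other nodes.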
I think you maybe imbibed too much of the XIV Kool-Aid. You could have had 2 controllers or a hundred and it wouldn't have gone any faster, because the bottleneck was the 180 nearline drives the system was limited to. Ditto for adding ASICs.
The story was commodity hardware pricing, fast refresh and innovation. The reality proved to be more like commodity hardware at enterprise prices, and a refresh rate no faster than anyone else's, including those with custom hardware.
Innovation-wise, XIV got InfiniBand to replace Ethernet, doubled drive capacities with each release (giving a correspondingly lower cache-per-TB ratio), and added an SSD read cache just as the move to all-flash really got underway. None of which seemed particularly innovative or really addressed the true bottleneck.
It did have a nice looking GUI though.
The plus side is you didn't have to deal with RAID, because there was only one variant available, so it came preconfigured; the flip side being that the capacity overhead was huge, as were the environmental requirements versus anything even semi-modern.
Hence why I said it was old before its time and a one-dimensional system, the reality being that all of the above were simply incremental improvements to a very rigid architecture. But it seems IBM has now managed to take the better bits and make use of them in something a little more flexible and hopefully future-proof.
Here is why the successor of the XIV doesn't bear the XIV name... Disclaimer: IBMer here.
In case no one has noticed, the "A" in A9000 stands for Spectrum "A"ccelerate, which is the software running that cluster. It's the same software family that runs its predecessor, the XIV Gen3 cluster.
Similarly, the "V" in V9000/7000/5000 stands for Spectrum "V"irtualize, the stretched-cluster storage software, and the two S's in Elastic Storage Server (ESS) also stand for Spectrum Scale, the third in the cluster software trio.
For hyperconverged deployments on storage-rich x86 servers we use bare Spectrum Accelerate (for ESXi) or Spectrum Scale (any native platform, not limited to hypervisors). VASA & OpenStack are supported. For higher stability & performance goals, we recommend the appliances (FC/IB/CAPI rather than Ethernet/IP). A single name for both variants would do.
>>> BUT <<<
The original XIV data distribution scheme was designed for large nearline disks plus SSD caches. That doesn't make sense in the all-flash era, so we changed the data distribution layout - and with it, the name.
Plus we noticed two roadblocks in all-Flash x86 cluster storage:
First, off-the-shelf x86 nodes are not up to the task of driving dispersed RAID for large packs of dense SSDs at desirable latencies. It's like putting race horses in carriage harnesses. NVMe fabrics will resolve some of that, in the meantime we use InfiniBand.
Second, even the best SSDs eventually wear out under enterprise workloads, and we want to avoid too many component failures at once. We also preferred a design *without* opaque third-party SSD firmware mimicking disk drives, which brings serious limitations in lifetime, garbage collection control, health binning control, etc.
The A9000 therefore leverages FlashSystem's Variable Stripe RAID, developed at Texas Memory Systems. Think of "Variable Stripe" as "self-healing", a feature known from the XIV - but with RAID-5 efficiency. The overall data distribution scheme runs on a 2:1 ratio of x86 nodes to flash drawers, or even 3:1 when it's just one pod (for lack of workload entanglement, among other things).
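As a rough illustration of the "self-healing at RAID-5 efficiency" idea: when one element of a parity stripe dies, its contents are rebuilt by XOR and the stripe is rewritten one element narrower, instead of taking a whole drive offline. A minimal sketch, assuming single-parity XOR stripes over toy 16-byte chunks; the real Variable Stripe RAID works on flash planes and dies inside FlashSystem, not Python lists.

```python
import secrets

def xor(blocks):
    """XOR parity over equal-length byte chunks."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

# A RAID-5-like stripe: 7 data elements plus 1 parity element.
data = [secrets.token_bytes(16) for _ in range(7)]
stripe = data + [xor(data)]

# A flash die holding element 3 fails:
failed = 3
survivors = stripe[:failed] + stripe[failed + 1:]

# Its contents are recoverable as the XOR of all survivors...
recovered = xor(survivors)
assert recovered == data[failed]

# ...after which the stripe is rewritten one element narrower
# (6 data + 1 parity) over the remaining good dies. The recovered
# bytes are relocated into spare space elsewhere; usable capacity
# shrinks slightly, but full parity protection is restored without
# failing the whole drive.
survivor_data = [d for i, d in enumerate(data) if i != failed]
narrow_stripe = survivor_data + [xor(survivor_data)]
```

The "variable" part is that the stripe width is allowed to shrink per stripe as components die, which is why the array keeps running through flash wear-out rather than forcing an immediate drive replacement.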
With this design, the A9000 runs global deduplication PLUS real-time compression at latencies suitable for SAP R/3 and Oracle databases. Which compress nicely at [up to] 5:1, by the way. Anyone else?
[up to] is the legal disclaimer, your mileage may vary.
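For the curious, the dedup-plus-compression pipeline described above can be mocked up in a few lines: fingerprint each incoming block, store only unique content, and compress what is actually stored. A toy sketch assuming SHA-256 fingerprints and zlib; it is not IBM's actual deduplication or real-time compression engine, and the 4KB page is made up.

```python
import hashlib
import zlib

store = {}   # fingerprint -> compressed block (the global dedup map)
volume = []  # logical volume: one fingerprint per logical block

def write_block(block: bytes) -> None:
    fp = hashlib.sha256(block).digest()  # content fingerprint
    if fp not in store:                  # only new content costs space
        store[fp] = zlib.compress(block)
    volume.append(fp)

def read_block(lba: int) -> bytes:
    return zlib.decompress(store[volume[lba]])

# Database pages dedupe and compress well: lots of repeated,
# structured content padded with zeros.
page = b"ORDERS|2016-05-12|widget|" + b"\x00" * 4071
for _ in range(100):
    write_block(page)       # 100 logical writes...

assert len(store) == 1      # ...one physical, compressed copy
assert read_block(42) == page
logical = 100 * len(page)
physical = sum(len(v) for v in store.values())
```

Real arrays do this inline at microsecond latencies, which is the hard part the comment is pointing at; the data-path logic itself is this simple.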
Biting the hand that feeds IT © 1998–2021