The Incredible 4PB Hulk: EMC monsterises VMAX

EMC has gained top datacentre dog bragging rights with a coming 4 petabyte VMAX 40K storage array, storing 60 per cent more than HDS's biggest VSP array and 74 per cent more than IBM's DS8000. This is possibly one of the last massive primary data arrays before flash takes over the primary data storage universe*. EMC says its …


This topic is closed for new posts.
  1. Nate Amsden

    XIV really?

    XIV really doesn't compete with anyone at the moment, does it? I mean, it taps out at under 200 disks. Unless you're talking about some sort of XIV+SVC combination.

    I've drilled 3PAR over the past 3 years as to why they don't support more usable capacity. Their response is that there isn't market demand for it: the average 3PAR system in the field has about 400TB of raw (or usable, I forget; I assume raw) capacity on it. I guess people don't load up their arrays. When I got my T400 back in 2008 I immediately had the first two nodes maxed out for capacity (150TB raw, but not maxed out for I/O), so I was always curious why they didn't push capacity further. I guess if the customers aren't pushing for it then there isn't a need to develop it.

    As to EMC: don't you find it curious that they haven't gone beyond 8 engines at this point? I mean, going back to when they introduced it, they were saying how they were going to have many more engines. But for some reason, even on this new big 40K, they stuck with 8? I'm certain, especially when loaded up with SSD, that a VMAX would need far more than 8 engines to really drive SSD performance, since you're easily talking about tens of millions of IOPS potentially.
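
    The "far more than 8 engines" claim can be sketched with back-of-envelope numbers. All figures below are assumed for illustration, not EMC specs:

```python
# Back-of-envelope estimate, all figures assumed for illustration
# (not EMC specs): aggregate SSD IOPS vs. what a fixed engine count can drive.
ssd_count = 2000           # hypothetical SSD population in a big array
iops_per_ssd = 20_000      # rough random-read IOPS for a 2012-era SSD
iops_per_engine = 500_000  # assumed IOPS a single engine can drive

backend_iops = ssd_count * iops_per_ssd               # 40 million IOPS
engines_needed = -(-backend_iops // iops_per_engine)  # ceiling division
print(f"{backend_iops:,} IOPS -> {engines_needed} engines")
```

    Under those assumptions the back end can serve far more random I/O than 8 engines could push, which is the commenter's point.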

    3PAR has stated they don't intend to go beyond 8 controllers for the foreseeable future, instead on the V-class they doubled up on the ASICs to about double node performance and massively increased memory capacity.

    Certainly feels like EMC went back to the drawing board and reconsidered their original plans for a massively scaled out VMAX.

    HDS's SPC-1 performance on the VSP was pretty disappointing (though SPC-2 was very impressive). I wonder if this new 40K VMAX will make EMC confident enough to post numbers on that platform; I'm not holding my breath, though.

    The question remains: do those 8 engines have enough processing capacity to drive the full I/O of the back-end spindles? For VSP the answer seems to be no, at least for random I/O. It wouldn't surprise me at all if VMAX were in the same boat.

    Wake me up when EMC gets distributed sub disk raid, with the massive nearline drives coming in the future, traditional whole-disk RAID is going to lose out pretty fast with long rebuild times and higher risk of double/triple/more disk failures.
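
    The rebuild-time argument above is easy to quantify. A rough sketch, with throughput and spindle counts assumed purely for illustration:

```python
# Rough rebuild-time comparison, figures assumed for illustration:
# whole-disk RAID rebuilds onto one spare spindle, while distributed
# sub-disk RAID rebuilds onto many spindles in parallel.
drive_gb = 3000            # a 3TB nearline drive
rebuild_mb_s = 50          # assumed sustained rebuild rate per spindle

whole_disk_hours = drive_gb * 1000 / rebuild_mb_s / 3600   # ~16.7 hours
parallel_spindles = 100    # sub-disk chunks spread across 100 drives
distributed_hours = whole_disk_hours / parallel_spindles   # ~10 minutes
print(round(whole_disk_hours, 1), "h vs", round(distributed_hours * 60, 1), "min")
```

    A drive sitting degraded for most of a day is a much bigger window for a second failure than one rebuilt in minutes, which is why distributed sub-disk RAID matters as nearline drives grow.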

    But I'll give them some credit for at least upgrading the CPUs they were using; about time. Though I'm not up to speed on the latest Intel codenames, it seems Westmere is already 2 years old. Day late, dollar short? Oracle at least is pushing quad-socket 10-core Intel procs on their systems. Get with the times, EMC.

    Up late to do a software upgrade on one of my 3PAR boxes... time to go to sleep.

    1. Anonymous Coward
      Anonymous Coward

      Re: XIV really?

      "...Wake me up when EMC gets distributed sub disk raid, with the massive nearline drives coming in the future, traditional whole-disk RAID is going to lose out pretty fast with long rebuild times and higher risk of double/triple/more disk failures...."

      It's a while since I've used Symms, but I was under the impression that the hypers which make up metas were RAIDed across multiple disks, and don't have to cover the whole of the disk in question. Or have I misunderstood what you mean by whole-disk RAID?

    2. J.T

      Re: XIV really?

      CPUs: most of the industry was still on Westmere or earlier. And no, Oracle is not using 10-core, they're using 6-core. When your storage system is just a bunch of servers with an InfiniBand connection, yes, you can adopt faster processors sooner. It doesn't change Oracle's lack of proactive sparing, obscenely small usable-vs-raw ratio, price (10 million for 200TB raw), or EXTREME heat and power.

      3PAR has an incredibly niche system. They don't do the low end, they don't do the middle; they have a very strong place in the lower end of the high-end market. While they have enjoyed a technology lead in storage "software", leading with thin provisioning, storage tiering, zero-space reclaim and so on, they are now owned by HP, which has had questionable success with development.

      Why didn't they go past 8 engines? I have no idea, but they got to 4PB without expanding. Which leads me to laugh at "Certainly feels like EMC went back to the drawing board and reconsidered their original plans for a massively scaled out VMAX."

      I mean, I realize you're referring to their earlier statements that they can address 255 engines, but when you've just thrown down a system with over 2x the capacity of everything else, why introduce that complexity when you can come up with other uses for engines instead (the oft-rumoured-here VMware engines, for example)?

      I'll wait to see the benchmarks, but it does appear that EMC knew they would have to drive more spindles, as all of the new cores are dedicated to the back end.

      "EMC gets distributed sub disk raid, with the massive nearline drives coming in the future, traditional whole-disk RAID"

      1. Sub-disk: FAST VP

      2. You can put SAS, SATA and SSD into their arrays: massive nearline with 3TB SATA

      3. Whole-disk RAID: welcome to an industry issue

    3. Anonymous Coward
      Anonymous Coward

      Re: XIV really?

      I have about 500TB usable of XIV. XIV is actually really great if you have a small staff. The array scales to 243TB with Gen3, which isn't world-beating, but that's not a big deal: you can manage several XIVs from the same management console, and I don't have any workloads that would stretch beyond 243TB... nor does almost anyone else. XIVs are really fast for the cash, too, with their distributed cache and CPU grid down at the disk-module level: something like 20,000 IOPS on big, slow SATA drives with no tuning or SSD or anything other than turning it on and connecting hosts. It probably isn't fair to compare it to VMAX for scale, but it definitely blows away older Symm boxes in raw performance and is way, way easier to manage.
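
      That 20,000 IOPS figure is roughly what grid arithmetic predicts. A hedged sketch; the module and drive counts below are illustrative, not exact XIV Gen3 specs:

```python
# Hedged sketch of grid-array arithmetic; module and drive counts are
# illustrative, not exact XIV Gen3 specs.
modules = 15
drives_per_module = 12
iops_per_sata_drive = 120  # assumed random IOPS for one 7.2k SATA spindle

spindles = modules * drives_per_module            # 180 spindles
aggregate_iops = spindles * iops_per_sata_drive   # ~21,600 before cache helps
print(spindles, "spindles,", aggregate_iops, "IOPS")
```

      Because every volume is striped across every spindle, the whole grid contributes to every workload, which is why untuned performance lands near the aggregate.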

    4. Anonymous Coward
      Anonymous Coward


      Symm has done sub-disk raid for a LONG time. Not sure what exactly you're talking about. Fact checking before spewing FUD is always a good idea.

    5. Hard_Facts

      Re: XIV really?

      Hope the VMAX 40K catches up with 3PAR's 450K SPC-1 IOPS (with 1,920-odd disks)...

      With the VMAX 20K it probably couldn't come close enough, and hence we didn't see any SPC-1 on the 20K...

      Here comes the big, fat VMAX 40K with 3,400 disks. I don't know how big the market for this mammoth would be, but the unrealistic configuration may at least help it catch up on benchmark numbers and make a "speeds and feeds" statement.

  2. Twit

    So what

    Marketing figures, designed to do one thing: catch headlines, not solve customer problems. The NetApp FAS6280 can host 4.3PB of disk, so a 4PB array is hardly revolutionary.

    1. J.T

      Re: So what


      Maximum raw capacity: 2,880TB

      Maximum drives: 1,440

      Max LUNs: 4,096

      So the "system capacity" figure must therefore include.....

      1. SFC

        Re: So what

        Configuration rules for a FAS6280 and a FAS6280 in an HA environment


        Maximum system capacity (in TB): 4320

        Note: the disk drive capacity listed is the largest shipping disk drive supported on the date of the last update to that page.

        With the current shipping 3TB drives it's 3.4PB. When 4TB drives are released, or with a V-Series, it goes up to 4.3PB.

    2. Rage against adverts

      Re: So what

      I'm with you. This author has had a serious sip of the EMC Kool-Aid. What a load of malarkey.

      HDS and NetApp both have deployments greater than the numbers stated in this advertorial.

  3. rock2ku

    It amazes me that performance claims are still being made by a company that never posts them. Also, with flash ALREADY HERE, what percentage of companies out there actually have a use case for this "HULK"? Disappointed with this announcement, to say the least. They seem to miss the point... AGAIN!

  4. Anonymous Coward


    "sneak peak" sneak peek, surely...

  5. Anonymous Coward
    Anonymous Coward

    55 petabyte ZFS installation:

    IBM's coming supercomputer, at 20 petaflops, will use Lustre. Lustre does not handle 55PB of disks really well on its ext3-based backend, so ZFS is being ported in as the Lustre backend filesystem in place of ext3.

  6. Man Mountain

    Who would buy one of these and fill it with 2TB 7.2k drives to hit the 4PB claim? Seriously?? That's one bloody expensive content store, especially when EMC (and everyone else) has better solutions for that sort of requirement.

    1. J.T

      You fill it mostly with 3TB SATA, plus enough SSD and SAS/FC to provide the IOPS, while FAST VP moves old or seldom-used data down to the "trash" storage.
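
      The idea of demoting cold data can be sketched as a simple heat-based placement rule. This is NOT EMC's actual FAST VP algorithm, and the thresholds below are invented for illustration:

```python
# Minimal sketch of heat-based tiering in the spirit of FAST VP; this is
# NOT EMC's actual algorithm, and the thresholds are invented.
def place_extent(io_per_gb: float) -> str:
    """Pick a tier from an extent's recent I/O density (IOPS per GB)."""
    if io_per_gb > 10.0:
        return "SSD"       # hot: keep on flash
    if io_per_gb > 1.0:
        return "SAS"       # warm: fast spinning disk
    return "SATA"          # cold: cheap bulk "trash" tier

extents = {"db-index": 42.0, "mail-store": 3.5, "old-archive": 0.01}
print({name: place_extent(rate) for name, rate in extents.items()})
```

      With most capacity in SATA and a thin SSD/SAS layer absorbing the hot extents, the array's cost per TB stays close to nearline pricing while hot-data latency stays close to flash.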

  7. Michael Duke

    I agree with several posters above, you do not buy VMAX and fill it with NL-SAS disks. If you want a large content store and you are an EMC customer then Isilon is the right solution.

    These boxes will move with a 3/4-tier architecture (SSD, 15K SFF SAS, 10K SFF SAS, 7.2K NL-SAS), multiple engines and massive redundancy; THAT is why you buy VMAX.

    This play, as I see it, is mainly to fight HP/3PAR in the service-provider space, where 3PAR has the lion's share of the big guys doing XaaS (see the VMAX SP details the other day), and to serve as a growth platform for existing DMX/VMAX customers.

  8. Anonymous Coward
    Anonymous Coward

    What about the Fujitsu ETERNUS DX8700 S2?

    I am surprised that the ETERNUS DX8700 S2 isn't listed as a competitor. As far as I know, the system can scale up to 4.6PB with 3TB 3.5" drives, and up to a maximum of 3,072 drives with 2.5" drives...

