Nutanix digs itself into a hole ... and refuses to drop the shovel

Nutanix has dug itself into a hole by trying to occupy the moral high ground with regard to performance testing of hyperconverged systems, saying it's in favour of transparency, yet pulling out of an independent StorageReview test of its product following poor results. About a month ago, Nutanix (Lukas Lundell) and VMware ( …

  1. Anonymous Coward


    Didn't realise the H in HCIA stood for hypocritical...

  2. Trevor_Pott Gold badge

    We’re committed to working with independent third-party evaluation labs like Storage Review to compare our solution against any hyperconverged product using comparable hardware and a comprehensive and representative testing methodology.

    The current generation of methodologies does not adequately represent how hyperconverged solutions perform in real-world customer environments. We feel strongly that utilising outdated test tools and methodologies would not provide customers interested in hyperconverged solutions with relevant and indicative data.

    As indicated by Lukas, we’re building an open, comprehensive test suite for this category that we feel will help customers better understand the performance of hyperconverged solutions. We’ll demonstrate it at the Nutanix booth at VMworld and will release it in September so anyone in the industry can use it.

    In the meantime, we’ll continue talking to Storage Review and any other third parties about working together on a review that will benefit both the industry and customers evaluating hyperconverged solutions.

    Bullshit. Bull fucking shit. Bullshit of the highest order. Liar, liar pants on goddamned fire.

    Maybe you are building a test suite, but it sure isn't "open". Open would mean that you included the community in the development process and worked with other vendors in the space. I can absolutely believe you're cranking out a test suite that will make Nutanix look amazing, but is it going to test your weaknesses as well as your competitors' strengths?

    Look, Nutanix, you've been a pain in the ass to even try to engage with to get reviews done, though that isn't to say I don't appreciate being allowed to use a cluster in your remote POC lab for a week to test some of my own workloads. It was a start, but given the vitriol of the debate with VMware, it isn't enough.

    I realize I'm small potatoes, but I've been entirely willing to work with you to come up with a viable methodology that both you and VMware would agree to. I can get the rest of the industry to agree to play and you know it. I've even offered multiple times to do the testing (or rally the troops) for free. There are others who can do so as well.

    Now, as stated above, I'm a completely irrelevant small fry here. There are bigger names with bigger followings who command more money than I. People like, oh, Storage Review. Or you could pick Howard Marks. Or The Other Scott Lowe. Or any of a dozen trusted, highly competent and capable analysts or vExperts who have reputations for independence.

    You haven't done this.

    Now, you're not alone in this. VMware are a bunch of knobbly ponces refusing to play ball here too, but the rest of the field is absolutely not giving independent testers the run around. SimpliVity, Maxta, Scale...frigging NodeWeaver for $deity's sake. You are being out-legitimized by an SMB HCI vendor who is just passing their 100th customer!

    I don't care if you don't think I'm independent enough, or that guy is, or that other guy over there. Pick one. More than one, preferably. Let the community know about it. We'll all jump down Chuck Hollis' throat and make the bugger send his stuff to the same party. I'll work on the other vendors personally and we'll finally get both a standardized set of tests for HCI agreed upon by all vendors and a baseline we can all work from.

    It was cute for a while that getting independent reviews was "tricky". New market. We get it. But you're a behemoth now, and HCI has moved from "product" to "feature". It's not new. It's not sexy. And it's time to quantify, compare and educate.

    Nutanix, you and VMware are holding back the entire hyperconvergence space with your constant back and forth shitfighting and the bipartisan refusal to simply get this crap solved in an objective manner. There are bigger issues with HCI than performance. We need to address those and that takes a focus on education, not bickering.

    Remove head from sphincter. Both of you. And let's please get on with the business of making storage, compute and networking better for everyone.

    1. Naselus

      ...and that'd explain the slight obsession thing I mentioned in the other thread. I'd be a bit obsessed with any company that was this much of a pain in the ass on a daily basis.

      Also, I do think you're entirely in the right on this one, obviously. Moreover, it's a bit surprising that Nutanix think they can get away with shit like this. VMWare less so - they're already sitting on an entrenched, dominant position in their traditional sector and can leverage that over the review sites in the same way that all the big players do, but Nutanix's only strong position is in the laughably immature HCI space, which might look entirely different within six months.

      They really need to be winning sympathy and pointing to the old guard's cumbersome and demanding attempts to force reviewers into using metrics which are best for them, rather than actively competing to find out who can get away with being the biggest dickhead to the 3rd party bench markers that are, ultimately, going to make or break their product.

      I get the feeling Nutanix are getting a bit ahead of themselves. They're not the Microsoft of HCI yet, because HCI is nowhere near mature enough to HAVE a dominant player, and the other new boys in the space are only really behind Nutanix right now based on funding and time-to-market. First mover advantage is overrated, there's no indication that Nutanix has any real quality advantage, and their much-vaunted customer base remains tiny compared to real established players in established techs - they're celebrating 800 enterprise deployments on their website, ffs. If that's your big news, you're not in a position to start dictating the field. VMWare have 500,000 deployments and THEY aren't in any position to dictate the field either. Time for Nutanix to reel in the overgrown ePeen and grow up; a reviewer boycott could still put them out of business at this stage.

      1. Trevor_Pott Gold badge

        I'd be a bit obsessed with any company that was this much of a pain in the ass on a daily basis.

        Except that's crazy. I can't think or be like that. I'd go mad in very short order.

        There are a bunch of storage and virtualization companies that are decent to work with, but make no mistake: most tech companies are horrible to work with. Working with tech companies is my job. No matter how dickish they are, it is my duty to my readers to suck it up and work with every vendor, regardless of my personal feelings.

        No one is 100% objective, but it is my job to try as hard as I possibly can to be so. That means I can't allow myself to become "obsessed" with any company.

        Though, following on from the previous thread and discussion, Nutanix is a big deal and you need to learn to deal with that. They are a huge company that is actually selling rather a lot of gear to a number of different clients. They will be around for a long time. And unlike many others they are more than their initial base product (hyperconvergence) and are putting a lot into R&D.

        I know you want to dismiss them - and HCI in general - as irrelevant. Too bad. They're not. Nutanix may be a pain in the ass, but they're here to stay. Saying this isn't "obsession", it's objective assessment of the facts.

        1. Naselus

          "I know you want to dismiss them - and HCI in general - as irrelevant. Too bad. They're not. Nutanix may be a pain in the ass, but they're here to stay. Saying this isn't "obsession", it's objective assessment of the facts."

          I think you're misrepresenting my position on this (or maybe conflating it with the ACs who also joined the other discussion). I don't want to dismiss HCI, which is basically going to completely change how my job works. It's not irrelevant; it's the most important thing to happen in enterprise IT since virtualization, bar none. All that massive disruption that Cloud promised, which mostly never really happened, was nothing compared to where HCI is going, since HCI is going to change how we do things both on- and off-premises; data that I would never in a million years let AWS or Microsoft put in their cloud infrastructure, I will still migrate to HCI machines sat in my own server room. It will be huge. But that's the point - it WILL BE. It isn't yet. It needs 3-4 years more before it's truly ready, and I don't know anyone who's outright replacing their existing infrastructure with HCI yet; there's some dabbling, and there's expansion of existing infrastructure, but not one of Nutanix's customers has gone out and binned the ESXi hosts and replaced the netapp filers they already had. The Ultimate Infrastructure Machines are not yet synonymous with infrastructure itself, even though I agree that they will be by 2020.

          As to Nutanix as a big deal, selling a lot etc... not so much. At least, not yet. They're the biggest fish in the HCI pond atm, but that pond is still in the process of being dug out, and the fish are all in their infancy, even Nutanix. They're not 2004 VMWare yet; a company that was not just doing something different, but that no-one else had the slightest clue how to do at all. Presently, Nutanix aren't offering much that, say, Simplivity don't - and the established players from elsewhere are in a much better position to jump on the HCI bandwagon than they were for virtualization.

          In short, VMWare in 2004 or so was single-handedly disrupting the whole data center, with Netapp disrupting storage in a complementary way at the same time. We could all see that these two companies were going to be Big Shots for a decade or more coming up. Nutanix are not in that happy position - they're part of a coming wave of disruption, but that wave is made up of lots of companies fighting over the same space, and no one company can dictate the direction it's going to go (yet - if history is anything to judge by, then in six months or a year we probably WILL see one company ruling the roost - but it's not certain to be the current big dog). I don't think Nutanix are going to go bang and collapse overnight, but I also don't see them pre-ordained to take pole position in that marketplace as yet - the starting pistol has barely gone off in HCI, and trying to guess who'll be sat on top of the pile before half the companies have even brought their tech to market is premature.

          1. Cloud 9

            "Nutanix aren't offering much that, say, Simplivity don't" ..... As this filthy thread is already littered with profanity - I'll just add the word "Bollocks" here in response to this snippet.

            Other comments that fall under this classification:

            "It needs 3-4 years more before it's truly ready" .... says who and why? Justify your fantasy timescales here. 3-4 years in IT is a lifetime these days.

            "but not one of Nutanix's customers has gone out and binned the ESXi hosts and replaced the netapp filers they already had" .... And you know this how? Are you some kind of infrastructure spy / ninja? ... Companies are buying new HCI and leaving their old crap in the DC because, what? They have money to burn on escalating maintenance of old kit?

            Yes - Nutanix have to step up to the plate and join in with the whole storage testing malarkey in order to join in with the obligatory public chopper measurement (and they should get on with this sooner rather than later) but suggesting that they could still be irrelevant is frankly ridiculous.

            1. Anonymous Coward


              Sounds like some have nothing to offer but naysaying. Simplivity is mom and pop and lacks the ability to scale anywhere near Nutanix. When I saw the statement "Nutanix aren't offering much that, say, Simplivity don't" I knew there was something wrong. I suspect that like the false reports of Cisco buying Nutanix about a month ago, some are feeling threatened and slinging as much FUD around as possible.

              Nutanix is a huge game changer. The more folks try to harm them, the better they get.

          2. Trevor_Pott Gold badge

            We'll have to disagree here. I don't believe VMware are going to be in the pole position, because I don't believe VMware have the awareness of the market required to make the cultural changes that will allow them to take that position. Nutanix, for all their flaws, are the dominant player by a country frigging mile. And they are not lax. You seem to have a real hate on for them, but there's nothing at all to indicate they will crumple up and die, as you seem to hope.

        2. Alan Brown Silver badge

          " most tech companies are horrible to work with"

          Name and shame please.

          As a customer who spends the best part of a couple of million each year, I prefer to avoid that kind of shit.

          Anyone with large enough requirements knows the experience of systems which work fine on the bench or under mild load but hit a knee point and seriously crap out as real-world load is applied. We also know what happens when vendors refuse to deal with problems and run away when critical systems start breaking (I'm looking at you, HP and Suse)

          It's better to give such outfits a wide berth BEFORE they end up eating several man-years worth of effort having to nursemaid fragile setups.

          1. Trevor_Pott Gold badge

            @Alan Brown

            There is a difference between "difficult to work with" from a technology side and "difficult to work with" from a people side. Lots of companies have decent-to-good tech but miserable people. Plenty of companies have middling-to-miserable tech but great people.

            Great tech can make up for miserable people and great people can make up for miserable tech. The exact mixture that works for one company may not work for another, because requirements for uptime, support responsiveness and other such things can vary dramatically.

            The biggest warning sign I can give is to take a good look at the executive layer. Especially of small companies. If the executives - most critically the CEO and CTO - are "high touch" individuals, you're in trouble. The worst thing in tech is an engineer CEO who won't let the various division heads (sales/marketing/QA/channel/etc) do their jobs unhindered.

            High-touch CEOs are a screaming alarm bell warning about oncoming icebergs.

            Tech is a tricky business, and I find more companies getting the "people" part of it wrong than those that get it right. Oddly enough, getting the "technology" part right seems easy. There are lots of companies with great technology. It is in managing staff, customer and community expectations - and coping with extremes of emotion from all sides - that tech companies fall down.

            Unfortunately, too many in tech think that "the human factor" is irrelevant. Until, of course, it isn't. At which point it's probably too late.

  3. Marc 25

    Two vendors in a p*ssing competition about how to test their products to make them look the best. And shock, horror, shock, they don't agree! Who'd have thunk it! StorageReview should be doing their own benchmarking, irrespective of what VMWare and Nutanix think. Surely that's the definition of "Independent"?

    1. Trevor_Pott Gold badge

      Where is Storage Review - or any of us - going to get the money to buy a Nutanix node on the open market? Also, you do realize that VMware will sue you into oblivion if you publish test results that they don't approve of.

      I tried to work with VMware. My lab is here. I built much of it in order to test Maxta and other HCI vendors. I tried to get permission from VMware to test VSAN. They didn't want me to test on VSAN unless I replicated their internal configurations exactly, including CPUs much faster than I could possibly afford.

      I told VMware that I couldn't do that. Money was a very real issue. I was subsequently given a not-at-all-subtle warning against YOLOing testing on VSAN.

      Now how am I, or Storage Review, or any of the other analysts supposed to afford to buy a Nutanix cluster? Or EVO:RAIL? Or SimpliVity? If the vendor doesn't play ball and send a unit in for testing we just can't do it. (Unless they are software-only. Most of us have or are building HCI-compatible labs that can do software-only solutions for multiple vendors.)

      Nutanix and VMware are the HCI companies that are hard to work with regarding reviews and testing. The rest have proven to be amazing. (Though to be fair, Nutanix has a great relationship with Storage Review that they don't seem to have with many others, so go Storage Review!)

      Make of that what you will.

      1. This post has been deleted by its author

      2. Alan Brown Silver badge


        "I told VMware that I couldn't do that. Money was a very real issue. I was subsequently given a not-at-all-subtle warning against YOLOing testing on VSAN."

        Thanks. That's the kind of information we need.

        I'll be making some phone calls in the morning to advise that a couple of pending contracts have VMware removed from consideration. If they're pulling this asshattery on reviewers then we can be pretty sure they're the kind of vendor which is "difficult to work with" when things go pear-shaped.

    2. Alan Brown Silver badge

      "StorageReview should be doing their own benchmarking"

      You forget that outfits like StorageReview are dependent on:

      1: Equipment loaned from the vendors

      2: Advertising

      This has always been a problem for these kinds of sites and magazines (in paper days). Them what are being reviewed have always tried to dictate conditions and in most cases we simply didn't hear about it.

      #1 could be addressed by buying the stuff, but bad reviews of kit lead to reductions in #2, so there'd need to be a subscription model, which simply isn't viable in most cases.

  4. Marc 25

    What I make of it, is that there's never ever been such a thing as an independent reviewer in any marketplace (not just IT).

    Until the reviewer buys the product for themselves, they are bound by the wishes of the vendor and have to agree to vendor approval of any article written about them. I suggest anyone wishing to be recognised as an independent reviewer should have a fairly sizable budget, unless of course you're reviewing USB keys and hard drives.

    1. Trevor_Pott Gold badge

      In theory I agree, but there's two problems.

      1) Where do you get the money for a "sizable budget"?

      2) When reviewing technology products you are bound by the EULAs of those products - especially in the United States - which often state that you are not allowed to review that product without permission.

      You don't own VMware's ESXi just because you bought a license. And they can come after you with a fist full of lawyers if they don't like what you write.

      We all have to make compromises in order to review things. The compromise I choose to make is that I will sit in on endless briefings and play politics and try to work with vendors to find testing regimens that both meet their requirements and that I, in my professional capacity, feel adequately represent the product.

      I don't let vendors push me around on my reviews and water them down. If I find bad things, I report that.

      Unfortunately, it also means that sometimes vendors exercise their right not to engage with me or to prevent me from publishing. So there are hardware and software items which I have reviewed which never got published. I don't like it, but that is a better choice than compromising my ethics and publishing cherry-picked reviews.

      Now, I don't have the clout or pull of Storage Review, or Howard Marks or any of the other big names. I still have to fight and claw and politic. But there are people out there who absolutely do try their damnedest to be independent. Hans De Leneer's take on this is really worth reading, as he discusses this concept at further length.

      The short version is: no, nobody is truly independent because our laws prevent such independence. Beyond that, the absence of independently wealthy people willing to spend a few million a year buying and testing equipment is a damper on absolute independence as well.

      Within the constraints of those two issues, however, I (perhaps egotistically) like to think many of the reviewers available in the storage and virtualization space do a damned fine job of maintaining their objectivity.

      I say the above not only as a reviewer and a writer, but as an editor for my own technology outfit who has had to go to the mattresses for one of my writers. That writer bought a device with his own cash, wrote a review that absolutely panned the device, and the company freaked out. Fortunately, there was no EULA item that allowed a legal avenue of attack at the time. But there are few things that will make you sweat quite so much as having to play that game of chicken, I promise you.

      1. DeepStorage

        Independence comes at a cost

        It's not just EULAs. Early in my career I wrote a scathing review of a really lousy modem in PC Magazine. The company sued for libel. Of course the truth is an absolute defense in libel cases, as long as you have $50,000 US to pay your lawyer.

        The good news is that Bill Ziff paid the lawyers and indemnified me but it was a wake up call.

        We've had projects go south, usually because the vendor asked for more than their system can do. What happens then depends on the vendor. Usually it means redefining the project into a consulting job on how they can fix their gear, and nothing gets published.

        1. OtherScottLowe

          Re: Independence comes at a cost

          And that's a GOOD thing for everyone. Sure, they may not get their awesome review, but they did learn about a weakness in their product that they can fix, which is good for their customers and, ultimately, good for the company. Just squashing a review, though, without any thought into how to turn it into a positive isn't good for anyone.

          And good on Bill Ziff for paying for that. It's a shame that they came after you at all. If their product sucked, people needed to know.

        2. Alan Brown Silver badge

          Re: Independence comes at a cost

          "Of course the truth is an absolute defense in libel cases, as long as you have $50,000 US to pay your lawyer."

          Which is why defamation litigation is first and foremost a matter of who has deeper pockets.

          It's the _only_ area of law where everything is turned on its head if you're accused (You have to prove innocence, vs the plaintiff having to prove everything) and is the modern equivalent of witchcraft trials.

      2. Alan Brown Silver badge

        "When reviewing technology products you are bound by the EULAs of those products - especially in the United States - which often state that you are not allowed to review that product without permission."

        I suspect that such a term would never hold up in a USA court, if someone had the resources to fight it.

  5. Platypus


    "We're committed to working with 'independent' third parties who will accept (explicit or covert) remuneration to run whichever benchmarks we want however we want them to ensure that our products prevail in 'objective' tests."

    I've been in the storage game a while. I have (to my shame) worked at companies where I got to see just how 'independent' most test labs and analysts are. Good will and integrity didn't pay for those Porsches I saw in the parking lot, folks. This is just a new player, not a new game. I can't help but wonder whether some of the anger is because this new player is overdoing it so much that they've brought unwelcome attention to everyone else hiding under that same rock.

  6. Doogie Howser MD

    Howard Marks?

    Why is a convicted drug smuggler reviewing storage kit? Bottom fallen out of the weed market?

    1. Trevor_Pott Gold badge

      Re: Howard Marks?

      Storage Howard Marks has a mightier beard than Weed Howard Marks.

      Also a wizard hat. The wizard hat is important.

  7. Trevor_Pott Gold badge

    E-mail reply from Chuck Hollis of VMware

    Chuck Hollis of VMware has read this comment thread and sent me an e-mail. His opinion and views on the matter - and on my comments above - are valid and deserve to be included in this discussion. I am reproducing the e-mail chain here.

    Chuck Hollis to Trevor Pott

    Hi Trevor

    It was interesting to read your recent comments on The Register regarding the latest Nutanix snafu.

    But I think you've completely misunderstood (and misrepresented) our stance on performance testing. We encourage it, not discourage it.

    We've published oodles of our own data. We've published data from customers. We've encouraged others to publish. Etc. etc. etc. The more the merrier.

    All we ask is a chance to review the configs and methodologies prior to publication -- which has been VMware's policy for many, many years. Lots of people are new to this testing thing.

    We plan to release an easy-to-use testing tool (based on VDbench) to help make it easier for folks to test hyperconverged clusters with a variety of IO profiles. You, of course, are free to use it -- as will anyone else.

    Or use your own tools. Have at it -- really!

    However, we don't have much of a budget to send people free hardware. We're tapped out for the year, unfortunately, so you'd have to round up your own four-node config that conformed to the VMware VSAN HCL and design guidelines. Dell may be willing to play, or perhaps HP or similar.

    Nor do we generally pay for reviews, as that's a slippery slope.

    I hope you understand our position here, and can perhaps soften some of your comments to more accurately reflect reality?


    -- Chuck

    Trevor Pott to Chuck Hollis (reply)

    Your take on this does not reflect my experiences with VMware in this regard. We appear to have dramatically different understandings of the meaning of "chance to review the configs and methodologies prior to publication". I view independent reviewing – especially of software solutions like VSAN – to be fair game if you test multiple options on the same hardware. Doubly so if the individual components are on the HCL.

    VMware seems to disagree, and has insisted that individual components being supported isn't good enough: the whole of the thing must meet the desired qualities. Slower CPUs, for example, are apparently not okay.

    That said, I don't have to agree with your take on this for it to be valid. I have my view and I have expressed it. It is entirely possible that my views or understanding is wrong, and I'm willing to admit that possibility.

    I will publish your e-mail in the comments as it is entirely valid that you get the chance to rebut what I have said, along with this response. The readers will decide.

    For the record: I never wanted – and don't really want – extra hardware to do testing. I will absolutely test whatever hardware comes my way, but for the love of $deity I have 10x as much server widgetry as I could ever conceivably use. I've also not asked to be paid for reviews by you or by Nutanix. I've offered several times to do independent testing for free in order to help put this debate to rest.

    What I want – all I've ever wanted – is the chance to test hardware, software and services that I think my readers or my clients (or preferably both) will care about. I want to dig to find the truth of the gear that real systems administrators use, because it is those sysadmins that I feel a kinship with, and it is those sysadmins that I feel I serve.

    Parting thoughts

    It is worth discussing the issues surrounding vendor control over reviews via an exercise of their legal rights. I believe it is perfectly valid for VMware to want to review the configuration and methodology of a review of their software. I don't believe, however, that they should have the opportunity to block publication just because the results won't show that software in the best possible light.

    It is absolutely valid to test non-optimal configurations and report the results of that testing. In the real world, lots of people live outside pre-canned, certified solutions. HCLs exist for a reason: they are a recognition of this fact and a publicly visible list of not just entire servers that are certified, but individual components, for those who are colouring outside the lines a little.

    I view VMware's VSAN team as spectacularly hard to work with in a way that the rest of VMware isn't, specifically because of the level of control they insist on having over reviews. VMware's VSAN team don't seem to view their efforts as an attempt at control, but as an attempt at quality assurance and review integrity.

    If I am being honest, then I cannot say that I have the answer to which view is right. My views are deeply rooted in my own past as an SMB sysadmin, which is tied to a need to know how things work when you can't afford to pay top dollar (and high margins) for everything. I feel that is a world that needs to be quantified, and I spend most of my year trying to answer those questions for other sysadmins.

    VMware's views are influenced by their own needs, but I must admit their take is objectively no less valid. I think readers should read all of this. Not just this thread, but many of the other threads that are associated on various blogs across the virtualization blogosphere.

    I am one voice with one set of experiences. There are other voices with other points of view. Decide for yourselves. Test for yourselves.

    I look forward to using both VMware and Nutanix's testing tools in my future HCI testing just as soon as they become generally available.
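
    Since this whole argument keeps circling back to "a variety of IO profiles", here is a minimal sketch of what such a profile matrix could look like as an fio job file. To be clear, this is my own illustration, not VMware's VDbench-based tool or Nutanix's promised suite; the file path, sizes and the three profiles are assumptions, picked only to show the shape of a multi-profile run.

```ini
# hypothetical fio job file: three common IO profiles run back to back
# against the same datastore-backed file, so results are comparable
[global]
ioengine=libaio
direct=1
time_based
runtime=300
filename=/mnt/test/fio.dat
size=10g

# small-block random reads (VDI boot-storm-ish)
[random-4k-read]
rw=randread
bs=4k
iodepth=32

# mixed 70/30 read/write (OLTP-ish)
[oltp-70-30-mix]
stonewall
rw=randrw
rwmixread=70
bs=8k
iodepth=16

# large sequential writes (backup/ingest-ish)
[sequential-write]
stonewall
rw=write
bs=1m
iodepth=4
```

    Run it with something like `fio hci-profiles.fio --output-format=json` on each product under test, on identical hardware; the `stonewall` lines serialize the jobs so the profiles don't contend with each other.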

    1. Pancakes

      Re: E-mail reply from Chuck Hollis of VMware

      Dear Mr Pott,

      Might it be so that because, in your own words, you are a nobody, that both Nutanix and VMware don't really want to spend valuable marketing $$'s on you performing tests that no sizeable crowd is ever going to read?

      It's not like Ferrari sends its latest model to me for a test drive, do they? They send it to Jeremy Clarkson instead.

      This is just my polite way of saying that you maybe should tone it down a little?

      I'm not saying Nutanix handled the SR thing in a good way, but neither do I like the sad attempts by VMware to try to make themselves the good guys, which they clearly aren't.

      And as such I'm not basing my purchases on FUD from any website or person; I just test the stuff myself before any PO leaves my desk.

      1. Trevor_Pott Gold badge

        Re: E-mail reply from Chuck Hollis of VMware

        Might it be so that because, in your own words, you are a nobody, that both Nutanix and VMware don't really want to spend valuable marketing $$'s on you performing tests that no sizeable crowd is ever going to read?

        Absolutely. I 100% accept this as plausible, and I don't honestly take issue with either or both companies deciding that I am irrelevant.

        I absolutely do take issue with them not working with the more important members of the independent testing community, and I haven't talked at all here about what they tell me about interacting with either company.

        This is just my polite way of saying that you maybe should tone it down a little?

        And as such I'm not basing my purchases on FUD from any website, or person but just test the stuff myself before any PO leaves my desk.

        Where did I ask you to base anything you're doing on what I wrote here? I asked that you - and everyone else - test for yourselves. I asked that you ask hard questions. I am listing here my issues, just as others are starting to do, in the hopes that when it comes time for you to make purchasing decisions you take the time to remember these events and run a more rigorous POC than maybe you otherwise would have done.

        This isn't about my ego. Nobody with self-esteem as low as I have can really have much of an ego. This is honestly about just wanting to do well by others. I'm sorry you feel offended by that.

        If I'd wanted to make a gigantic mess out of this I could have posted an article on The Register and put this in front of 9 million readers. As it is, less than 1% of The Register's readership uses the comments section.

        By choosing to talk about this in the comments section of an article I know that the major players at both companies - as well as most of the independent testing community - will read, I am restricting the impact of my being shouty whilst still making my point to the right people.

      2. OtherScottLowe

        Re: E-mail reply from Chuck Hollis of VMware


        I'm dismayed to see your reaction to Trevor, particularly shrouded in what amounts to a personal attack from behind a pseudonym. Trevor doesn't give himself enough credit. When it comes to this space, he is as respected as anyone out there. Sure, Ferrari may not send him a car to test drive, but you can bet that Bentley will, if we stick to the same metaphor.

        You may not have a need to rely on the advice of others to help guide you in what can be a significant purchase decision for many, but not everyone in every organization has the luxury of the time that full proof of concept testing can take. That's where people like Trevor come in. Yes, Trevor is outspoken, but 100% of the time, he backs up his arguments. Whether you disagree with him or not, it's tough not to respect that. He really puts stuff through its paces in a way that many others don't, and he doesn't pull his punches.

        I believe that this discussion thread could have stayed more civil and focused on what is really a gaping hole in the enterprise data center solution market: standardized, real-world, reasonable performance testing. Again, it's fantastic that you can do your own testing; for others, there are people like Trevor helping them make sense of the games that are played by many of these companies.


        Scott Lowe (Other Scott Lowe)

        1. Anonymous Coward
          Anonymous Coward

          Re: E-mail reply from Chuck Hollis of VMware

          "Yes, Trevor is outspoken, but 100% of the time, he backs up his arguments."

          Not really; he backs up his arguments about 50% of the time, and he's so hypocritical that if you made an exact clone of him, 75% of the posts here would be him and his clone going at it. When he said something his clone didn't like, the clone would accuse him of working for Nutanix while using profane language instead of staying on point and really backing it up.

          1. Trevor_Pott Gold badge

            Re: E-mail reply from Chuck Hollis of VMware

            1) "Profane" language can be used for either emphasis or to provoke a response. It works well in both cases.

            2) "Profane"? What era are you from? What was it like watching them invent the steam engine?

            3) Yes, I like arguing. Especially with people who like to jump right in on personal attacks.

            4) There are rather a lot of people on these forums who post on behalf of their employers. There are also a bunch who are irrational brand tribalists. I see no reason to treat either category as anything other than hostile.

            By all means, post things I disagree with. In case you didn't notice, I not only admit that I can be wrong, I tend to point out where and when I feel it is possible that I am wrong, and I will even post information from external sources when I feel that information has come to light which brings my own dialogue into question. (See: posting Chuck's e-mail as an example).

            Just because I don't think you are right about your inane blitherings - or that I troll you because you're a douche - doesn't mean I am somehow unaware of my own fallibility or am unwilling to admit it. It really just means I think you haven't clue one what you, personally, are prattling on about.

            Also: fuck, shit, ass, and cockmongling cuntpotato! Just because you like the profane.

            *Smoochie boochies*

      3. Alan Brown Silver badge

        Re: E-mail reply from Chuck Hollis of VMware

        "And as such I'm not basing my purchases on FUD from any website, or person but just test the stuff myself before any PO leaves my desk."

        If you're in a position to do actual real-world testing before you issue a PO, then you've got a mightier budget than most.

        Benchmarks exist because we can't do that. Using the same benchmarks across a range of devices means they face a level playing field. If a vendor's kit can't handle that and they start dictating that their special little flower has its own unique benchmark then alarm bells should be ringing amongst buyers.

        Of course, the flipside would be to accept Nutanix's benchmark requirements and run all the other kit on the same loads. If the kit performs badly with standardised benchmarks then it's quite likely that whilst it works OK with their preferred set, the others will do even better.

        As a potential purchaser I'm far more alarmed by Nutanix's (and VMware's) attempts to dictate the terms of the review. This is a pretty good indication of the kind of support I (won't) be getting when things go titsup.

        As such, the far more important thing to take away from the story is the vendors' attitude to failure, rather than the performance of their equipment.

  8. Anonymous Coward
    Anonymous Coward

    Great timing when you're attempting an IPO

    The self-inflicted wounds by this group that pop up about once or twice a year are a great indication of what their future will hold if they ever were to go public. A company that's full of assholes sounds great for a comedy skit, but doesn't work so well when a little sunshine is sprinkled their way. If they can't show a base level of transparency when it comes to testing their gear, it has to make one wonder how legit their internal books are. They are the poster child for the worst of Silicon Valley smug douchebags.

  9. Cloud 9

    Testing shmesting ..

    I'm curious here .. do modern day storage tests accurately reflect what end users find valuable in products such as HCI? (genuine question - not cynical rhetoric).

    Most storage vendors these days seem to be able to shoehorn in enough flash to crank up IOPS and stamp down latency to the point where I can reflect on the bigger picture.

    It's the scope of the product - the flexibility - the breadth of relevant features (not bells and whistles but the things that deliver real value). So when Nutanix can deliver storage efficiency features like compression / dedupe and then come in with cluster-wide erasure coding etc, that's money back in my pocket. And it's hard to properly metricise things like the ability to run seamless non-disruptive code upgrades - or management simplicity or speed of deployment etc etc. Same goes for EVO:RAIL ..

    If these features are brought in under the testing microscope then great - but if it's all about how many sequential 64k blocks I can dump out to disk then the test warps the total value of the product. So there are legitimate reasons for getting the measure of the tests right - otherwise they get reduced to corporate propaganda paper waving exercises.

    Vendor hair pulling ground fights are so far away from the real world end user conversations that I'm used to these days. This whole debate does have an air of the early to mid 2000s about it to me.

    1. Naselus

      Re: Testing shmesting ..

      "I'm curious here .. do modern day storage tests accurately reflect what end users find valuable in products such as HCI? (genuine question - not cynical rhetoric)."

      Depends who comes up with the test. I'm rather more inclined to think Storage Review will devise a test that comes somewhere close to matching my day-to-day usage profiles than the vendors' own metrics will.

      If questions about the validity of the testing regime need to be raised, then those questions should be coming from the end consumers (i.e., you and me) telling the reviewers that the information they're providing is no longer useful, rather than from the vendors declaring that the metrics that they do poorly on aren't important anymore - especially when those vendors have no agreement on which measures are more important.

    2. thames

      Re: Testing shmesting ..

      "do modern day storage tests accurately reflect what end users find valuable in products such as HCI?"

      This is exactly what I was thinking. To bring in an analogy, I'm working on a project which, while it has nothing to do with HCI, is entirely performance related. There are about 100 different aspects to it. I wrote benchmarks for it, and I could cherry-pick results of anywhere from 20% faster to 300 times faster from them. Anything from the entire range of numbers is valid, depending upon what the user wants to do with it. I ended up just picking an arithmetic average of all of them, and listing all of the results in a table in the documentation. There's really no one right answer.
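      As a hypothetical sketch of that aggregation problem (the numbers below are invented for illustration, not from the commenter's actual project), here is how much the headline figure shifts depending on whether you take the arithmetic or geometric mean of per-benchmark speedup ratios:

```python
import math

# Invented per-benchmark speedup ratios spanning the kind of
# "20% faster to 300x faster" spread described above.
speedups = [1.2, 1.5, 3.0, 40.0, 300.0]

arithmetic_mean = sum(speedups) / len(speedups)
geometric_mean = math.exp(sum(math.log(s) for s in speedups) / len(speedups))

print(f"arithmetic mean: {arithmetic_mean:.1f}x")  # dominated by the outliers
print(f"geometric mean:  {geometric_mean:.1f}x")   # less sensitive to cherry-picks
```

      Both are "valid" summaries of the same table, which is exactly why publishing the full table alongside whichever average you choose is the honest move.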

      We see something similar from the web browser Javascript wars. Each vendor has their own set of benchmarks where their browser does particularly well. It's not just that each vendor is puffing up their browser. It's also because each of them has a different idea of what features really matter and they create benchmarks which reflect those different visions and direct optimisation efforts in that direction.

      I can imagine that with HCI, there are many, many, different aspects to it, with different visions of what matters most to the market they are trying to address. Sometimes one really big user or client will use their megaphone to get the things they want pushed forward, even if they don't matter to everyone else.

      I would not be surprised if what really mattered most in the HCI market were a good balance of features and ease of administration, rather than just raw performance numbers.

      Perhaps Trevor could do an article on the benchmarks in question, and how well what they tested reflected the sorts of things which customers actually cared about.

      1. Trevor_Pott Gold badge

        Re: Testing shmesting ..

        Perhaps Trevor could do an article on the benchmarks in question, and how well what they tested reflected the sorts of things which customers actually cared about.

        Well, I was going to. But both VMware and Nutanix have potentially disruptive offerings coming out in the near term. I think I'll wait until those land, then throw a month or two at it.

  10. Nick Dyer

    The shady truth of the storage industry

    <Storage industry rant>

    This whole charade, whilst bad for Nutanix (and a little for VMware), is actually exposing a little-known truth of the storage industry: for many years, pretty much every vendor has devised synthetic test criteria that make its product shine against competitors, then pushed those tests onto unsuspecting customers who believe the vendor has their best interests at heart (spoiler: they don't).

    Examples range from EMC XtremIO & Pure, who have manipulated IDC's "Flash Storage Testing Guide" (a truly independent testing guide) to make everyone but them look horrendous, to people like Tintri, who deploy a VM full of synthetic tests that fully dedupe in their flash tier and give unrealistic performance experiences.

    Another great misleading example is Pure with their "average 32k block is best for performance testing" BS. If you don't know you're being misled, then sadly it's taken as gospel from revered industry giants - so we at Nimble used real customer data to debunk that particular myth.

    It's about time the industry as a whole standardised on real-world tools to give customers experiences at 0%, 20%, 50%, 80% capacity, and with variable, mixed workloads... but it's up to customers to demand that requirement, rather than accepting enforced test plans from a vendor.
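    A sketch of what such a capacity-and-workload sweep might look like, expressed as generated fio job definitions - the block size, queue depth, device path and job layout here are my own illustrative assumptions, not any agreed standard:

```python
# Generate fio job definitions sweeping array fill level and read/write
# mix, along the lines suggested above. Everything here (device path,
# runtimes, block size) is a placeholder to show the shape of the sweep.
FILL_LEVELS = [0, 20, 50, 80]          # percent of usable capacity pre-filled
MIXES = [(100, "randread"), (70, "randrw"), (0, "randwrite")]

def fio_job(fill_pct: int, read_pct: int, pattern: str) -> str:
    """Return one fio job section as plain text."""
    return "\n".join([
        f"[fill{fill_pct}-{pattern}-r{read_pct}]",
        f"rw={pattern}",
        "bs=8k",                        # a common OLTP-ish block size
        "iodepth=32",
        "runtime=300",
        "time_based=1",
        f"rwmixread={read_pct}",        # only meaningful for randrw jobs
        f"size={fill_pct}%",            # stand-in for the pre-fill target
        "filename=/dev/sdX",            # placeholder device
    ])

jobs = [fio_job(f, r, p) for f in FILL_LEVELS for (r, p) in MIXES]
print(jobs[0])
```

    The point isn't these particular parameters; it's that a customer-driven matrix like this is trivial to publish and rerun, which is exactly what vendor-enforced test plans avoid.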

    </Storage industry rant>

    1. virtualgeek

      Re: The shady truth of the storage industry

      Disclosure - EMCer here.

      The comments here are my own opinion. I'm sure they are influenced by where I work, but I don't speak for EMC.

      Nick, this has also been a point of frustration for me. It's surprising that no really good, really comprehensive storage benchmarking suite has ever emerged. Well - surprising isn't the right word. A shame, perhaps.

      Why is it not surprising? I think the reality is that when you dig really, really deeply, it is perhaps the hardest domain of low-level infrastructure to create good direct comparisons and comprehensive value assessments.

      Unlike compute and network - the persistence layer has a very complex set of completely unrelated parameters.

      - IO sizes have an effect

      - protocol access has an effect

      - bandwidth, latency, IO per second - these are all "metrics of performance"

      - the variability of data services (and implementation of those data services) are all over the map.

      - persistence media wildly varies

      - and since, unlike compute and network, storage persists, system-level behaviour is non-linear over time (whether it's literally time, or behaviour variability as system parameters like utilization vary).

      If this sounds like "wah wah - storage is hard", maybe it is :-) But consider the following:

      Read Anandtech's comprehensive Skylake review here:

      Now, look at HOW much benchmarking, with various tools, was required to build a comprehensive picture of how the CPU (an insanely complex system) performed. Now - imagine that every time the test was run, the results varied (non-linearity in system response). OOPH. Hard.

      And, BTW, that is FAR from an exhaustive list of the things that are determinants of system response, and are a function of system non-linear behaviour.

      Even the best tools require the independent tester to be fairly knowledgeable, and many aren't (Storage Review's track record kind of speaks for itself).

      And of course - those statements are all true whether it's a hardware tightly coupled, loosely coupled, or non-coupled software stack. (Hyperconverged implementations invariably use a non-coupled software stack.)

      For eons we found that since we are the leader (hate us or love us, we are the biggest player) - benchmarking has always been a losing game.

      BTW - the IDC doc you reference? They wrote that on their own. We (and Pure, and I'm sure everyone else) gave input, and IDC can choose to ignore or incorporate it; that's purely up to them (and in my experience, IDC has high integrity and is very data-driven).

      We re-entered the SPC-1 and SPC-2 game - frankly because we realized we were tilting at windmills. Benchmarks are a fact of life - perfect or not.

      My personal perspective on this has not changed though. I think that good products stand on their own. I think that sunshine and total transparency are the best policy. In this modern era where social media kinda is a mechanism for self-correction, people catch games, people catch bad logic. The best protection is to be open and transparent. I know the industry as a whole (us included) hasn't always acted that way.

      I'm going to continue to strive to remove the EULA language that exists in a lot of our stuff and VMware's stuff that talks about "running it by us" before publishing.

      BTW - if people want to download and benchmark, and post their findings using our freely (no limits) downloadable software - and many others - I will fight for their right to test, play, post. I can't speak for VMware, but I know that similar dialogs are occurring there.

      ... And I will fight to get the EULA terms changed.

      P.S. Trevor please don't kill me in a rage-filled response :-)

      1. Trevor_Pott Gold badge

        Re: The shady truth of the storage industry

        @Virtualgeek: great response. Truly. I have nothing negative to say to that; it's absolutely spot on. It's why I insist on running real-world tests with workloads I know inside and out (from having run them for 11+ years in production) alongside the benchmarks. There's a lot more to testing storage than synthetics. (See: iSCSI microburst issues with switches - something we don't have standardized tests for yet!)

    2. @storarch

      Re: The shady truth of the storage industry

      Hi Nick,

      Satinder Sharma here from Tintri.

      As a company, we are always willing to let customers as well as any reviewers test the VMstore with any type of workload that they want. I am not sure what you are referring to there. We do try to educate customers about benchmarks that just send zeroes to the storage (which get eliminated by zero-detection techniques as well as 99% compressed). We love those, but we always educate customers to run real workloads vs running any benchmark-based synthetic workloads.

      I agree with your point about storage vendors using these types of tests during PoCs, but that doesn't exclude Nimble and some of its SEs. I can't even remember how many times I have seen Nimble SEs go in and run SQLIO and IOmeter tests full of zeroes, and even promote SQLIO as something that generates load similar to a real SQL workload.

      We are all in for independent tests done anytime.

      In fact Trevor Pott (who is quite active in his commentary here) can validate that as well.



      1. Trevor_Pott Gold badge

        Re: The shady truth of the storage industry

        I also just want to back up what Satinder is saying. Tintri have been absolutely amazing about testing their units. They've given me a completely free hand. (I hope to have the review out this Monday, as a matter of fact.)

        I have found some flaws with Tintri's implementation. But I've found a crazy amount of good. Tintri has not shackled me with restrictions on testing or on publishing. They've let me toss a unit into production, run every synthetic I can on it, and abuse it in every way. They've made an SE available to me for any questions and shown me how they prefer to benchmark things, but not insisted this be the only path.

        I've learned a lot about storage from them. Just as I have from every really good storage company I've worked with. They have fantastic engineers who have taken the time to get really in depth on things I don't understand, or flat out get wrong.

        (Side note: I will disagree with Satinder on the utility of SQLIO. Even full of 0s, it's great for testing the network portion of shared storage, and it is also possible to replace the all-0s file with a randomly-generated one so that you are hammering with more than just 0s. I find it a useful tool, if used correctly. That said, Tintri's "Tingle" load generator is actually pretty cool, and a useful item that the whole industry should be using.)
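        For illustration, a minimal sketch of that random-file idea (the file name, size, and the zlib sanity check are my own choices here, not SQLIO specifics): fill the test file with incompressible random data so zero detection, compression, and dedupe can't flatter the result.

```python
import os
import zlib

# Write a test file full of random bytes instead of all zeroes, so the
# array under test can't cheat via zero detection, compression, or dedupe.
# 16 MiB is arbitrary; a real test file should match your working set.
SIZE = 16 * 1024 * 1024

def write_random_file(path: str, size: int = SIZE) -> None:
    with open(path, "wb") as f:
        remaining = size
        while remaining > 0:
            chunk = os.urandom(min(remaining, 1 << 20))  # 1 MiB at a time
            f.write(chunk)
            remaining -= len(chunk)

write_random_file("testfile.bin")

# Sanity check: random data should barely compress at all, whereas an
# all-zeros file of the same size shrinks to almost nothing.
data = open("testfile.bin", "rb").read()
ratio = len(zlib.compress(data)) / len(data)
print(f"compressed to {ratio:.0%} of original size")
```

        The same check is a cheap way to vet any vendor-supplied test dataset: if it compresses to a few percent of its size, the numbers it produces will be fantasy.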

        Another thing that Satinder said is important here: education. Of customers and of reviewers. You can't review storage properly if you honestly think you have nothing new to learn. Each storage offering is different. Not only that, but tools to generate and test load are constantly evolving.

        Many vendors - like Tintri - do an excellent job of educating, so it behooves anyone (customer or reviewer) who is doing testing to really listen through the various presentations.

        The truth is that there is a lot of good storage out there. Hyperconverged, scale out, object and legacy alike. There are a lot of great companies peddling that storage. More to the point, the market for storage is huge, and continually growing.

        We shouldn't need to have the petty rivalries that have developed, or be getting bogged down in who is allowed to review what by which means. We should be educating people as to which tests are best to simulate (or test) which components of storage. We should be verifying our synthetics with real-world workloads. And we should all be absolutely open and honest about the results, because that is how we all - vendor, reviewer and customer alike - learn, adapt, and ensure the next round of products is better than the last.

    3. Alan Brown Silver badge


      Deduping is one of those things which causes so many problems (particularly in terms of demand on ram) that you'd better be bloody sure you want it, because it's hard to undo.

  11. Anonymous Coward
    Anonymous Coward

    A few points...

    1) Trevor, great feedback but this isn't about you. You have the benefit of writing a post on The Register any time you want so I don't understand why you would empty your laundry on a message board. Grow up.

    2) About the test: VMmark (the test Storage Review uses) is a VERY GOOD measure of real world environments and is as close as you are going to get in a test environment. Those who say you cannot effectively test for real world results are spewing vendor FUD. You clearly haven't used VMmark.

    3) Nutanix's response to the testing is abhorrent and they deserve all the backlash they get. It speaks about the company, their technology, their culture and yes, their employees. There is never a shortage of Nutanix employees talking badly about the competition and, like roaches, they disappear when the light is cast on them.

    1. Trevor_Pott Gold badge

      Re: A few points...

      1) Trevor, great feedback but this isn't about you. You have the benefit of writing a post on The Register any time you want so I don't understand why you would empty your laundry on a message board. Grow up.

      Why would this be about me? Where did I say it was about me? In pretty much every single post in this thread I have stated explicitly that I am a nobody and that I both understand if vendors don't want me to test things and am entirely okay with that, so long as there are other, more important, and - most critically - credible independent testers who are allowed to do the testing.

      I don't see how relating my experiences makes this "about me". It is simply providing more data.

      If you average all the readers I have across all the places I write I have an audience of about 15 million. That may not be a lot, but it's enough that I could have been much louder and more dickish about this issue. Still, I felt that the discussion needed - and does need - to be had.

      I know from experience that if I reply to a major article in The Register those comments will be read by relevant people at those companies. Social media teams are actually quite good these days. So I chose this method because of the limited scope of impact it would have while still getting my point across to the relevant people. It seems like an acceptable compromise.

      2) About the test: VMmark (the test Storage Review uses) is a VERY GOOD measure of real world environments and is as close as you are going to get in a test environment. Those who say you cannot effectively test for real world results are spewing vendor FUD. You clearly haven't used VMmark.

      Where did I say VMmark wasn't good? It's not the be-all and end-all of tests, but it sure is a great synthetic! I heartily approve of its use as one part of a larger suite of tools.

      3) Nutanix response to the testing is abhorrent and they deserve all the backlash they get. It speaks about the company, their technology, their culture and yes, their employees. There is never a shortage of Nutanix employees talking badly about the competition and, like roaches, they disappear when the light is cast on them.

      VMware is not remotely immune to talking smack about competitors...even when they aren't willing or able to fully back it up. The whole industry is a clusterfuck of egotism and douchebaggery.

      Hence the need for independent testing.

      P.S. If you're going to cast aspersions on someone have the genitals to use your real name.

      1. Anonymous Coward
        Anonymous Coward

        Re: A few points...

        There you go again Trevor - only point 1 was about you, the other two points were not about you or anything you said. Thanks for validating my point that IT'S NOT ABOUT YOU.

        Also, thanks for hijacking the message board and diluting the good discussion points. I like your posts and (usually) your comments but you have to chill out, it's really not about you.

        1. Trevor_Pott Gold badge

          Re: A few points...

          If you deride someone by name then go on to attempt to deride others without listing their names it helps for clarity to either be explicit that you aren't continuing your derision of the first person or to clarify whom you are now deriding.

          It's also generally considered good form to use your real name when you deride someone, otherwise you really do just come across as nothing more than a petty Anonymous Coward.

          Also: this "message board"? It's my back yard. I'll do what I like on my own lawn mate. Go get your own.

          1. This post has been deleted by its author

            1. Trevor_Pott Gold badge

              Re: A few points...

              Again, you are mistaken that this is all about you. This isn't your lawn and you aren't a part of The Register; you write guest-contributed, unpaid content and you comment on the message boards.

              I'm not a part of The Register? I have 418 articles published here. I've been writing here for over 5 years. At what point are you "part of" a publication, hmm?

              Also: my articles are unpaid? That's news to me. And my bookkeeper. And my 4 employees. Because it seems to me we invoice The Register for rather a lot of money. Which is nice. As it does things like pay our mortgages.

              It's a hot topic and you want to wave your flag, we get it.

              It's a boring topic that the overwhelming majority of Register readers don't give a flying fuck about. Some do, but there's only about 800K - 1M that seem to care enough to poke their noses in on this, and fewer still who care to comment.

              And again, you're wrong, I really don't want to "wave my flag" here. People like you, who are assholes on the internet, make it a very unpleasant topic to write about. I've gotten death threats because I have written something that someone doesn't like; most of the negative feedback comes from the zealots that inhabit the storage industry.

              I don't even like storage. I got sucked into being a storage blogger/analyst/whatever-the-fuck-I-am entirely against my will. And once sucked in, I learned fast. Now people see me as "knowledgeable" on the topic and seek me out at an ever increasing rate for advice.

              But I hate storage. I really, really do. It's boring and the people are mean.

              There are much better things to write about. Things that actually interest me. DevOps. SDN/NFV. Compute hardware. Above all else: security. These are my actual passions. They also "get the clicks" as it were.

              Sadly, storage needs a shit disturber or twenty. Your own douchetastic response is exactly why. Zealotry and misinformed ad hominem too often take the place of reasoned discourse, as your perpetual firehose of haterade so ably demonstrates.

              But it's not even necessary, you have a voice here and you post regularly on The Register. I'm not trying to shut you up, I'm simply suggesting you chill out and let others chime in without sucking all of the oxygen out of the room by responding to every single comment.

              But you are trying to shut me up. That is exactly what you are doing. You feel somehow that you, personally, have a right to dictate when and where I should be allowed to speak. What gives you the right to determine the context of my speech? And why shouldn't I be allowed to participate in discussions both from an official platform (as a writer for The Register) and from an unofficial platform (as a commenter on The Register)?

              The various mediums available to me - numerous places where I publish my articles, Twitter, my own personal blog, various comments sections, forums and message boards - all offer me the chance to approach topics in various ways. Some allow me to advance my personal opinion in a more unbridled fashion than others. Some have a mass audience while some a more select one.

              There is an entire internet available for you to vent your hate and spew forth opprobrium. Yet here you are, on my digital lawn, trying to tell me what to do.

              Given the context there is only one appropriate response: go fuck yourself, asshole.

              And maybe, just maybe, you should actually add something useful to the conversation. If anything is sucking the oxygen out of the room it is your worthless personal attacks and pitiful demands for censorship.

              The route to people valuing your opinions is to contribute something meaningful, not restricting who can talk until yours is the loudest voice left. If your ego needs satiating, satisfy it somewhere else.

      2. Anonymous Coward
        Anonymous Coward

        Re: A few points...

        Including this comment, there are 30 comments on this board. ****11 are from Trevor**** WTF?

        1. Trevor_Pott Gold badge

          Re: A few points...

          Including this comment, there are 30 comments on this board. ****11 are from Trevor**** WTF?

          It's really not that hard to understand: The Register is my digital lawn. I've been a commenttard - and quite frankly, troll - about these parts for roughly a decade. That gold badge my posts sport? Only 10 of us have them.

          In addition, I write for The Register, so I have even more reason to hang out on the forums. Add in the fact that storage and virtualization have been my areas of research and specialty for the past 3-ish years and, actually, it would be pretty odd if I weren't all over this like white on rice.

          Now, normally, I'd make a few pithy comments and leave. Some people made replies worth replying to, so that ups my count a bit.

          Now you, you seem to get angry if I post. I'd say I'm sorry you don't like me, but the truth of it is you're really being quite a dick, so I'm actually quite happy that I upset you. It's not like someone is forcing you to read the forums. Or The Register. Or to sit in front of the computer at all.

          It's not like you are forced to acknowledge my existence or tolerate my opinions. You have infinite choices regarding how you might ignore me. You can shape and craft your own world so that no dissenting opinions enter your consciousness.

          Hell, there are seven billion people on this planet: you can choose to shape your whole life such that you never encounter any opinion that you don't like. You choose to put yourself in situations where you are exposed to ideas and individuals who upset you.

          And so, I'm going to keep posting. You don't intimidate me. You don't shame me. You don't make me feel guilty. But it's absolutely crystal clear that I have struck a nerve. And that means I should keep digging, because the more ardently someone wants me not to talk about something, the more important it usually is that I do.

          Cheers, and thanks for helping me set my research priorities for the next several months.

          I'll be sure to be quite loud about broadcasting my results.

  12. Phil Dalbeck

    One for the weekend -

    Have you tried Ceph running on a cluster of commodity tin full of SSDs behind OpenStack as a DIY converged option, for a laugh? I reckon it's going to take everyone's lunch sooner or later...

    1. Trevor_Pott Gold badge

      Re: One for the weekend -

      Ceph. Oh god. So many brokens. So much slow. So much potential. So terrible right now.

  13. Spaceman Spiff

    I always doubt the intentions and abilities of a vendor when they are using such bogus terms as "hyperconverged systems". What EXACTLY DOES THAT MEAN?! Show me a valid scientific definition of that term. I'm waiting...

    1. Trevor_Pott Gold badge

      I'm sorry that I don't have a "valid, scientific definition", but I do have this. It's the closest I've come to trying to explain the marketing terms and the history surrounding them. I hope it helps.

  14. Anonymous Coward
    Anonymous Coward

    This illustrates a fundamental weakness of hyperconverged offerings. Storage at high throughput tends not to be cheap computationally. This reflects the reality that storage vendors have to be focused on correctness and features, not just 100% iops. With CPUs being shared with application workloads, adding storage intensive applications ends up consuming CPU at double the rate -- once for the apps, and once for the storage.

    1. Trevor_Pott Gold badge


      Nail on the head. And this is why I feel that more than just synthetics are required for a full testing suite to be accurate for this space. Maybe you should be out there doing testing, eh?
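
The double-consumption point above can be put into rough numbers: a software storage stack burns CPU time on every I/O it services, and that budget comes straight out of what the application VMs would otherwise get. A minimal sketch in Python; the 20 µs-of-CPU-per-I/O figure is a purely hypothetical assumption for illustration, not a measured value for any product:

```python
def storage_cpu_cores(iops, cpu_us_per_io):
    """Cores consumed by the storage path alone.

    CPU-seconds spent per wall-clock second servicing I/O equals the
    number of fully occupied cores the hypervisor cannot hand to apps.
    """
    return iops * cpu_us_per_io / 1_000_000

# Hypothetical node: 100k IOPS at an assumed 20 us of CPU per I/O
# silently eats 2 cores before any application work gets done.
print(storage_cpu_cores(100_000, 20))  # -> 2.0
```

Push a storage-heavy application onto the same node and you pay twice, once for the app's own cycles and once again for the storage stack serving it.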

  15. Anonymous Coward
    Anonymous Coward


    Ah who gives a f*%#. Just chuck the workload on AWS, Softlayer, Azure ....

  16. Anonymous Coward
    Anonymous Coward

    What the heck... while you have this useless discussion, the large majority of businesses are running their workloads on 5+ year old arrays that perform worse than my MBAir.

  17. This post has been deleted by its author

  18. SSD Lover

    Has anyone, and I mean ANYONE, in here actually considered the thought that perhaps StorageReview's testing methodology IS flawed? They always get lower results than kids on forums, not to mention other tech sites... and that is with single devices. Has anyone actually taken a closer look at their *testing*, and what they do, AND DO NOT, disclose about it?

    1. Smitty Werbenjaegermanjensen - he was #1!

      Published testing

      Everyone's testing methodology is flawed, because it's always just a proxy for what you will experience with your workloads in your environment. The best you can hope for is that the testing will give you an indication of who you should and should not consider POC-ing. I would never buy storage based on a vendor's promised performance, or even a reviewer's most objective assessment.

      The value of such reviews is in what the results tell you about the vendors rather than the actual performance of the products. For example, discrepancies between the sales guff and the actual, realisable capabilities of a product (e.g. Vendor: yeah we have dedupe... Reviewer: Good luck with that) help you understand the level of trust you can put in vendor claims.

    2. Trevor_Pott Gold badge

      I have considered it. I have given their testing methodologies a cursory overview based on what they make available publicly and found that the results I can achieve with those methods more or less line up with what they publish.

      The whole incident has piqued my interest for deeper research, however, and I am hoping to pursue this behind the scenes with them over the coming weeks. I have a call scheduled with them at the end of the week, hopefully I'll learn more.

      Overall, however, their results have tended to be among the most "realistic" I've seen. They most closely match the "real world tests" that I do; tests that tend to be around 1/3rd the headline achievable IOPS or throughput, usually because real world tests aren't 100% one (IOPS) or the other (throughput).

      This confusion is also why many of us in the testing community really do want an open, affordable, standardized set of tests that the industry as a whole can agree upon.

      1. SSD Lover

        Well, considering they are quite vocal that other vendors helped them develop their tests (the two guys running the show over there have no datacenter background or enterprise experience), wouldn't it be quite possible to assume that said vendors were using these guys as a means to highlight positive aspects of their products? That the tests are inherently skewed?

        Things to think about.

        1. testlabnut

          On that "things to think about" category you are so itching to find fault with:

          Percona Sysbench: MySQL TPC-C with help from Micron, who isn't the fastest in that test. Audited by Percona among others to verify the accuracy and relevance of the test data.

          FIO: Help with deployment by Jens (developer) while he worked for Fusion-io. Audited by many others in the space and we share workload parameters used in reviews. Open source workload generator...

          MarkLogic NoSQL: Deployed with MarkLogic... no storage vendor attachment

          SQL Server TPC-C: Worked with Microsoft Server and SQL Server teams to build and participate, using Benchmark Factory from Dell/Quest. Click the profile of choice you want to run, set the scale and click "run test".

          VMmark: Worked with the developers of VMmark at VMware to deploy almost two years ago (well before VSAN or before we even worked with that team). One of the most audited benchmarks out there.

          OpenLDAP: Worked with the developer of the software to deploy. No storage vendor attachment

          Veeam backup test: Worked with Veeam to deploy. No storage vendor attachment

          It's not a bad thing to ask for help. It helps to have an open mind. Before any new test gets added to our site, we run the idea past most vendors we work with and industry insiders to check for relevance. When vendors can provide input and offer help, it builds trust in ways many don't seem to understand.

      2. SSD Lover

        Since you are meeting with StorageReview, perhaps you can ask them about their "consulting" services, and why they refuse to answer the questions placed to them in this thread.

  19. testlabnut

    Kevin here - not hiding

    Funny how you keep reaching to find some way to claim we literally have no idea what we're doing in tests. As Trevor has pointed out, our tests have been audited before. Most tier 1 vendors have replicated our environments specifically to check our results. The synthetic numbers you hilariously like to bring up are probably the area of least importance. You can easily game the system to get different numbers to publish. Applications, on the other hand, don't really mess around.

    The part about getting vendors to participate in our lab and offer feedback on tests is to help build trust. Like you say, Brian and I had zero enterprise or storage experience going into this a number of years ago. Another way we prefer to explain it: we had no biases or legacy baggage to worry about. A lot of vendors like that. Funny how that's worked out. Lots and lots of "firsts" in the industry, with previously closed-off companies shipping us literally any gear because they trust us.

    Trust and credibility are literally the only things that keep the lights on in our Cincinnati, OH building.

  20. swissarmyknife

    Kevin and Trevor are merely pointing out that if you buy the vendors' numbers, or even Gartner's, you're buying a scam. I've used gear that got great synthetics but was an actual dog on a real workload. And the opposite. One of the cheapest arrays I've managed, certainly one of the smallest (a Storwize V7k), is shockingly fast... but got mediocre reviews and numbers from the big reviewers. On a real virtualized workload (random as hell), it's phenomenal. We all have our pets, and companies we hate, but ALL of them slant the numbers and modify the systems to make the best of the synthetic tests they KNOW will be used on them.

  21. This post has been deleted by its author

  22. Anonymous Coward
    Anonymous Coward

    Proof of concept? Anyone

    While having numbers is good, as an IT leader I rely on good ol' fashioned proofs of concept. That way, I run the loads on them that I want. I would never buy into technology without first proving it out in my environment, running my loads. But that's me.

  23. This post has been deleted by its author

  24. SSD Lover

    Selling 'lab services' to the very companies whose products you test and post as independent, that counts as being independent? Have you revealed to your readers the number of vendors that contract with you for services? If not, why not? If there is money changing hands, why is it being hidden?

    When a company approaches StorageReview for independent testing they are offered a suite of services, and if they pay for those services, their product can be tested and data shared prior to posting. Is that integrity keeping the lights on?

    Companies bring hardware into your lab and test it against your tests, then go back and literally change firmware to suit the conditions of your test environment. That is allowing vendors to cook their devices for the test environment, but they have to pay for that "Lab Service". Correct?

  25. SSD Lover

    Point Blank Range Question: Does StorageReview, or does it not, offer lab services (under any name, conducted by any entity) at its labs?

    Does the company post product evaluations of the very vendors that it contracts services out to?

  26. Anonymous Coward
    Anonymous Coward

    It took a lot longer for other IT giants to get as arrogant as Nutanix. Forget the technology for a moment and imagine doing business with them as they convince you to re-arrange and document how you test and sandbox your apps to run on "their" platform. BTW guys, not cool to have all this swearing on a forum like this; we all expect better!!
