StorPool CEO: 'We do not need another storage product'

We interviewed StorPool CEO Boyan Ivanov, who thinks all-flash arrays are missing the point these days. Storage is broken and needs a new concept to fix it. He didn't wait to be asked anything, starting things off:

Boyan Ivanov: It's disruption time and the storage industry is turning on its head.

El Reg: Er, okay. What do you …

  1. Lysenko

    Hmmm...

    So, run some software on commodity hardware/disk arrays which can either serve up raw files or do some intermediate processing on them before delivery?

    Novell NetWare V3.x with some NLMs on a Compaq Proliant circa 1992. I'm not feeling my paradigms shifting.

    1. Sandtitz Silver badge

      Re: Hmmm...

      "Compaq Proliant circa 1992"

      Proliants were introduced in 1993, so it was probably a Compaq SystemPro.

      1. Lysenko

        Re: Hmmm...

        It was definitely a Proliant (P60, original Pentium) but your year correction is very probably correct.

  2. leon clarke

    Lock in

    If I'm following this, they want you to be locked into their software, instead of someone else's hardware and software combo. Which is a step in the right direction (assuming the software has a sensible price). But I still see something that looks and quacks like vendor lock-in.

    1. GoFarley

      Re: Lock in

      Many features, such as sharding data across systems and deduping data, create lock-in by their nature. Oh - and good luck avoiding lock-in with encryption. Even if a vendor wanted to completely avoid lock-in, they couldn't - unless they had the first truly open storage software platform (that nobody else would have).

    2. Boyan_StorPool

      Re: Lock in

      Firstly, totally avoiding lock-in is an illusion. It would imply not using ANYTHING, so that you're not locked in. In reality it is a continuum, and you evaluate the degree to which you're locked - from nearly completely locked to almost free.

      In the case of an array you're locked in to a large extent - you prepaid for all the capacity you need, plus some factored-in future growth; you need to buy the next box from them, otherwise you're creating silos of storage, etc., etc.

      In our case we provide a standard block device, which is usually managed by cloud management software. If you decide that the solution is not your best choice, just click, copy your data and you're with another vendor (see the copy sketch after this comment). No complex migrations, different data formats or expensive professional services.

      We have monthly recurring licensing plans, based on the capacity of your current system. In this case you haven't prepaid for the next 2-3 years; if next month you do not like it, you're free to switch. If you still want to commit and prepay for a certain period, naturally you get good discounts for doing so.

      The storage industry is being commoditized and is becoming more open. All vendors have to realize this: unless they provide great value and delight customers constantly, they'll go out of business. This is the new reality.
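
A minimal sketch of Boyan's "standard block device" point above, assuming hypothetical device paths and the necessary privileges: data sitting on one block device can be moved onto another vendor's volume with nothing more exotic than ordinary reads and writes, which is roughly what "copy your data and you're with another vendor" amounts to. Any generic copy tool would do the same job; this just spells out the idea.

```python
# Minimal sketch: copying one standard block device to another with plain
# reads and writes. Both device paths are hypothetical stand-ins.
def copy_block_device(src: str, dst: str, chunk: int = 4 * 1024 * 1024) -> int:
    """Copy src to dst chunk by chunk; returns the number of bytes copied."""
    copied = 0
    # "r+b" on the destination avoids truncation flags a block device doesn't need.
    with open(src, "rb") as fin, open(dst, "r+b") as fout:
        while True:
            buf = fin.read(chunk)
            if not buf:
                break
            fout.write(buf)
            copied += len(buf)
    return copied

if __name__ == "__main__":
    # Hypothetical volumes; in practice you'd quiesce I/O first and run with root privileges.
    print(copy_block_device("/dev/storpool/vol1", "/dev/othervendor/vol1"), "bytes copied")
```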

  3. Anonymous Coward

    "This is a variant on the Nexenta, Swiftstack, Maxta, Caringo. Scality, DataCore, FalconStor, OSNEXUS software-defined storage schtick..."

    Yes, and these have all been very successful and therefore so will this one...

    I'm betting this is his first and last interview...

    Good Luck

  4. SirWired 1

    I don't think customers actually care

    Customers want storage, and they want it reliable, cheap, and flexible. I do not think a nebulous concept like "openness" is really on the priority list, because few customers want to deal with the inevitable integration and interop headaches that result from mix-n-match.

    And there are all-flash arrays that don't feature SSDs at all! I know that IBM's flash boxes use boards stacked with flash chips, instead of the typical RAIDed SAS-attached SSDs. (And I'm sure other flash boxes do this too.)

    And I love the yammering on about Mom 'n Apple Pie ideas of treating customers well... that sounds like an argument for sensible pricing, quality service and support, etc. I don't see how it ties into needing to deliver storage in a different fashion.

  5. jmith

    Mission Critical Data

    Here's the problem that this guy is not thinking about: mission-critical data.

    Most large corporations are only going to consider highly redundant hardware devices for their storage for quite some time. They don't want commodity hardware to fail and cause issues. While you can add additional hardware to protect against this, you pay a performance penalty for writing that data multiple times on multiple systems (a rough model of that cost follows this comment).

    There's another problem with his line of thought. While a lot of startups are annoyed by storage arrays, it's business as usual for big corporations. They have tons of storage admins who keep these systems running without failures. The startups are going to be big consumers of the "commodity hardware storage" concept. Just a slight problem: some of them don't have a lot of money. The ones that do have lots of money, and have the kind of thriving infrastructure you would love to sell your storage product into, aren't the types to "buy stuff". They tend to create their own. Amazon, Google and Facebook all operate on commodity hardware. They bought the hardware, but they did not buy someone's "magic sauce" to make it work.

    The type of data changes the appropriateness of the storage greatly. Would you put a lot of easily replaceable data on the most expensive EMC arrays? Nope. Would you put banking data on a PC with some drives inside and trust all that account data not to be lost? Nope!

    While I agree that the storage industry leaves a bit to be desired, it's not completely outdated like this guy thinks. You can certainly make commodity hardware suit your needs if you have the time and knowledge, like the big internet firms. You just have to decide on your own whether this guy's software - somewhat ripped off from others - is worth the cost, or whether it's snake oil.
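
A rough, illustrative model of the replication write penalty jmith raises: with N-way synchronous replication a write is acknowledged only after every copy has landed, so latency is bounded by the slowest replica plus the network hop, and the cluster does N times the write work. The figures below are made up for the sake of the arithmetic, not measurements of any product.

```python
# Back-of-the-envelope model of the cost of writing data multiple times on
# multiple systems (N-way synchronous replication). All numbers are illustrative.

def synchronous_write_latency_ms(replica_latencies_ms, network_rtt_ms):
    """A synchronous write completes only when the slowest replica has acknowledged."""
    return max(replica_latencies_ms) + network_rtt_ms

def write_amplification(copies: int) -> int:
    """Every logical write is physically written `copies` times across the cluster."""
    return copies

if __name__ == "__main__":
    replicas = [0.8, 1.1, 2.5]   # per-replica media write latency in ms (made up)
    rtt = 0.2                    # network round trip in ms (made up)
    print("effective write latency:", synchronous_write_latency_ms(replicas, rtt), "ms")
    print("write amplification:", write_amplification(len(replicas)), "x")
```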

  6. Androgynous Cow Herd
    Meh

    Software defined software

    "Run your instruction set on commodity hardware" is not a new idea, but "Software Defined $THING" has been a series buzzword rivaling "Cloud" and only outpaced by "IoT"... With very, very few exceptions, the hardware that runs storage arrays are good old x86/x64 powered instruction sets. They might have a custom Disk controller or maybe a custom ASIC, but they are x86 systems in the end. Specialized hardware adds a bit in a couple cases, but really, the software (and the feature set derived from it) and support from the storage vendor are what a customer is really selecting.

    In my experience, the number one thing a customer is looking for from centralized storage is constant availability, then adequate performance, then the rest of the value-added stuff. I would hold that building your storage in any sort of BYO manner will make delivering on "constant availability" much less of a sure thing than a purpose-built platform from just about any specialized storage vendor. Unless the SDS vendor is spending a lot of time building, testing and expanding HCLs, and enforcing that tested/validated combinations of hardware are always used, BYO brings with it a lot of unnecessary risk. The BYO approach may have appeal to budget-constrained shops, but, to be frank, budget-constrained customers make for revenue-constrained vendors.

  7. jbahn

    Someone's gotta test

    Expanding on jmith's comments ... if you've got mission-critical data, you probably need to pay "insurance" to someone in the form of testing. Full disclosure: I'm with a testing company. Today, the storage vendors do a substantial amount of testing before they release new products (a skeletal example follows this thread):

    • Functional Testing – the investigation under simulated load of various functions of the storage system (e.g., backup, etc.)

    • Error Injection – the investigation under simulated load of specific failure scenarios (e.g., fail-over when a drive fails)

    • Soak Testing – the observation of the storage system under load sustained over significant time

    • Compatibility Testing – determining that the interaction of storage hardware, software and networking is compatible with major other subsystems, like virtualizers and database systems

    • Regression Testing – done to a huge degree, to ensure that new releases don't break things that used to work

    • Limits Finding – determining the workload conditions that drive performance below minimal thresholds, often with documentation of storage behavior at the failure point

    Having said that, IMO, the smartest IT users still do their own performance testing with their own mission-critical apps.

    1. Mr. Twinkee

      Re: Someone's gotta test

      And to add to that, SDS companies are also putting together reference architectures that they bake to make sure the solution is rock solid. Nexenta has done that with their MetroHA and qualified specific hardware from Dell and Supermicro. I understand that this is in order to help support the solution vs. having a support team that has to know about any and all commodity hardware in the world. They also require that Nexenta install the solution; being that it is a complex active/active fully redundant data center solution, they don't want someone forgetting to plug in a cable and then claiming that the solution is balls awful when it was the customer's own staff that did the install.

      There is a trade-off between solution costs, installation costs and support costs depending on what type of "open" system you want to create or implement.
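
To make jbahn's categories a little more concrete, here is a minimal, generic sketch of a soak test with crude error injection against a scratch file standing in for a volume under test. The path, sizes, duration and the inject_failure() hook are all hypothetical placeholders; it shows the shape of such a harness, not any vendor's actual test suite.

```python
# Generic sketch of a soak test with crude error injection.
# inject_failure() is a hypothetical hook; a real harness would pull a drive,
# kill a node, or cut a network link at that point.
import hashlib
import os
import random
import time

TARGET = "/tmp/soak.img"      # stand-in for a volume under test (hypothetical path)
BLOCK = 64 * 1024             # block size written and verified
BLOCKS = 256                  # number of blocks in the scratch target
DURATION_S = 60               # a real soak test runs for hours or days

def inject_failure():
    """Placeholder for a real fault: fail a drive, kill a node, drop a link."""
    print("-- fault injected here --")

def soak():
    written = {}              # block index -> checksum of the data we wrote there
    deadline = time.time() + DURATION_S
    fault_at = random.uniform(time.time(), deadline)
    mode = "r+b" if os.path.exists(TARGET) else "w+b"
    with open(TARGET, mode) as f:
        f.truncate(BLOCK * BLOCKS)
        while time.time() < deadline:
            # write a random block and remember its checksum
            idx = random.randrange(BLOCKS)
            data = os.urandom(BLOCK)
            f.seek(idx * BLOCK)
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
            written[idx] = hashlib.sha256(data).hexdigest()
            # inject one fault partway through the run
            if time.time() >= fault_at:
                inject_failure()
                fault_at = float("inf")
            # read back a previously written block and verify it survived
            check = random.choice(list(written))
            f.seek(check * BLOCK)
            assert hashlib.sha256(f.read(BLOCK)).hexdigest() == written[check], \
                f"block {check} corrupted"
    print(f"soak finished: {len(written)} distinct blocks verified")

if __name__ == "__main__":
    soak()
```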
