Server vendors and the dead hand of commoditisation

The leading server vendors have known about the inhibiting effect of slow disk drive performance on their users for years, yet have done nothing about it. It's been left to companies like Fusion-io and Virident to solve that problem by inventing and popularising PCIe flash. The problem is well known. When users run …

COMMENTS

This topic is closed for new posts.
  1. M Gale
    Thumb Down

    Eh?

    I can see why there would be huge inertia behind x86 and the usual "it's a PC in a rack" server architecture. You change the processor or fuck about with the internals too much and all of a sudden your programs won't run and you've got to go out and compile or rewrite whole new ones. Or buy whole new software in at potentially massive cost.

    Shoving some SSD in the machine in a totally transparent manner, however? How has commoditisation been at fault there? As opposed to, you know, the crippling costs of SSD?

  2. Nate Amsden

    PAM

    From what I recall, PAM (at least the first version(s) of it) was based on DRAM, not flash.

    Also, I'm not sure why anyone would think a server vendor would come up with something like PCIe-based flash on its own, any more than it would come up with a new type of HDD.

    Where server vendors did spend some of their time, of course, is in beefing up the caching capabilities of their integrated RAID controllers. With regard to flash, HP ships flash-backed cache as standard for its RAID controllers. Give a system a half gig of write-back cache and you can get some pretty decent performance (some of the latest servers go up to 1GB of cache, if not more).
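
    To put a rough number on what that buys you (back-of-envelope Python with illustrative figures, not vendor specs):

        # Back-of-envelope: burst absorption of a 512MB flash-backed write cache.
        cache_bytes = 512 * 2**20
        io_size = 4096                    # a typical small random write
        print(cache_bytes // io_size)     # => 131072 writes acknowledged at cache
                                          #    speed before the spindles must catch up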

    Probably the biggest issue with PCIe flash is the software drivers and stuff; with SAS/SATA that's all handled transparently by the controller (or at least it should be). I remember reading a couple of years ago about lots of issues with Fusion-io drivers in Linux, for example (I think the drivers were beta). Things have obviously improved a lot since then.

  3. Steve McPolin
    Grenade

    speed kills

    "In fact virtualisation, the work of VMware, was itself directly related to server disk I/O performance drag because earlier efforts to keep the server or desktop PC CPU busy when disk I/O delays caused it to be idle, meaning multi-tasking operating systems, were so bad at sorting out the problem."

    You might want to ease up on the meth a bit; you're babbling...

    You also might want to take a sip or two from the knowledge fountain, because even after a bit of sorting, your claims here are nonsense.

    UNIX, and presumably Windows, isn't crap at the scheduling problem at all. And of the many reasons to use VMware, performance, throughput and utilisation aren't amongst them.

    1. Wibble

      Why use VM tech?

      Just a point, but aren't the VMs running on top of a Windows or Unix host? i.e. all I/O and memory scheduling goes through the host OS queues.

      The main reason for using VM tech is configuration and logical functional separation, particularly in the enterprise space. As the previous poster points out, performance, throughput and utilisation aren't the reasons for using VMs -- except, of course, in the minds of clueless SFB PHBs.

    2. Sam Adams the Dog

      And furthermore...

      ... "flash cache" is not onomatopoeia. It's rhyme.

      1. Anonymous Coward

        "configuration and logical functional separation"

        I always thought VMs were more for Wintel anyway, and the logical separation is the biggest reason. App X (the name doesn't really matter, but it's a well-known enterprise app) on *nix can run multiple concurrent application instances and max the server hardware out. On Wintel I can only responsibly run one instance of the app per OS, as the OS will not keep them separate (i.e. if one crashes it will take the others down too). The only way to really max out my hardware with Wintel is to run multiple virtual partitions as, *in my case*, the OS and app combination isn't scalable.

        Of course, my use case is not typical and there are plenty of others out there - but in my world Wintel and virtualization go hand in hand. Not so much with the *nixes.

  4. Fazal Majid

    You need advanced software smarts to use Flash as a cache

    Flash is still too expensive to replace HDDs entirely in the storage hierarchy. In the meantime, it has to be inserted as an additional tier between RAM and disk. Intel tried to add flash as a cache on the motherboard, but failed, and high-end storage HBAs have had battery-backed RAM or NVRAM write caches for years, but the block-device semantics of hard drives make it hard to get good performance out of such a cache.
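
    The tiering idea itself is simple; it's the bookkeeping around block semantics that's hard. A minimal Python sketch of the read path (the mount points are made up):

        import os

        ram_cache = {}  # tier 1: DRAM

        def read_block(block_id):
            if block_id in ram_cache:
                return ram_cache[block_id]
            flash_path = "/mnt/flash/%d" % block_id        # tier 2: flash cache
            if os.path.exists(flash_path):
                data = open(flash_path, "rb").read()
            else:
                data = open("/mnt/hdd/%d" % block_id, "rb").read()  # tier 3: disk
                with open(flash_path, "wb") as f:          # promote into flash
                    f.write(data)
            ram_cache[block_id] = data
            return data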

    The reason Sun and NetApp succeeded where the others couldn't is that they both control advanced logging filesystem technology, in the form of ZFS and WAFL respectively (and the fact that ZFS and WAFL are very similar, to the point of NetApp suing Sun, is no coincidence). Flash inserts itself very neatly into the filesystem's log, like the ZFS Intent Log. SGI could have done much the same with its XFS, but was too busy with its death spiral (as opposed to Sun, which did not cut R&D despite poor financials). I suppose we could also mention Legato's PrestoServe, as sold by DEC and Sun, which worked at the same layer, using NVRAM rather than NAND flash.
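
    The log trick is easy to caricature in a few lines of Python (a toy sketch, nothing like the real ZIL internals; the mount points are hypothetical):

        import os

        LOG = "/mnt/flash/intent.log"   # fast device
        STORE = "/mnt/hdd/data"         # slow device

        def commit(record):
            # Durable as soon as the flash write completes: the caller pays
            # flash latency, not spinning-disk latency.
            with open(LOG, "ab") as log:
                log.write(record + b"\n")
                log.flush()
                os.fsync(log.fileno())

        def checkpoint():
            # Later, apply logged records to the main store in one big
            # sequential batch, which spinning disks are good at.
            with open(LOG, "rb") as log, open(STORE, "ab") as store:
                store.write(log.read())
                store.flush()
                os.fsync(store.fileno())
            os.truncate(LOG, 0)         # log space can be reclaimed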

    The only other place in the software stack where flash SSDs are a big win is the transaction journals of relational databases like Oracle or PostgreSQL, and until Oracle bought Sun, RDBMS companies were not in the hardware business. The exceptions are data warehouse appliance vendors like Teradata or Netezza, but those products are very expensive and, because of their low volumes, do not command the same attention from the IT press as mainstream servers. Teradata has used SSDs quite a bit in its appliances, for instance.
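
    Easy to verify, too: a journal is basically a stream of small synchronous appends, so timing fsync() tells you most of the story (a crude Python benchmark; point the paths at whatever devices you want to compare):

        import os, time

        def fsync_latency_ms(path, writes=200, size=4096):
            buf = os.urandom(size)
            fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
            start = time.time()
            for _ in range(writes):
                os.write(fd, buf)
                os.fsync(fd)            # a commit must reach stable storage
            os.close(fd)
            return (time.time() - start) / writes * 1000

        print("hdd: %.2f ms/commit" % fsync_latency_ms("/mnt/hdd/j.tmp"))
        print("ssd: %.2f ms/commit" % fsync_latency_ms("/mnt/ssd/j.tmp"))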

    As for virtualization being a better way to multitask than UNIX or Windows, that's just plain wrong. OS-level virtualization solutions like Solaris Zones or Linux's OpenVZ/VServer/User Mode Linux have far lower overhead and introduce less latency than hypervisors. The great benefit of virtualization is configuration management, i.e. containing DLL hell.

    Most enterprises have shockingly poor configuration and change management, both in documentation and processes, and when an IT manager or sysadmin has to do something about a poorly understood legacy system set up by someone who left the company, usually the safest thing to do is P2V it into a VM rather than try to reinstall it on a new machine.

  5. Anonymous Coward
    Happy

    Oracle uses PCIe flash in Exadata too

    The Sun Oracle Exadata system also makes use of PCIe-connected flash as that intermediate layer between disk and database, and Exadata's significant improvement in database performance compared to other solutions can be credited in part to the flash storage. Interestingly, Oracle has chosen to use its flash as a pure cache, and not as solid-state disk.

  6. Jim O'Reilly

    The Mainframe is alive and called x86

    As a close follower of Fusion-io and Virident, I've continued to be surprised at the sluggish rate of take-up in the industry. The explosion of cores in x86 server CPUs is stressing I/O well beyond any reasonable limit. We are at the point where VMware cannot usefully take advantage of more cores, because the effective IOPS per VM is dropping into the low single digits.
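
    To put illustrative numbers on that (assumed figures, not measurements):

        # Rough consolidation arithmetic with made-up but plausible figures.
        spindles = 8
        iops_per_spindle = 175   # a 15K RPM drive does roughly 150-200 random IOPS
        vms_per_host = 100       # dense consolidation on a many-core x86 box

        print(spindles * iops_per_spindle / vms_per_host)
        # => 14.0 IOPS per VM before RAID write penalties and contention,
        #    after which single digits are easy to reach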

    PCIe solid-state accelerators resolve some of these issues, and will go a long way towards handling "boot storms" and heavily de-duplicated data, so they will be popular. There is competition coming, though: one model of flash usage is as a memory extender, and the first "flash DIMMs" are just being announced. Still, this will definitely be a busy product area.

    The fundamental issue is, however, deeper. We rely on a storage I/O and filesystem paradigm invented for DOS machines, and it is fundamentally outdated. Further, we use RAID concepts designed for unreliable 9GB disk drives, and they too are outmoded. It's time for something radical!

  7. Joe Montana
    WTF?

    Useless at using resources

    The need for virtualization has less to do with efficient resource usage, and more to do with sandboxing of poorly written applications.

    Such problems are more prevalent on Windows than Unix: often you can't have multiple instances of the same application installed, and sometimes even different applications will conflict with each other.

    Unix has always had a limited form of virtualization via chroot, and it was not uncommon for a single Unix server to perform a large number of different tasks.
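
    For anyone who hasn't tried it, the whole mechanism is a couple of system calls (a minimal Python sketch; needs root, and /srv/jail is a hypothetical directory holding a prepared filesystem tree):

        import os

        os.chroot("/srv/jail")             # this process now sees /srv/jail as /
        os.chdir("/")                      # make sure the cwd is inside the new root
        os.execv("/bin/sh", ["/bin/sh"])   # shell confined to the jail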

    There were also various hosting providers offering Unix-based virtual servers many years before VMware.

  8. Anonymous Coward

    This doesn't make sense.

    Must've been something in the sushi at the last josstick lunch with the whalesong dynamics guys, eh Chris?

    Personally, I think that commodity x86 is indeed crap and just about unsuitable for serving anything, just as Windows is. This becomes more of a pain if you try to run the things, as you need all sorts of extras like (IP)KVMs, or basically the same thing in a plug-in card, possibly with a fancy web interface, for doing lots of things that you'd be better off having access to over a serial interface. Like, oh, BIOS settings, an NVRAM manipulation tool, the boot-up flexibility you get with OpenBoot but not so much from any PC BIOS, that sort of thing. Virtualisation "fixes" this not by giving you access but by making the need for it go away, except in those cases where you still need to manipulate the actual hardware. The performance overhead is probably an added bonus for hardware vendors, but not so much for the end user.

    Unix, OTOH, has quite a bit of history catering to lots of people doing lots of things at the same time. This just happens to mesh reasonably well with running lots of serving processes at the same time. Virtualisation in that role is much more useful for Windows, because Windows lacks many of the process, user, and other separation mechanisms. The drawback is that you now need even more interfaces and software layers (possibly^Wmore than likely fancied up with GUIs and web interfaces) merely to manage the resulting stack. Just hope all those bits stuck in there are rock solid, eh.

    I don't know if flash, with its limited write cycles, makes more sense as a cache than, say, a large bank of cheap-ish and slow-ish memory, possibly with battery backup. In the end, though, one might just find that the architecture is where the bottlenecks spring from, at which point multiple I/O channels and all that come back into the picture, and you're back to mainframes. Though as long as a simple pull of the wallet will command stacks and stacks of x86 servers to throw at the problem, nobody really needs to do anything fancy.

    In that respect, commodity hardware is much like xml, or violence. If it doesn't solve the problem, you're not using enough of it. Perhaps the same is true of virtualisation.

    1. Destroy All Monsters Silver badge
      Pint

      I drink to that

      "COMMODITY HARDWARE IS MUCH LIKE XML, OR VIOLENCE. IF IT DOESN'T SOLVE THE PROBLEM, YOU ARE NOT USING ENOUGH OF IT."

  9. Robert E A Harvey

    me too

    I think the problem is me-too management styles, where executives see an established market and want a slice of /that/. Innovation would take time, cost money, need explaining. Just photocopy what is already selling and get on with it.

    You can see the same thing with compact digicams, and white goods.

  10. Dazed and Confused

    Crap @ multitasking

    If they were crap at multitasking, how would VMware help?

    Someone has to schedule the different VMs on the physical CPUs.

  11. SplitBrain
    Pint

    Sun Innovative....

    Agreed, wholeheartedly.

  12. Anonymous Coward
    Flame

    You get whatever Christmas you deserve

    If you do not have the R&D-concept-to-market-product workflow worked out, you cannot exploit anything innovative until it is commoditised.

    The hallmark of all the vendors you have mentioned here (sans IBM) is that they do not have R&D in their core business area - servers and PCs. They have some design and integration, but no real R&D as such. It is all shop-and-ship.

    By the way, it is not just the PC vendors who have ended up caught in their own trap; it is half of the high-tech industry. Cutting R&D and C2M activities looks extremely good on paper - lots of savings and a decrease in risk exposure. It is, however, a guarantee that the company will have no future several years down the road.

    As for "our R&D is bullshit", which is the usual excuse for the cuts in the first place, that is once again the result of a cut (just a different one): the popular "we outsourced our core competence" cut. A natural result of outsourcing "core competence" is that the company can no longer _EAT_ _ITS_ _OWN_ _DOGFOOD_.

    A company that cannot consume its own R&D internally, and does not consume its own R&D internally, is doubly doomed to have no future.

    Once again, while the PC vendors are the classic examples here, they are not alone; half of the high-tech industry has followed them into the same trap.

    1. Ilgaz

      Nearly everything comes from IBM

      SMART and sudden motion detection were all IBM/Toshiba etc. innovations. Seagate used to be innovative too; they still do cool things, like that hybrid Momentus XT. One should ask why they don't bring the technology to enterprise disks. There is no price problem there, either.

      You know, everyone has ideas for hybrid disks, but making them cheaply and in massive numbers is the real thing. They managed to do the hard part, so why don't they do server/blade versions?

  13. Anonymous Coward

    And what about...

    The other thing is that while conventional disks have a failure period, so too do flash components, in the number of reads/writes that can be achieved; and software does need to be tweaked to run better on these things, since, as Percona and others have shown, the way they're used in the system makes a difference too...

  14. Mage Silver badge

    Innovation

    ARM and the Transputer in the 1980s were innovation. Since then we have just had cost reduction and performance enhancement.

    Flash is the main innovation since then.

  15. Mikel
    Go

    HP?

    HP partnered with Fusion-io to make their "Storage Accelerator" product years ago. They even have it in blades. Best to leave them out of this one.

    BTW: the reason is obvious. They all sell disk SANs that run to millions of dollars each.

  16. Salad

    SAN vendors already onboard with flash

    Not entirely sure why you refer to NetApp's caching mechanisms as the only bridge between DRAM-based cache and spinning media. Several other SAN vendors have been on board with SSD technology for years, like EMC and Dell Compellent integrating SSDs into their existing FC-based systems.

    I don't think there really is much to say on the topic of storage within commodity servers themselves. Any workload that outpaces current technology, like 15K RPM 2.5" SAS drives for example, generally warrants a dedicated storage system whether it be for capacity, scalability, reliability, or economy reasons. I don't think that server vendors have really left anyone in the cold here.

    I don't think many server operators are keen on the idea of obliterating a valuable I/O slot that could drive so much more than storage, which is already available on the front of the box in a little hot-swappable form factor. The market would be so small - the last time I heard about PCIe flash storage it was in a desktop PC that had water cooling and LED-lit fans...

  17. Bill Stewart
    Stop

    Unix vs. Virtualization

    Unix systems do a great job of managing disk and cache and multitasking scheduling - VMware doesn't improve that by wrapping an extra layer of emulation underneath. (I won't quibble if you want to criticize Windows's limitations, though it's also gotten better over the years.)

    What VMware really gives you is administrative separation, so you can have different systems administered by different people, all running applications that want to be root or Administrator. And by doing hardware emulation it makes it easy to snapshot machines and move them between different physical sets of hardware without the applications having to know about it, which is great for backups, hardware replacement, and debugging.

  18. Anonymous Coward

    Just put more memory in - with more processors?

    While I can see the point of a higher-speed disk serving up the OS and other static files a lot faster, doesn't it make more sense just to have a bigger main memory ... on each of a number of processors? Single dedicated processors only handling the 'web server' or 'database' process must be faster than creating many virtual views of exactly the same thing - and all the cached data simply lives in main memory ...

  19. wanderson

    Server Commoditisation Bad for Innovation

    Mr Mellor overspoke in the article when saying "Windows and Unix were so crap at the job" with regard to multi-tasking server apps, since both Sun Microsystems/Oracle with Solaris Containers and FreeBSD with Jails (both UNIX-based OSes) have very effective multi-application tasking, even without the disk I/O improvements of the type provided by Virident or Fusion-io.

    Therefore, he needs to think before attempting to attach the lack of innovation and poor performance of Microsoft Windows technology to others, so as to mute the mediocrity and lack of innovation that emanates from Redmond.

    This "misery loves company" linkage may not be intentional in some instances, but it has been shown to be carefully planned by many Microsoft-duped reporters trying to minimise the negative view (and reality) of Microsoft's failures in creative enterprise technology.

    1. Chad Larson

      Solaris and FreeBSD

      Both have, in addition to Zones (Sun) and Jails (FreeBSD), fairly robust tools for using SSDs (or other memory-based drives) within ZFS. The filesystem "knows" what data is better cached, and data is written to more than one place within the filesystem.
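
      Attaching flash to a pool is a one-liner per device (a sketch via Python's subprocess; the pool name "tank" and the device paths are made up):

          import subprocess

          # "cache" adds an L2ARC read cache; "log" adds a separate ZIL device.
          subprocess.run(["zpool", "add", "tank", "cache", "/dev/ada2"], check=True)
          subprocess.run(["zpool", "add", "tank", "log", "/dev/ada3"], check=True)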

  20. Lowres
    Grenade

    But would it be commodity?

    Right, so you take a $3,000 server, then stick in a $15,000 PCIe card: it's hardly commodity any more, is it?

    Lots of fast SSD/RAM that can be used in this way, and written to far more frequently than a disk would be, is bloody expensive.

    Then there are also the requirements for redundancy that play a part. You lose the I/O card, you lose the data...

  21. iffer
    FAIL

    umm, been there 20 yrs ago

    The reason it's not commonly done is that it's been done, and found not to be an effective use of resources. In the '80s we had add-in cards full of RAM (used as a RAM disk) with battery backup - functionally equivalent to your flash disk, without the limited-write-cycle problems.

    What we discovered is that the RAM $$ was better used if it was placed in the system memory pool, where it could be used as disk cache, RAM disk or working storage. And we put the battery backing on the whole box.

    Jen

    Oh, and I went back looking - here is a 2005 PCI RAM-disk card: http://www.bit-tech.net/news/hardware/2005/05/31/gigabyte_ramdisk/1 That seems to be what you were saying wasn't invented?

  22. Ilgaz

    Even ReadyBoost can create miracles

    The absurd thing is, even though MS did all it could, at the expense of adding one more item to an already crowded menu, users don't really know/care that a similar end-user option has existed for years, beginning with Vista.

    A cheap Class 4 SD card in my neighbour's HP laptop made him give up on buying a faster disk, since there was a 2-3x speed increase in a lot of operations, especially MS Office/Java apps. And let me not forget the surprising bit: a "heavy" but working security suite (not Norton) got a very special boost from it, as these things have to deal with a lot of temp files (consistency data etc.) and updates happen instantly.

    Did any company market the already built-in, kernel-level technology by selling a very fast SD card reader on a PCI card, where you could easily swap the cheap SD card for a new one?

    My neighbour's laptop SD card slot had been idling for years; he didn't know it existed, and he sees little point in it anyway. The only company that seems to have understood the business is Toshiba, which stamps "Windows ReadyBoost" on its faster USB sticks.

    BTW, nobody seems to question the Seagate/WD etc. cache sizes. Comedy, real comedy...

  23. Antti Roppola
    Welcome

    The new R&D paradigm

    So I thought the new model was to outsource R&D risk by using VCs to fund startups, which then get acquired if they show any promise.

    I suppose the art is knowing how long to leave companies like Fusion-io and Virident to reduce risk and grow the market before they get gazumped from under your nose, or steal a march on you and turn into competitors. Then there's that whole abdication of control (or is that just the ultimate expression of faith in the market?).

  24. Henry Wertz 1 Gold badge

    Most vendors are not innovative

    That's just how it is. Dell etc. did not develop SCSI, IDE, SATA, RAID, AGP, PCI or DIMMs; they haven't designed processors, and generally not their own chipsets either (HP did for its UNIX workstation line, and IBM sometimes does on desktops too). This is not what they do: they put together the commodity components. I don't think this held anything back; other companies supplied flash, and now that prices are becoming more reasonable, Dell etc. are beginning to use it.

  25. AntiPoser
    Linux

    What about the customer

    If we're passing blame all around, why not the customer, who has helped slow innovation down by demanding lower costs and playing the large vendors off against each other for competing bids? Profits in the x86 market are small compared to the large RISC Unix market, forced down by customers playing the vendors against each other. IBM's dropping of its desktop division was a casualty of this; yes, they gave it nice business names like "not our core business" or other fine terminology, but basically it became too expensive to produce a ThinkPad... Lenovo showed that by producing them only for as long as the agreement ran, then dropping it fast... And Compaq, the great x86 player, just got eaten by HP: why innovate when you can buy?

    Sorry, but there is lots of blame to go around, the least of which, I think, belongs to the virtualisation vendors, be it VMware or Microsoft.

  26. Anonymous Coward
    Grenade

    Violins on TV

    And when HP teamed up with Violin Memory for a TPC run that was unbelievably good, Oracle refused to sign off on it. Rumour is, it beat the Exadata contraption in every way possible, especially price.

    Putting flash on both the PCIe and storage buses can be a real eye-opener. Expensive, though, especially the way Oracle does it.

    1. Anonymous Coward
      FAIL

      re: Violins on TV

      That price argument is nonsense. The HP/Violin combination had ZERO redundancy in the hardware and software. If anything failed, except for an SSD in the Violin array, then you were down!!! Oracle will not sell an Exadata without redundancy. HP and Violin don't care about your data -- OK, that last bit was unfair, but the whole price aspect of this is nonsense. If you level the playing field by adding redundancy to the HP solution, then the price/perf argument becomes much more interesting.

  27. Eddie Johnson
    Coat

    My PDP-11 is faster than your Superdome

    "VMware is, after all, just a glorified way of multi-tasking apps in servers and, originally, PCs, that was necessary because Windows and Unix were so crap at the job."

    Glad to finally see someone say that in print. Now can we take the next step and fix the failed piece of the stack, rather than put another layer on it? Virtualising OSes is an acknowledgment that the OSes are failing to do what they promised in delivering a multiuser/multitasking environment. If you were swapping apps in and out rather than whole OSes, the disk load would be that much smaller. This is yet another case of poorly implemented software driving the demand for faster hardware just to keep your head above water.

    1. Billl
      Happy

      re: My PDP-11 is faster than your Superdome

      ummm... Solaris Zones and BSD Jails, maybe? Even chroot jails on *nix are reasonably effective.

      There -- solved. What else you got?

      VMware/Xen don't just provide better utilisation; they also provide a way to consolidate heterogeneous environments. If homogeneous consolidation were the only problem, then the above-mentioned solutions would be fine and we would not need VMware/Xen.

  28. Schyzophrenic

    VMware and the Server

    I think the main reason VMware is being installed on today's servers has nothing to do with multitasking, resource management, or the lack of either in today's server-class operating systems. The demand for high-performance/low-price servers has created an environment where the need for an individual server per task, due to storage or processor limitations, is a thing of the past, and it makes more financial sense to purchase one dedicated system to perform all the business tasks, with a duplicate for redundancy, rather than a massive rack of specialised servers each doing its own task.

    I think it also speaks to how effective disk I/O in these systems is that so many medium and large enterprises are moving towards this model, knowing that replacing multiple older servers with a single current unit will increase performance while still decreasing operating costs. This is true for both *nix and M$ environments.
