STEC becalmed as Fusion-io streaks ahead

Conditions are variable in the solid state drive (SSD) world, with STEC lagging while Fusion-io has the wind in its sails. STEC makes SSDs that replace fast hard disk drives (HDDs), fitting into HDD slots in storage arrays and boosting I/O rates far more than having lots of short-stroked, fast-spinning Fibre Channel …


This topic is closed for new posts.
  1. Craig Ringer


    "but these haven't yet received a wide take-up buy its OEMs"

    buy, by, who cares whitch is which, people are to fussy about these things.

  2. Ian Michael Gumby

    Fusion IO vs SSDs

    The interesting thing about Fusion-io is that they do 'RAID' across the flash on their PCIe card.

    Unfortunately, it's a very pricey option unless you can justify $13K or more for a Fusion-io solution. If you can, it's the better way to go.

    They also need to solve the boot problem. Once they do, you have a very fast, low-power, low-heat solution.

  3. Colin_L

    the problem with FC SSD

    One of the biggest problems most if not all of the SAN vendors are facing is scalability.

    If you have a box like a CX480 made to hold more than 400 spinning disks, but you put in two trays of SSD and tap out both controllers, then you've paid rather a lot for your roughly 2.2 TB. Let's be kind to EMC and say it takes four trays. Still, that's a tiny amount of data to tap out a frame that would have held well over 100 TB of disk. You can scale more easily in architectures that use a meshed star topology, true. But most midrange arrays aren't like that, and in this economy it seems that hardly anyone is ordering up a Symmetrix with a bunch of controllers and SSD.
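    The arithmetic behind "a couple of trays taps out the controllers" can be sketched as a back-of-envelope calculation. All figures below are illustrative assumptions, not vendor specs: an assumed 200K IOPS controller ceiling, ~180 IOPS per 15K FC disk, ~30K IOPS per enterprise flash drive.

```python
# Back-of-envelope sketch (all figures are illustrative assumptions,
# not vendor specs): why a handful of SSDs can saturate a midrange
# array's controllers long before the frame is full of drives.

CONTROLLER_IOPS = 200_000   # assumed combined IOPS ceiling of both controllers
HDD_IOPS = 180              # assumed per-drive IOPS for a 15K FC disk
SSD_IOPS = 30_000           # assumed per-drive IOPS for an enterprise flash drive

# How many drives of each type it takes to hit the controller ceiling.
hdds_to_saturate = CONTROLLER_IOPS // HDD_IOPS   # far more disks than the frame holds
ssds_to_saturate = CONTROLLER_IOPS // SSD_IOPS   # well under a single tray

print(f"HDDs needed to tap out the controllers: {hdds_to_saturate}")
print(f"SSDs needed to tap out the controllers: {ssds_to_saturate}")
```

    Under these assumed numbers, the controllers bottleneck after only a few flash drives, while you could fill the whole frame with spinning disk and never hit the ceiling.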

    Texas Memory Systems is the only maker of dedicated flash SSD arrays that I'm aware of-- and while they have perfect scalability (because they built it that way, rather than for spinning disk), they are nowhere near the features, redundancy or ease of administration of the conventional SANs of the world.

    Fusion-io, TMS and others make PCIe cards that go directly into a server. That's all well and good, but it doesn't work for a clustered system. (Those that support disparate / non-shared storage do so via a private bus, and using that would sacrifice all the speed of SSD.) If your business has the need for SSD speed and the money to buy it, you probably have clusters. That's not to say you can't find some way to use direct-attached SSD...

    I expect the next generation of midrange arrays to have much more controller horsepower than they presently do. And soon enough someone will make an array that has controllers, DRAM cache, flash memory and some spinning disk all working in concert. SSD will be like a second layer of cache.

    Just my opinions of course-- stay back, all you lawyers and storage vendor employees. :)

    1. seven of five

      spot on, and furthermore

      To make things worse, when you have several of those TMS boxen, they start saturating your SAN backbone... especially when backups run over the same pipes.

      As my teacher once said: "You can't remove a bottleneck, you can only relocate and defer it."

  4. Anonymous Coward

    I'd talk to whoever checks your stock prices

    Erm, EMC's current stock price is $17.87; maybe in July last year it was trading around $12

    You can even use google to check stock quotes !!shock!!

  5. JL 1

    Big Jim?

    Is that *the * Jim Dawson quoted here? Big Jim of StorageNetworks and 3PAR fame? If so, well done Jim!
