* Posts by Storage Scrutinizer

3 publicly visible posts • joined 26 Aug 2009

Pliant's SSDs are awesome, says Pliant

Storage Scrutinizer

Pliant SSD performance claims

Pliant has been claiming 180,000 IOPS per SSD in -- MIXED -- workload scenarios.

Some comments...

1) 68,750 IOPS (1.1M divided by 16 SSDs) is less than 40% of the claimed per-drive performance. Still -- seems like a good number, except....

2) ...according to the video, this number was "achieved" by setting queue-depth at 64 outstanding IOs per drive!!

This is a ridiculous queue depth that would never be seen in real-world applications...never, never, ever.

Real-world applications are RARELY able to generate more than even 6 or so outstanding requests at any given target, and the average is only 4. The idea that there are 1024 I/Os queued up in this system (64*16) is utterly ludicrous. Likewise the Pliant/Oakgate benchmark result is pure fantasy.
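
To put numbers on both points, here's a quick Python sketch (the 1.1M-IOPS, 16-drive, and QD-64 figures come from the video; the rest is simple division):

  # Sanity-check the Pliant/OakGate headline numbers.
  total_iops = 1_100_000            # headline benchmark result
  num_ssds = 16
  claimed_per_ssd = 180_000         # Pliant's per-drive claim
  queue_depth = 64                  # outstanding I/Os per drive, per the video

  per_ssd = total_iops / num_ssds                 # 68,750 IOPS per SSD
  fraction_of_claim = per_ssd / claimed_per_ssd   # ~0.38, i.e. under 40%
  outstanding = queue_depth * num_ssds            # 1,024 I/Os in flight

  print(f"{per_ssd:,.0f} IOPS/SSD ({fraction_of_claim:.0%} of claim), "
        f"{outstanding} I/Os outstanding")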

Back in 2008, Pliant slammed a competitor based on their published SPC-1 benchmark performance -- claiming that their SSD would cost 1/4 or less in terms of $/IOP.

http://blog.enterprisestoragesense.com/2008/04/14/never-send-hdd-to-do-the-job-fit-for-efd/

So...I wonder why Pliant didn't go the SPC-1 route? All they need to do is show about $0.25 per IOP on SPC-1 to match their previous performance claims and settle the matter.
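
To put a number on it: SPC-1 price-performance is just the total tested-system price divided by SPC-1 IOPS. So if the 1.1M-IOPS figure were real, the whole 16-drive system would merely need to price out under about $275,000 (a sketch; the system price is the unknown here):

  # What system price hits $0.25 per SPC-1 IOP at the claimed throughput?
  target_usd_per_iop = 0.25
  claimed_system_iops = 1_100_000   # the Pliant/OakGate headline number
  max_price = target_usd_per_iop * claimed_system_iops
  print(f"break-even system price: ${max_price:,.0f}")   # $275,000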

Should be easy, right?

IBM shows flash of SVC inspiration

Storage Scrutinizer

More FlashDancing and STEC-pumping

Barry Whyte didn't say that IBM "dumped" Fusion-IO for STEC. They just chose to highlight STEC for SVC for the present.

Mr. Whyte's superficially plausible technical arguments don't totally pass the smell test, however, because he never even acknowledges the economic side of the decision for IBM.

There are certainly applications where SSD in a PCIe slot is the best approach, and there are profit-motivated scenarios where it makes more sense to pump STEC vs. Fusion-IO.

What Barry Whyte ignores is this: when IBM (or EMC, or Compellent, or anyone else) sells a STEC drive, they turn $20,000 in revenue at 50% gross margins. When they sell a Fusion-IO unit they turn $3,000 in revenue at 40% gross margins -- for the same SSD capacity.
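
In rough numbers (using my revenue and margin estimates above, not published OEM pricing), that's the difference between about $10,000 and about $1,200 of gross profit per unit sold:

  # Gross profit per unit sold, using the estimated figures above.
  stec_revenue, stec_margin = 20_000, 0.50
  fio_revenue, fio_margin = 3_000, 0.40

  stec_profit = stec_revenue * stec_margin   # $10,000 per drive
  fio_profit = fio_revenue * fio_margin      # $1,200 per drive

  print(f"STEC: ${stec_profit:,.0f}  Fusion-IO: ${fio_profit:,.0f} "
        f"(~{stec_profit / fio_profit:.0f}x per unit)")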

But the decision storage vendors face is even bigger than that. For OEMs like IBM (and especially SAN players like EMC, Pillar Data, Compellent, etc.), the SSD game is all about OEM-profit-per-IOPS delivered to the application, and that's where STEC really shines.

Consider IBM's own STEC benchmark numbers. In SPC-1, 84 ZeusIOPS units delivered 300,000 IOPS -- about 3,500 IOPS per SSD, or more than $5.50 of drive revenue per IOPS at $20,000 apiece. (FYI, these results actually show a HIGHER price-per-IOPS than HDD running the same benchmark!!!)

In STARK contrast, in IBM's Quicksilver test, 41 Fusion-IO drives delivered 1,000,000 IOPS. That's more than THREE TIMES as many IOPS from HALF AS MANY SSDs.

Now consider: that amounts to roughly 24,000 IOPS for $3,000 (Fusion-IO) compared to roughly 3,500 IOPS for $20,000 (STEC).

That's about $0.13 per IOPS for Fusion-IO vs. more than $5.50 per IOPS for STEC.

Since these devices are both based on SLC Flash, it doesn't require a technical genius to figure out that selling STEC generates well over 50X more OEM-profit-per-IOPS than selling Fusion-IO.
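
Here's the whole chain in one place (the IOPS figures are the published benchmark results quoted above; the $20,000/$3,000 prices and 50%/40% margins are my estimates):

  # OEM profit per IOPS, from the benchmark figures and estimated pricing above.
  stec_iops_per_drive = 300_000 / 84       # SPC-1: ~3,571 IOPS per ZeusIOPS
  fio_iops_per_drive = 1_000_000 / 41      # Quicksilver: ~24,390 IOPS per Fusion-IO

  stec_profit_per_iops = 20_000 * 0.50 / stec_iops_per_drive  # ~$2.80
  fio_profit_per_iops = 3_000 * 0.40 / fio_iops_per_drive     # ~$0.05

  ratio = stec_profit_per_iops / fio_profit_per_iops          # ~57x
  print(f"STEC ${stec_profit_per_iops:.2f}/IOPS vs. "
        f"Fusion-IO ${fio_profit_per_iops:.2f}/IOPS -- {ratio:.0f}x")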

Now...if YOU were a Storage OEM trying to squeeze maximum profit out of the SSD hype-cycle (at the very peak of customer confusion), where would YOU invest your BenchMARKETING budget?

It's that old simple rule again...FOLLOW THE MONEY. That's why EMC and IBM and virtually EVERY other SAN vendor is pumping STEC...for now.

That's also why STEC is doomed.

With the Intel/Micron joint venture now focused on building its own Fusion-IO clone instead of a SAS SSD, with Samsung as the new "virtual" owner of Fusion-IO, and with Sun replacing the F5100 with the F20 PCIe card in Oracle's Exadata 2, all of the NAND Flash silicon vendors have aligned themselves with the "Flash-on-the-motherboard" approach, because this is the only path to meaningful volume in the Enterprise Flash game.

In the spin of SSDs on database servers

Storage Scrutinizer

SSD Spin cycle

There's a reason why there hasn't been a single transaction-processing benchmark published on SSD. And there won't be any TPC-C or TPC-E result published either.

It's because the asymmetry of read-write performance gums up the works in any database that does any substantial amount of updates whatsoever.
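
A toy model shows why. Blended throughput over a read/write mix is a weighted harmonic mean, so even a modest update fraction drags the mixed rate down toward the much slower write rate. The numbers below are illustrative assumptions, not measurements of any particular drive:

  # Toy model: mixed read/write IOPS is a weighted harmonic mean.
  read_iops = 45_000      # assumed best-case random-read rate
  write_iops = 5_000      # assumed sustained random-write rate
  write_fraction = 0.30   # assumed OLTP-ish update mix

  mixed = 1 / ((1 - write_fraction) / read_iops + write_fraction / write_iops)
  print(f"blended: {mixed:,.0f} IOPS")   # ~13,000 -- nowhere near 45,000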

Now, we do have a couple of published results for PCIe-attached SSDs (the only kind that makes any sense, BTW) on the TPC-H data warehousing benchmark. That app should be a home run for SSD, because the workload is >95% random reads. So why, then, are the TPC-H systems with the best price/performance (Kickfire, for example) still based on spinning disk?

In enterprise applications, the performance advantage of SSD vs. HDD drops like a stone. The STEC ZeusIOPS, for example, drops from 45,000 IOPS (advertised) to 5,000 (measured, SPC-1C/E). That's still fast, but not worth $20,000 when you can get the same number of SPC-1 IOPS from an array of 10K RPM disks costing less than $6,000. For the difference in price, you can even buy 15 years' worth of electricity to run them, not to mention several terabytes of additional capacity for "free".
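
The electricity claim holds up under generous assumptions. Suppose the sub-$6,000 array is twenty 10K RPM drives (at roughly 250 SPC-1 IOPS apiece, that's the same ~5,000 IOPS) drawing ~12 W each, with power at $0.10/kWh -- all assumptions on my part:

  # Does the $14,000 price gap cover 15 years of disk power? (All inputs assumed.)
  price_gap = 20_000 - 6_000       # SSD price minus disk-array price
  num_disks = 20                   # 20 x ~250 SPC-1 IOPS ~= 5,000 IOPS
  watts_per_disk = 12
  usd_per_kwh = 0.10
  hours = 15 * 365 * 24            # 15 years, running 24x7

  power_cost = num_disks * watts_per_disk * hours / 1000 * usd_per_kwh
  print(f"15-year power: ${power_cost:,.0f} vs. ${price_gap:,} price gap")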

I think the REAL story you guys should be publishing is titled "Where's the beef? Why SSD manufacturers like STEC refuse to publish audited application performance benchmarks."