Intel squeezes one million IOPS from desktop

Intel has managed to squeeze one million IOPS out of a two-socket desktop tower. Intel Fellow Rick Coulson showed off the eye-opening setup during a keynote Tuesday at the Intel Developer Forum in San Francisco. The secret ingredients are SSDs tightly coupled to a host in a highly tuned setup that Coulson called "co- …


This topic is closed for new posts.
  1. Anonymous Coward

    Intel -- short-stroking SSD and the new kings of benchmarketing BS

    Re: "To the non-storage guys in the audience, he said "I want to put this in perspective: this is about 5,000 disk drives worth of random performance."

    Instead of foisting this malarkey on the market, why won't Intel just publish an SPC-1 benchmark? Or TPC-C? Or ANY audited benchmark? Oh yeah...I forgot. That would require them to subject their test setups to public scrutiny and verification.

    By the way...has anyone else noticed that Intel's SSD spec sheets have some little teeny-tiny print at the bottom that discloses even more ridiculous BS? Turns out that Intel "short-strokes" their SSDs so that they test only an 8GB partition on a 160GB disk. That way, all you are really seeing is the performance of the DRAM write cache -- which is oh-by-the-way a VOLATILE memory. Of course this means Intel's specs are crap.

    Imagine...short-stroking an SSD!!! Purely laughable garbage from the mickey-mouse marketeers at Intel.

  2. Hate2Register

    Hang on...

    When you put a TLA (three-letter acronym) in a piece, you explain what it stands for. Otherwise people have to search your article for a later explanation, which may or may not be there. How damned annoying. Very. Journalists all over the place get this wrong, and I'm all for stringing you up using your own underpants. Mutant.

    You used IOPs, SSDs, and I/Os, all without any explanation. I know most of them anyway, but you forced me to read half the article, looking for a definition for IOP. If you hadn't provided one, then I would have thought WTF, I would H82BU. FFS man, @TEOTD, if you can't write sexy, then don't get jiggy, dude.

    FYI, all these acronyms are valid. Will you look them up. WYF. (I made that one up). Wise up, writer.

  3. Anonymous Coward

  4. Sampler
    Thumb Down

    @ Hate2Register

    It's a technical site, readers are expected to have at least a limited understanding - if you want it spelt out in nice little words then go read the BBC.

  5. Def Silver badge


    "...the CPU utilization of the tower was about 50 per cent..."

    Doing what? If it's helping control the drives then that's incredibly poor. If it's just running the test then that implies the CPU is still waiting for the drives to provide it with data - again, not that impressive when you think about it. (Yes, I'm sure it would have been waiting longer with regular drives, but we don't know what the CPU usage would be when using regular drives.)

  6. TeeCee Gold badge

    Re: short stroking SSDs

    Yup, bloody silly that. You might just as well add a shedload of RAM and run a massive RAM disk, cutting out those expensive SSDs. That'd be quick. Since all the seriously clever stuff here is getting the interface up to speed, replacing this with processor-to-memory I/O (a.k.a. fast as fuck) seems like the blindingly obvious and far simpler option.

    Copy regular to RAM disk on boot, copy same back before shutdown. Since they're using write caching on their SSDs, it's no more screwed if the power goes off than their setup (less even, the data on the non-volatile storage remains consistent, if outdated).

    Option for going somewhat faster than fuck would be to do this on a multi-socket box with some secret-sauce software drivers to run a RAM disk on each CPU's memory, with I/O for the individual "disks" handled by the associated CPU and RAID 0 the beggars together. It should be possible to get performance levels in excess of OMFG! with that (I suspect the I/O limit here would be what you can squeeze out of HyperTransport or QPI).

    It'd be just as impressive to anyone who didn't bother to read the small print, but it wouldn't sell any SSDs, though.

  7. myxiplx@google

    Double checking the AC

    He's bloody right too. Quoting page 8 of that Intel PDF:

    Figures for read and write IOPS: "Up to 35,000", "Up to 6,600", "Up to 8,600"

    And then right underneath:

    "2. Write Cache enabled

    3. Measurements are performed on 8GB of LBA range."

    Double checking, Anandtech have some benchmarks. They don't mention the extent, and report figures for 4kB reads in terms of MB/s, but they seem to work out at 15,000 read IOPS - under half of the Intel figure:

    By all accounts these are bloody good drives. Come on Intel, what are the real world figures?
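    The MB/s-to-IOPS conversion the commenter appears to be doing can be sketched as follows. The ~58.6 MB/s input is an assumed illustrative value chosen to match the 15,000 IOPS figure, not a quote from Anandtech:

    ```python
    # Convert a throughput figure (MB/s) into IOPS at a given block size,
    # as the commenter seems to have done with Anandtech's 4kB read numbers.
    def mbps_to_iops(mbps: float, block_kb: float = 4.0) -> float:
        """IOPS = throughput in kB/s divided by the block size in kB."""
        return mbps * 1024 / block_kb

    # Assumed illustrative value: ~58.6 MB/s of 4kB reads works out to
    # roughly 15,000 IOPS, under half of Intel's quoted 35,000.
    print(round(mbps_to_iops(58.6)))
    ```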

  8. myxiplx2

    PCI-e SSDs anyone?

    It's not the 1 million IOPS I find interesting here, it's the fact that it mentions the drives are connected via a PCI-e expander.

    That implies that Intel are developing a high end PCI-e based SSD, something to compete with Fusion-IO perhaps?

  9. Anonymous Coward
    Thumb Down

    Short-stroked SSD numbers = meaningless drivel

    Author said: "But no matter how you cut it, one million IOPS - 1,076,600, to be exact - is one hell of a lot of desktop I/O."

    No it's not. It's a meaningless number until we know how the test was performed. I can do more in system DRAM than Intel can do in its VOLATILE SSD DRAM cache.

    And the short-stroking stuff is unbelievable.

  10. Anonymous Coward

    Micron PCI-e SSD


    They were almost certainly using the Micron PCI-e board.

    FYI...Micron is Intel's NAND flash partner....

  11. Anonymous Coward

    Not just Intel...look at STEC BS

    STEC claims 45,000 IOPS for the $20,000 ZeusIOPS...but when IBM ran it against the SPC-1 (audited) benchmark, it took EIGHT of them, with volatile write cache enabled, to reach that number.

    STEC Benchmarketing BS = 45,000 IOPS

    Reality = 5,600 IOPS -- with write cache on.
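    The per-drive arithmetic behind those two numbers is just a division; a minimal check:

    ```python
    # IBM's SPC-1 run reportedly needed eight ZeusIOPS drives to hit the
    # quoted 45,000 IOPS, so each drive was sustaining roughly 45,000 / 8
    # under the audited workload.
    claimed_iops = 45_000   # STEC's headline figure for one drive
    drives = 8              # drives used in the IBM SPC-1 submission
    per_drive = claimed_iops / drives
    print(per_drive)  # 5625.0, i.e. the "reality = 5,600 IOPS" above
    ```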

  12. Chris Evans


    "It's a technical site, readers and expected to have at least a limited understanding - you want it spelt out in nice little words than go read the bbc."

    True, but we all have our own areas of technical expertise, and some may be new to this field. I had to think twice about what it meant.

    Good practice is to expand the abbreviation on its first usage, though I do recall one technical article a few years ago that wrote "RAM (Random Access Memory)" and then went on to use half a dozen obscure acronyms with no expansion.

  13. Hate2Register

    @ Sampler

    Just because it's a technical site doesn't mean you don't have to write properly.


  14. This post has been deleted by a moderator

  15. Barry Whyte

    Re: Not just Intel... look at STEC BS

    Disclaimer : I work for IBM, not STEC.

    Before casting aspersions, maybe the "anon coward" here should do some homework. The limit in the pSeries SPC-1 submission is not the STEC drive, but the SAS HBA and ESM units used to connect to the SSD. If you benchmark an STEC drive in the right enclosure / fabric you can SUSTAIN around 45,000 IOPS (read) and 17,000 IOPS (write) from a device when doing 4KB blocks.

    SPC-1 uses between 8KB and 16KB blocks, and as we all know, SSD IOPS halve as you double the block size, therefore at say 12KB this would bring the number down to around 6,000 IOPS for SPC-1.

    Go do the maths before you start mud-slinging.
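    Barry's halving rule (IOPS halve as block size doubles, i.e. IOPS scale inversely with block size) can be sketched as follows. Reading his "around 6,000" as the write figure is an assumption:

    ```python
    # If IOPS halve each time the block size doubles, IOPS are inversely
    # proportional to block size: iops(bs) = iops_at_4k * 4 / bs.
    def scale_iops(iops_at_4k: float, block_kb: float) -> float:
        return iops_at_4k * 4.0 / block_kb

    # At Barry's example 12KB block size:
    read_12k = scale_iops(45_000, 12)   # 15,000 read IOPS
    write_12k = scale_iops(17_000, 12)  # ~5,667 write IOPS, i.e. "around 6,000"
    print(round(read_12k), round(write_12k))
    ```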

  16. Anonymous Coward

    STEC SSD on SPC-1

    Barry Whyte,

    There was no "SAS HBA", it was a 5908 RAID controller. Moreover, 45K/17K IOPS @ 4K is about 100MBytes/sec. The average SPC-1 I/O size is 8.3K. Therefore at 5,600 IOPS the STEC SSDs are doing only about 45MBytes/sec each.
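    The throughput arithmetic here is IOPS times average I/O size; a quick check of the 45 MBytes/sec figure:

    ```python
    # Throughput (MB/s) = IOPS * average I/O size (kB) / 1024.
    def throughput_mbps(iops: float, io_kb: float) -> float:
        return iops * io_kb / 1024

    # At the audited 5,600 IOPS with SPC-1's ~8.3kB average I/O size:
    print(round(throughput_mbps(5_600, 8.3), 1))  # about 45 MB/s per drive
    ```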

    Now, you are suggesting that the STEC SSDs could have done several times more IOPS, but they were bottlenecked in the SPC-1 benchmark by the IBM 5908 RAID controller.

    You are suggesting that IBM put more than twice as many SSDs on that RAID controller than it could handle?!?!

    Given that the key metric here is Dollars/IOP, and that the $100K worth of SSDs were 95% of the system price, it would be extraordinarily silly to put more SSDs on a RAID controller than were needed.

    By the way, Ideas International is the analyst firm formally appointed by the Storage Performance Council to cover technical SPC benchmark analyses, and Gary Burgess is the head analyst. See here:

    There you have it. The STEC SSD actually costs MORE per IOP than HDD. If these observations are all so wrong, maybe IBM should take it up with Gary Burgess?

    And while you are at it, when do we get to see the SPC-mandated "Full Disclosure Report" that remains missing from IBM's SPC-1 submission? IBM was supposed to have submitted it before the end of July...


Biting the hand that feeds IT © 1998–2020