SUPERCOMPUTER vs your computer in bang-for-buck battle

A couple of weeks ago I posted a blog here (Exascale by 2018: Crazy...or possible?) that looked at how long it took the industry to hit noteworthy HPC milestones. Chatter in the comments section (aside from the guy who assailed me for a typo, and for not explicitly calling out ‘per second’ denotations) discussed what these …


This topic is closed for new posts.
  1. Paul_Murphy


    Can your wife's desktop run Crysis?

    sorry -- had to :-)

    Anyway, I would imagine that the bulk of the cost of an HPC system is in the design and construction phase, since just getting all those components to work together is tricky enough - when you only need to worry about one processor, memory bank and disk drive, things aren't as complicated.

    Still - it does look as though the industry is passing on the progress to its users - which is nice.


    1. stucs201

      Re: crysis

      An i5 with a decent amount of memory probably can; if it can't, then it's close. Most likely all it needs is a half-decent graphics card.

  2. Michael H.F. Wilkinson


    I will test the 64-core, 512GB single box (4U rack server) we are getting shortly (for processing large astronomical and remote-sensing images rapidly). I will compare the cents per MFLOP/s to the figures here. We already know it will kick the backside of the 32-processor Cray SV1e we used to have, performance-wise, at less than 1% of the cost. I am really curious what the figures will be. Linpack has its limits of course, but it is still nice to know where you stand, even roughly.
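For anyone wanting to replicate the cents-per-MFLOP/s comparison mentioned above, the arithmetic is simple; a minimal sketch in Python, with placeholder figures rather than the actual server's price or Linpack score:

```python
def cents_per_mflops(price_dollars, linpack_gflops):
    """Cents of purchase price per MFLOP/s of sustained Linpack."""
    mflops = linpack_gflops * 1000.0  # 1 GFLOP/s = 1000 MFLOP/s
    return (price_dollars * 100.0) / mflops

# Placeholder figures only: a $30,000 box sustaining 500 GFLOP/s
print(cents_per_mflops(30_000, 500))  # 6.0 cents per MFLOP/s
```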

  3. xeroks


    Interesting article (though no mention of either a ZX Spectrum or the computers used for the Moon landings).

    I think your homebrew hardware is showing where some of the price differentials are coming in: cooling and infrastructure.

    I'll bet supercomputers being used today need a bit more cooling than a couple of noisy fans, and more infrastructure than a domestic power socket.

    Scale up that Generic PC to even Roadrunner speeds, via an imaginary Beowulf array, and you will have a shed full of a quarter of a million grey boxes. It's going to be hot in there, and you'll need a few 4-way adapters too.

    1. Marvin the Martian

      Speaking of Cool...

      So what are the energy costs?

      It's a bit silly to go to all these efforts calculating the cost of performance but skip the energy cost - especially as you note that the seemingly-well-performing ones (wife's desktop, hydra-with-cooler) are clearly power hogs.

      It is probably easy to get figures for the home computers (go to Maplin's/Radio Shack, buy a power monitor for 15 or so quid), but the larger ones may be tricky to find. If not, it's an easily added column that would actually tell us something. The rest of the TCO (maintenance personnel + parts, expected lifespan) is too vague to add.

  4. Boothy

    How about GPU?

    I take it all these tests were only using the CPU?

    Would be interesting to see the same sort of testing and costings against GPU-aware versions of Linpack - such as a CUDA version for the NVIDIA GPU in your Lenovo W510, and a more current 580 or similar running in a desktop.

    1. danolds

      Re: How about GPU?

      Good point... the wife has an NVIDIA 285 (I think), which should run CUDA no problem. But it's the Hydra machine that I really want to try. It has two NVIDIA 590s and should really be able to pull a good CUDA-enabled Linpack number if I can find the code. Maybe I'll reach out to NVIDIA and see if they can point me in the right direction....

  5. Uncle Slacky Silver badge
    Thumb Up

    What about a Beowulf cluster of Raspberry Pis?

    A.K.A. "Bramble" - it would be interesting to see what price/performance ratios you could get from that.

    1. Captain Scarlet Silver badge

      Re: What about a Beowulf cluster of Raspberry Pis?

      A cluster? It seems impossible just to get one Raspberry Pi, let alone more than one.

      1. Anonymous Coward
        Anonymous Coward

        Re: What about a Beowulf cluster of Raspberry Pis?

        Would that be a Raspberry bushel?

        Can Pi be plural?

        1. Uncle Slacky Silver badge

          Re: What about a Beowulf cluster of Raspberry Pis?

          No, it's a "bramble" - see for example:

          1. James Hughes 1

            Re: What about a Beowulf cluster of Raspberry Pis?

            Price/performance ratio isn't that great for a bramble, since you cannot access the GPU and it's only a 700MHz ARM (although even a single Pi is rated faster than a Cray-1!)

            However, power consumption/flop ratios are pretty good.
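The power-consumption-per-flop ratio mentioned above is just as easy to compute as price/performance; a minimal sketch, where the board's 40 MFLOP/s and 2 W figures are rough illustrations rather than measured values:

```python
def mflops_per_watt(mflops, watts):
    """Sustained MFLOP/s delivered per watt drawn at the wall."""
    return mflops / watts

# Illustrative only: a ~40 MFLOP/s ARM board drawing ~2 W at the wall
print(mflops_per_watt(40.0, 2.0))  # 20.0 MFLOP/s per watt
```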

  6. Linzello

    Accuracy of results

    It seems Linpack gives drastically different results to

    Any idea what causes these differences?

    1. danolds

      Re: Accuracy of results

      Yeah, you're right - my results are only grossly comparable to 'real' Linpack runs by professionals. There are many reasons; here are a few major ones: 1) I'm running an abbreviated Linpack on Windows - if I were doing this as a serious exercise, I'd be running it on as stripped-down a version of Linux as possible. 2) I'm not tuning the system or the benchmark at all. I should have run many, many iterations of Linpack with different problem set and array sizes to see exactly which config gives the biggest number. 3) Theoretical max on Linpack is "cores" x "frequency" x "FP operations per cycle". There are ways to tune each of those factors, none of which I did.

      I think I probably got to about half of the Linpack potential on the big machine - maybe a bit better on the smaller ones.
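The theoretical-maximum formula from point 3 can be sketched in a few lines (the quad-core, 3.0 GHz, 8-FLOPs-per-cycle figures below are illustrative, not any of the article's machines):

```python
def peak_gflops(cores, ghz, flops_per_cycle):
    """Theoretical Linpack ceiling: cores x frequency (GHz) x FP ops per cycle."""
    return cores * ghz * flops_per_cycle

# Illustrative: a quad-core 3.0 GHz chip doing 8 double-precision FLOPs per cycle
peak = peak_gflops(4, 3.0, 8)
print(peak)        # 96.0 GFLOP/s theoretical
print(0.5 * peak)  # 48.0 GFLOP/s - roughly "half of potential", as described above
```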

    2. Mark Hahn

      Re: Accuracy of results

      It's just a Flash plugin - very little relationship to the true speed of the computer it runs on, and totally unrelated to HPL.

  7. Peter Gordon

    Other rows that would be interesting in that table...

    Playstation 3 (often used for clustering.. or was until they removed OtherOS)

    HP Touchpad at firesale price (should be good bang for buck :)

    1. Andy Fletcher

      Re: Other rows that would be interesting in that table...

      Pretty sure people are still clustering PS3's (the ones that are doing research). They don't really need PSN access to run nuclear detonation simulations.

  8. clv101
    Thumb Up

    We’re all using supercomputers.

    State-of-the-art computer performance from a little over a decade ago is now available to everyone able to afford a modern PC. We're all using supercomputers. Could we be doing more with our computers than playing games and running Microsoft Office?

    I blogged about this a while back:

  9. Robert Grant
    IT Angle

    Commodity hardware without a fashion label is best value?


    And yes, to echo the above comments - as soon as you try to scale up that commodity PC you'll have massive costs.

  10. The Serpent

    Generic business desktop

    Quad core i5 with 8GB? Methinks your wife doesn't work in local government!

  11. Tom 7

    Avoid iFLops

    My two-year-old machine cost <£200 and bangs out >7 GFLOPS,

    so that's about 5 years ahead of Apple in bangs per buck.

  12. banjomike

    Modern desktops are excellent...

    ...which makes the dumbing-down of Windows (Win7 and Win8) particularly annoying. Vast power with an OS aimed at the occasional or "average user" (no insult intended). If you actually want to make use of that power with Windows you will have problems.

    1. Anonymous Coward

      Re: Modern desktops are excellent...

      I don't get it. AFAIK there isn't anything in Win7 or Win8 that'll prevent you from using the hardware to its maximum (aside from insanity like using assembly, but you might even be able to do that). What the basic UI exposes has nothing to do with the capability underneath - you could slap a port of Microsoft Bob on a 16-core Linux server with 64GB of memory, but it wouldn't affect what you could do once you used lower-level functionality.

      If Win7 didn't let you use the capability of the processor, I wouldn't be doing half the things I'm doing with vehicle simulation, gaming, graphics processing, and so on. I mean, I suppose it's possible that the OS is using 30% of the CPU all the time, but I don't think that's true, and even if it were true, it wouldn't have anything to do with 'dumbing-down' per se.

      1. danolds

        Re: Modern desktops are excellent...

        What I was alluding to with the Windows 7 comments in the article is that I had a full-on general-purpose operating system running while I was pushing the hardware with this massive benchmark. It wasn't taking up a huge portion of compute cycles - but it was taking up some of them. If I had gone whole-hog and installed the most stripped-down Linux OS I could, it would have freed up more cycles for Linpack. From what I've heard from professionals in the industry, it would also give me more knobs and sliders to play with to optimise the OS to run the benchmark.

  13. Bob H
    Paris Hilton

    If someone made cheaper InfiniBand or 10GbE switches, an above-average office could do some serious calculations. Just cluster all the machines together, then give everyone VDI so they don't get too confused. Or is this an argument to revisit thin clients and rent out your spare compute cycles?

    Paris, because she knows about resource utilisation

  14. Valerion


    Why spend $10k on a personal project to get yourself the fastest computer in the state?

    OK, you've obviously got the cash to spare to do it, so fair enough... but in a couple of years it'll be slower than everything new and will have cost you a fortune for no discernible benefit. And in 10 years it'll be junk, making it $1,000 per year. Could've bought a decent new PC every year for ten years for that. Or every 2 years for twenty years, and ended up with a much superior machine at the end of it.

    1. FartingHippo

      Re: Hydra

      1) It's his money

      2) Maybe it's because he enjoys the process. Lots of people spend thousands on their car while only increasing its value by hundreds (or, for a few numb-nuts with a penchant for underfloor lighting and cornflake boxes on the bonnet, actually decreasing its value).

      3) Would you tell someone who'd spend $2k on a Cartier watch they could have bought 200+ Casios with that? Actually you probably would.

      4) Some people spunk multiples of that on a hi-fi which is indistinguishable from a $3k set up in terms of quality (those that disagree are simply delusional). Some people spunk that on a case of wine. Some people spunk that on a sparkly piece of crystallised carbon.

      5) It's his money.

      1. JEDIDIAH

        Re: Hydra

        > 3) Would you tell someone who'd spend $2k on a Cartier watch they could have bought 200+ Casios with that? Actually you probably would.

        It helps to have a clue in these things lest you get taken advantage of.

        > 5) It's his money.

        Yes, and we retain the liberty to call him a fool too.

      2. Anonymous Coward

        Re: Hydra

        And there are some REALLY crazy guys who spunk it on women.

        1. danolds

          Re: Hydra

          First, if I had bought Hydra from a workstation vendor, I'm betting the all-in cost would be closer to $12,000. I'll check that out, I'm curious now. Second, I didn't actually spend that amount of money on that system. I'm very lucky in that I work in the industry and can get engineering samples and reviewer samples of some products every once in a while. For this system, a very helpful HPC vendor helped me get the Xeon processors and NVIDIA gave me two evaluation video cards. That helped defray the overall cost of the system considerably - phew....There will be more details on this when I start publishing the Hydra blogs...

        2. SYNTAX__ERROR

          @ David W

          A dubious choice of words there, David....

      3. Daniel B.

        Re: Hydra

        "3) Would you tell someone who'd spend $2k on a cartier watch they could have bought 200+ casios with that? Actually you probably would."

        Yes, I consider those people snobs. However, putting down $10k on a supercomputing project actually serves a purpose; the ubercomputer will actually do stuff faster, while the $2k Cartier will have *fewer* functions than a Casio.

        1. Valerion

          Re: Hydra

          But the longevity argument still stands. A $2k Cartier watch will still be worth a substantial sum in 20 years' time and will still carry out its primary function of telling the time perfectly well. The computer won't be up to any modern task and will not be worth anything either. Stick a 486DX-100-based PC with some incredible-at-the-time graphics on eBay, and also stick the watch on... which would get the interest?

      4. Valerion

        Re: Hydra

        People are free to spend their money on anything they like, of course. I'd never say otherwise but I'm free to question their sanity :)

        Enjoying the process is fine and a perfectly good reason, I don't have a problem with it, I just see it as a waste of money.

        The watch, though, I don't see as a waste of money, as its value will last and it'll be just as good in 10 years as it is now. I own a $500 watch and love it. So far it's outlasted 3 desktop PCs and 2 laptops and works as well as it ever did.


      Re: Hydra


      There are diminishing returns when it comes to bleeding-edge high-end hardware, but you can still get some pretty powerful kit for not much money. If you hit that sweet spot, you can still have a very powerful machine that will stand the test of time, and you don't have to spend $10K on it - or even $3,600.

    3. Ilgaz

      Re: Hydra

      I know animation/graphic artists who paid $200K for Symbolics/SGI/Barco presentation systems and paid them off in months. $10k is almost a bargain in the TV industry; there aren't many things you can buy for that price.

      Also, unlike people like me who trusted Apple for professional work and got stuck, he can just change the motherboard and CPU, perhaps the memory, to upgrade the machine.

  15. JLH

    Bob H

    I get your argument re. the amount of CPU power in an average office.

    And IB switches are quite cheap these days - see for example

    I would counter, though, with exactly the same argument - CPU horsepower is relatively cheap these days, and it is the effort and wages of the programmers and administrators which is the cost.

    So I would say it is better to have dedicated hardware in an environmentally stable room, close to the data, rather than coping with a mongrel set of desktops which vary in speed and memory.

    Depends on your application of course.

    And cloud (ye gods, why did I have to use this word...) changes things - I wouldn't bother these days to do office-level cycle scavenging. Hire those cloud machines by the hour.

    At the Sandy Bridge launch the other day there was a talk by Amazon - their HPC instances, when ganged together, reached number 42 in the Top500.

    1. danolds

      Damn you..

      ...for bringing the word 'cloud' into our nice little hardware blog. Is there no way to escape a cloud discussion?

      1. JLH

        Re: Damn you..

        God have mercy on my soul for using that word.

    2. Mark Hahn

      uh, cloud is expensive

      you know Amazon's profit margin is HUGE, right?

  16. Ramazan

    Big systems' prices don't scale linearly with processing power. If you need That Much Power, be prepared to pay 10x or 100x per unit of performance... And then you pay 0.3x to 1.0x of the initial price for 24/7 support and maintenance each year.
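That yearly support fraction compounds quickly; a minimal sketch of the arithmetic, using the comment's 0.3x-per-year figure and a placeholder $10M purchase price:

```python
def lifetime_cost(purchase_price, yearly_support_fraction, years):
    """Purchase price plus yearly support billed as a fraction of that price."""
    return purchase_price * (1 + yearly_support_fraction * years)

# Placeholder: $10M system, support at 0.3x of purchase price per year, over 5 years
print(lifetime_cost(10_000_000, 0.3, 5))  # 25000000.0 - support eclipses the box itself
```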

  17. Giles Jones Gold badge

    There are so many cores out there doing bugger all; I don't know why people can't be paid to process data via some background service. So long as it's not classified, of course.

    1. Anonymous Coward
      Anonymous Coward

      Depending on their electricity rates, they might come out negative if they're not careful. Plus, a 10°C rise in chip temperature will drop lifetime by half (or thereabouts). Crank your CPU all the time and you've basically got a space heater turned on 24/7. Except it's a space heater that makes your computer die faster (that said, I can't remember the last time I had a machine fail due to long-term CPU fatigue).

      You don't get something for nothing. Now, it might be more efficient to have a whole bunch of people doing the processing when they've -already paid for the infrastructure- (power supplies, environment, upkeep, purchasing, etc.), even if you have to compensate them for their electric bills. People will rent out their computers without considering the cost of the house to put them in, the time they spend setting them up, the effort they go to to make sure they get fixed if a lightning bolt puts a hole in the mobo, etc. They'll probably want to be compensated for the electricity (if anything) and consider extra cash as 'free' - somewhat like people considering petrol to be the only real cost of driving their cars.

      So, a little public myopia might be a big benefit to people who need the cycles and can implement something well.

  18. David Shone

    Welcome to the 1980s

    There is very little here that's new or surprising.

    In the 1980s it was clear that a cluster of small systems (such as workstations) could often do the job of a supercomputer, so long as you could carve-up and distribute the work. Since then, this has been reinvented with many names: Beowulf, Grid, Map-Reduce etc.

    The basic ideas - and related ones such as cycle-stealing - probably go back even further; however it was the arrival of microprocessor-based systems that changed the economics and architecture of supercomputing so that big systems are effectively clusters of small ones, with the added overhead of fast interconnects and other infrastructure.

  19. stuff and nonesense

    The power of an 80's super computer....

    And it still takes forever to load up Word!!!!

    1. Tchou

      Re: The power of an 80's super computer....

      I began using a computer with an Atari STF - 8MHz, 1MB - and I can tell you that "forever" was a lot longer then than it is now.

      1. danolds

        Re: The power of an 80's super computer....

        My first computer was a Sanyo MBC-550....which was billed as 80% IBM compatible. Meaning that a program would get 80% of the way loaded until it crashed. And you had 1MB of memory?!! I only had 128k RAM - and a single floppy - couldn't afford the dual floppies...and YOU'RE complaining about long processing times? lol lol

        1. stucs201

          Cue four Yorkshiremen...

          A floppy disk drive? Luxury! Some of us dreamed of a disk drive while waiting for tape (ordinary audio cassettes), which then gave errors 80% of the way through loading on the system it was designed for. No doubt someone else will come along with tales of punched tape to continue this...

          1. danolds

            Re: Cue four Yorkshiremen...

            Did I say I had a disk drive? It wasn't all that fancy. I had to spin it myself with a foot-powered pedal to keep it moving. And if I spun it too slow or too fast, it would screw up the read or write and I'd have to start over. I used to dream of having a reliable disk drive or even a cassette tape drive that worked.

            Of course, I didn't get much time for the morning we'd have to get up, clean the bag, and then sweep the road clean with our tongues...

    2. janimal

      Re: The power of an 80's super computer....

      Elite on the Commodore 64 used to take over an hour to load off cassette tape, and the load would fail 40% of the time. You had to reserve yourself a couple of hours just to get the game loaded when you wanted to play :)

    3. Seb123

      Re: The power of an 80's super computer....

      Get an SSD. "Loading up" Word requires very little power. The bottleneck is in the storage.

  20. Anonymous Coward

    Thunk What The Heinous Kim Could Do !!!

    Thunk about it, KIM could simulate an atomic bomb on that ! We must demand them to return all their Weapons-PCs to DULL !!

    De SUN doss havve an article on DE KIM:

  21. Christian Berger

    Yes, but what can we do with it?

    While in the past you could do anything with your computer you could imagine, you now have artificial limits imposed by companies like Apple, Microsoft or others. You cannot do the things you used to do on a Cray on an iPad, since writing a Fortran compiler running on the iPad would violate the usage conditions.

    We would have so much power, but instead of finding ways to make it usable to the average person, companies choose to go the easy way and dumb down computers more and more, turning them into nothing more than appliances.

    1. Anonymous Coward

      Re: Yes, but what can we do with it?

      Last time I checked, you could do pretty much whatever you want with a PC - make your own OS that just has a hex editor, use some other OS, run it off magnetic drum memory... I don't know what you're getting at there. If you're saying you can't do arbitrary things *within an OS*, well... uh, yeah. You can't make Windows hard realtime, because it's... not. You can't reverse engineer it. But you can indeed do the things you used to do on a Cray on a PC.

      As far as iPads go, the things *are* appliances, like super-powered universal remotes or calculators or stereo amps. There's no god-given right to run a Fortran compiler on an iPad any more than there is to run your own operating system on your Harman/Kardon. You can't do it, but that's not exactly jackbooted-thug kinda stuff.

      If you want to run a Fortran compiler on a tablet, get a PlayBook - which *does* have a very nice RTOS - and do it in C. Or do it with Android. The fact that one company makes a tablet that's an appliance (say, a more functional Nook or whatever) as opposed to a general-purpose computing device, something it was never advertised to be, is hardly an indication that "you can't do anything anymore". Things like PCs with open architecture exist - and are dominant - and tablets with very capable operating systems are available - but you cherrypick the examples of things you dislike (iPads, Macs), pick some random nonsense ones (Windows?), and proclaim the world of computing to be doomed.

      Sure, the world is bad if all you look at is the bad stuff. But there's a lot more good stuff out there now than there was 20, 30 years ago - one hell of a lot more. And if you really care, that iPad can probably go to (or make) a web site that runs a Fortran compiler in jscript, anyway. And, if the article is to be believed, with fairly good speed, too.

    2. Anonymous Coward
      Anonymous Coward

      Re: Yes, but what can we do with it?

      Oh, and one other thing - you say that computers are getting dumbed down more and more, and they're more and more appliances. Well, hang on - take a look at PCs in the early-mid 1980s - from the perspective of a normal consumer. This is not including people who do hardware hacks, or know machine language, or are developers; as far as I can tell you're talking about general users.

      Sinclair: BASIC in ROM. No operating system. No expansion. No use whatsoever.

      TRS 80: BASIC. DOS like thing? I can't remember. A little bit of expansion. WTF gfx.

      C64: BASIC in ROM. Magicians can do anything with it; normal people could run games.

      Atari 800: BASIC in ROM. No operating system as such; not much expansion.

      Apple II: BASIC in ROM. Simple operating system. Expansion via addon cards to an extent.

      IBM PC: BASIC on disk. DOS. Same as we've got now. Expensive as hell.

      Mac: Sophisticated OS. Hypercard (!?!!). Expensive as hell. No expansion at all. Ever. Like, really.

      AMIGA: Expensive as hell. Awesome OS. Multiple programming options. Huge expandability. Fantastic graphics and amazing games. Result: Went out of business almost immediately.

      All of those operating systems (Save the AMIGA's) were quite simple and couldn't do a whole lot. Hardware expansion was extremely limited at best. The vast majority of computers limited you to ROM BASIC out of the box, and moving beyond that (aside from premade programs) was very difficult.

      You couldn't do any of the things you'd like to do on any of those computers without going to extraordinary measures, like using assembly / machine language. On the PC you can use already-available dev environments, among other options, without bypassing (or recreating) the operating system.

      Honestly, things are a lot better now than they were then. I'm guessing it feels like we could do more then because we didn't know what could be done yet - so it felt like there were infinite possibilities. The more you know, the less expansive your view is. But that ain't the OS maker's or the hardware's fault.

      1. Christian Berger

        Re: Yes, but what can we do with it?

        Well, the PC is the counter-example showing how a hardware platform should be. You could, in theory, have the same for mobile devices; however, manufacturers keep you from having it. I'd love to have a mobile phone with some "Open Firmware" or "BIOS" or "EFI" so I can just load any operating system I want from SD card without restrictions, and a simple hardware abstraction layer so basic things will just work, but there is not yet such a thing. So we end up with a lot of our computing world being deliberately dumbed down, just like in the 1980s.

        1. Anonymous Coward
          Anonymous Coward

          Re: Yes, but what can we do with it?

          Maybe so, but that's market forces, not an industry conspiracy to lock down devices (as the RIAA would like). If you were industrious enough you could probably manage what you're describing, but yeah, nobody offers it, any more than a user-expandable audio amp or a multimeter running Linux (which, now that I think of it, would be really cool...).

          Don't confuse market forces with intentional evil. It makes it harder to detect - and fight - when it's the real thing.

          Also, there's stuff out there like the mbed and of course the R-Pi that do indeed let you do some of this stuff quite easily - it's just that doing power and packaging for mobile devices is hella expensive, so I can't see anyone physically building an iPad any time soon. But these days you could probably homebrew the equivalent of a late-90s laptop with a 3D printer and enough perseverance. Hmmm.....

  22. Bradley Hardleigh-Hadderchance
    Thumb Up

    Some guys come home from work...

    ..and wash up, then go racing in the streets...

    Excellent stuff!

    I can get just over 50 GFLOPS out of my overclocked i5 750 2.66, according to Linpack. Stable and cool.

    Amazing little chip - so much more bang for the buck than the i7 - but then you probably already knew that. All in one seriously stable, seriously 'Nuclear' looking package.

    I don't need more speed and I do Audio/3D. Well, I lie, I do need more speed, but even the Hydra wouldn't help me out here - I need a server farm.

    Happy for now, anyway.

    1. Turtle_Fan

      Don't bad mouth the i7!

      While I have no doubt your i5 works wonders, the i7 is still largely unsurpassed a good 3+ years after launch.

      I refer of course to the 1366 variety and not the "i7 in name only" other ones.

      Looking forward to the proper arrival of the socket 2011 i7 to see if it will finally be surpassed.

      1. Tchou

        Re: Don't bad mouth the i7!

        If you need computing power, you should consider going Xeon.

  23. Random Coolzip

    What, no phone?

    I'm disappointed you didn't benchmark a contemporary phone. I think any of the latest dual-core models would put up a respectable showing.

    1. UK Jim

      Re: What, no phone?

      There are a variety of Linpack ports for Android, some results are at for instance...

  24. Bradley Hardleigh-Hadderchance

    Not bad mouthing the i7...

    Wish I could afford it. More expensive RAM, more expensive mobo.

    But yeah, for a few dollars more I would definitely have got it.

    And OCd that little bugger to the hilt as well..

    I got my i5 up to 3.8GHz stable, but was starting to push it. The power draw and heat were *becoming* exponentially expensive. Still doable, but nah. Beyond 3.8GHz, I would imagine you would need water cooling. Apart from that, my Gigabyte mobo (which is a real beauty, btw) doesn't like being pushed too hard for too long, so they say. Me, I stuck at a nice steady 3.42GHz with it running at stock temps and passing 100 Linpacks.
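For what it's worth, that "exponentially expensive" power draw follows from the CMOS dynamic-power relation P ≈ C·V²·f: higher clocks usually demand a voltage bump, so power climbs much faster than frequency. A rough sketch; the capacitance and voltage figures below are made up purely for illustration:

```python
def dynamic_power(cap_farads, volts, freq_hz):
    """CMOS dynamic power: P = C * V^2 * f."""
    return cap_farads * volts ** 2 * freq_hz

stock = dynamic_power(1e-9, 1.20, 2.66e9)  # illustrative stock settings
oc = dynamic_power(1e-9, 1.40, 3.80e9)     # illustrative overclock with a voltage bump
print(round(oc / stock, 2))  # 1.94 - nearly double the power for a ~1.43x clock bump
```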

    I also ran about 8 or 9 other torture treatments, but after 12 hours of different passes at different settings in Prime95, I thought, why abuse the little bugger? I run a totally stripped-down and customised (via nLite) install of WinXP SP3 on it, and it is pure joy and bliss when I spark her up.

    ("Her", because you know, how things can go wrong very quickly with women if you don't pay 'attention'?)

    I wish I had the money to buy another system and do it all again, even though things move on.

    Maybe in a year or two....

    That's why I say: kudos to that man for building his 'Hydra'. The knowledge and satisfaction gained is akin to that of a nitrous-oxide-fuelled hot-rodder. Hence my reference to 'Racing In The Streets' by Springsteen. Have you seen how many 'good' overclocking forums there are? How many are addicted to this 'hobby'? How many are not happy until they release the magic smoke?

    All I can say is: For those about to (Over) Clock, we salute you!

    1. Anonymous Coward

      Re: Not bad mouthing the i7...

      "I run a totally stripped down and customised via nLite install of WinXpSP3 on it, and it is pure joy and bliss when I spark her up."

      For what it's worth, I've built i5s with SSDs that go pushbutton-to-desktop in 12 seconds with win7 premium - boot, not resume. It's pretty cool. The first time I started one up I thought there was something wrong.

    2. danolds

      Re: Not bad mouthing the i7...

      Loved the Springsteen reference - know the song well - always thought of it as "Thunder Road II: reality sets in"

      I am going to do some overclocking with Hydra, but not sure to what degree. The water cooling is working great - system very cool under load, and I have gear that will overclock well. So I'll dip my toe into the overclocking waters and see how it goes. Thanks for the encouragement, it's much appreciated....

  25. bazza Silver badge

    K machine - pricey?

    The K machine is mighty pricey, and it would be interesting to see how that cost breaks down into CPU vs I/O development. The K machine has a very elaborate interconnect, which must surely take a lot of the credit for the machine's sustained performance being so close to its theoretical peak. The cost breakdown might illustrate where investment pays off best.

    1. danolds

      Re: K machine - pricey?

      You've hit on probably THE key question in HPC (at least for the vendors). I don't have the answers - but I think I'll write a short blog to raise the question...thanks for that!

  26. Jan 0 Silver badge


    Thanks for the nice comparison, but I think you're using the wrong units.

    Long ago*, Byte magazine compared a range of computers with a standardised VAX configuration. It rated various minicomputers in milliVaxes and, IIRC, an IBM PC at around 0.05 milliVax.

    So how many Giga/Tera/Peta/Exa-Vaxes are we up to now?

    *Does anyone know if the original article is available online? I can't find it.

    1. JLH

      Re: Units

      Talking about VAXes: at CERN years ago the standard unit of comparison was a VUP - VAX Unit of Performance.

      I THINK a VAX 750 was one VUP; it might have been a 780.

      I'm surprised that an IBM PC is measured in milliVAXes - I thought they would be roughly comparable.

      1. Peter Gathercole Silver badge

        Re: Units

        11/780 was the base.

        When the IBM PC was launched, remember it was a 16 bit processor in an 8 bit system (8088 had an 8 bit multiplexed data bus needing two cycles to store a 16 bit word), and was only clocked at 4.77 MHz. In the Personal Computer World BASIC benchmarks, the BBC micro could whip the ass off the IBM PC in performance terms, although this should not be a suggestion that Linpack results would be the same.

        I always regarded an original 6MHz PC/AT as about the same processing power as a PDP 11/34, although that was only on a subjective feeling, and a VAX 11/780 was much more powerful than my 11/34.

  27. LisaA

    Cost of K computer

    According to the Riken website, the budget for the K computer breaks down as follows (units are 100M yen):

    1) Design: 275

    2) Manufacturing: 458

    3) Building: 193

  28. Magnus_Pym

    All that power...

    ... where does it go?

    I think the most interesting part is that the MINIMUM spec for a new PC is way above a supercomputer of only a few years ago. What does it need all that power for? A lot of it is fancy (and to my mind unnecessary) graphics, and a lot goes in securing the thing against attack from outside. Every operation is checked and rechecked to make sure it poses no security risk. BUT all this checking is there because users (supposedly) want their PCs to do inherently risky things, and most of these risky things are there to make the OS look good ("rich content", shudder).

    1. Anonymous Coward
      Anonymous Coward

      Re: All that power...

      "I think the most interesting part is that the MINIMUM spec for a new PC is way above a supercomputer of only a few years ago. What does it need all that power for?"

      Rather than what you might think, the simplest Occam's razor explanation is that the average coder is LAZY and will not optimise their generic code to take advantage of even the most basic SIMD instructions available to them; they rely on the compiler to do all the work rather than learn simple SIMD assembly and use it in their routines.

      The x264 developers are the exception to this rule; their performance mindset in everything they write proves you can get both the best speed and the best quality if you put in the effort, real-life testing and planning, and, most of all, benchmark all your routines in a methodical manner. Shame most x86 devs are too lazy even to bench most of their code, so they just copy/paste any old crap from years ago and think that's good enough.
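      For the curious, the kind of hand-rolled SIMD being advocated looks something like this - a minimal sketch using x86 SSE intrinsics (assumes an x86 compiler; this is illustrative, not code from x264 itself), vectorising a float add four lanes at a time:

```c
#include <xmmintrin.h>  /* SSE intrinsics */

/* Scalar version: the compiler may or may not auto-vectorise this. */
void add_scalar(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

/* Hand-vectorised version: process four floats per iteration. */
void add_sse(const float *a, const float *b, float *out, int n)
{
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);             /* load 4 floats */
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));  /* 4 adds at once */
    }
    for (; i < n; i++)  /* scalar tail for leftover elements */
        out[i] = a[i] + b[i];
}
```

      Whether hand-written intrinsics beat a modern auto-vectorising compiler on code this simple is exactly the argument being had in this thread.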

      1. Anonymous Coward

        Re: All that power...

        If you think that, as someone using a PC for servo system control, I'm going to spend hundreds of man hours screwing with assembly in order to beat - assuming I'm FANTASTIC at assembly - an optimizing compiler, you live in a world divorced from reality. Even if that extra 10% mattered - which it doesn't as I use only 10% altogether - it wouldn't affect mem usage or other factors. And it would increase the risk of strange problems dramatically.

        You ask what we want lots of CPU stomp for? There you go - it's so we can take advantage of the power to have better programming environments, so we can use less efficient but easier-to-maintain code, and generally not behave like it's 1980.

        Your argument is like saying that all of the safety / comfort / fun of modern cars is 'bloat', and really we could use existing technology to make a model T that's 10% more efficient. Maybe so, but then what the f*ck are we doing all this for anyway?!

        I probably -could- do everything I need with a C64 and an agony of optimizing and a decrease in feature set and horrible maintainability. I -could- do my Windows code with a command line and have people telnet to it with a VT100 terminal.

        Or I can use a modern dev environment that takes up hundreds of meg (oh nooooo!) and have a nice place to work in and lots of tools.

        Your complaints are nothing more than a variation of "we went to school uphill both ways".

        You're opposed not on technical grounds but on moral ones. Which is fine, but don't expect people who use computers as tools to whip themselves because you think coding on 2012 hardware should be done like it was in 1985 - or that the hardware itself should be like it was in 1985, maybe.

    2. Anonymous Coward
      Anonymous Coward

      Re: All that power...

      All that power goes on churning through more data than ever before. A lot of NICs at home now run at 1Gb/s; server side you might be running 10 or 40Gb/s, often on multiple cards. Just processing that data takes a lot of work. Video transcoding and processing is something machines are doing a lot more of these days. Screen sizes have increased hugely, which makes a monster difference: a 640x480 screen with 8-bit colour depth is about 300kB of data, while a 2560x1600 screen with 32-bit colour depth is over 16MB. Run the old screen at about 10 frames a second and you are processing a mere 3MB/s; run the new screen at 50 frames a second and you are processing 800MB/s. Add 3D with detailed textures, or anti-aliasing, and things go up again by a huge amount. All of this requires so much more power.

      Could we make do with older machines now: yes we could. Do we want to: almost certainly not.

  29. Anonymous Coward
    Anonymous Coward

    "Our pal Jack Dongarra, one of the founders of the Top500 list, ran Linpack on an Apple iPad 2 and reported that the tablet hit between 1.5-1.65 GFLOP/s, which is higher than the Cray-2 back in 1985."

    Really? Of all the ARM devices you could pick, you chose the most expensive as your baseline. I'm sure ARM or any of their licensees could knock together a simple quad-core A9 or A15 with NEON, put four of these quad-core SoCs and the related DDR3 RAM on a generic SODIMM, and design a generic carrier PCB to attach these cheap-as-chips SODIMM ARM modules, if there were a few hundred thousand units involved.

    Hell, even an existing basic Shenzhen company such as Telechips or Rockchip could produce a few thousand A9/A15+NEON complete-SoC SODIMMs plus a generic carrier PCB. Never mind that a basic Nvidia or Samsung ARM A9/NEON would be a far better generic price comparison than your top-priced Apple offering. You could probably even join the Linaro Partner Program, contributing and collaborating to improve core Linux software and tools for ARM System-on-Chip platforms, and get them to NEON-SIMD-optimise all the HPC software you like.....

  30. david 12 Silver badge

    iPad2 = Cray

    Does that mean I can run weather predictions on it? Is there an App for that?

    I realize that they didn't do 7-day predictions then, and the calculation grid had fewer points, but it would be fun.

    1. I. Aproveofitspendingonspecificprojects

      Re: iPad2 = Cray Weatherforecasting

      Forecasting the weather would be the REAL test.

      The data is available online at NOAA and related sites worldwide, in reanalysis files called GRIBs for the old stuff. I don't have a clue how you'd set up a model run from that basic initialisation set-up, though.

      However, with the charts you draw, you could check how well your model did against the next GRIB in the series.

      Or you could just call up the charts as required.

      If you are serious, expert help can be found here:

  31. Francis Vaughan

    Scalability and sustained performance

    It was touched on above, but in real HPC systems the Linpack performance is mostly ignored (unless your only workload is linear algebra). The key concepts are how well the architecture scales and what the sustained performance is. The cheap laptop gets the best bang for buck simply because it is incapable of scaling: you cannot join a boxload of them together and get any useful speedup, not in a way which can produce good sustained numbers. Supercomputers have never been about raw flops. They have always been about a critically balanced design, with equal attention paid to CPU speed, memory performance and I/O performance. There is no value in a high-speed CPU if you can't feed it with data fast enough, and caches are often not as useful as you might hope.

    The cost of interconnect interfaces and fabrics are always a significant fraction of the cost of the machine. From simple Infiniband, through SGI's NumaLink, IBM's Blue Gene and beyond, you get what you pay for, and depending upon the workload, you have no choice about which fabric to use. Tuning the interconnect topology to the problem is useful too.

    Linpack has the disadvantage that it can be tuned to be insensitive to the interconnect. Because it does little more than time the solving of large matrices, the time spent communicating is related to the length of the edges that divide the matrix into subdivisions distributed across the nodes, while the amount of data (and work) is related to the area of the subdivisions. The bigger the data size, the less communication is needed relative to the work done. So the more memory you add to a machine, the larger the test dataset you can configure, and the faster the Linpack result - simply because the interconnect matters less and less for this contrived test. This usually bears little relation to the machine's actual performance on real-world workloads. Hence the need to spend real money on interconnects, even if it doesn't appear to improve the simple benchmark number. An old colleague coined the phrase "Gigaflop harlotry" to describe the focus on the simple Linpack number and rank in the Top 500.

  32. Peter Gathercole Silver badge


    A real supercomputer is a lot more than just processing power.

    The current systems I am working with (still on the top 500 - just) are split (very approximately) equally cost-wise between processing, networking and storage.

    The interlink is important for massively parallel jobs, and there is no point in crunching numbers if you can't store the results. Linpack can be a very misleading benchmark.

  33. Gordon 10

    Folding & Seti

    Just for comparison's sake, it would be interesting to know the average and peak scores for the Folding@home or SETI@home networks. Suspect they come very high in the price/performance mix.

    Shame they all have better things to do than run Linpack :)

  34. Risky


    I think $1,700 could get you a lot more benchmark performance if the money was spent right. Hell, you can get Dell to send you a dual quad-core Xeon workstation for $60 more!

This topic is closed for new posts.

Other stories you might like