9TB in 20 minutes? Sign me up!

In an experiment, IBM researchers used the fourth most powerful supercomputer in the world - a Blue Gene/P system at the Forschungszentrum Jülich in Germany - to validate nine terabytes of data in less than 20 minutes, without compromising accuracy. Ordinarily, using the same system, this would take more than a day. Additionally …

COMMENTS

This topic is closed for new posts.
  1. Coyote
    Linux

    just pipe the input to /dev/null

    ...makes it run real fast!

  2. Idiots _Quotient
    Megaphone

    Kooooooool But Not Gr8 !

    Nice, but I have given up on International Boffins Machine... they have disappointed me with the Cell cesarean session...

  3. Anonymous Coward
    Coat

    I barely understood any of that...

    ...but I like to see that data processing may get quicker, even for me.

    /where is that 'Boffin to English' dictionary?

  4. Andre 3
    Terminator

    orders of magnitude improvement in other analytic tasks

    So they can apply this to the XIVs then?

  5. Markus
    Pint

    It's not that hard to understand...

    Just take a simple example: we examine people leaving a pub and note, for each of them, the amount of beer ingested and the amount of pee left in front of the pub. Clearly these two factors have some sort of connection. To get a grasp of it, one can build a special 2-by-2 matrix from the gathered data, a Covariance Matrix. Real-world examples involve a lot more than two factors, and so the Covariance Matrix gets very big. Now, this matrix can also be seen as a function that, when applied to some data, yields some results. As it is, people tend to have some Covariance Matrix and a lot of those results, and they want to know what the input was. So they want the inverse Covariance Matrix (which resembles the inverse function).

    Up to now, the time for calculating this inverse matrix grew (roughly) with the cube of the size of the original matrix.[1]

    The IBM boffins found a way to calculate the inverse for which the required time corresponds to the square of the input size (by estimating a matrix property and using it for an easier calculation). And as a nice plus, the new algorithm scales pretty well on massively parallel machines.

    (At least that's what I read from the abstract, don't have time to look into the paper right now.)

    [1] That's why the abstract says "cubic complexity", not "qubit ..." (It doesn't involve quantum computing. Yet.)
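    To make the pub example concrete, here's a rough NumPy sketch with made-up numbers, and with plain old np.linalg.inv standing in for the inversion - nothing to do with IBM's actual trick:

        import numpy as np

        rng = np.random.default_rng(0)
        beer = rng.uniform(0.5, 5.0, size=200)             # litres drunk per person
        pee = 0.8 * beer + rng.normal(0.0, 0.2, size=200)  # clearly connected to the beer

        data = np.column_stack([beer, pee])                # 200 observations x 2 factors

        # The special 2-by-2 matrix built from the gathered data.
        cov = np.cov(data, rowvar=False)
        print("covariance matrix:\n", cov)

        # The inverse Covariance Matrix - what you want when you have the
        # matrix and the results and want to get back at the inputs.
        # A general-purpose inverse like this costs roughly the cube of the
        # matrix size, which is the bit the IBM boffins improved on.
        prec = np.linalg.inv(cov)
        print("inverse covariance matrix:\n", prec)

    With only two factors this is instant, of course; the pain only starts once the matrix has thousands of rows and columns.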

    1. kurrajong
      Thumb Up

      Thanks for shedding some light

      I think your reply was far more informative than the article.

  6. Anonymous Coward
    Anonymous Coward

    maybe

    Maybe they just verify the parity bits?

  7. Anonymous Coward
    Thumb Up

    Cheers Markus

    > shots of them standing around looking smarter than any of us.

    not all of us... ;o)
