UK's Arm-based Isambard 2 supercomputer powers off for good

The UK's Isambard 2, one of the early Arm-based supercomputers, has officially retired after just a few years of operation. It is superseded by the more powerful Isambard 3 and Isambard-AI, just as British supercomputing enters an uncertain period for funding. Isambard 2 ended service at 9am on September 30, but started …

  1. Yet Another Anonymous coward Silver badge

    Average job 345 cores

    Unless this average somehow counts separate single-core setup/test-type tasks, this seems odd.

    Why use <2% of a supercomputer for a job?

    Wouldn't you just run that on a local workstation type machine?

    Admittedly I'm very old, but in my day you only got to send jobs to the Cray that would be impossible on any other machine.

    1. Zippy´s Sausage Factory

      Re: Average job 345 cores

      I suspect they're smaller jobs where getting an answer in three seconds is preferable to getting an answer in five.

      Not only that, but I suspect they're trying to ensure it gets as much use as possible (maybe even selling access to third parties to help towards costs), because why wouldn't you?

    2. cyberdemon Silver badge

      Re: Average job 345 cores

      Different ways of parallelising a task...

      One way is to split it into multiple 'jobs' running simultaneously, rather than have each job try to use all 10,496 cores at once.
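      A minimal sketch of the contrast, assuming a Slurm-style batch scheduler (the script names and counts are illustrative, not Isambard's actual setup):

        import subprocess

        # Strategy A: one tightly coupled job that wants most of the machine.
        subprocess.run(["sbatch", "--ntasks=10496", "solve_whole_domain.sh"],
                       check=True)

        # Strategy B: the same work split into 200 independent jobs of a few
        # hundred cores each; the scheduler runs them side by side, and the
        # per-job core count (the figure being averaged) stays small.
        subprocess.run(["sbatch", "--array=0-199", "--ntasks=345",
                        "solve_one_chunk.sh"], check=True)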

    3. Anonymous Coward

      Re: Average job 345 cores

      It depends a lot on the Cray; in my (probably rather later) day, people ran all sorts of jobs on one processor of the vector Cray that would take a few minutes there and less than an overnight run on their Pentium 4 desktop.

      If you have a job that's too big for your laptop - say needs a terabyte of memory and 10^17 floating-point operations - but that you're not going to run 24/365, paying for a little bit of time on a fraction of a 400-node academic HPC facility is generally massively cheaper than buying a dozen nodes and 64 16GB DIMMs, and a lot cheaper than paying for a few days on big nodes on AWS.
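      To put rough numbers on that (the sustained throughput is an assumed figure for illustration, not a measurement):

        # 64 x 16 GB DIMMs is exactly the terabyte the job needs:
        print(64 * 16)              # 1024 GB, i.e. ~1 TB

        # 1e17 FLOPs at an assumed ~10 TFLOP/s sustained across a dozen nodes:
        print(1e17 / 10e12 / 3600)  # ~2.8 hours of machine time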

      What's confusing me is using the term 'supercomputer' to describe something that is no more than five racks. A Gigabyte H263-V60 will fit four of the Grace superchips in 2U, so the proposed machine is 192U, i.e. five racks counting switches, and the reason you buy from HPE rather than Gigabyte is to get denser packing of the chips into the servers.
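      The rack arithmetic, assuming standard 42U racks and taking the four-superchips-per-2U figure at face value:

        superchips = 384                   # matches the 192U figure quoted above
        chassis = superchips // 4          # 96 x 2U chassis
        rack_units = chassis * 2           # 192U of compute
        racks = -(-rack_units // 42)       # ceil(192/42) = 5 racks, plus switches
        print(chassis, rack_units, racks)  # 96 192 5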

      1. Yet Another Anonymous coward Silver badge

        Re: Average job 345 cores

        >academic HPC facility is generally massively cheaper than buying a dozen nodes and 64 16GB DIMMs,

        That's what changed over my supercomputing career. When I started, the mythical Cray was the only thing that could run tasks beyond the department Sun or VAX.

        Then Linux+Alpha, then Linux+Pentium Pro, meant we built our own Beowulf clusters from shelves full of home-made PCs, and these were massively cheaper and fast - unless you had a single Fortran array that used GBs (!) of memory and needed 400 cores to see all the data.

        Interesting that massively multi-core Arm is swinging back to 'supercomputer' being the best bang/$.

    4. Anonymous Coward

      Re: Average job 345 cores

      There probably weren't too many separate workstations around with four to eight 64-core ThunderX2 nodes for staging and fine-tuning code before running larger jobs. For example, I can imagine doing (maybe) 200 prep runs on 4 nodes for every one run on the full 336 nodes, which averages out to 5.6 nodes per job.
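      That guess lines up with the reported average, given 64 cores per ThunderX2 node:

        prep_runs, prep_nodes = 200, 4
        full_runs, full_nodes = 1, 336
        cores_per_node = 64                # dual 32-core ThunderX2s per node

        jobs = prep_runs + full_runs
        avg_nodes = (prep_runs * prep_nodes + full_runs * full_nodes) / jobs
        print(avg_nodes)                   # ~5.65 nodes per job
        print(avg_nodes * cores_per_node)  # ~362 cores, near the 345 reported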

      1. Yet Another Anonymous coward Silver badge

        Re: Average job 345 cores

        I've been out of the field for a while.

        Do you get to interact with this sort of machine directly?

        We used to compile, profile and submit jobs from a front end (a VAX - I'm old) and only the operators ever actually logged into the supercomputer. I don't think you'd have got a five-minute allocation just to see how well your job ran.

        Obligatory old-man story: I once got a job report back, after it had run for tens of hours, saying "The Cray had an uncorrected memory fault: please check your results". As if I could go through the model run with a pocket calculator and see if I got the same answer.

  2. ChrisElvidge Bronze badge

    How much do they want for it?

    Can I buy just one part of it, since it's made up of several HPE parts?

    Can it run Horizon? (ref Fujitsu parts)

  3. elsergiovolador Silver badge

    Surprise

    I am surprised there is an Isambard-3 and not some sort of voucher to use a Chinese supercomputer instead.

  4. Anonymous Coward

    So what's happening to the Isambard 2 hardware?

    Hopefully not just landfilled. Surely someone can use some of those components?

    1. Yet Another Anonymous coward Silver badge

      Re: So what's happening to the Isambard 2 hardware?

      Typically not: the parts are specialised, and the system as a whole needs a lot of custom power and cooling.

      Unless you want it just as a museum piece it's probably cheaper to build and operate a new system.

      In addition it may have export restrictions on who you can sell it to.

    2. hoola Silver badge

      Re: So what's happening to the Isambard 2 hardware?

      The reality is that it is too old to be of any use for its intended purpose, and much of the hardware is too specialised to be repurposed. Most will simply be recycled.

      HPC facilities generally have a higher hardware churn anyway, as by their very nature they tend to have workloads that are already pushing normal compute boundaries. If your cluster is using 1 MW and after two years you can double the performance, memory or storage for the same power draw, then as long as the money is available it will be replaced or upgraded.

      We rotated the hardware round:

      Tier 1 -> needed a business justification to run jobs.

      Tier 2 -> general availability, as it was the hardware from the previous cluster.

      Both ran at about 98% utilisation most of the time. That is called "sweating your assets". Boy were the hot aisles hot!

      When they were replaced it was generally on a very modest "buy-back" where some parts would be reused and the rest recycled.
