Dishing up the goods: Square Kilometre Array moves out of the theoretical and into the contractual

The governments of South Africa and Australia have signed agreements formalizing the construction and operation of the Square Kilometre Array Observatory (SKAO) telescopes by the Observatory's governing body. The intergovernmental radio (and world's biggest) telescope will survey the sky over ten thousand times faster than has …

  1. Yet Another Anonymous coward Silver badge

    Alternatively

    That's 0.05mWales or 16,300 square cricket pitches in New Imperial Units

  2. andrewmm

    10 years of design work?

    That means the electronics are already 15 years old?

    1. Yet Another Anonymous coward Silver badge

      Re: 10 years of design work?

      Mostly no - when it was started nobody had any idea how to do the electronics.

      Building it on the hopeful extrapolation that GPU/FPGA/storage would catch up was a reasonable plan.

      Unlike a certain optical survey telescope that worried about the data size and spent most of the design time inventing their own tape drive technology, because nothing available could store the XXXX bytes of data expected.

      Where XXXX is a number that sounded insane at the time but is now probably on your phone

    2. Cuddles

      Re: 10 years of design work?

      Is that a problem? If you always wait for the latest and greatest new tech to be available, you'll never be able to do anything at all. At some point you have to draw a line and say that you're going to go with what is available right now. If they wanted to use the most recent tech available today, it would take another 10 years' work to redesign things around it, and all you'd have achieved is delaying the actual science by a decade.

      This is true for all big projects. The LHC is horribly out of date; the James Webb telescope hasn't even launched yet and is already way out of date; Hubble is practically a joke compared to what we could build today, yet it still provides some of the best science available in its field; the Shuttle flew with a computer somewhere around 200,000 times less powerful than a desktop PC by the time of its last flight. It's not about having the shiniest specs imaginable, it's about designing something that you know will work given the technology available while you're doing the design. If something better is available by the time you're done, that's great news for an upgrade and/or successor.

      Edit: Consider this article - https://www.theregister.com/2021/10/18/lucy_solar_array_issue/ The Lucy spacecraft has been under design for at least 7 years. It's planned to reach its final destination in 2033. How up to date do you think its electronics will be then? Yet it will still be able to show us things we've never seen before.

      1. Yet Another Anonymous coward Silver badge

        Re: 10 years of design work?

        Depends on the problem.

        For space missions you want to lock the on-board software as early as possible, because the HW depends on it and you are usually constrained to a limited set of space-qualified systems.

        What you innovate on is the processing software on the ground.

        With SKA and CERN (or at least ATLAS) you are drinking from a firehose and at the start of the design you can't reasonably build a system that can keep up. With ATLAS (friends worked on it) the original plan was that you wouldn't be able to process everything and you would just sample the data - hopefully if events were random you would get X% of them by grabbing X% of the data. I assume with updates they can now process everything.

        Definitely with SKA there was a lot of extrapolation of what would be possible when the HW was ready - especially the front-end digital correlators that have to swallow the raw signal.

        Ironically, what is difficult on these ground-based missions is keeping them running for 20-30 years.

        I know the Keck telescopes built in the 90s have a full-time team building things like motor controllers and PC interfaces to replace stuff made either for ISA-bus era PCs or some long-forgotten industrial control rack machines.

        1. Cuddles

          Re: 10 years of design work?

          "I assume with updates they can now process everything."

          Not even close. The crossing points at the LHC should generate around 600 million collisions per second at the design luminosity; ATLAS records about 1000 events per second. It's a lot more than just grabbing a random sample though: the triggering system deciding which events to store is quite possibly more complicated than the actual data storage and analysis systems.

          1. Yet Another Anonymous coward Silver badge

            Re: 10 years of design work?

            > the triggering system deciding which events to store is quite possibly more complicated than the actual data storage and analysis systems.

            Yes, that's what I meant.

            The original design, IIRC, was to not even be able to respond to all triggers - but hope that an unbiased subset would still be valid.

  3. Anonymous Coward
    Anonymous Coward

    I would *love* to work on the supercomputers storing and processing that data. Any idea how I get a job like that? o.o

    1. Anonymous Coward
    2. Muscleguy
      Boffin

      Look up the academics involved in this and contact them.

    3. fuzzie

      These might be good places to start:

      > https://www.skatelescope.org/newsandmedia/outreachandeducation/skawow/hpc/

      > https://www.sarao.ac.za/vacancies/

  4. Timbo

    Space??? who needs space?

    "130 petabytes of data produced a year"

    That's 130 million Gigabytes...or about 356,164 Gigabytes per day (@365 days per year) !!

    That's an awfully BIG amount of data...and once the SKA starts up, and assuming it doesn't go offline too often (at either of its two main locations), data storage for this will always be increasing - plus one wonders what sort of data processing will be required?

    I assume some data sets will be entirely devoid of any useful data and/or full of radio "noise" - but it will all still need to be "processed" before being discarded.

    And one wonders whether, like the SETI@home project (now offline), the data will be split into "usable" packets that can be worked on/processed individually.

    1. Korev Silver badge
      Boffin

      Re: Space??? who needs space?

      I think the technology's just about here to handle that, it'll just be really, really expensive...

      An array that could take 4GB/s isn't that hard in the age of flash, and ~32Gb/s is well within 100GigE...
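
      For anyone who wants to check the arithmetic, here's a rough back-of-the-envelope sketch (Python, purely illustrative, using the 130PB/year figure from the article):

```python
# Back-of-the-envelope conversion of the quoted 130 PB/year
# into per-day and per-second rates (decimal units throughout).
PB = 1e15                     # bytes
GB = 1e9                      # bytes
SECONDS_PER_YEAR = 365 * 24 * 3600

yearly_bytes = 130 * PB
per_day_gb = yearly_bytes / 365 / GB
per_second_gb = yearly_bytes / SECONDS_PER_YEAR / GB
per_second_gbit = per_second_gb * 8

print(f"{per_day_gb:,.0f} GB/day")      # ~356,164 GB/day
print(f"{per_second_gb:.1f} GB/s")      # ~4.1 GB/s sustained
print(f"{per_second_gbit:.0f} Gbit/s")  # ~33 Gbit/s, comfortably within 100GigE
```

      (That's a flat average, of course - real sustained rates will be burstier.)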

      1. Yet Another Anonymous coward Silver badge

        Re: Space??? who needs space?

        The point is that you aren't trying to store that data - you are trying to process it.

        Radio astronomy is basically capture lots of random noise, do clever stuff, get picture.

        You do have to store lots of intermediate data because it can take months to build up an entire picture when you are relying on the Earth moving to shift your telescope.

        The amount that will actually be stored permanently is relatively small.

        1. Kane

          Re: Space??? who needs space?

          "The amount that will actually be stored permanently is relatively small."

          Does the original data captured not get archived then? Or only post-processed data?

          1. Anonymous Coward
            Anonymous Coward

            Re: Space??? who needs space?

            If it's anything like the data we used to capture for seismic surveys on board a boat (or fleet of boats) absolutely everything is kept. You don't know what techniques might be developed in the future to reprocess that raw data, so you don't throw anything away.

            1. Paul Kinsler

              Re: absolutely everything is kept.

              Well, ideally. But raw data rates can be quite high, and storage is not always sufficiently cheap.

              1. Yet Another Anonymous coward Silver badge

                Re: absolutely everything is kept.

                Also the headline data rate is (I suspect) the raw antenna feed.

                The first step is to do digital correlation between beams - this contains all the "information" from the raw signal, and subsequent filtering and combination further reduce the raw "data" while still retaining all the signal.
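
                Very loosely, that's the shape of it - a toy numpy sketch (nothing like a real SKA correlator; the antenna count, noise levels and integration length are all made up) where N noisy streams get boiled down to one time-averaged "visibility" per antenna pair:

```python
import numpy as np

# Toy illustration of why correlation shrinks the data: each antenna
# delivers a long, noise-like sample stream, but imaging only needs the
# time-averaged correlation ("visibility") for each antenna pair.
rng = np.random.default_rng(0)

n_antennas = 8           # invented for illustration
n_samples = 100_000      # samples per antenna per integration period

def noise():
    return rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples)

# Fake complex voltage streams: a common "sky" signal buried in per-antenna noise.
sky = noise()
streams = np.array([sky + 5 * noise() for _ in range(n_antennas)])

# One complex visibility per antenna pair (autocorrelations included):
# N*(N+1)/2 numbers per integration instead of N full streams.
pairs = [(i, j) for i in range(n_antennas) for j in range(i, n_antennas)]
visibilities = {(i, j): np.mean(streams[i] * np.conj(streams[j])) for i, j in pairs}

print(f"{n_antennas * n_samples:,} raw samples -> {len(visibilities)} visibilities")
```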

              2. Anonymous Coward
                Anonymous Coward

                Re: absolutely everything is kept.

                "But raw data rates can be quite high, and storage is not always sufficiently cheap."

                I know. I think our (read: their - I no longer work there) systems were producing at least a few Gigabytes of data per second - 20,000+ hydrophones producing multiple continuous audio streams running for hours or days at a time.

                The raw data feeds went directly to tape, and we had some processing systems on board that generated additional streams of data that also went directly to tape. Serious processing of all that data happened onshore in the data centres though, and I'm sure our customers also ran their own processing on whatever data we eventually provided them with.

                1. Forest Racer

                  Re: absolutely everything is kept.

                  From the ASKAP web site

                  With up to 36 beams per antenna and 36 antennas, ASKAP produces a torrent of raw data (approximately 100 Tbit/s). The digital signal processing system begins with digital receivers processing 192 signals which include 188 from each PAF, 2 calibration signals, and 2 spares for future applications such as radio frequency interference mitigation. The raw data is correlated and averaged at the observatory, producing an output visibility data stream of up to 2.4 GB/s that is sent via optical fibre to the Pawsey Supercomputing Centre in Perth.
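
                  To put that reduction in numbers (my own arithmetic, not from the ASKAP site):

```python
# Rough scale of the on-site reduction ASKAP describes above.
raw_rate_bits = 100e12          # ~100 Tbit/s of raw antenna data
raw_rate_bytes = raw_rate_bits / 8
visibility_rate_bytes = 2.4e9   # up to 2.4 GB/s sent on to Pawsey

reduction = raw_rate_bytes / visibility_rate_bytes
print(f"Raw feed:  {raw_rate_bytes / 1e12:.1f} TB/s")
print(f"Output:    {visibility_rate_bytes / 1e9:.1f} GB/s")
print(f"Reduction: ~{reduction:,.0f}x before the data leaves the site")
```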

          2. Anonymous Coward
            Boffin

            Re: Space??? who needs space?

            Don't know about SKA. Do know about particle physics stuff a long time ago, and answer is 'no'. When I knew it there were several levels of 'trigger' processing: first level might be 'have seen something that looks plausibly interesting, let's keep this for long enough for second level to have a look', second level would then say 'yes, this is interesting, let's keep it longer' and so on (forget how many levels, might have just been two but think was more) until eventually decision was made to write data to disk somewhere. Huge majority of data coming from the detectors was simply dropped at the first-level trigger, more dropped at second-level and so on.
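
            Something like this in spirit - a minimal Python sketch of a two-level cascade, with the event fields and thresholds entirely invented for illustration (no resemblance to any real experiment's trigger):

```python
import random

# Toy multi-level trigger: each level is a progressively more expensive
# test, and an event is only kept if every level agrees. The event
# fields and thresholds here are invented purely for illustration.

def level1(event):
    # Cheap, hardware-style cut: enough deposited energy to be interesting?
    return event["energy"] > 50.0

def level2(event):
    # Slower, software-style cut: does the event topology look plausible?
    return event["n_tracks"] >= 2 and event["energy"] > 80.0

def generate_event():
    return {"energy": random.expovariate(1 / 20.0),
            "n_tracks": random.randint(0, 6)}

random.seed(42)
total = 100_000
kept = 0
for _ in range(total):
    ev = generate_event()
    if level1(ev) and level2(ev):   # level2 only runs if level1 passes
        kept += 1                   # only these events would reach disk

print(f"Kept {kept} of {total} events ({kept / total:.3%})")
```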

    2. hoola Silver badge

      Re: Space??? who needs space?

      I agree, it is a lot, but we probably generate far more than that with the general population storing crap in Farcebook, Instagram, WhatsApp and all the online cloud drive providers.

      You know, all those pictures of babies, animals and videos of people being moronic.....

      1. Anonymous Coward
        Anonymous Coward

        Re: Space??? who needs space?

        Facebook apparently generates 1.46 exabytes of data per year.
