...just make sure they don't have a microwave oven close by.
Once completed, the Square Kilometre Array (SKA) will be the biggest radio astronomy telescope in the world. "Biggest", though, really is too mild a term for the sheer size of this project. The first phase, SKA1, will be broken up into two instruments, SKA1 MID and SKA1 LOW, based on their frequencies. SKA1 MID alone is made …
Silly BBC. Standard units of measurement must be standard (the clue is in the name). Association football pitches can vary in size from 4500 to 13000 square yards (no idea what that is in those silly French republican units), but the point is that there is no way the 'football pitch' can be used as a standard unit of area. Stick to the microWales! (Or possibly the square Snowdon: 1085 x 1085 metres.)
Considering the success Bitcoin had with fabricating parts specifically to solve the problem, I wonder if this project wouldn't benefit from similar thinking. I'm sure some of what they are doing is going to require general-purpose CPUs, but I'd be prepared to bet that a lot of the early processing of the data is done with a few well-known, stable algorithms.
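To make the "well-known, stable algorithms" point concrete, here's a minimal sketch (my own illustration, not anything from the SKA design) of the sort of fixed operation that dominates early radio-telescope processing: channelising raw voltage samples with an FFT and integrating the power in each channel. Real pipelines use polyphase filterbanks on FPGAs or ASICs; the function name and parameters below are made up for the example.

```python
import numpy as np

def channelise(voltages: np.ndarray, n_channels: int) -> np.ndarray:
    """Split a 1-D stream of real voltage samples into frequency channels
    and return the time-averaged power per channel (a toy FFT channeliser)."""
    block_len = 2 * n_channels                       # real samples per spectrum
    n_blocks = len(voltages) // block_len
    blocks = voltages[: n_blocks * block_len].reshape(n_blocks, block_len)
    spectra = np.fft.rfft(blocks, axis=1)[:, :n_channels]   # one spectrum per block
    return (np.abs(spectra) ** 2).mean(axis=0)               # integrate power over time

# Example: a million fake voltage samples, 4096 output channels.
rng = np.random.default_rng(0)
samples = rng.normal(size=1_000_000).astype(np.float32)
power = channelise(samples, 4096)
```

It's precisely because this maths never changes from one observation to the next that it lends itself to being baked into silicon rather than run on general-purpose CPUs.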
By the time this comes online... RRAM products will be hitting the streets.
So you're looking at 8+ TB per small 2.5" SSD as a start, if not an entirely new design in terms of the memory bus.
This should help to drive down the costs of SSDs and other tech. RRAM also generates less heat and draws less power.
I would imagine computers will look quite different in the next five years due to disruptive technology like this.
While you're right that next-gen NVM technologies and the associated low-latency NVMe-over-fabrics tech will change server architecture, none of it will be anywhere close to cheap enough to store the amount of data being talked about here. Even spinning-rust "cloud drives" using HAMR and shingled recording, which will be about 10x cheaper than even the densest forms of 3D NAND, will look expensive in the face of this amount of data.
If we care about keeping the raw data for long periods of time, then breaking it up into 100 MB chunks and storing it on people's personal machines, in a kind of "SETI@home" for storage, might be a better option. There are already geographically dispersed, global erasure-coding techniques (see the sketch after this comment) that would allow redundant copies of the data to be kept efficiently across a large, globally distributed storage environment and would allow it to be self-healing.
With only ten million people worldwide each contributing 2-5% of their own personal storage, it wouldn't be hard to scrounge at least another 100 petabytes or so of effective data capacity, which would scale nicely over time.
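For what it's worth, here's a toy sketch of both ideas (my own illustration, not the commenter's design): a single XOR parity shard standing in for proper k+m erasure coding, so the loss of any one volunteer's machine is survivable, plus the back-of-the-envelope capacity sum. A real distributed store would use Reed-Solomon or similar, and the drive size, donation fraction and coding overhead below are assumptions.

```python
from functools import reduce

def encode(chunk: bytes, k: int) -> list[bytes]:
    """Split `chunk` into k equal data shards plus one XOR parity shard."""
    shard_len = -(-len(chunk) // k)                                   # ceil division
    padded = chunk.ljust(k * shard_len, b"\0")
    shards = [padded[i * shard_len:(i + 1) * shard_len] for i in range(k)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shards)
    return shards + [parity]

def recover(shards: list[bytes | None]) -> list[bytes]:
    """Rebuild a single missing shard by XOR-ing the surviving ones."""
    missing = shards.index(None)
    survivors = [s for s in shards if s is not None]
    shards[missing] = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), survivors)
    return shards

# Redundancy: lose any one shard, rebuild it from the rest.
shards = encode(b"100 MB of raw SKA visibilities, say", k=4)
shards[2] = None
recovered = recover(shards)

# Capacity sum from the comment: 10 million volunteers with ~1 TB drives,
# each donating ~2%, divided by a ~1.5x erasure-coding overhead.
volunteers, drive_tb, donated = 10_000_000, 1, 0.02
raw_pb = volunteers * drive_tb * donated / 1000       # ~200 PB raw
effective_pb = raw_pb / 1.5                           # ~130 PB usable, so "100 PB or so" is plausible
```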