Microsoft has hauled its data centre in a box, Natick, up from the seabed and concluded that data is indeed better, down where it's wetter, under the sea. The results have been a long time coming. Project Natick kicked off back in 2013 and a prototype of the underwater data centre was dunked in the Pacific, off California, …
The BBC article says "up to five years", so I would imagine that they've gathered enough information from it being down there, and they want to improve the design for another test, or start doing it commercially.
"Recent events" likely means that demand for servers is higher, so they may be expediting the process to get more capacity closer to the users sooner.
No comment on the occasional pesky fishing trawler with anchors, or the difficulty of performing maintenance, though?
The reason it was located there was that the area is controlled by the European Marine Energy Centre, a test site for tidal turbines and wave energy converters. No trawling because there's lots of "stuff" on the sea floor - cables, turbines, test rigs. It's also got some quite strong currents and decent winter storms, so was a good test of the pod getting "jostled" externally.
Aside from the practical considerations (no-trawl zone and their project partners have heavy-lift equipment available - see maintenance), this was partly for green credentials and also to show that they could run off a power supply traditionally viewed as unreliable. The suggestion is that these pods could be quite reasonably powered by an offshore wind farm (again, no-fish and good work-vessel availability) with a last-resort shore-power connection laid alongside the fibre-uplink. This would provide no-maintenance, self-contained edge-computing in remote areas. Just roll up once every x years and replace the pod wholesale.
"Over the last two years, researchers have seen a failure rate of an eighth of that seen in a control group of servers on land, running the same workloads."
Having a lower failure rate is obviously beneficial, but it seems to come at the cost of being extremely difficult to fix or replace the parts that do fail. Would this mean moving to an SSD-style of overprovisioning, where you start off with 10%ish more servers than actually needed and bring the spares online as the used ones fail, until the capacity finally drops enough that you need to replace the whole thing?
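The overprovisioning idea above can be sketched as a toy simulation. Everything numeric here is an assumption for illustration: a pod sealed with 10% spare servers, and an annual failure rate of 0.25% (roughly an eighth of a notional 2% on-land rate, per the quoted figure).

```python
import random

def years_until_underprovisioned(servers=1000, spare_frac=0.10,
                                 annual_failure_rate=0.0025, seed=1):
    """Toy model: a sealed pod ships with `servers` required machines plus
    a spare pool; failed machines are replaced from spares, never repaired.
    Returns how many whole years pass before the surviving machine count
    drops below the required capacity. All rates are assumptions."""
    random.seed(seed)
    required = servers
    alive = servers + int(servers * spare_frac)  # active + spares, all sealed in
    years = 0
    while alive >= required:
        # each surviving machine independently fails this year
        failures = sum(1 for _ in range(alive)
                       if random.random() < annual_failure_rate)
        alive -= failures
        years += 1
    return years
```

With those assumed numbers (expected failures of roughly 2.7 per year against 100 spares), the pod lasts on the order of a few decades before the whole container needs swapping, which is why the "replace the pod wholesale every x years" model can work.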
That was one small container. Once you start deploying them in huge quantities (and there will clearly be limits on where you can put them), this may no longer be the case.
We have enough problems with the oceans warming without making it worse. Just like car pollution: each individual vehicle contributes little; the trouble comes with numbers (CO2) and density (NO, particulates).
hoola, there are over 321,000,000 cubic miles of water in the Earth's oceans (according to NOAA). By some estimates, "The Internet" (whatever that is) draws about 200 gigawatts of electrical power. That's all the phones, tablets, desktops and servers world-wide, along with the networking and routing and switching that allows you to view all the
pr0n CuteCatPics[tm] your little heart desires.
Do you have the math skills to estimate how long it would take to raise ocean water temperature by even 0.001% of a degree (C or F, I'm not going to quibble), if all that power was dissipated into the oceans, with no losses whatsoever?
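Taking up that challenge as a back-of-envelope calculation: the ocean volume is the NOAA figure quoted above, the 200 GW is the commenter's estimate, and the density and specific heat of seawater are round-number assumptions.

```python
# Back-of-envelope: how long would 200 GW, dissipated entirely into the
# oceans with no losses, take to warm them by 0.001% of a degree C
# (i.e. 1e-5 deg C)?

MI3_TO_M3 = 4.168e9                  # cubic miles to cubic metres
ocean_volume_m3 = 321e6 * MI3_TO_M3  # NOAA's ~321 million cubic miles
density = 1025.0                     # kg/m^3, seawater (assumed)
specific_heat = 3990.0               # J/(kg*K), seawater (assumed)

ocean_mass = ocean_volume_m3 * density       # ~1.4e21 kg
heat_capacity = ocean_mass * specific_heat   # joules per degree C

power = 200e9                                # 200 GW, all into the sea
seconds_per_year = 365.25 * 24 * 3600

target_rise = 1e-5                           # 0.001% of a degree C
years = target_rise * heat_capacity / (power * seconds_per_year)
print(f"{years:.1f} years")
```

Under those assumptions it comes out at somewhere under a decade for that tiny fraction of a degree, i.e. direct waste heat into the oceans is negligible next to greenhouse-driven warming, which appears to be the commenter's point.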
Biting the hand that feeds IT © 1998–2020