Can someone explain....
...after all this time, why we still need super-mega-computers to work out that a nuclear war will wipe most of humanity off the face of the earth?
A new range of SGI supercomputers will be used in nuclear weapons research at the UK Atomic Weapons Establishment. SGI has landed the contract to install a pair of Linux-based SGI ICE XA supers at the UK AWE, responsible for the warheads installed in the UK’s Trident missiles. The ICE XA was unveiled in November 2014 by SGI, …
Oh go on then, I'll bite.
First up, they are not used "to work out that a nuclear war will wipe most of humanity off the face of the earth". That's not AWE's responsibility. They serve two functions.
First, they verify that these things we taxpayers pay craploads of money for still work decades after one was last tested. As a taxpayer I'm interested that this is done. That's an economic viewpoint, totally unrelated to my opinions as to whether we should have or keep them.
Second, they verify that as the weapons get old they aren't going to go kablooey unrequested. I'm interested that this is done from every viewpoint.
Ok, simple question then.
As the kit is now knocking on 20 years old, why couldn't the last batch of supercomputers have done this 10 years ago? Surely the calculations are pretty much the same.
In 10 years, will we need another batch of very expensive computers to run the same calculations?
What exactly is changing?
Just like paperwork and your desk, the computation expands to fill the computer available...
Remember, in computational science it is very rare that computers give exact solutions to the real physical problem. Rather, what you get are approximate solutions to models of reality, and the game is to get the most accurate solution to the best model that you can solve. 10 years ago the computational facilities could solve certain models to a certain accuracy. 10 years of improvement allows:
a) more accurate and complex models to be solved
b) already used models to be solved at higher accuracy
c) a mixture of both
I don't know what kind of models they are solving, but for b) it might be as simple as resolving features in the solution at 1 km instead of 5 km (cf. weather forecasting). And the higher accuracy in this case really is very much better: I really don't want these things going kablooey unrequested, and like the poster above I'm interested that this is done from every viewpoint AND with the best accuracy that is currently feasible.
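To put a number on why finer resolution eats supercomputers, here's a rough sketch (my own illustration, nothing to do with AWE's actual codes): for an explicit 3D time-stepping simulation, refining the grid spacing multiplies the cell count by the refinement factor cubed, and the stability (CFL) condition typically forces proportionally more time steps, so cost grows roughly with the fourth power of the refinement.

```python
# Rough cost-scaling sketch for an explicit 3D time-stepping code
# (illustrative assumption, not any real weapons model).
# Halving the grid spacing h gives 2**3 more cells, and the CFL
# stability condition roughly doubles the number of time steps,
# so total cost scales like (1/h)**4.

def relative_cost(h_old_km: float, h_new_km: float, dims: int = 3) -> float:
    """Cost multiplier when refining grid spacing from h_old to h_new."""
    refinement = h_old_km / h_new_km
    return refinement ** (dims + 1)  # +1 for the time dimension

# Going from 5 km to 1 km resolution, as in the weather analogy:
print(relative_cost(5.0, 1.0))  # 5**4 = 625x the compute
```

So a 5x sharper picture of the same model costs hundreds of times more machine, which is one reason the hardware keeps getting replaced even though "the calculations are pretty much the same".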
What happens for a given explosive force x applied over a time duration y is already well understood, with a lot of real-world data from before the Nuclear Test Ban Treaty to support it. What I would be worried about is the characteristics of a device that has sat on the shelf for so many years, especially in light of the refined models we have developed over the years in materials science. Just the fact that you have nuclear material inside is going to alter the physical characteristics of that weapon when it comes time to detonate one, and given that you can't fire one off to refine your models that way, you are stuck using ever more refined simulations of what should happen. Toss in the mechanical tolerances from way back when they were made and there's enough variance to give one fits.
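One well-documented aging mechanism gives a feel for the shelf-life problem: the tritium used in boosted weapons decays with a half-life of about 12.3 years (a public figure), so the amount present changes substantially over a warhead's service life. A toy calculation of ordinary exponential decay, nothing weapon-specific:

```python
# Simple exponential decay: fraction of an isotope remaining after
# t years, given its half-life. Tritium's ~12.32-year half-life is
# public data; the 20-year shelf time is just an example.

def fraction_remaining(t_years: float, half_life_years: float) -> float:
    """Fraction of the original isotope left after t_years."""
    return 2.0 ** (-t_years / half_life_years)

# After 20 years on the shelf, only about a third of the tritium is left:
print(round(fraction_remaining(20.0, 12.32), 3))
```

The decay products and the radiation damage they cause are exactly the sort of slow drift the simulations have to track, since you can't test your way to the answer any more.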
Come to think of it, just the evidence collected by the space programs on what radiation does to various materials over long-term exposure would be cause to go back and run the models again every time we add to the datasets. There's a lot else, but...
But at some point, they'll still have to fire one of these off to verify things. Manufacturing of these weapons isn't perfect, due to tolerance build-up, etc. The number of duds in conventional weapons is bad enough... A nuke failing to work is one thing: the case ruptures and the material is scattered locally. Premature detonation due to deterioration is worse, as it might set off a daisy chain of "booms" at the storage facility.
SGI did go bankrupt a few years ago and its assets were acquired by Rackable, which then renamed itself SGI. The earlier SGI committed suicide by putting all its eggs in the Itanium basket. After the merger, SGI changed its NUMAlink interconnect to work with x86_64 and has been doing a bit better since. SGI is still the only game in town if you need more than 4096 CPU cores in a shared-memory single system image.
They don't have to check whether they'll work or what the yield will be; they aren't going to be used, there's just the threat of use, and if they are used we're all fcuked anyway. This is simply a transfer of taxpayers' money to the 1%.
Personally, if the bluff is called, I'd prefer ours didn't work. Whether the ones coming our way explode or not, at that point everyone has lost.
"Your all talking Carp" — very poor understanding of the subject. Although I'd prefer "no one starts using them", the world is not made up of caring and rational individuals. Nukes decay like cars or standard explosive devices: some last a long time with care while others become junk. The problem is that the core is constantly emitting high-energy particles. These particles can damage the electronics, the plastic explosives, the metal casing, and even cause a "phase" change in the metallic structure of the nuclear material (some of the crystal bonds deform), etc... Mathematical models are just theories; as more studies are done, and with very detailed analysis of a bomb, they can improve the model... but no model will be 100% accurate, because when these devices were manufactured the technology didn't exist to measure the true density of the material. And most people would prefer we didn't restart making new bombs...
Biting the hand that feeds IT © 1998–2021