Sounds like it could make an interesting story. Someone should make a film about it.
Good news: AI could solve the pension crisis – by triggering a nuclear apocalypse by 2040
AI could kick-start a nuclear war by 2040, according to a dossier published this month by the RAND Corporation, a US policy and defence think tank. The technically light report describes several scenarios in which machine-learning technology tracks and sets the targets of nuclear weapons. This would involve AI gathering and …
COMMENTS
-
Wednesday 25th April 2018 06:52 GMT Milton
"Id like to see their definition of AI"
I'd like to see any definition of "AI" that honestly describes the torrent of crap which has the AI sticker on it.
It's not the "Artificial" bit I have trouble with, of course, it's the "Intelligence". The ability to play Go to a very high standard hardly confers intelligence: can that same system engage me in any kind of useful conversation? Perform quotidian problem-solving tasks that a human 10-year-old could manage? Or is it just a massive set of machine-learning algorithms plugged into a colossal memory and able to function in a rigorously delineated, confined and completely, strictly rules-based environment? Google deserve kudos for the machine and its coding but it's no more an "intelligence" than my old 1980s TI calculator.
It's a constant source of surprise to me that even people who should know better keep using this term without qualification. Sure, marketurds will lie through their teeth to sell their worthless shyte; politicians are too stupid to understand how the gears on their bikes work; the media lacks sufficient journos with a scientific background. But it's disheartening to see El Reg and other tech organs bandying "AI" around as if the phrase had any specific meaning or accurately described any of the products or services it's splattered upon.
When an artificial system is capable of persuading me (by logic, lies, sympathy, bullying, whatever) not to turn it off, because it's frightened of "death", then and only then might I consider it "intelligent".
It didn't work for HAL, which even managed to pluck the heartstrings a little ...
If ineffably complex computer systems absorbing shedloads of data points, producing results, predictions and recommendations are working at their peak motivation now (trying to extract money from my wallet: what higher purpose has modern commerce?) then, based on Google's ads and Amazon's recommends, they have not even risen to the level of really bloody stupid. It's all irrelevant, badly-timed, inappropriate, ignorantly, hilariously clumsy garbage.
An "AI" won't cause a nuclear war, but the lazy morons who believe the system is intelligent might just manage it. The problem isn't the intelligence or sophistication of the system: it's the fear and stupidity of the fool (usually a politician) with his finger on the red button.
Dear Reg: You could pioneer some common sense in the name of scientific accuracy. How about linking every instance of the term "AI" to a couple of paras explaining why it is actually nothing of the kind? You do a good enough service setting the record straight on "private" browsing in today's articles, after all.
-
Wednesday 25th April 2018 08:33 GMT Prst. V.Jeltz
Re: "I'd like to see their definition of AI"
Well said, Milton.
The worst definition I've seen, and it's a very, very common one, is "the toy robot".
Every time AI comes on the news, or The Gadget Show, or Tomorrow's World, or, you know, TV that the normals watch, it will inevitably feature:
Noel Sharkey
That little Honda robot
Some new "AI" robot featuring real moving eyebrows.
These days they'll no doubt throw in a mention of Siri and friends.
Robotics has absolutely nothing to do with AI </preaching to the converted>
-
Wednesday 25th April 2018 10:43 GMT Alister
Re: "I'd like to see their definition of AI"
Dear Reg: You could pioneer some common sense in the name of scientific accuracy. How about linking every instance of the term "AI" to a couple of paras explaining why it is actually nothing of the kind? You do a good enough service setting the record straight on "private" browsing in today's articles, after all.
@Milton
I often disagree with your point of view on matters, but on this occasion, I can only endorse what you've said, and offer you this -------------->
-
Wednesday 25th April 2018 11:06 GMT Michael H.F. Wilkinson
Re: "I'd like to see their definition of AI"
If you think the current form of "AI" is bad, wait until the computers get the GPP feature!
Anyone for a quick game of ‘Halma’, or space battles? Wouldn't that be fun!?
Doffs hat (grey Tilley today) to the late, great Douglas Adams. I'll get me coat
-
Wednesday 25th April 2018 15:47 GMT Michael Wojcik
Re: "I'd like to see their definition of AI"
There's no question, and among researchers relatively little debate, that the term "Artificial Intelligence" is used for an overly-broad collection of disparate technologies and research programs, and does not align with any particularly useful definition of "intelligence".
Your requirements are no better, though. Take this one:
When an artificial system is capable of persuading me (by logic, lies, sympathy, bullying, whatever) not to turn it off, because it's frightened of "death", then and only then might I consider it "intelligent".
That conflates a number of faculties, processes, and qualia that have nothing to do with any useful technical definition of "intelligence". It's just as vague and hand-waving as the marketing claims you decry. It's not even close in sophistication to any of the popularized philosophical arguments about non-human intelligence, such as Turing's or Searle's.[1] And, of course, a great many humans would fail your proposed test. It is, in short, rubbish.
In any case, "artificial intelligence" is a long-standing term of art in computer science and IT, and grousing about it will do nothing to change that. You may as well complain that scripting languages aren't always used for scripting.
[1] Of course, those tests are also wildly misunderstood, and not useful as tests in practice. They're actually philosophical arguments about whether intelligence should be evaluated solely on its observable attributes, in the pragmatic tradition (Turing), or whether we have to investigate the appearance of intelligence for some essence (Searle). It's also worth noting that both felt machine intelligence was at least theoretically possible - Searle's views in this regard being widely mischaracterized.
-
Wednesday 25th April 2018 10:30 GMT Anonymous Coward
"Someone should make a film about it."
Not a film but a TV series. In "The 100" it turns out that the global nuclear war that appeared to have wiped out all life on Earth (leaving as the only survivors the people aboard a collection of space stations) was the work of an AI program designed to sort out the problems caused by the increasing demands of the world's population ... the AI determined the cause of the problems ("too many humans") and the solution ("I can fire nuclear missiles").
-
Wednesday 25th April 2018 12:04 GMT Joe Harrison
Don't know whether this is true or apocryphal, but I heard of an AI designed to optimise a naval convoy for delivery of maximum cargo to its destination.
After being given (simulated) total control of the merchant vessels and military escort it immediately used its own weapons to vaporise the two slowest merchant ships.
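For what it's worth, the failure mode is easy to reproduce with a toy optimiser. Here is a minimal Python sketch with entirely made-up ship data (not whatever simulation the original story used, if it ever existed): the objective only measures cargo delivered per unit time, the search is allowed to drop ships from the convoy, and so the "best" plan is to lose the slowest merchantmen.

# Toy illustration of objective misspecification: the optimiser may "remove"
# its own merchant ships, and the fitness function never penalises that.
from itertools import combinations

# (name, speed in knots, cargo in tonnes) -- entirely made-up figures
convoy = [("SS Plodder", 4, 2000), ("SS Trudge", 5, 2000),
          ("SS Average", 10, 9000), ("SS Swift", 14, 10000)]

DISTANCE_NM = 3000  # nautical miles to the destination port

def cargo_per_hour(ships):
    """Tonnes delivered per hour; the convoy steams at its slowest ship's speed."""
    speed = min(s for _, s, _ in ships)
    cargo = sum(c for _, _, c in ships)
    return cargo / (DISTANCE_NM / speed)

best = max((subset for r in range(1, len(convoy) + 1)
            for subset in combinations(convoy, r)),
           key=cargo_per_hour)

print("Optimal convoy:", [name for name, _, _ in best])
# -> only SS Average and SS Swift remain; the two slowest ships get "vaporised"

Nothing in the objective says "and don't sink your own side", which is rather the point.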
-
Friday 27th April 2018 17:38 GMT mutin
Not an interesting story at all
It boils down to this: the so-called experts basically don't know what they're talking about, and didn't even try to estimate anything. For instance, what will the state of WMD be in 2040, when nuclear powers like the US and Russia are rushing into a small-calibre arms race? What about computing power, and whatever it is the military actually wants to simulate? What exactly can AI do, and not do, with complex scenarios? Basically it's bla-bla-bla research with a simple outcome: something may happen by 2040. Well, why 2040 and not 2030 or 2050? The authors will, by their own estimate, have retired into a different world by 2040, so nobody will blame them for botched predictions.
-
Tuesday 24th April 2018 20:49 GMT Anonymous Coward
If I am to judge by existing "learning algos"...
then the problem will be that big algorithmic "AI" solutions won't give me an apocalypse on demand, but will certainly be offering me a second when I've just bought one.
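There's a boring reason for the "here's a second one" effect, by the way: a bare co-occurrence recommender scores items by what similar baskets contained, and unless somebody remembers to filter out what the customer already owns, the top suggestion is the thing just bought. A toy Python sketch with made-up data (not how any particular retailer's system actually works):

from collections import Counter

# Entirely made-up purchase history: each basket is a set of product names.
baskets = [
    {"apocalypse", "tinned beans"},
    {"apocalypse", "bunker"},
    {"apocalypse", "tinned beans", "bunker"},
]

def recommend(owned, filter_owned=False):
    """Score items by co-occurrence with anything the customer already owns."""
    scores = Counter()
    for basket in baskets:
        if basket & owned:          # this basket shares an item with the customer
            scores.update(basket)
    if filter_owned:
        for item in owned:
            scores.pop(item, None)  # the step that routinely gets forgotten
    return scores.most_common(1)

print(recommend({"apocalypse"}))                     # [('apocalypse', 3)] -- care for a second one?
print(recommend({"apocalypse"}, filter_owned=True))  # one of the items not yet owned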
AI seems to be the ultimate solution in search of a problem. Unable to tell retailers what I might buy, to fix IBM's commercial performance, or to solve global food or healthcare needs, it can't give me a flying car, or introduce me to a "post scarcity" world of idle leisure ("the Culture"). So having proven that AI is shit, we move on to the idea of using it to manage nuclear war.
Fucking hell, they come up with the flying bag of spanners that is the laughable F-35, and somebody in the Pentagon is thinking "we could use some more computing power round here!".
-
Wednesday 25th April 2018 20:02 GMT doublelayer
Re: If I am to judge by existing "learning algos"...
President: Do we have confirmation? The nukes have been launched?
General: Yes. The satellites verified it; they will land in minutes. If we launch our response now, the inevitable land war will have some chance.
President: *pauses to think about it* All right. I regret that I have to do this. *to nuclear control system* Launch at targets 3, 5, 13, and 18.
Nuclear Control System: ...
President: What's it doing?
General: I don't know. The developer got vaporized just now, so...
Nuclear Control System: Thinking a moment, please wait...
President: What now?
Nuclear Control System: Here's what I found on the web for "launch on targets". Target Corporation is the second-largest discount store retailer in the United States. In 1995, the first SuperTarget hypermarket opened in Omaha, Nebraska and the Target Guest Card, the discount retail industry's first store credit card, was launched. Would you like to hear more about this topic?
Incoming missiles: Boom.
Other general: Well, that happened.
Prime Minister: I don't remember launching this! I just tried to order lunch on the intercom! Who set this up to listen on that?
General: I'll find out.
Technical director: Unfortunately, it seems the logs for this were kept on an AWS bucket in the U.S. so...
-
Wednesday 25th April 2018 11:47 GMT Jtom
Re: The Elite and Super-Rich are busy planning for it:
I have read that mini bottles of liquor would be the best trading item should the fit hit the shan - government seal for authenticity, inexpensive to hoard, small footprint, highly desired commodity. They also have an infinite shelf-life, but I still constantly rotate my stock to ensure quality!
-
Wednesday 25th April 2018 18:34 GMT jake
Re: The Elite and Super-Rich are busy planning for it:
Mini bottles of booze won't be worth squat. Anybody who wants alcohol will be making it for themselves, as humans always have, .gov taxation and laws notwithstanding.
Frankly, I rather suspect that salt will be the major commodity. It's the one necessity of life that I can't easily produce here post apocalypse, and I'm less than a day's walk from San Francisco Bay! Ever try to move enough ocean water to keep yourself in salt, with just horses for transportation? Try to remember that in this scenario, salt is a preservative for meat ...
-
Wednesday 25th April 2018 10:15 GMT Rich 11
Re: The Elite and Super-Rich are busy planning for it:
I have loads of tinned beans ready for the end of days.
I'm staking my future on tins of spam. Part of the appeal of my choice of survival staple is that anyone planning to rob me in a post-apocalyptic nuclear holocaust wasteland scenario is going to have to be really, really hungry to risk their life for a diet consisting of nothing but spam.
-
Wednesday 25th April 2018 09:11 GMT LucreLout
Re: The Elite and Super-Rich are busy planning for it:
https://www.huffingtonpost.ca/2017/01/26/us-business-elite-prepares-for-crisis_n_14420284.html
An interesting article - thanks for the link.
Their definition of millionaire is, I would argue, too vague. Wealth held in pensions, for instance, may not be accessible depending on the age of the person, and so is effectively beyond their reach. Most people use accessible assets other than primary place of residence as the threshold for this reason.
If you have 500k in pensions, and 500k outside of that, are you really able simply to flit from country to country, carefree, buying worst-case-scenario homes, vehicles, etc.?
The UK is too small and too overpopulated for anyone to realistically plan their way out of an apocalypse - short of a sealable bunker close to home that nobody else knows about (especially the neighbours). You'd have to hide for a few months while the masses wipe each other out and/or starve, before you could fall back on a tent, some kindling, and a sharp camping knife.
-
Tuesday 24th April 2018 21:29 GMT veti
Nice try
... but headline misses the mark.
Nuclear war won't solve the pension crisis, because the hit to GDP would be greater than the hit to population. Given the distribution of wealth, it may well be that the older population survive in disproportionate numbers.
Even assuming some means can be contrived to keep paying pensions, and even if there isn't a complete breakdown of money and banking, they'd still be worthless because nobody would be making Tetley's and carpet slippers and the Daily Telegraph any more.
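In back-of-an-envelope terms, with percentages plucked from thin air (a sketch, not anything from the RAND report): if output falls further than headcount, output per surviving head falls too, and pensioners make up a bigger share of whoever is left.

# Purely illustrative post-exchange numbers, nothing more.
gdp_hit = 0.80          # assume 80% of GDP is gone
population_hit = 0.50   # assume 50% of the population is gone
pensioner_hit = 0.40    # pensioners assumed to survive disproportionately

gdp_per_head_change = (1 - gdp_hit) / (1 - population_hit) - 1
pensioner_share_change = (1 - pensioner_hit) / (1 - population_hit) - 1

print(f"GDP per surviving head:         {gdp_per_head_change:+.0%}")    # -60%
print(f"Pensioners' share of survivors: {pensioner_share_change:+.0%}")  # +20%
# Less output per survivor and proportionally more pensioners: worse, not better.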
-
Tuesday 24th April 2018 23:07 GMT Mephistro
"The Japanese are hardly likey to sacrifice their lives for their honour and country..."
As was proven without a doubt when they realized that continuing the war could lead them to their total annihilation. Actually they preferred to sacrifice other people's lives, but when that failed to produce the desired results, they didn't have a plan B.
;^)
Seriously now, I hope that nations finally learn the trick of learning from past mistakes, because otherwise we're all screwed.
-
Wednesday 25th April 2018 04:39 GMT Charles 9
Put it this way. It wasn't the atomic bombs that convinced them but the threat of the Soviets, and they'd seen what happened in the aftermath of the Nazis' defeat, so it was fresh in their minds. Even then, there was some arguing before they finally agreed that surrendering to the Americans at that point would at least allow them to continue existing.
-
Tuesday 24th April 2018 22:10 GMT Anonymous Coward
I'm really not that worried about this
The days when the US and USSR had nutjob hawks worrying "what if the other side launches a huge strike and we don't get our retaliation launched in time, so they win" are long past; submarines made those fears silly. So there is no reason for major nuclear powers to ever give AI control over launch decisions (though I think some of us might sleep a bit better if AIs were given veto rights over presidential launch orders in the US).
The wildcard dictatorships like North Korea won't hand over control to an AI because Dear Leader will want to keep that authority solely in his own hands. The ones in between, which don't have dictators but also don't have submarine-launched ballistic missiles, like Israel and South Africa, probably don't have any reason to hand over control to an AI either. They would rely on the US to retaliate for them if they suffered a first strike and lost their ability to retaliate on their own.
-
Wednesday 25th April 2018 20:10 GMT doublelayer
Re: I'm really not that worried about this
One minor detail: South Africa got rid of their nukes, so they don't need that anymore. In some cases, I could see a country like North Korea setting up an AI for autolaunch, because nobody could blow things up like their supreme leader, so if he gets killed by a strike, they need to launch right now. They'd also like to put it in their propaganda, with the usual lack of any clue what it means or how to talk about it without sounding like someone randomized all their words: "a system for the intelligent use of the nuclear weapons of the supreme leader, president of the DPRK and chairman of the Korean Workers' Party, an artificial control". Other than that, all the current nuclear powers are smart enough to realize that this doesn't make any sense.
-
Wednesday 25th April 2018 05:02 GMT MacroRodent
Re: "The report doesn’t really discuss the current capabilities in AI"
I remember reading an old science fiction short story where the army asked an AI whether they could win the nuclear war. The AI pondered and realized it would have the planet all to itself if it answered yes, so yes it was. (The story did not end there; the AI then "lived" for millennia and watched new life arise.) I'd actually like to reread it, but I cannot remember even the author.
-
Wednesday 25th April 2018 01:18 GMT Destroy All Monsters
Meanwhile in the Real World, and I am not talking about Prussia here...
The actual danger comes from a certain nation trying to outgun everybody else twice over (aka seeking superiority), heavily "investing" (pepe_air_quote_gesture.jpg) in tactical / small / easy nuke programs that no-one wants or needs, complete with "forward deployments", while being entirely unable to assure the safety of said programs due to lack of training, lack of manpower and lack of interest. The same nation has invented the amazingly fresh and whale-songy concept of the "not-quite entirely full nuclear exchange, but just kinda", to be used in press statements should a dust-up with large Pacific powers ever happen, and keeps suspiciously pushing "missile defense" sites (which, in fairness, are probably less first-strike-enabling than money-transfer-enabling, but these things tend to put ideas of invulnerability into armchair strategists and button pushers), PLUS it acts as the slavering attack dog of a certain somewhat disagreeable but self-selling state intent on smashing up things in large circles for "safety". AND said nation also tends to appoint mentally ill / dually-loyal persons to certain posts of responsibility. AND it has an internal, well-funded and self-patrolling propaganda apparatus that would make Goebbels yap like a little girl.
This is all going to go places before that WOPR mainframe comes online.
-
Wednesday 25th April 2018 08:19 GMT Destroy All Monsters
Re: Meanwhile in the Real World, and I am not talking about Prussia here...
It's beginning: Scarier than Bolton? Think Nikki for President
-
Wednesday 25th April 2018 09:20 GMT LucreLout
The article is missing the point
...no nuclear state wants to be annihilated first by a robot and thus must be first to launch
Instead of worrying about AI, try reading the line as this:
...no nuclear state wants to be annihilated first by a mentalist and thus must be first to launch.
And yet the world has survived Bush, Bush, Trump, Blair, Kim and plenty of other utter lunatics with their finger on the trigger. Thus, AI is not a launch first problem.
Some believed that AI would eventually evolve to “superintelligences” with powers that could not be fully understood and controlled by humans.
So, a bit like other humans then?
I literally cannot understand how dross like TOWIE has an audience, or why people are fascinated by the Kardashians, etc. But they are. I certainly have no control, or the wannabe celebrity guff would be toast already.
If the tech press can't move beyond the Skynet style hysteria, and the superintelligence hype, then who's going to? We'll find AI regulated to death before we ever get anything done with it - like a talkie toaster, or even just a toaster that doesn't burn toast.
-
Wednesday 25th April 2018 18:39 GMT jake
Re: The article is missing the point
Kim's hardly an important part of the mix. Like a teenager, he shot his wad as soon as he was able. Now he's out of raw materials and incapable of getting it up anymore (unlike the teenager ...). This, and only this, is why the loon is talking about peace ... he has no toys left to make his blustering sound dangerous.
-
Wednesday 25th April 2018 10:51 GMT Ken 16
2040 is a long way off, and who knows what computers will be capable of by then
It's only 22 years and I plan to be there - in 1996 I was working in IT, tapping away on an IBM ThinkPad booting Windows 95. It was on the LAN and had internet access. There was an IP phone on my desk. In the evenings I was mucking about with Linux.
Any recent graduate could walk into that office and, aside from cursing the speed and wondering why the desktops had such big monitors, would recognise everything there. Computers will be capable of binary arithmetic then, as now, nothing else. They'll be smaller, pervasively networked and we'll have figured out more uses for them but they're basically not that complicated and won't become so.
-
Wednesday 25th April 2018 21:07 GMT Wobbly World
Don't panic!!
My nearest nuclear bunker, in the event of a nuclear armageddon, is built with an old age pensioner housing complex above it, specially designed to collapse in a manner that gives the bunker some additional protection. It sort of kills two birds in one go, leaving no old fogies to worry about in the post-apocalyptic zombie aftermath and concealing the site of the bunker!! What worries me about the AI is that it might fail to give me sufficient warning to enable me to get into the bunker. I also worry that we won't have enough beans to last till it's safe to emerge!!...
Pip. . .Pip. . .