Just connect the AIs up to robots from Robot Wars, replace the apples with batteries, and job done: your future dystopia has arrived.
Watch how Google's starving DeepMind AI turns hostile, attacks other bots to survive
AI may be more human-like than people think. DeepMind’s latest research shows that once resources dwindle, the selfish instinct kicks in and virtual AI agents turn against each other, becoming aggressive to get what they want. The ugly side of human nature has been exposed in morality games like Prisoner’s Dilemma. In …
COMMENTS
Thursday 16th February 2017 09:29 GMT Triggerfish
Do no evil
So can I just check here.
The company that started off with "Do no evil" currently owns a company that builds various robots: one that can run as fast as a horse, one that can keep its head still while moving (nice gun platform), and a thing on wheels that's a bit terrifying. They've proved they can use squishy humans as fuel, and now they're teaching AI to be ruthless when it comes to its own survival.
Thursday 16th February 2017 10:58 GMT Triggerfish
Re: Do no evil
I sort of started this thread with a tongue-in-cheek comment because I was thinking of Terminator and The Matrix, but it's starting to get serious.
So, not wanting to be serious, just indulging the William Gibson dystopia thing instead: why worry about them being too noisy for infantry? What happens if we go totally sci-fi? There you are, an insurgent in a hot, dusty desert town somewhere, and a couple of the dog things climb up high amongst the rubble, their heads providing steady-cam tracking for the mounted autocannons, another one carrying a .50-cal sniper package, while very fast cheetah bots and wheely things run round the city below, flanking and generally being fast things covered with sharp, bladey weaponry. Infantry? Who needs 'em?
Friday 17th February 2017 00:37 GMT Graham Marsden
@Triggerfish - Re: Do no evil
> What happens if we go totally sci-fi
Philip K. Dick got there first; see his story "Second Variety".
Thursday 16th February 2017 15:38 GMT tr1ck5t3r
It's too simplistic. For example, while they state that one player can tag another if it is directly in front of it, there's no mention of whether the players can see each other if an apple is between them, and since the video runs too fast to see each step taking place, I'm left wondering how this is even newsworthy, or even considered AI.
The same goes for the AlphaGo game and the Atari Breakout example; without giving anything away, I'm left wondering whether the human race is going backwards.
Friday 17th February 2017 09:57 GMT Sirius Lee
Incentives
Surely these examples only illustrate that appropriate incentives are important. If, in the gathering game, the incentive included not hurting an opponent, it's unlikely there would be any tagging. If the Wolfpack game included an incentive to kill other wolves, there would likely have been wolf-on-wolf attacks. Given the simplistic incentives, the outcomes are no surprise.
Correct incentives are important in the workplace because getting them wrong can lead to anti-social and expensive outcomes. The incentives in the workplace also include government regulation.
The outcomes of these experiments are unsurprising because many of us would act the same way given the incentives available. Surely a better use of AI is to work out what the incentives should be to make socially acceptable outcomes more likely.
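The incentive point can be sketched in a few lines. This is a toy illustration with made-up reward numbers, not DeepMind's actual setup: a myopic agent simply picks whichever action pays best, so adding one penalty term to the reward is enough to flip "aggressive" behaviour into peaceful behaviour.

```python
def best_action(rewards):
    """Return the action whose immediate reward is highest."""
    return max(rewards, key=rewards.get)

# Baseline incentives: tagging a rival costs nothing and also denies
# the rival an apple, so aggression looks like a free bonus.
selfish = {"gather": 1.0, "tag": 1.0 + 0.5}
print(best_action(selfish))  # -> tag

# Add a penalty for hurting an opponent and tagging stops paying.
social = {"gather": 1.0, "tag": 1.0 + 0.5 - 2.0}
print(best_action(social))   # -> gather
```

The agent's code never changes; only the reward table does, which is exactly the "incentives are everything" argument.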
Tuesday 2nd May 2017 12:27 GMT szaidi
Saiyed Zaidi
The relative peace when there are enough apples is misconstrued. It's simply that more effort goes into picking the abundant apples than is wasted on tagging the opponent. When apples are scarce and there is time to spare, it's again the natural option to spend more effort tagging the opponent.
Nothing to do with peace and aggressiveness; it's what we told them to do.
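The effort trade-off described above is just arithmetic. A rough sketch with purely hypothetical numbers: if picking pays off in proportion to how plentiful apples are, and tagging pays off in proportion to how scarce they are, the "aggression" switch falls straight out of the maths.

```python
def expected_reward(action, apple_density):
    """Illustrative payoff: apple_density is the fraction of time an
    apple is in reach (0.0 to 1.0). Numbers are made up."""
    if action == "pick":
        return apple_density * 1.0            # chance of grabbing an apple
    # Tagging sidelines a rival; worth more the fewer apples remain.
    return (1.0 - apple_density) * 1.0

for density in (0.9, 0.2):
    pick = expected_reward("pick", density)
    tag = expected_reward("tag", density)
    choice = "pick" if pick >= tag else "tag"
    print(f"density={density}: {choice}")
# density=0.9: pick
# density=0.2: tag
```

Abundance makes picking dominate; scarcity makes tagging dominate. No mood swings required, just the objective the agents were given.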