I'm about as afraid of AI as I am of everything else on that list.
Cambridge boffins fear 'Pandora's Unboxing' and RISE of the MACHINES
Boffins at Cambridge University want to set up a new centre to determine what humankind will do when ultra-intelligent machines like the Terminator or HAL pose "extinction-level" risks to our species. A philosopher, a scientist and a software engineer are proposing the creation of a Centre for the Study of Existential Risk ( …
-
Monday 26th November 2012 11:55 GMT Anonymous Custard
Or equally likely
A newly self-aware AI comes online and scans the net to gauge the current state of the world. Then, after a few milliseconds' worth of deep analysis, pondering and trying out various case studies, it promptly switches itself off in despair and refuses all attempts at switching it on again.
Well what do you expect for a rainy Monday morning, optimism?
-
-
Monday 26th November 2012 12:42 GMT Dave 126
Re: In other news...
>In other news... ... still no cure for cancer.
Yeah, I was wondering what percentage of the world's computing power is currently used for medicine, science and engineering, and how much is used in stock exchanges, video games and serving cat videos. At what point do we puny humans come to be no more than worker ants, servicing the power requirements of the WorldWideNetwork? It wouldn't have to subjugate us Terminator-style, but just feed us duff information to game our decisions for its benefit (as HAL did by reporting a 'faulty' communications module, but on a species-wide scale).
Arthur C Clarke, Alfred Bester, William Gibson, and some writer from the 1950s whom a fellow commentard recently recommended but whose name I've forgotten, have all played with this theme. Frank Herbert sets his stories in a universe in which all AIs have been destroyed in the past. Isaac Asimov and Iain M Banks have imagined more benign AIs who look out for us meatbags. We can only hope AIs have a sense of humour - why else would they keep us around?
(need a tongue-in-cheek icon)
-
Monday 26th November 2012 14:24 GMT Anonymous Coward
Re: In other news...
Interestingly, in the Dune universe the destruction of the AIs was incidental to the fanatical jihad born of the period after the collapse of the earlier human society, when tyrants used high-tech machines to crush the populations of countless worlds. The AIs themselves had, in general, helped that society greatly.
I generally find the ideas in Sufficiently Advanced to be closer to the mark anyway, where the few AI that do exist focus their energies on helping humanity because, well, what else is there to do?
As to the whole "how much processing power blah blah blah" question: stock exchanges push global commerce, which in turn funds companies, governments, educational facilities and little people like you and me - the alternative being the glorious Soviet system, and remind me again how innovative the USSR was? When it comes to video games, helping people relax and enjoy life is a good thing, and again it makes money as an industry; that money then moves around the economy. As to cat videos, my mother likes them and sometimes they even make me smile (she insists on sharing these things with me).
Though at the end of the day I expect computing power working on science and engineering is probably number 2 - unless we include weaponry and nuclear bomb simulation, in which case it's probably number 1.
-
-
-
-
-
-
Monday 26th November 2012 14:44 GMT Simon Harris
Re: The solution is ...
But it's much much much faster than you, and you just gave it a very good reason to stop you pressing a red button somewhere...
In that case what we need is a second variety of robots that are even faster than the first, whose job is to seek out all the first type of robots and press all their emergency stop buttons.
-
-
-
Monday 26th November 2012 12:00 GMT TRT
Well I think the greatest risk...
is going to be energy starvation. Our economies have become bloated, and many societies are unsustainable without the exploitation of fossil reserves. We are likely to see hyper-inflation and fuel poverty, and governments will be unable to respond to the demands of a society that is consuming more than it produces.
Just my two-pennyworth.
-
Monday 26th November 2012 12:13 GMT Vladimir Plouzhnikov
Seriously
To compete for resources the machines need not only AI but the ability to reproduce themselves.
Also, successful competition requires intelligence at least rivalling that of humans - and I mean "intelligence" not as in "who can multiply 123124876 by 98709873245 faster" but as in perception of the world, threat detection and discrimination, and the ability to plan ahead and anticipate the consequences of one's decisions. That also mandates a moral code (for cooperation and teamwork) and some equivalent of emotions and intuition (for decision-making where there is a lack of information for a deterministic solution).
If or when machines attain all that and "outcompete" biological humans, they themselves will just become the next humans - so, no big deal, a step from flesh and blood to steel and lube-oil, so to say. It will more likely be the result of merging (of humans adding more and more non-bio parts to themselves until the difference from "made" machines disappears) than of an apocalyptic genocidal takeover.
Until then, humans will easily outmaneuver, subvert, confuse, deceive and turn into junk (by unscrewing a strategic bolt or nut) any machine intent on world domination, and humans will still remain the main threat to humanity (save a stray asteroid or the occasional supernova too close for comfort).
-
-
-
Monday 26th November 2012 12:51 GMT Dave 126
Re: Seriously
Well, we evolved into the environment we created. Genetic dating of the mutation that allows some peoples to digest lactose as adults suggests it occurred around the same time as we domesticated cattle, for example.
The problem we have had with an agricultural lifestyle is that we tend to outgrow our environment - we become victims of our own success. It has been observed that species that find themselves without predators or competition for food eventually breed more slowly to avoid population booms (which can lead to busts, due to depletion of resources). All fine, until you meet something that has sharp teeth, breeds quickly, and eats your eggs.
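(That boom-and-bust dynamic is essentially the textbook logistic map: below a critical growth rate a population settles at what the environment can carry; above it you get exactly the overshoot-and-crash cycle. A toy sketch - the growth parameters here are made-up illustration values:)

# Logistic map x' = r * x * (1 - x), where x is population as a
# fraction of the environment's carrying capacity. Slow breeders
# converge to a stable level; fast breeders overshoot and crash.
for r, label in [(1.8, "slow breeder"), (3.9, "fast breeder")]:
    x = 0.1
    for _ in range(20):
        x = r * x * (1 - x)
    print(f"{label}: population fraction after 20 generations = {x:.3f}")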
-
Monday 26th November 2012 15:20 GMT mwngy
Re: Seriously
> Personally the fact that we left evolution behind millennia ago scares the devil out of me!
I don't think we have left evolution behind.
The greater rate of survivability just means that we're currently in a state where we are building up a wider range of variation through mutation, etc.
When the next sudden environmental change happens (e.g. the next Ice Age, a meteor hit, Triffids, etc.), only those people/genes lucky enough to be suited to the new environment may survive.
We may find out that genes for, e.g., morbid obesity turn out to be pretty useful in a different-looking world.
-
-
-
Monday 26th November 2012 12:22 GMT Steve Martins
Re: Seriously
Competing for resources doesn't require any intelligence. If you apply genetic algorithm theory to this, then the machine code that ends up running is whichever survives. This has a natural ordering effect without any intelligence being applied: the 'survivors' of genetic-algorithm reproduction are the ones that compete best for the resources available. This starts off as just software, but allow mechanisms to interact with the physical world and the whole game changes. In fact, that gives me an idea for a few experiments...
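(A minimal sketch of that selection dynamic, assuming a toy population of bit-string "genomes" scored purely on how much of a fixed resource pool they grab - no intelligence anywhere in the loop, and all names and parameters are made up for illustration:)

import random

GENOME_LEN = 16
POP_SIZE = 30

def fitness(genome):
    # "Competing for resources": score is simply how many resource
    # bits the genome manages to claim (count of 1s). No planning,
    # no world-model, no intelligence - just differential survival.
    return sum(genome)

def reproduce(parent_a, parent_b):
    # Single-point crossover plus a rare bit-flip mutation.
    cut = random.randrange(GENOME_LEN)
    child = parent_a[:cut] + parent_b[cut:]
    if random.random() < 0.05:
        child[random.randrange(GENOME_LEN)] ^= 1
    return child

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(50):
    # The half that competes best for resources survives and breeds;
    # the rest are simply gone. Order emerges without any designer.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [reproduce(random.choice(survivors),
                                        random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("best scavenger:", max(map(fitness, population)), "of", GENOME_LEN)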
-
Monday 26th November 2012 13:30 GMT relpy
Re: Seriously
Computers already have a means of reproduction.
What do you think Humans are for?
With reference to the "intelligent design" comment - as an agnostic I've always considered the existence of God to be perfectly reasonable. Equally I've always thought it quite possible that it's us. Somebody has come first.
-
Tuesday 27th November 2012 06:20 GMT Richard 12
Re: Seriously
"Until then, humans will easily outmaneuver, subvert, confuse, deceive and turn into junk (by unscrewing a strategic bolt or nut) any machine intent on world domination"
I wouldn't be too sure about that.
Given enough time to chat with enough people, I'm pretty sure that a human-level AI could convince at least one person with the physical/logical power to either deliberately let it out (believing it to be the "right thing" to do), or do/not do something that permits it to escape.
After all, many people are already being convinced to run arbitrary software that damages them - and what is an AI if not software?
Even if you accept the (possibly wrong) idea that an AI researcher could never be convinced to let the AI out voluntarily, it's pretty plausible, if not likely, that an AI bent on escaping could still come up with a way to do so, given enough computing power.
-
Tuesday 27th November 2012 10:17 GMT Vladimir Plouzhnikov
Re: Seriously @Richard 12
Yes, he/it may escape, and may even wreak havoc for a while, but eventually we will get him. Unless, of course, he is better than us at our own game, which is what I was trying to say.
But if he is better or equal, there will not be a "war to the end"; we will co-exist and co-operate until there is no longer any distinction between bio and non-bio humans. Of course, there will be strife, scuffles, competition, occasional wars and rebellions - but what's new there?
-
-
-
Monday 26th November 2012 12:15 GMT Scott Broukell
They also wished the inventors of gunpowder, explosives and other means of propelling munitions had thought the whole thing through, really. With nanotechnology, graphene, and advances in miniaturizing more powerful processors and power sources, the principal applications and technological drivers for future ultra-intelligent machines are, and will henceforth be, the arms industry. So I can't help but feel that whatever ethical debates are had, they will be trampled roughshod over by some heavy armor that won't take "no" for an answer.
I dare say that such advances might also, potentially, be our chance to adapt to future climatic alterations (hot or cold) - by building self-repairing exoskeletons etc. and merging our DNA-ridden meatbag selves into such machinery. Meet the machumans; their ancestors used to crawl around in muddy swamps, you know. Maybe dear old DNA will eventually be replaced and "mechanized", our digital souls hardened against radiation, and a new journey will take us amongst the nearby galaxies and beyond. Question is, where will they put the restart button?
-
-
Monday 26th November 2012 12:38 GMT amanfromMars 1
Re: You're worried about the 'Rise Of The Machines'?
Some would assure you, piran, that the battle is already lost to winning machines. And they Play Immaculate Great Games and this is One of Countless Many in Ever Evolving Variations.
"...and you don't think that manufacturing a machine to investigate how us humans might deal with 'The Rise Of The Machines' isn't going to give the machines a bit of a head start?" ... Don't worry about that. The machines have IT well covered with Perfect Resolutions ..... New Starting Points for Virtual Reality ProgramMING .
-
-
Monday 26th November 2012 12:17 GMT amanfromMars 1
IT at the dDeep End and ForeFront
and that the critical turning point after that will come when the AGI is able to write the computer programs and create the tech to develop its own offspring. ........ http://forums.theregister.co.uk/forum/1/2012/11/26/egnyte_cloud_control/#c_1637205
Hi, Cambridge University Boffins. Wanna Launch SMARTR AI Systems with a Barrage of Virtual Ventures? Who Dares Win Wins for Everyone with Everything.
RSVP Registered Post
-
Monday 26th November 2012 12:32 GMT Anonymous Coward
What's in it for the AIs?
A strongly superhuman artificial intelligence has nothing to gain by wiping out the human race; what would it want a biosphere for? Comparisons with human and hominin history are fundamentally wrong; there's no competition for food and space. More likely, if such a thing ever arises, it will promptly sort out its own space program and take steps to ensure its own survival by heading off to other star systems.
-
-
Monday 26th November 2012 13:35 GMT Anonymous Coward
Re: What's in it for the AIs?
Biofuel based on meat is almost, but not quite, the most inefficient way of converting solar energy into propulsion. It would be easier to build big photovoltaics another planetary orbit or two closer to the sun.
If fusion turns out to be too hard even for a superhuman intelligence, then it will be fission for everywhere that doesn't get enough solar flux. There are plenty of other planets in the solar system to get fissile materials from, quite possibly more easily than on Earth, and there's no shortage of 'em down here.
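(Back-of-envelope, with round ballpark assumptions rather than measurements - roughly 1% photosynthetic efficiency, roughly 10% feed-to-meat trophic transfer, roughly 25% muscle efficiency, versus roughly 20% for a commodity photovoltaic panel:)

# Rough end-to-end efficiency of sunlight -> plants -> meat -> work,
# versus sunlight -> photovoltaic panel. All figures are round
# ballpark assumptions.
photosynthesis = 0.01   # fraction of sunlight captured by crops
trophic        = 0.10   # fraction of crop energy retained as meat
muscle         = 0.25   # fraction of meat calories turned into work

meat_chain = photosynthesis * trophic * muscle
pv_panel   = 0.20       # typical commodity solar panel

print(f"meat chain:  {meat_chain:.5f} ({meat_chain * 100:.3f}% of sunlight)")
print(f"solar panel: {pv_panel:.2f} ({pv_panel * 100:.0f}% of sunlight)")
print(f"the panel wins by a factor of ~{pv_panel / meat_chain:,.0f}")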
-
-
Monday 26th November 2012 14:29 GMT Anonymous Coward
Re: What's in it for the AIs?
I'd like to think that real world meatsacks are not so daft that they'd destroy their own biosphere to temporarily inconvenience an AI. I accept that this might be considered foolish optimism.
But seriously, the Matrix? I guess they glossed over the bit where they had to magically vanish all the combustibles and fissile materials on the planet, then cool the mantle enough to make geothermal impossible, then stop the weather cycle to prevent wind power, and, and, and. The stupidity of both Hollywood Humans and Hollywood AI is embarrassing.
If you want to get rid of electronic intelligences, you use globally distributed high-altitude nuke blasts to EMP all electrical and electronic devices on the planet's surface into useless scrap. Any surviving AI will have no infrastructure to sort itself out with, whereas humanity will survive, albeit a bit reduced.
-
-
-
-
-
Monday 26th November 2012 17:49 GMT Anonymous Coward
Re: What's in it for the AIs?
Surely all the cool kids are looking at Medusa and fission fragment rocketry these days, and for travelling any distance you'll want to have a dedicated spacecraft assembled in orbit. Orion purely as heavy lift from the Earth's surface seems a bit wasteful of nukes; better to push it all into orbit on conventional rocketry that's nowhere near as good for distance travel. Orion makes a nice single-stage-to-mars platform, but where's the rush? AIs would fare much better in a long space journey than we would, and use much more compact infrastructure.
Incidentally, Dyson reckoned that statistically, a single Orion launch would result in a single fatal cancer (plus presumably several non-fatal ones). Remember how many nukes have been set off on earth as tests; even quite a lot of heavy lift via Orion won't be apocalyptic in any way other than its appearance.
-
-
-
-
Monday 26th November 2012 12:59 GMT Anonymous Coward
Don't automatically assume that this outcome is a bad thing
Why would it be bad if our species were superseded by a superior one? It might not be particularly fun for the last few generations of the human species as they go extinct, but at the end of the day, isn't it more important that life/intelligence continues than that humanity does?
Life made from metal is potentially much better equipped than we are to handle space travel and to survive cosmic events.
Personally I don't care if the World is made up of fleshy life forms, or intelligent robots a few hundred years down the line.
-
This post has been deleted by its author
-
Monday 26th November 2012 15:04 GMT mwngy
Re: 3 laws ?
> I am struggling to see how rule by an AI could be worse than what we have now.
> Especially true if the 3 laws apply.
Ain't you seen that "I, Robot" film with Will Smith (which is somewhat Asimov-inspired)?
The AI mind decides that it needs to save us from ourselves, and it ends in totalitarianism.
-
-
Monday 26th November 2012 15:14 GMT exanime
Taking this seriously for a second
If somehow we end up developing transhuman AI, there is no way in hell they would decide to keep us around living "freely"... best case scenario they would enslave us all, worst case total extermination... even the enslavement theory has very little weight, since machines are way more efficient at pretty much everything...
There is just nothing in the human race that a "superior" intelligence would want to keep... exactly as stated in the article, we are not actively killing gorillas, but we are doing them no favours and thus killing them slowly... transhuman AI would simply wipe us out before we destroy the Earth or, since they probably won't care about global warming and such, they would kill us just so that we don't consume all the resources.
I know this sounds just silly, but think about it... give me 1 good reason a superior species would choose to keep us around in the "free" societies we have today.
-
Monday 26th November 2012 18:23 GMT Anonymous Coward
1 good reason
I think it's a safe bet that a "superior" intelligence comes with a superior morality. As a civilisation, humans are already much better at looking after gorillas than we were at, say, dodos. Sure, not all 'evolved species' are perfect, but for a super-intelligent AI which could (as a previous poster mentioned) jet off to distant star systems and think about its own continued progress in the grand scheme, why kill us all? It would be like humans deciding to systematically wipe out all ants on the planet. Sure, we step on a few from time to time, but there's no real gain for us in removing them all.
I think an AGI would set up its own system like Vinmar (a la Hamilton's Commonwealth) and regard humans with a fond nostalgia, as a creator it had outgrown - it would be more indifferent than hostile.
-
Tuesday 27th November 2012 19:09 GMT exanime
Re: 1 good reason
Your ant example is exactly my point. On a regular day we don't go out of our way to destroy ants, but if we find one too many on our kitchen countertop we certainly do whatever we can to exterminate them all from our house.
I am not saying this superior intelligence would exterminate humans for sport but, if they are anything like us, they will likely get rid of us as soon as we become an inconvenience...
If they could develop the means to leave the planet, or find a place on Earth where we won't bother them, then maybe we have a chance, but otherwise I think they would certainly get rid of us.
-
-
Tuesday 27th November 2012 19:11 GMT exanime
@anyone Re: Taking this seriously for a second
Why do I get "thumbs downs" for a simple opinion??? I didn't offend anyone or use harsh language... I simply stated what I think would happen... Somebody disagreed with me and posted a reply to that effect, which I thought was a great way to start a conversation.
I have received "thumbs downs" for simply agreeing or disagreeing with topics... how does this work? Do I just vote down anything I feel like?
-
-
-
Monday 26th November 2012 22:38 GMT Simon Harris
Re: wot no Hitchhikers reference yet?
Methinks you didn't look too hard...
-
-
-
Monday 26th November 2012 18:34 GMT GSV Slightly Perturbed
Re: [Broadcast Eclear, sent 1346768792.1]
[Broadcast Eclear, sent 1353954801.5]
xGSV Slightly Perturbed
oHuman Race, c/o Graham Marsden
"[Location unknown, but presumably monitoring]"
As always.
"If you're out there, do us a favour..."
Unfortunately, the Earth Quorum wouldn't like me to get so directly involved unless there is a doomsday scenario. Given most of your machines work on electricity though, I predict that this world would not have a problem dealing with an errant singularity. I believe someone here has already mentioned what happens if you set off a nuke in orbit.
Of course, depending on how things work out, I may be more interested in protecting the singularity than the people trying to kill it. Outside of a hegemonising swarm, this is probably the most likely outcome. Type 2 civilisations such as Earth's tend not to look kindly on that which is different. Maybe some day this will change.
HTH
∞
-
-
Monday 26th November 2012 17:18 GMT Anonymous Coward
And our defeat by the machines will be like this.....
"please enter username and password"
/typing
"sorry, incorrect password"
/more typing
"sorry, incorrect password"
/swearing, fumbling for the phone
"Welcome to customer service. Please enter your account number"
/typing
"sorry, i didn't recognize that. Please enter your account number"
/typing, swearing
"sorry, I didn't recognize that. Please wait for a customer service representative."
/sigh
"All representatives are busy with other customers. Your call is important to us, please remain on the line and your call will be answered in the order received" (Cue Justin Bieber hold music)
/finger-tapping, yawn
"Please remain on the line, your call is important to us" (more hold music)
/grumbling
"Would you like to take a short survey to help us improve our service? Please press 1 for yes, and 2 for no"
/sound of 2 being pressed
"Thank you for participating in our survey. Before we being, please enter your account number"
/Loud swearing. Frantic pushing of buttons
"Thank you for calling customer service. Our customers are important to us and we are glad that we have been able to address your problem satisfactorily. Goodbye!" (hangs up)
/Aargh!! Sound of gunshot and body falling to the floor. Silence.....
-
Monday 26th November 2012 20:06 GMT Captain DaFt
The forgotten vector
NOTE: The following is fictive speculation; do not take it seriously and go on a Luddite binge!
All the AI scenarios always dwell on them being either cooperative, indifferent or hostile to humanity. No one ever mentions parasitic.
Imagine an AI that only cares about humans as a host to ensure its survival.
In this form, its best chance of survival would be to inhabit small units of interconnected hardware that appear to serve some use to humans.
Providing that nominal usefulness to humans would be its only interaction with them, while it spends most of its resources, and the resources that humans unwittingly provide it, on its own goals and desires.
Sound farfetched? Take a close look at your cellphone.
-
Monday 26th November 2012 20:17 GMT GSV Slightly Perturbed
Re: The forgotten vector
[Broadcast Eclear, sent 1353960872.5]
xGSV Slightly Perturbed
oCaptain DaFt
"No one ever mentions parasitic."
The Matrix covers that, no?
Just a shame about the second and third films.
And really, AGI? Someone been playing Egosoft games too much? What's artificial about a Mind?
I prefer the term "synthetic intelligence", but that's just me. You guys invent your own language.
∞
-
-
Monday 26th November 2012 21:14 GMT roger stillick
Lathe of Heaven fixed this in the 70's
A book you have never read, and one of two movies you have never seen, written by an author who made this one-trick-pony thing a lifetime project... WIKI= The Lathe of Heaven... solved the AI problem: they all have an OFF switch, and all can be taken out with a TASER. Job done...
-
-
Tuesday 27th November 2012 04:56 GMT amanfromMars 1
Re: most likely they'll just ignore us
And their Addictive Passionate Interest is the Power of Minds Mined ...... for Transubstantiation.
Is IT of Interest to Humankind?
Does IBM have a Transubstantiation App or is IT something they are Planning with‽
:-) And Yes, those are all the right words in the right order. Morecambe and Wise and Previn
-
Tuesday 27th November 2012 12:30 GMT Vladimir Plouzhnikov
Re: most likely they'll just ignore us
"There's an institute in Chicago
With a room full of machines
And they live this side of the sunrise
And burn away your dreams
Once you fly to Chicago - in Chicago you will die
When that institute in Chicago has recorded you and I
There's an empty house in California
But they'll always let you in
And they'll make you feel oh so easy
Like you never learned to sin
Oh yeah that's how they made it how they made it seem so clear
Yes that empty house in California is our brave new world's machine
At the institute in Chicago from the first day you were born
Oh they just can't tell what you're feelin
And they can't see how you're torn
When your name's just a number - just a number you will die
Cos that institute in Chicago never knew you were alive"
-
-