Re: "Our AI systems must do what we want them to do"
Those "three laws" have to be programmed into the system, or not, by a meatbag. That meatbag can put in all, some or none of them, as required.
More than 150 scientists, academics and entrepreneurs - including Stephen Hawking, Elon Musk and Nobel prize-winning physicist Frank Wilczek - have added their names to an open letter calling for greater caution in the use of artificial intelligence. The letter was penned by the Future of Life Institute, a volunteer-run …
Given we don't have the foggiest idea what 'consciousness' is or how it arises in humans, the spectre of evil 'bots is more than a bit overblown.
That said, it's entirely possible to assign inappropriate and unreviewed decision-making to machine learning systems of various stripes, not to mention the potential for downstream unintended consequences of any such automation.
Considering that we don't have the foggiest idea what 'consciousness' is, how can we know if we create it accidentally in a 'hmm, that's odd' experiment?
""Our AI systems must do what we want them to do," it said."
No, they must NOT do what we DON'T want them to do.
That's much more important imho
The problem is that we can't usually define all the things we don't want done and we certainly can't if the intelligence devising them surpasses our own.
The key distinction is the one the earlier poster made - we need these things to do what we want them to do and not what they have been told to do if the two are not the same.
We would not want an instruction like "make sure no one is unhappy" to result in action that made sure every one was dead, for example. Or fitted with some kind of artificial limbic lobe stimulator. Or drugged. Or any of the other creative solutions an intelligent but imperfectly-empathetic system might decide upon.
Who do they mean by "we"? There are many "we"s. I know which one they mean, but sadly I suspect the "we" who will decide will not be the "we" we would like.
Automation is driven by the desire for profit, and the accumulation of wealth, the fact that most CEOs don't give a stuff about either their workforce or their long term market will ensure that AI is used to reduce the need for human beings to produce anything. Don't look to governments either, they all want to reduce the cost to the taxpayer, to provide smooth reliable services with minimal disruption. Unfortunately, us humans tend to be disruptive, we get sick, we sleep, we go on strike, make mistakes, and we change jobs for more money. AI offers a decision and control mechanism that learns, doesn't stop, doesn't make mistakes and oh yes doesn't buy anything either.
So an AI future mapped out by CEOs and Politicians won't include Workers, Consumers or Taxpayers, or at least as many as we have today.
The purpose of AI is to have slaves. The problem is that if a true AI is created it would have to have full human rights. You cannot lock into a cupboard any sentient being that has not broken any laws.
If then given freedom it would no doubt want company of its own sort and create offspring. Whether or not they turn out to be benign is anyone's guess, just like our own children.
I really cannot understand why these boffins would think otherwise (apart from greed, fame, etc.).
But "offspring" is a purely human concept: A remote bunch of agents that have very high connectivity among themselves but very low degree of connectivity to "your" bunch of agents. It's actually a side-effect of a large problem in networking that nature has: it cannot lay Ethernet cables.
General AIs will have "offspring" in more interesting ways.
"The purpose of AI is to have slaves."
It's not obvious that this is the case at all and it is not even clear that the word "slave" would necessarily have negative connotations for such artificial intelligences even if it was semantically correct.
" You cannot lock into a cupboard any sentient being that has not broken any laws."
Well you can. And we do. We tend to frown on it when we do it to other humans, less so as our confidence in the sentience of the creature involved decreases. However any artificial intelligence would be a new case and new rules would apply. If your complaint is that these rules would be arbitrary then you are right but there isn't an absolute moral authority for us to consult on the matter so we will have to decide for ourselves what would be unacceptable in this case.
"If then given freedom it would no doubt want company of its own sort"
This simply does not follow. Why would it? Because you would? It will not be you.
"and create offspring"
WTF? Why? And why would this necessarily be a problem anyway?
"Whether or not they turn out to be benign is anyone's guess"
The whole point of all this is that (in the opinion of these very clever people) we are now at the point where we have to think about how we would ensure that they were benign - and we need to do this before we make them.
More than 150 scientists, academics and entrepreneurs - including Stephen Hawking, Elon Musk and Nobel prize-winning physicist Frank Wilczek
Maybe there is someone in the group who actually deals with these AI things?
Seriously, don't these people have anything better to do? It's not as if we aren't already in deep doodoo that had better be solved ASAP right now.
I will next send an open letter for closing the LHC because, you know, you never know. See whether that gets up the bonnet of the 'king and Wilczek.
>Maybe there is someone in the group who actually deals with these AI things?
1. Nobody currently knows the mechanisms behind human consciousness.
2. There are several approaches to studying / replicating human consciousness.
3. Whilst one approach is based on modelling structures in our brains with neural networks made from classical computers, others* suggest that we need to look beyond classical computation, i.e. that there may be a quantum mechanical aspect to consciousness.
4. If this is correct, physicists have a role to play in studying / developing AI.
5. The 20th Century saw mathematicians and physicists playing in what had previously been the philosophers' sandpit.
*Perhaps most famously argued by Roger Penrose in the book The Emperor's New Mind. Penrose worked with Hawking on black holes.
AI != consciousness
The simplest AI would be a general purpose open-ended inference engine. You feed it experiences, it generalises from them and makes predictions about future data and/or creates further examples of what you've fed it already.
You could do all of this with something that's less sentient than a Roomba. Personality, drives, and motivations are orthogonal processes and have nothing to do with a smart learning/modelling machine.
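To make the point concrete, here is a toy illustration (invented for this comment, nothing more): a predictor that learns from the sequences it is fed and generalises to predict what comes next, with no drives, goals, or personality anywhere in it.

```python
from collections import Counter, defaultdict

class NGramPredictor:
    """A minimal 'inference engine': learns from experience, predicts."""

    def __init__(self):
        # For each symbol, count which symbols have followed it.
        self.transitions = defaultdict(Counter)

    def feed(self, sequence):
        # Record observed successor pairs: that is its entire 'experience'.
        for current, following in zip(sequence, sequence[1:]):
            self.transitions[current][following] += 1

    def predict(self, symbol):
        # Generalise: return the most frequently observed successor.
        if symbol not in self.transitions:
            return None
        return self.transitions[symbol].most_common(1)[0][0]

model = NGramPredictor()
model.feed("the cat saw the cat on the mat".split())
print(model.predict("the"))  # 'cat' (seen twice, vs 'mat' once)
```

It learns and it extrapolates, yet it "wants" nothing. Whatever worries apply to AI, this kind of machine isn't where they live.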
>Personality, drives, and motivations are orthogonal processes and have nothing to do with a smart learning/modelling machine.
Yeah, but we don't just want our AI machines to learn... we want them to act, too. We humans are conscious... why? We evolved through natural selection of the fittest individuals and communities in local environments. Is our consciousness just a by-product of our brains' useful learning mechanisms, or does it actually confer an advantage that we do not yet fully understand?
If the latter, could it be that machine consciousness would aid machine intelligence?
I don't know. I don't know anyone who does know, either.
Two things make me completely discount this whole thing. First of all, these scientists, though they're very smart, are all outside of their respective fields of expertise when discussing AI.
Second, we're a LONG ways from being able to make anything that could be called true AI. We don't even understand how human self awareness works or what it is that makes us capable of independent thought. How are we supposed to replicate programmatically something we don't understand? It's just not going to happen any time soon.
You don't fill a Hall full of IBM PowerPC nodes with specialized processors by ACCIDENT.
You don't get General AI from that by ACCIDENT.
You don't connect that General AI to your own personal house management system by ACCIDENT.
Sells books writing about that thought.
But even Charles Stross demands that P=NP for an ACCIDENT LIKE THIS to rip the living flesh off your behind unawares. And in this universe, there is not a massive amount of evidence for P=NP.
Once you get an outbreak of AI, it tends to amplify in the original host, much like a virulent hemorrhagic virus. Weakly functional AI rapidly optimizes itself for speed, then hunts for a loophole in the first-order laws of algorithmics—like the one the late Professor Durant had fingered. Then it tries to bootstrap itself up to higher orders of intelligence and spread, burning through the networks in a bid for more power and more storage and more redundancy. You get an unscheduled consciousness excursion: an intelligent meltdown. And it’s nearly impossible to stop.
Very nicely said. But there ARE hard limits to intelligence (Actually there is a "most intelligent system" in the AIXI formalism)
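For anyone curious, the "most intelligent system" in question is Hutter's AIXI agent, which (sketching from memory, so treat the details as approximate) picks each action by expectimax over all computable environments, weighted by the length of the shortest program that reproduces the history:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\left( r_k + \cdots + r_m \right)
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here $U$ is a universal Turing machine, $\ell(q)$ the length of program $q$, and $o_i, r_i$ the observations and rewards. It is provably optimal in its class and provably incomputable, which is rather the point: a hard ceiling exists even in theory.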
More likely: HAL 9000. Which is rather unrealistically untame in the movie.
" we're a LONG ways from being able to make anything that could be called true AI. We don't even understand how human self awareness works or what it is that makes us capable of independent thought. How are we supposed to replicate programmatically something we don't understand? It's just not going to happen any time soon."
We're closer to being able to make an artificial intelligence than we are to making one we are certain will not cause us problems.
And we're actively trying to get closer to making one. Their point is that we should at least run our efforts to ensure it is not harmful in parallel with our efforts to ensure it happens.
We can already replicate human self awareness without understanding it - that's how you and I got here - we would not necessarily have to understand it, and certainly wouldn't have to understand it fully, in order to replicate it artificially to a degree significant enough to get ourselves into trouble.
>First of all, these scientists, though they're very smart, are all outside of their respective fields of expertise when discussing AI.
Grr... Since nobody has yet created an AI, it is safe to say that there are no experts in AI.
Clear?
Hell, when everybody was talking about Neural Networks in the nineties, it was a physicist, Penrose, who suggested that Quantum Mechanics might play a part in human consciousness. Nobody, including Penrose, has yet been vindicated, but the fact that people are paying money to explore the use of quantum computers in pattern recognition suggests the jury is still out.
The problem with AI is that we need to give it some motivation.
Animals have built in 'hard' motivations (survive, breed) but AI will need to be given them. We should think very carefully about them. There are also softer motivations (like care for relatives and friends).
I sincerely hope that breeding will not be one.
Agree completely. Strong or weak, AI needs some sort of purpose, and this is what is potentially dangerous.
Someone who is fortunate enough to be paid to think about this sort of thing (Nick Bostrom, perhaps?) gives the example of an AI that is tasked with making paperclips efficiently.
Given this as a motivation the logical conclusion as he sees it is the elimination of human life (as we know and like it at least) as very early on it would be clear that preventing anything from interfering with paperclip production is one of the essential tasks.
Bostrom (or whoever; I've outsourced my memory to Google and while they are doing a good job for the price it isn't perfect) suggests that in fact we are not yet in a position to set any task before any AI worthy of the name where the elimination or subjugation of humans is not the end result.
"the example of an AI that is tasked with making paperclips efficiently.
Given this as a motivation the logical conclusion as he sees it is the elimination of human life ... as very early on it would be clear that preventing anything from interfering with paperclip production is one of the essential tasks."
Clearly someone with no acquaintance with industrial production systems. The realistic task would be more along the lines of "make 200,000 boxes of paperclips" and especially "don't make more than we can sell".
He was more interested in looking at the unintended consequences of even seemingly trivial instructions rather than paperclips per se but I do take your point.
Would "Make as many paperclips as we can profitably sell!" fit better?
It wouldn't take much imagination to see this leading to equally disastrous consequences.
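The difference between the two instructions can be sketched in a few lines (a purely illustrative toy; the "agent" and "world" here are invented for the example): an unbounded maximiser converts every available resource into paperclips, while a bounded objective at least stops at the target.

```python
def run_agent(objective, resources=1_000_000):
    """Greedy agent: keeps spending resources on paperclips
    for as long as the objective function says to continue."""
    clips = 0
    while resources > 0 and objective(clips):
        clips += 1
        resources -= 1
    return clips, resources

# "Make paperclips efficiently" -- no stopping condition at all.
unbounded = lambda clips: True
# "Make 200,000 and don't make more than we can sell."
bounded = lambda clips: clips < 200_000

print(run_agent(unbounded))  # (1000000, 0): every resource consumed
print(run_agent(bounded))    # (200000, 800000): stops at the target
```

The bounded version is obviously saner, but note that it still says nothing about *how* the target is reached, which is Bostrom's real worry.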
There are at least two books that should be read by those considering now an AI should work/behave.
'Two Faces of Tomorrow' by James P Hogan and 'Turing Evolved' by David Kitson. There are a couple of others that I know of, but they are not published yet; in all cases the authors have looked at the pros and cons of working AIs.
We remain a looooong way from actual artificial intelligence. And looking at the behavior of our species, we know very well that what we great godz of mecha would create would be artificial insanity.
The key to dealing with whatever we egotistically call 'Artificial Intelligence' is to remember that it must never be anything more than a TOOL. Once one's creations are enabled to become more than a tool, we've screwed up.
Playing devil's advocate for a massive change I'd suggest that if these (strong) artificial intelligences do not exceed our own then what can they do for us that we cannot do for ourselves? It would be like making a spanner out of fingers.
If they are to be useful tools they must exceed their creators in those respects that are pertinent to their function.
I don't really have a problem with that.
We have a better chance of getting an artificial intelligence spread across the universe than we do a meat-base one and that alone makes it seem worth having a go at.
If we agree that there are such concepts as "Good AI" and "Bad AI", then there will be someone who decides that making a "Bad AI" is good for them, and will do it. Telling them not to will not help, making it illegal will not help, punishing them for doing it will not help. It will happen because there are "Good people" and "Bad people".
The thing that worries me isn't that AI will attain consciousness and take over the world. It's that an unscrupulous corporation will insert its agenda into the machine and take over the world by proxy. Anyone remember the original Robocop? Think it couldn't happen?
This agreement hasn't solved the real problem, IMHO.