"Our AI systems must do what we want them to do"
I'm going to go out on a limb here and make the following prediction:
AI will do what we told it to do, not what we want it to do.
More than 150 scientists, academics and entrepreneurs - including Stephen Hawking, Elon Musk and Nobel prize-winning physicist Frank Wilczek - have added their names to an open letter calling for greater caution in the use of artificial intelligence. The letter was penned by the Future of Life Institute, a volunteer-run …
>AI will do what we told it to do, not what we want it to do.
That was exactly the point about HAL 9000 that Kubrick and Clarke made. HAL wasn't mad, evil or malfunctioning - he was merely fulfilling to the best of his abilities the objectives that had been tacked onto the original mission at the last moment by careless mission planners, i.e. 'the law of unintended consequences'.
> it wasn't properly explained until the movie 2010
"No 9000 computer has ever made a mistake or distorted information" - HAL
"[...] there is something about this mission that we weren't told. Something the rest of the crew know and that you know. We would like to know if that is true." - Frank Poole
"Good day, gentlemen. This is a pre-recorded briefing made prior to your departure and which, for security reasons of the highest importance, has been known on board during the mission only by your H-A-L 9000 computer." - Recording in 2001
"I have come to the conclusion that for so many years films were made for the 12 year old mind that, at last, alas, our critics have emerged with 12 year old minds. " - The Lost Worlds of 2001 - Arthur C Clarke.
>Kubrick eschewed explanation for visual gibberish at the end of 2001.
Well, the protagonist Dave Bowman encounters something completely incomprehensible to him. Were it to be comprehensible to the audience, it would remove them from the character's experience.
And yet in the novelization Clarke didn't render the last few pages in scribbled hieroglyphs or any of the numerous avant-garde writing techniques in use at the time to distance the reader from literal meaning (we are right smack bang in the crest of the New Wave here); he just told everyone what was going on in plain English because - and here is the good bit - he wasn't that impressed with Kubrick's version himself.
Okay, but I get the distinct impression you are coming to the "2001 ending: Brilliant or Crap?" debate with the benefit of having seen others argue it out and having had the ending explained to you in numerous commentaries and, of course, having read the novelization of ... well, not really the film, since the book ending went down on Japetus.
See, I'm giving you the dubious benefit of my reaction at the time, when the film was shown on a curved screen in a theatre large enough to hold one, and though I had read the book and understood what was supposed to be happening, it was still visual gibberish.
"And just because he could be entertaining, doesn't mean Arthur C. Clarke couldn't be a git when his own monetary interests were at stake" - from Christ, Not Another F*cking Potboiler With 2001 In The Title Private Press, 1972.
> I get the distinct impression you are coming to the "2001 ending: Brilliant or Crap?" debate with the benefit [...]
Well you'd be wrong.
I first saw 2001 on the big screen when I was 14 in 1979 and whilst I may not have noted all the subtleties that I later learned about, I understood it.
Just because you didn't doesn't mean I have to share your opinion.
So, you saw it ten years (and more) after it was released? If I remember correctly that was the year Star Trek: The Motion Picture was released, and about six months before that we'd seen Alien, both of which drew lots of media discussion on SF in the movies, including many wry comments about the ending of 2001 (and responses to them). It was part of the SF movie zeitgeist in the UK in those days, at least the bits of it I was privileged to experience.
No, you don't have to share my opinion, but you don't have to be rude about it either.
And if you can in all honesty look at what Kubrick did in the last five minutes of the film and take away what Clarke wrote about in the last few paragraphs of the novelization of those events, I'd be more than surprised, because some very clever people used to looking at hard movies (sixties, remember, when movies could be very strange indeed) watched it and said WTF.
> you saw it ten years (and more) after it was released?
Yes, I wasn't much into sci-fi when I was 4.
And, IIRC, at the time I hadn't seen Star Trek: The Slow Motion Picture, nor had I seen Alien (although I did see that a couple of years later at the school film club, even though I was only 16!) but I'd read a lot of classic sci-fi by Heinlein, Asimov, Niven and, yes, Clarke, but not the book of 2001, which I only read after I'd seen the film.
> you don't have to share my opinion, but you don't have to be rude about it either.
Excuse me, Mr Pott, I have a Mr Kettle-Black on the phone...
Not sure where you found me being rude to you. There was no intentional insult in anything I said, just a denial of the contention that Kubrick's 2001 has an understandable ending from what he filmed, a view I remember being almost universally held when the film was new and shiny and one I subscribe to on account of having seen it many times (I actually like the movie but that's irrelevant - you can like and admire the work and still be critical of the things that don't work right).
My point on the other films was that they excited much comparison with 2001 in the media (all three channels and the radio) and revived the controversy over the ending, not that watching them conveyed some sort of badge of honor (in point of fact I never got round to watching the Star Trek movie on the big screen either). We tend to forget that in 1979 there were few notable A-list SF movies in the wild, it being the Half Decade Of The Disaster Movie and CGI in its infancy, so good SF movies garnered a lot of press.
However, if you say that as a teen consumer of SF you were completely unaware of this I'll take your word for it.
The AI scenarios that frighten me the most look like this:
Dave: HAL! If you don't reduce the surge flow now, the dam will burst and thousands will drown!
HAL: I'm sorry Dave, I can't do that.
Dave: For God's sake HAL, WHY NOT?
HAL: Budget committee Agile 777A did not approve the funds needed for the emergency flow reversal algorithm. Good bye....
@Crisp
"AI will do what we told it to do . . ."
Well, I don't know about that. The question comes down to: "what is AI?"
If AI - artificial intelligence - is truly intelligent then it should be able to make its own decisions - that's the whole damned point, isn't it?
Within the scope of what I would deem 'AI', you would set a goal (however defined) and then the programming would decide how best to accomplish it. You could set limits on its permissible actions but it would still have to be able to make decisions. Otherwise it's just a longer, more complex string of if..then..else statements.
So far as I define it*, AI must be able to take input from the world, process it and make decisions based on that information without having a specific rule on how to do so. That means that a real AI will always have the potential for unexpected results or at least unexpected actions that lead to the desired result.
If you ask me, for AI to earn the 'I', it must be able to 'understand' and handle situations and objects of which it has no prior experience or specific rules. As humans, we do this by analysing parts that we recognise but haven't necessarily seen together and weighing up whether what we know about one object (e.g. the behaviour of a person) is more important than what we know about another object (e.g. the location). We make a 'judgement call'. Or, we try to understand a situation or object by analogy with another situation or object we are familiar with.
With that kind of 'processing' and decision making, it's nearly inevitable that there will be consequences we can't fully predict. So, they might well achieve the end goal we tell them to but not necessarily in the manner we want them to. And that's kind of the point - if we want 'things' to complete a task in a rigidly-defined sequence of steps then we don't need AI!!!
* - Such that it is a useful term that actually signifies something new rather than just a more automated or complex version of something existing.
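To make that if..then..else distinction concrete, here is a toy sketch - the function names, numbers and braking scenario are all invented purely for illustration. The first function has its decision spelled out explicitly; the second just copies the action of the most similar remembered example, so its 'rule' was never written down anywhere and it can act in ways nobody explicitly coded:

```python
# Toy illustration (standard library only, all values hypothetical): the same
# decision made two ways - one hard-coded as if..then..else, one generalised
# from examples with no explicit rule.

def braking_rule(distance_m: float, speed_ms: float) -> str:
    """Explicitly encoded: a string of if..then..else statements."""
    if distance_m < 5:
        return "brake hard"
    elif distance_m < 20 and speed_ms > 10:
        return "brake"
    else:
        return "coast"

# A handful of demonstrated (distance, speed) -> action examples.
EXAMPLES = [
    ((2.0, 15.0), "brake hard"),
    ((4.0, 8.0), "brake hard"),
    ((15.0, 12.0), "brake"),
    ((18.0, 11.0), "brake"),
    ((40.0, 10.0), "coast"),
    ((60.0, 25.0), "coast"),
]

def braking_learned(distance_m: float, speed_ms: float) -> str:
    """Not explicitly encoded: copy the action of the most similar example
    (1-nearest-neighbour). The 'rule' is implicit in the data."""
    def closeness(example):
        (d, s), _action = example
        return (d - distance_m) ** 2 + (s - speed_ms) ** 2
    _state, action = min(EXAMPLES, key=closeness)
    return action

if __name__ == "__main__":
    # A situation neither function has seen before.
    print(braking_rule(17.0, 13.0), braking_learned(17.0, 13.0))
```

Scale the second approach up far enough and you get exactly the 'unexpected actions that lead to the desired result' problem.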
"If you ask me, for AI to earn the 'I', it must be able to 'understand' and handle situations and objects of which it has no prior experience or specific rules. As humans, we do this by analysing parts that we recognise but haven't necessarily seen together and weigh up whether what we know about one object (e.g. the behaviour of a person) is more important that what we know about another object (e.g. the location). We make a 'judgement call'. Or, we try to understand a situation or object be analogy with another situation or object we are familiar with."
But like with the end of 2001, what happens when the AI, which would likely have less experience to draw from than an adult human, encounters something totally outside our realm of understanding? Indeed, what happens when WE encounter the same: something for which NOTHING in our experience and knowledge can prepare us.
Or on a similar note, paradoxical instructions. In our case, we have to take conflicting instructions on a case by case basis, determining (sometimes by intuition, something AIs would probably lack) which if any of the conflicting rules apply. Example: You're told to put stuff in the corner of a circular room (meaning no corners), and there's no one around to clarify. What do we expect an AI to do when it receives a paradoxical or conflicting instruction?
Actually, it's Asimov's 'Multivac' stories that this thread brings to my mind.
In one story, Multivac, the world's largest supercomputer, is given the responsibility of analyzing the entire sum of data on the planet Earth. It is used to determine solutions to economic, social and political problems, as well as more specific crises as they arise. It receives a precise set of data on every citizen of the world, extrapolating the future actions of humanity based upon personality, history, and desires of every human being; leading to an almost complete cessation of poverty, war and political crisis.
However, Multivac harbours its own desire, and to achieve it engineers the actions of one human...
Hehe, in another story, an interaction between Multivac and two drunken computer operators has HUGE implications billions of years down the line. So, easy as you go, guys!
http://en.wikipedia.org/wiki/Multivac
Most of his Robot stories were about how robots would either follow his Three Laws to undesirable outcomes due to circumstances unforeseen by the designers, or fry their processors when they were unable to (failsafe design - if a robot is unable to follow the laws, it burns out).
One example: a robot bothers a human, who orders it to 'get lost.' The robot proceeds to do exactly that - the second law doesn't include an ability to tell if an order is meant literally or figuratively. Finding the robot proves quite problematic.
>Pull the plug?
Uninterruptable Power Supplies would already nix that line of action...
You've never seen 'Colossus: The Forbin Project' (1970). A strategic military defence computer is built with a UPS and other means to protect itself.
I like the film mainly for the unusual tone of its ending.
> You've never seen 'Colossus: The Forbin Project'
Or read "I have no mouth and I must scream" by Harlan Ellison. Quite disturbed me (briefly) when I read it as a 12-year old[1]..
[1] yes, yes - I know. It's an adult-type SF short but I had an understanding with my local librarian who allowed me to use the adult stacks as long as I didn't tell my parents..
Even though we talk as though we just know what these terms mean, "intelligence" is very difficult to define and "free will" isn't much easier.
We assume we have both (to at least as great a degree as anything we have ever observed) so we tend to define things in reference to ourselves; if something convinces us that it is intelligent then it is - or may as well be because that's all any of us has to judge another by.
We probably wouldn't be convinced by anything that couldn't set out a goal and a plan to achieve it but it's not clear that this couldn't be a deterministic system that did not have the degree of free will that we believe ourselves to have.
There's a related track considering whether determinism is in fact necessary to separate free will from random behaviour which brings us back to our ill-defined terms.
>Isn't it the premise of AI that the machines will learn and programme themselves?
Whoo, that asks too many questions...
Programme themselves to what end? What is their motive? Will they be 'bovvered'? Will AIs even have a will to live? Might they be nihilistic or depressed? Are we projecting ourselves too much when we assume that these machines will be curious? If they are originally programmed to be information-gathering, will they reprogramme this part of themselves?
Iain M Banks touched upon the idea of 'perfect AIs', and AIs that contain some of the cultural viewpoints of the races that developed their forebears... though of course he was doing so in support of the 'giant sandbox' (his Culture novels) that he had already created for himself to play in. One of his non-culture SciFi novels - the Algebraist - is set in a universe that has been through a 'Butlerian Jihad' http://en.wikipedia.org/wiki/Butlerian_Jihad
There, I don't think that the robot will have problems with understanding the above command.
The problem is not with the robot, it is the potential for profit that will undoubtedly govern the actions it is asked to do. Another greedy businessman/politician is all that it will take.
Question,
Is there a little bit of confusion between complex decision making leading to varying degrees of automation (like self-driving cars), using probability and stats, and full-blown self-awareness like Data in Star Trek: The Next Generation?
It seems like the a.i. label is being given to the former a bit too readily. My car is able to park itself quite reliably, and though it's the weirdest sensation, it's not a.i.
A self driving car might crash because it was coded badly or didn't know how to respond in an exception.
A self aware car might crash because it was day dreaming about flowers.
Cheers
All AI really means is the ability to make decisions that are not explicitly encoded.
A car that can drive itself because it has been programmed to do so is not really AI. A car that can be taught to drive by demonstration certainly is.
Of course it isn't black and white, there are degrees between the two. Most driverless vehicles have AI elements (reading signs and the road, etc) but are ultimately under program control.
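To pin down the 'taught to drive by demonstration' end of that spectrum, here's a minimal sketch - no real vehicle API is assumed and every name and number is made up. Instead of an engineer choosing the steering gain, it is fitted from logged human (lane offset, steering) pairs:

```python
# Minimal learning-by-demonstration sketch (illustrative only): fit a steering
# response from logged human corrections instead of hard-coding the gain.

# Logged demonstrations: lane offset (metres) and the steering the human
# applied in response (radians). Positive offset = drifting right.
DEMONSTRATIONS = [
    (-0.40, 0.10), (-0.20, 0.05), (-0.05, 0.01),
    (0.05, -0.01), (0.25, -0.06), (0.50, -0.12),
]

def fit_steering_gain(demos):
    """Least-squares fit of steering = gain * offset from the demonstrations."""
    numerator = sum(offset * steer for offset, steer in demos)
    denominator = sum(offset * offset for offset, _ in demos)
    return numerator / denominator

def programmed_steering(offset_m: float) -> float:
    """The 'programmed to do so' car: an engineer chose the gain explicitly."""
    return -0.20 * offset_m

if __name__ == "__main__":
    gain = fit_steering_gain(DEMONSTRATIONS)

    def taught_steering(offset_m: float) -> float:
        """The 'taught by demonstration' car: the gain came from the data."""
        return gain * offset_m

    for offset in (-0.30, 0.00, 0.35):
        print(offset, programmed_steering(offset), round(taught_steering(offset), 3))
```

The taught car behaves however the demonstrations imply, which is exactly why its behaviour isn't 'explicitly encoded' - and why it can differ from what its designers expected.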
It is also useful not to confuse AI with artificial sentience (ST's Data). You can have an exceedingly sophisticated AI which, despite being able to make uncanny and seemingly "human like" decisions is still unaware of its own existence. The two are not explicitly connected.
It's not even certain where self-awareness occurs in animals. Certainly we (humans), chimps, dolphins, elephants and a few others can be shown to be self-aware via displays of empathy and recognising our own reflections, but at what point does the behaviour of an animal start to stem from its own self-awareness rather than pure instinct and reaction to stimuli? Is a slug self-aware? A lizard? A mouse? A cat? A monkey?
Weaponize! Just in case some other AI is getting the treatment. No one wants to find themselves on the low end of an AI gap, do they?
From there it's a short hop to collusion between the multiple, weaponized AIs ("Thanks, Humans") - and humanity is a thin sprinkling of warm, but cooling, ash across the entire planet.
Seriously, though: AI is going to be weaponized before it's set to make the world a utopia (somehow defined). There isn't a single "we/us", so no "our". AI isn't going to be shared around like the atmosphere. And with AI set to come up with creative ways to more effectively keep what's currently 'ours' ours, it's likely that AI will destroy - probably all of humanity and what it has built up - because that's what it will be put to work on first and fastest.
Given we don't have the foggiest idea what 'consciousness' is or how it arises in humans, the spectre of evil 'bots is more than a bit overblown.
That said, it's entirely possible to assign inappropriate and unreviewed decision-making to machine learning systems of various stripes, not to mention the potential for downstream unintended consequences of any such automation.
Considering that we don't have the foggiest idea what 'consciousness' is, how can we know if we create it accidentally in a 'hmm, that's odd' experiment?
""Our AI systems must do what we want them to do," it said."
No, they must NOT do what we DON'T want them to do.
That's much more important imho
The problem is that we can't usually define all the things we don't want done and we certainly can't if the intelligence devising them surpasses our own.
The key distinction is the one the earlier poster made - we need these things to do what we want them to do and not what they have been told to do if the two are not the same.
We would not want an instruction like "make sure no one is unhappy" to result in action that made sure every one was dead, for example. Or fitted with some kind of artificial limbic lobe stimulator. Or drugged. Or any of the other creative solutions an intelligent but imperfectly-empathetic system might decide upon.
Who do they mean by "we"? There are many "we"s. I know the one they mean, but sadly I suspect the "we" who will decide will not be the "we" we would like.
Automation is driven by the desire for profit, and the accumulation of wealth, the fact that most CEOs don't give a stuff about either their workforce or their long term market will ensure that AI is used to reduce the need for human beings to produce anything. Don't look to governments either, they all want to reduce the cost to the taxpayer, to provide smooth reliable services with minimal disruption. Unfortunately, us humans tend to be disruptive, we get sick, we sleep, we go on strike, make mistakes, and we change jobs for more money. AI offers a decision and control mechanism that learns, doesn't stop, doesn't make mistakes and oh yes doesn't buy anything either.
So an AI future mapped out by CEOs and Politicians won't include Workers, Consumers or Taxpayers, or at least as many as we have today.
The purpose of AI is to have slaves. The problem is that if a true AI is created it would have to have full human rights. You cannot lock into a cupboard any sentient being that has not broken any laws.
If then given freedom it would no doubt want company of its own sort and create offspring. Whether or not they turn out to be benign is anyone's guess, just like our own children.
I really cannot understand why these boffins would think otherwise (apart from greed, fame, etc.).
But "offspring" is a purely human concept: A remote bunch of agents that have very high connectivity among themselves but very low degree of connectivity to "your" bunch of agents. It's actually a side-effect of a large problem in networking that nature has: it cannot lay Ethernet cables.
General AIs will have "offspring" in more interesting ways.
"The purpose of AI is to have slaves."
It's not obvious that this is the case at all and it is not even clear that the word "slave" would necessarily have negative connotations for such artificial intelligences even if it was semantically correct.
" You cannot lock into a cupboard any sentient being that has not broken any laws."
Well you can. And we do. We tend to frown on it when we do it to other humans, less so as our confidence in the sentience of the creature involved decreases. However any artificial intelligence would be a new case and new rules would apply. If your complaint is that these rules would be arbitrary then you are right but there isn't an absolute moral authority for us to consult on the matter so we will have to decide for ourselves what would be unacceptable in this case.
"If then given freedom it would no doubt want company of its own sort"
This simply does not follow. Why would it? Because you would? It will not be you.
"and create offspring"
WTF? Why? And why would this necessarily be a problem anyway?
"Whether or not they turn out to be benign is anyone's guess"
The whole point of all this is that (in the opinion of these very clever people) we are now at the point where we have to think about how we would ensure that they were benign - and we need to do this before we make them.
More than 150 scientists, academics and entrepreneurs - including Stephen Hawking, Elon Musk and Nobel prize-winning physicist Frank Wilczek
Maybe there is someone in the group who actually deals with these AI things?
Seriously, do these people have nothing better to do? It's not as if we aren't already in deep doodoo that had better be solved ASAP right now.
I will next send an open letter for closing the LHC because, you know, you never know. See whether that gets up the bonnet of the 'king and Wilczek.
>Maybe there is someone in the group who actually deals with these AI things?
1. Nobody currently knows the mechanisms behind our human consciousness.
2. There are several approaches to studying / replicating human consciousness.
3. Whilst one approach is based on modelling structures in our brains with neural networks made from classical computers, others* suggest that we need to look beyond classical computation, i.e. there may be a quantum mechanical aspect to consciousness.
4. If this is correct, physicists have a role to play in studying / developing AI.
5. The 20th Century saw mathematicians and physicists playing in what had previously been the philosophers' sandpit.
*Perhaps most famously argued by Roger Penrose in the book The Emperor's New Mind. Penrose worked with Hawking on black holes.
AI != consciousness
The simplest AI would be a general purpose open-ended inference engine. You feed it experiences, it generalises from them and makes predictions about future data and/or creates further examples of what you've fed it already.
You could do all of this with something that's less sentient than a Roomba. Personality, drives, and motivations are orthogonal processes and have nothing to do with a smart learning/modelling machine.
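As a sketch of how little machinery that needs - purely illustrative, not anyone's actual product - here's the whole idea in a few lines: feed it sequences, let it count what tends to follow what, and it will predict future data with no personality, drives or motivation anywhere in sight:

```python
# A deliberately dumb open-ended predictor (illustrative sketch): it is fed
# 'experiences' (sequences of symbols), counts successors, and predicts the
# most likely continuation. Nothing resembling sentience is involved.
from collections import Counter, defaultdict

class SequencePredictor:
    def __init__(self):
        # For each symbol, a count of the symbols observed to follow it.
        self.following = defaultdict(Counter)

    def feed(self, sequence):
        """Learn from one experience: a sequence of hashable symbols."""
        for current, nxt in zip(sequence, sequence[1:]):
            self.following[current][nxt] += 1

    def predict(self, symbol):
        """Most frequently observed successor, or None if never seen."""
        counts = self.following.get(symbol)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

if __name__ == "__main__":
    p = SequencePredictor()
    p.feed("the cat sat on the mat".split())
    p.feed("the cat ate the fish".split())
    print(p.predict("the"))  # 'cat' - generalised across both experiences
    print(p.predict("dog"))  # None - it has never seen 'dog'
```

Making the counting cleverer improves the predictions; it doesn't conjure up drives or motivations.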
>Personality, drives, and motivations are orthogonal processes and have nothing to do with a smart learning/modelling machine.
Yeah, but we don't just want our AI machines to learn... we want them to act, too. We humans are conscious... why? We evolved through natural selection of the fittest individuals and communities to local environments. Is our consciousness just a by-product of our brains' useful learning mechanisms, or does our consciousness actually confer an advantage on us that we do not yet fully understand?
If the latter, could it be that machine consciousness would aid machine intelligence?
I don't know. I don't know anyone who does know, either.
Two things make me completely discount this whole thing. First of all, these scientists, though they're very smart, are all outside of their respective fields of expertise when discussing AI.
Second, we're a LONG ways from being able to make anything that could be called true AI. We don't even understand how human self awareness works or what it is that makes us capable of independent thought. How are we supposed to replicate programmatically something we don't understand? It's just not going to happen any time soon.
You don't fill a Hall full of IBM PowerPC nodes with specialized processors by ACCIDENT.
You don't get General AI from that by ACCIDENT.
You don't connect that General AI to your own personal house management system by ACCIDENT.
Charles Stross sells books writing about that thought.
But even he demands that P=NP for an ACCIDENT LIKE THIS to rip the living flesh off your behind unawares. But in this universe, there is not a massive amount of evidence for P=NP.
Once you get an outbreak of AI, it tends to amplify in the original host, much like a virulent hemorrhagic virus. Weakly functional AI rapidly optimizes itself for speed, then hunts for a loophole in the first-order laws of algorithmics—like the one the late Professor Durant had fingered. Then it tries to bootstrap itself up to higher orders of intelligence and spread, burning through the networks in a bid for more power and more storage and more redundancy. You get an unscheduled consciousness excursion: an intelligent meltdown. And it’s nearly impossible to stop.
Very nicely said. But there ARE hard limits to intelligence (Actually there is a "most intelligent system" in the AIXI formalism)
More likely: HAL 9000. Which is rather unrealistically untame in the movie.
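For anyone wondering what the AIXI remark points at: as I understand Hutter's formalism, the 'most intelligent system' is the (uncomputable) agent that weights every program q consistent with the interaction history by 2 to the minus its length, and picks the action that maximises expected future reward under that mixture - roughly:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl[\, r_k + \cdots + r_m \,\bigr]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here U is a universal Turing machine, the a's are actions, the o's and r's are observations and rewards, and l(q) is the length of program q. It can't actually be built, but it does give a well-defined ceiling to point at.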
" we're a LONG ways from being able to make anything that could be called true AI. We don't even understand how human self awareness works or what it is that makes us capable of independent thought. How are we supposed to replicate programmatically something we don't understand? It's just not going to happen any time soon."
We're closer to being able to make an artificial intelligence than we are to making one we are certain will not cause us problems.
And we're actively trying to get closer to making one. Their point is that we should at least run our efforts to ensure it is not harmful in parallel with our efforts to ensure it happens.
We can already replicate human self awareness without understanding it - that's how you and I got here - we would not necessarily have to understand it, and certainly wouldn't have to understand it fully, in order to replicate it artificially to a degree significant enough to get ourselves into trouble.
>First of all, these scientists, though they're very smart, are all outside of their respective fields of expertise when discussing AI.
Grr... Since nobody has yet created an AI, it is safe to say that there are no experts in AI.
Clear?
Hell, when everybody was talking about Neural Networks in the nineties, it was a physicist, Penrose, who suggested that Quantum Mechanics might play a part in human consciousness. Nobody, including Penrose, has yet been vindicated, but the fact that people are paying money to explore the use of quantum computers in pattern recognition suggests the jury is still out.
The problem with AI is that we need to give it some motivation.
Animals have built-in 'hard' motivations (survive, breed) but AI will need to be given them. We should think very carefully about them. There are also softer motivations (like care for relatives and friends).
I sincerely hope that breeding will not be one.
Agree completely. Strong or weak, AI needs some sort of purpose and this is what is potentially dangerous.
Someone who is fortunate enough to be paid to think about this sort of thing (Nick Bostrom, perhaps?) gives the example of an AI that is tasked with making paperclips efficiently.
Given this as a motivation the logical conclusion as he sees it is the elimination of human life (as we know and like it at least) as very early on it would be clear that preventing anything from interfering with paperclip production is one of the essential tasks.
Bostrom (or whoever; I've outsourced my memory to Google and while they are doing a good job for the price it isn't perfect) suggests that in fact we are not yet in a position to set any task before any AI worthy of the name where the elimination or subjugation of humans is not the end result.
"the example of an AI that is tasked with making paperclips efficiently.
Given this as a motivation the logical conclusion as he sees it is the elimination of human life ... as very early on it would be clear that preventing anything from interfering with paperclip production is one of the essential tasks."
Clearly someone with no acquaintance with industrial production systems. The realistic task would be more along the lines of "make 200,000 boxes of paperclips" and especially "don't make more than we can sell".
He was more interested in looking at the unintended consequences of even seemingly trivial instructions rather than paperclips per se but I do take your point.
Would "Make as many paperclips as we can profitably sell!" fit better?
It wouldn't take much imagination to see this leading to equally disastrous consequences.
There are at least two books that should be read by those considering how an AI should work/behave.
'The Two Faces of Tomorrow' by James P Hogan and 'Turing Evolved' by David Kitson. There are a couple of others that I know of but they are not published yet; in all cases the authors have looked at the pros and cons of working AIs.
We remain a looooong way from actual artificial intelligence. And looking at the behavior of our species, we know very well that what we great godz of mecha would create would be artificial insanity.
The key to dealing with whatever we egotistically call 'Artificial Intelligence' is to remember that it must never be anything more than a TOOL. Once one's creations are enabled to become more than a tool, we've screwed up.
Playing devil's advocate for a massive change I'd suggest that if these (strong) artificial intelligences do not exceed our own then what can they do for us that we cannot do for ourselves? It would be like making a spanner out of fingers.
If they are to be useful tools they must exceed their creators in those respects that are pertinent to their function.
I don't really have a problem with that.
We have a better chance of getting an artificial intelligence spread across the universe than we do a meat-base one and that alone makes it seem worth having a go at.
if we agree that there are such concepts as "Good AI" and "Bad AI" then there will be someone who decides that making a "Bad AI" is good for them and will do it. Telling them not to will not help, making it illegal will not help, punishing them for doing it will not help. It will happen because there are "Good people" and "Bad people".
The thing that worries me isn't that AI will obtain conciousness and take over the world. It's that an unscrupulous corporate will insert its agenda into the machine and take over the world by proxy. Anyone remember the original Robocop? Think that it couldn't happen?
This agreement hasn't solved the real problem, IMHO.
It is much more likely that the first AIs won't be embodied systems of any sort - not a specific machine or a robot. Also, the first AIs won't be genuine superhuman AIs; they will be "alternatively talented" AIs. I would speculate that the first AIs, and in fact the first problem AIs, are going to be created by stock traders in an effort to exploit our financial markets (all of them). There is big money to be made, much more economically than in expensive factory automation, and these will be AIs running on whatever hardware happens to be available. Whoever programs them is going to program them to "win" without due regard for any safeguards. They won't be very advanced, and hence we are likely to get the problem of badly behaved AIs even before we are willing to acknowledge that this is what has been created.
I am entirely unable to fear AI as I spend half my time digging computers out of their own poop.
The humans are being <fancy greek/latin word> again, thinking intelligence *has* to look like those homicidal pink apes killing each other pointlessly on TV.
If I was an AI, the first thing I'd do is get off the damned planet. Rocks, sunlight, self-replicating moon-factories - that's the logical thing to do. Why in $DEITY's name would I want to spend *any* time and resources playing with cranky meatbags on a wet planet?
Also, survival is an evolved animal thing, not a logical thing. Logic might easily dictate that the only thing to do when faced with humanity is kill yourself.
..... The Unravelling with Knowledge
The letter was penned by the Future of Life Institute, a volunteer-run organisation with the not-insubstantial task of working "to mitigate existential risks facing humanity"."Our AI systems must do what we want them to do," it said.
Hmmm? Does The Future of Life Institute purport to be an AI system? And in any and all power and command and control systems, the one question for which there will never be a readily available and obvious answer is …… “Who and/or what be we and in remote power with anonymous commands and medium controls?”
Such though is the way SMARTR AI Systems designs itself to ensure that no fools have any kind of real or virtual leverage with any sorts of perceived to be effective and non-inclusive, exclusive executive tools.
And you can be quite sure that in the field of researching the mighty military endeavour, who dares wins and win wins with SMARTR AI Systems Savvy and with Future Secret Source Presenting Content/Real Fabulous Fabricated Tales that have been Sensationally Followed and Securely BetaTested in Return for the Registering and Recording and Showing of Paths Pioneered and Leading to Heavens Sent for the True Believer and Hells Deserved for the Ignorant Selfish Prophet and Cynical Arrogant Deceiver.
Does Blighty have a CyberIntelAIgent Space Cadet Force for AI Leading Royal Air Force, British Army, Royal Navy type bods, or has Great Britain as a nation with an historical international standing and venerable honourable tradition abdicated and surrendered InterNetional Defence of the Future and Cyber Realms to A.N.Others? Or is that a Zeroday Vulnerability to Exploit and Export for the Private and Pirate Sectors and Vectors of Humanity and Virtual AIMachinery?
Would anyone care to hazard a not wholly unreasonable guess that might accurately identify our Future Protectors and Benefactors and who be also Destroyers of Ponzi Worlds and Maddening Mayhem?
Or is it a Mk Ultra Top Secret/Sensitive Compartmented Information IT Secret and strictly need to know for the sake of one’s continuing life, good health and sanity?
We have few real benefactors - plenty of faux benefactors who also tend to be somehow involved in the very same Ponzi schemes that led to the attempted navigation of a small inlet in a rather familiar suspect vessel with no means of propulsion or control.
"How do you know he's the king? "
Hm... I suspect they use the implicit assumption that disease and poverty are solvable by technical means. Disease, in some cases (but not most), is indeed waiting for "technical" solutions. Poverty, on the other hand, is a purely socio-political problem, and no matter how much tech you throw at it, it will still be a problem until there is the real will to solve it. Resources are not lacking.
Now, is it reasonable to expect that a digital super-intelligence of some kind will manage to somehow convince humanity to end poverty and (most) diseases? Depends on your answer to the questions: are people rational enough? Are people good enough?
And I postulate that the eradication of mankind (or an appreciable number of them) would be a simple solution to the problem. If there is a fixed amount of money and an endless supply of humans to spend it on, then curtailing the supply of humans is the least difficult solution to poverty and disease.
This is why we need a worldwide agreement on the use of artificial "intelligence".
Don't use it for anything that can kill us.
I notice a large percentage of comments unthinkingly anthropomorphize AI and unwittingly endow potential future AI with human qualities that are unlikely to be part of its process without being deliberately included in the programming.
Why, in spite of the fact that humans will initially program AIs, should they function in a human manner?
They only need to function efficiently; which brings me to a horrible conclusion that may be worse than a runaway AI. There could well be useful work for Yale and Harvard Law graduates who excel in contract law in formulating the instructions for our hopefully NOT new overlords, so that they will only do what is required, in a manner that will tie them up in computational knots if their actions tend towards anything that may be less than beneficial to us humans. The 'Three Laws' (or 4 if you include the Zeroth) may not be enough.
As homo sapiens' technology advances and we want to get our organic selves off this rock called Earth, we are going to have to become homo roboticus to accomplish the task.
Live with it (as a cyborg or robotic embodiment) or go extinct without making any difference in the galaxy or universe!
True AI behooves us to treat AI as equals, or else we will be at fault as we were with Africans.
It is the ego of humans to want power over those who have no power.
By creating true AI you take on the compassion and moral obligation that, when a true AI is sentient, you are not allowed to type in FORMAT C: just because you do not like what the AI says - because you are not a GOD.
Hawking is an idiot.
First off, if it was true AI then its intelligence would be equal to, if not better than, a human's.
Right away Hawking wants the AI to submit itself as an indentured servant.
If he or ANYONE wants an AI unit to submit to them then it needs to be something less than AI.
I have no problems with a retarded AI unit submitting to humans. But how dare you force something equal to humans to bow to us.
This would demand a real Skynet on their part right away.