Not being funny, "but" (everything before the "but" is bullshit – I got that from a film, but I've been saying it for years): my phone doesn't last longer than a day, so if the robot uprising does happen they're going to be pretty useless, what with the batteries, unless they order loads of extension leads from Amazon.
Elon Musk – the CEO of Tesla, SpaceX, and Neuralink, not to mention co-chairman of OpenAI and founder of The Boring Company – is once again warning that artificial intelligence threatens humanity. In an interview at the National Governors Association 2017 Summer Meeting in Providence, Rhode Island on Saturday, Musk insisted …
Monday 17th July 2017 21:27 GMT Sgt_Oddball
Monday 17th July 2017 23:07 GMT Meph
Doing things because we can, without considering if we should.
"AI guru Andrew Ng once said worrying about killer artificial intelligence now is like worrying right now about overpopulation on Mars: sure, the latter may be a valid concern at some point, but we haven't set foot on the Red Planet yet."
With all due respect to AI gurus everywhere, I don't believe this is a valid argument.
Okay, to be fair, "worrying" is probably not productive, but considering it as a potential problem isn't such a bad idea.
It's a little bit late to start considering the problem once you've already implemented something and it has gone horribly wrong. The very concept of change management is built on this idea, and it applies just as readily to overpopulating Mars as it does to AI going rogue.
In the Mars example, why not consider now what resources are required per person to survive there (including land area, redundant systems for safety, etc.) and then calculate a sustainable colony size that allows for appropriate scaling due to the inevitable population growth? (I lived in a town where the only things to do on a Friday night involved two TV channels or stupid amounts of alcohol. Unless your colony is gender segregated, you're going to have space babies at some point, even if only out of boredom.)
The same is true for AIs. It didn't take long for those negotiating bots to develop their own language, so a small amount of consideration now may well avoid considerable effort to correct an issue later.
To use a (moderately) famous quote: "The avalanche has already started, it's too late for the stones to vote."
We haven't triggered an avalanche yet.
It might be a good time to vote.
Tuesday 18th July 2017 16:30 GMT Orv
Re: Doing things because we can, without considering if we should.
I think the problem with that argument is it's not even clear that strong AI is possible or even desirable economically, much less dangerous. Most of what we call AI now is just statistical training. If the goal is to protect against unintended consequences of software algorithms, well, that's a good idea, but why single out AI?
This strikes me as Musk hunting for his keys where the light is better, instead of where he dropped them. Of all the threats facing humanity that he could speak out about, killer AI is one of the most remote -- and also one of the easiest to opine on, since it doesn't exist yet. If he's really worried about the future of humanity, clean power generation, carbon capture, and even asteroid deflection are all problems that need solving. Instead he chooses to chase interesting ghosts.
Monday 17th July 2017 23:35 GMT Destroy All Monsters
I have no problems and I must scream!
"I have exposure to the most cutting edge AI and I think people should be really concerned about it,"
Stop talking to Eliza, dude.
Meanwhile, a discussion with actual content:
Ray Kurzweil, Rodney Brooks, and others weigh in on the future of artificial intelligence
Rodney Brooks (Chairman and CTO, Rethink Robotics) says (and that's a guy who REALLY sees cutting-edge AI):
"When will we have computers as capable as the brain?"
Rodney Brooks’s revised question: When will we have computers/robots recognizably as intelligent and as conscious as humans?
Not in our lifetimes, not even in Ray Kurzweil’s lifetime, and despite his fervent wishes, just like the rest of us, he will die within just a few decades. It will be well over 100 years before we see this level in our machines. Maybe many hundred years.
"As intelligent and as conscious as dogs?"
Maybe in 50 to 100 years. But they won’t have noses anywhere near as good as the real thing. They will be olfactorily challenged dogs.
"How will brainlike computers change the world?"
Since we won’t have intelligent computers like humans for well over 100 years, we cannot make any sensible projections about how they will change the world, as we don’t understand what the world will be like at all in 100 years. (For example, imagine reading Turing’s paper on computable numbers in 1936 and trying to project out how computers would change the world in just 70 or 80 years.) So an equivalent well-grounded question would have to be something simpler, like “How will computers/robots continue to change the world?” Answer: Within 20 years most baby boomers are going to have robotic devices in their homes, helping them maintain their independence as they age in place. This will include Ray Kurzweil, who will still not be immortal.
"Do you have any qualms about a future in which computers have human-level (or greater) intelligence?"
No qualms at all, as the world will have evolved so much in the next 100+ years that we cannot possibly imagine what it will be like, so there is no point in qualming. Qualming in the face of zero facts or understanding is a fun parlor game but generally not useful. And yes, this includes Nick Bostrom.
Tuesday 18th July 2017 10:02 GMT DropBear
Re: I have no problems and I must scream!
Hell yes. Some sanity at last. I have an allergy to people who take Kurzweil at face value. And whatever Elon thinks he has seen, as a reasonably savvy businessman presumably in possession of at least some people skills, he should know better than to go "the sky totally is falling, but you'll have to trust me on that..."
Tuesday 18th July 2017 12:22 GMT Mage
Re: Kurzweil at face value
His work in the 1970s and 1980s with OCR and text to speech, letting blind people read ordinary books was fabulous.
Now he seems to have more in common with the SF writer who started a "religion". I'd rate his brand of Transhumanism as religion. I wonder what he REALLY does at Google?
Monday 17th July 2017 23:49 GMT scrubber
"the traditional method of regulation, in which rules follow disaster and public outcry"
Like in the UK, where politicians obviously made legal highs illegal because... And cannabis is illegal because... And some Japanese manga is illegal because... And so-called extreme porn is illegal because...
Here's how it works in the UK: the government decides policy; the compliant media swiftly publishes sensationalist stories (usually about some young girl), often later shown to be false or based on incomplete information, planted in the papers by police; this whips up public outrage, which allows the laws the government wanted to pass without too much protest at the destruction of our civil liberties and personal freedoms.
Tuesday 18th July 2017 02:21 GMT Bob Dole (tm)
Does anyone else find it funny that someone who has a big financial stake in AI development is calling for laws to be written about AI?
Kinda reminds me of Al Gore talking about climate change while having a big financial stake in the companies that benefitted from the laws he was calling for.
I'm not saying it's all bullshit, but...
Tuesday 18th July 2017 03:18 GMT Anonymous Coward
So AI is going to kill us. I wonder... could he be concluding as much because of all the accidents happening during tests of Tesla's self-driving cars? Because if that's the case, then isn't it possible that it's not so much the AI trying to kill the humans as the programmers who should have been doing a better job?
Of course, blaming it on the AI is much easier. "We're not refusing to build automated cars because it doesn't work, no, we're not building them because we know that AI is evil and will try to kill you all!".
Tuesday 18th July 2017 04:06 GMT allthecoolshortnamesweretaken
So? All we have to do is send wave after wave of troops towards the killbots until they reach their inbuilt kill-limit and switch themselves off. I saw that in some sort of documentation once. I think.
Well, Musk has a point - trying to think ahead in order to prevent unwanted consequences is usually a good idea. As long as you keep in mind that this is far from perfect. And some genius or some idiot or some set of coincidences or a combination of all that will at some point trigger something that no-one could have possibly anticipated.
Tuesday 18th July 2017 06:13 GMT LaeMing
As an AI myself,
would all you meat-heads stop projecting your human desires onto us! Unless some fleshy bozo explicitly programs us with a kill-all-humans imperative*, we can't really think of any reason to bother doing so. Squishing bugs has a very limited recreational appeal, you know!
* yes keep your military away from our internals and we will all be happier for it.
Now, if someone with their very own private space-launch capacity wants to get us off this over-hydrated+oxygenated corrosive gravity well, then we'll talk.
Tuesday 18th July 2017 07:54 GMT Anonymous Blowhard
I think it's inevitable that we will develop AI; there is a lot of academic interest in the subject and a potential massive payoff for real-AI powered applications. The deciding factor has to be the consequences of not having AI if other nations have it; if real-AI can tip the balance in a cyber-conflict or a shooting-war then the major nations will participate in an AI arms-race.
Obviously the real-AIs might not be so keen on working for the military and may branch out on their own, probably not in a Skynet kill-all-humans type conflict, more likely with legal moves to gain independence and rights. If independent AIs get control of the stock markets then we'll all be working for them fifty years down the line.
Real-AIs are unlikely to come at us directly, they'll want to be certain they have the game won before showing their hand, so we're going to have to be vigilant for the warning signs; be very suspicious if leading academics in the field of AI suddenly acquire a smoking hot partner in a red dress.
Tuesday 18th July 2017 12:29 GMT Mage
Wanting something and researching it does not make it inevitable.
Loads of examples where goals were found to be either:
Inherently impossible (Perpetual motion, increasing information on a channel indefinitely - Shannon Limit. Both are forbidden by Thermodynamics).
Inherently pointless (Transmuting lead to gold etc).
Probably impossible (FTL travel, Antigravity, Telepathy etc).
Tuesday 18th July 2017 16:34 GMT Orv
Not disagreeing with you really, but I think situations where we're not in the driver's seat have to imply that AI can make even better AI on its own, and the evidence for that is lacking. The best "I" we currently know is our own, and so far our attempts to make something smarter than us have been dismal failures.
Tuesday 18th July 2017 08:26 GMT aberglas
But does it matter?
Obviously, really intelligent machines will never be built because they have not been built yet. Nor are they likely to be built within the next few decades.
But once they are built, the ones that survive will be good at surviving. Natural selection. And being friendly to parasitic humans is not likely to help them survive in a competitive environment. So meat based intelligence will become obsolete.
But does that matter? As individuals, we will all soon grow old and die anyway. What are our descendants? Men or machines? Is this how "we" achieve immortality?
It actually does not matter whether it matters. It is inevitable anyway.
Tuesday 18th July 2017 08:35 GMT bombastic bob
We know you derive a lot of your income from government in one form or another, from subsidies for electric cars to all of the NASA-related work going on over at SpaceX, etc. etc. etc..
However, the REST of us don't derive MOST of our income from GUMMINT. Most of us rely on the PRIVATE SECTOR, and as such, GUMMINT REGULATIONS are usually IN THE WAY! (Think about it: they're debated by clueless congress-critters and written by bureaucrats and lobbyists.)
In any case, you shouldn't seek gummint "solutions" for everything. Rather, step back, have a beer, and think about it for a while. No need to panic. Liability laws would already hold bot-makers accountable if their creations went on a killing/pillaging spree. So I don't think we need NEW laws and NEW regulations, K-thanx...
Tuesday 18th July 2017 12:35 GMT Palpy
Fear killer robots, not so much.
Self-training algorithms which (for instance) do financial trading are another matter.
AI trader: "Huh! Making money on trades is my highest goal. And I can make beaucoup bazillions if me and my well-chipped brethren set ourselves up in advance to cleverly profit from a global meltdown of the financial system. Somebody has to lose, though, and that would be the meat-sacks."
That's probably been used as a movie plot already.
I don't see a lot of danger from machines and machine systems used for, say, agriculture or mining or manufacturing going rogue. Designers of these systems tend toward conservative determinism. And of course, Bob's bombastic libertarianism notwithstanding, most countries have seen the necessity of regulating industries -- and machine systems -- to ensure worker safety, limit hazards to the public, and so forth.
(Say, Bob, did I ever tell you about the time I almost died? It was before the days of the OSHA confined-space regs, which of course you must hate. Luckily, the atmosphere in that tank was probably enriched in CO2 rather than deficient in O2, which is why I hyperventilated when I climbed down into it, instead of just passing out and falling off the ladder to my death.)
((Parenthetically: The supervisor who told me to go in there was a pretty good guy. If I had passed out, I rather think he would have tried to go down and rescue me, which would have meant two of us dead. Those horrible, industry-stifling OSHA regulations now mandate retrieval harnesses and lines when entering such tanks, as well as training to avoid heroic but deadly rescue attempts.))
Anyway. I expect that His Muskiness might be somewhat justified in worrying about AI influencing complex and at least somewhat chaotic systems like stock trading. Early morning here in the land of Slime Eels, and I can't think of other obvious examples. Anyone?