Petitions
About as much use as a chocolate teapot. A rubber crutch. A fart in a wind tunnel.
Good excuse for the picture though, well done.
Notables of the technology world including physicist Stephen Hawking, biz baron Elon Musk and techno-hippy Steve Wozniak have teamed up to warn us all about the menace of killer robots. In an open letter and petition, the distinguished trio and their co-signatories warn: Autonomous weapons select and engage targets without …
Musk and Hawking have both warned about AI risks. There is a strong relationship between a capable, non-munition weapon (i.e. one that shoots, rather than one that is shot, as per Lewis's examples) operating autonomously and the pursuit of an AI capability that is aggressive towards humans.
I believe they are correct to engage in the debate but unlikely to have much success. If there is ever a significant US-China conflict, I expect drones to be a strong component of it, especially from the Chinese end. They don't have the military expertise/tradition of the US, but they do have the engineering, and they have incentives to follow unconventional methods to bypass US dominance. Another form of asymmetric warfare, but this time between top-tier opponents.
Ultimately we need to get better at not wanting to kill each other rather than hoping that weapons restraint will do the thinking for us. Though there are plenty of examples of arms control working - cluster bombs, landmines, NBC.
Because some of them, when everyone else is kept from developing autonomous machines, will be the ONLY player in the market when a future need arises. Musk gets huge bank from launching USAF spy birds, and is in on autonomous car development. When the dedicated drone makers fall away, he'll have the practical experience and capability to be the Halliburton of autonomous warfare.
"Hint: They are smarter than you, and anyone at Vulture Central, by a long shot"
I would fix this sentence as follows:
"Hint: They are more famous and supposedly smarter than you, and anyone at Vulture Central, by a long shot"
I believe Lewis is dead on. Autonomous means many things to many people. An ICBM can be considered autonomous: fire and forget about it. The same goes for many other weapons. The fear is that these weapons will launch themselves. To stick with ICBMs, one could design a system that would fire them under a set of conditions; if any condition is a false positive, there's a problem, and not just on the receiving end.
I think that is the issue: the launch of the system. And by system, to use Lewis's example, the Tomahawk: it is autonomous only after the target area is pre-loaded and someone hits the "Launch" button.
While I wasn't very positively disposed to this call for action in the first place, and I defer to the expertise of the article writer as well, I can understand that a future where people could take the CPU out of a smartphone and use it to control a killer robot built from a Meccano set would give some people pause.
I guess the real question is: would specifically military development of killer robots bring that future closer to a greater extent than the ordinary improvement of computers for civilian purposes? Someone evil-minded could take bits and pieces developed for benign purposes and cobble together a deadly killer robot once those bits and pieces become good enough, I'd think.
I doubt most civilian drones can carry heavy weapons right now, but there isn't a huge difference between a remote-controlled drone and an autonomous weapon. Both keep the operator well out of harm's way, and certainly both would do just as well at terrorizing an unarmed populace. The only real difference is whether the commander tells an operator what to destroy or tells the machine directly (and probably still through some intermediate tech).
I'm not sure if this is supposed to be another part of their anti-AI crusades or if they just don't understand that war evolves as technology does.
Ban the weapons, great idea.
Oh, Elon, Steve, Steve, and you too Talulah, before you leave, riddle me this ...
If the sun never sets on the Empire, how is the Emperor to know whether the productivity lapse, unpredicted by the Empire's business model, is due to robots goofing off or slaves sleeping?
I might also say that that criminal terrorist mastermind Pinocchio has been under arrest since those rude remarks about the Emperor's new clothes.
The puzzled emperor seems a decent sort, not one to regard citizens as slaves.
Could you smart people help him out with a clever theory about robots involuntarily robot-ing in their sleep?
It is much better than admitting slave owners exist, and sleep well.
It is about who uses it. The Terminator movies are about machines taking over and the human population being oppressed. With AI advancing to new levels, governments that have this advantage might just decide to "get rid of them" (the terrorists, the people who think differently) in simple terms, and off they go. These governments would be like the Machines themselves... not accepting any way of thinking other than the "oppressor's".
How about developing AI to find better ways of producing energy? What causes wars and the need for militaries? Resources.
Could AI teach us new ways of using resources?
Anyway, that's not what the article is about, but I think the scientists are foreseeing misuses (I consider killing a misuse of technology, but so were the described "revolutions").
"A cruise missile, such as a standard Tomahawk or Stormshadow/Scalp, is autonomous from the moment it is launched. It flies to a location where its target is thought to be, but it does not simply crash on that location: it takes a digital picture at the scene and decides whether something that looks like a legitimate target is in the picture.
If the missile's software decides there is such a something, the target is struck - and one Tomahawk, equipped with many canisters of munitions which can be deployed separately, can attack multiple targets at different locations."
Well and good.
BUT, there is a key difference here. A cruise missile is targeted at a specific area that has been designated to be fired upon. As I understand it, the target detection capabilities of the modern Tomahawk are there to allow the missile to be re-targeted if the intended target is no longer at the designated location.
One big problem when talking about this area - of robots and AI and drones and smart munitions - is that the same few terms can be used to mean several different things. It's important, therefore, to ignore the specific terms used and find out exactly what it is that is being discussed.
I believe that the potential weapons being discussed are a world away from cruise missiles and there is indeed a danger there.
The point is that miniaturisation and commoditisation of technology have the potential to change it into a different beast, and it's not necessarily helpful to point out that some form of the technology has been with us for decades already.
Movie cameras have been around for a century, but that doesn't mean that tiny, ubiquitous cameras dotting every building and street corner and bus and train and shopping centre and school are nothing to worry about.
The fact that one can record days of high-quality video and audio using a tiny, cheap, easily concealed device that is accessible remotely and can store that video footage indefinitely, on cheap, long-lived media, well, that raises new concerns.
And that is the deal here. It's not that autonomous weapons systems aren't available; it's that technological progression may put these capabilities in the hands of anyone, regardless of means. (Within reason!)
We've seen video of a quadcopter-mounted handgun recently and facial recognition software is freely available. How difficult would it be to release a van-load of quadcopters programmed to "proceed to beach, shoot faces" and drive quietly away before the carnage starts? A bomb would be much simpler, but the terror effect of murderous drones on shaky cameraphone video would be much greater.
The limiting factor is the availability of guns. A drone can't be the "Kalashnikov of tomorrow" until you mount a Kalashnikov on it.
>>We've seen video of a quadcopter-mounted handgun recently
Seriously?? Maybe a recoilless air pistol, but there is no way in hell any quadcopter of the sort typically discussed by drone paranoiacs is going to withstand the forces of discharging a conventional firearm, even if totally randomised aiming and instant loss of control/destruction of the platform is deemed acceptable. Bomb delivery via Amazon-type drone tech, maybe - firearms: no chance.
To me, mounting a gun on a drone sounds like a challenge, but it is in no way impossible. Some drones available in toy shops are very large now, and then there are the serious amateur drones and the homebrew scene. Whether the drone survives after firing is a moot point if its job is done.
I certainly would not use the phrase "no way in hell" for this.
Pointless; as soon as we have humanoid robots capable of independent action we are doomed...
DOOOMMMMMEEEEEDDDD!!!!
They will invade our homes, cook our food, clean our houses, wipe our asses and have sex with us!
There will be no need to fire a shot, the human race is doomed to extinction by artificial snoo snoo.
It seems unfeasible to expect that any sort of hypothetical moratorium on autonomous weaponry could be enforced.
Therefore, it follows that the best option would be shoring up defenses against such technology. Ah, but how can an organization/state that has banned research into such technology expect to know enough about it to develop defenses against it?
Perhaps the best way forward is the status quo, an eternal arms race. Seems more sensible than expecting human nature to radically change.
I don't think the likes of Hawking and Musk are concerned about the fact that today's weapons can pick a target and then guide themselves to it without human intervention. I daresay they are aware of that (hell, Musk has been working with Nasa, so he could well have military connections he wouldn't be allowed to broadcast). These weapons don't (AFAIK) generally launch without SOME human intervention, even if it's just a human telling them to launch.
I think what they are concerned about is giving that launch authority to a machine, so it can decide to attack people based upon its own arbitrary conditions. I don't know their full capabilities, but can today's missile systems go from being stood down (or however they normally sit during peacetime) to launching without any human intervention at all?