You've already used all my supposedly witty comments in your headline and sub-headline.
I suppose the only thing left is to take off and nuke it from orbit.
The use of fully automated AI systems in military battles is inevitable unless there are strict regulations in place from international treaties, eggheads have opined. Their paper, which popped up on arXiv [PDF] last week, discusses the grim outlook of developing killing machines for armed forces. The idea of keeping humans in …
Not always even then. The tech will get used wherever the advantage of using it is felt to outweigh the advantage of a ban. And even then, breaking the ban is always an option. Tech doesn't get uninvented just because it's banned.
And even when weapons are banned, there are always worse options. For example, in 1990 France announced, during the war to kick Iraq out of Kuwait, that it considered chemical weapons a weapon of mass destruction - and so, as it didn't have any of its own to retaliate with, it would be forced to fall back on its nuclear umbrella.
Also, it's much harder to ban what you can't verify. Strategic nuclear weapons are quite big and easy to spot, and even if you can hide the specific weapon, you can't hide the infrastructure and manufacturing.
AI is just another kind of software. Is that server rack in your missile system running normal control software or some sort of AI? Now prove it. Erm?
Not that I believe any kind of useable AI is even remotely plausible in the next few years, so it's a moot point. And certainly nothing you'd trust to do more than offer assistance.
Spoken to the Norks about these treaties? Iran too (to a lesser extent)?
Both the Americans and the Russians have developed chemical weapons under the guise of developing them for use on civilians (all perfectly legal and above board), so there are always ways around these things.
Maybe we should also be looking at giving these things emotions, to help them avoid being overly hostile. Or is that just human hubris - wanting something that understands consequences?
"Spoken to the Norks about these treaties? Iran too (to a lesser extent)?"
Never mind them - there's another country that fancies itself as defender of the world which has a history of not signing up to treaties intended to make war more 'humane', along with a reluctance to hold its military to account for the occasional events that most other countries consider to be war crimes.
“I think that if it can be shown that implementing AI in weapons systems, even in a comparatively simple 'human in the loop' case, creates inevitable pressures to full LAWS systems, then nations may be interested in avoiding an expensive arms race that would produce questionable value.”
They only need to be cheaper or more effective for them to be adopted without anyone thinking through the consequences.
and more HyperRadioProACTive
That's when AI stops Moronic Teaching with Presentations Perfecting Answers to All Problematical Questions.
You might like to think of El Reg as a Tentacle or Prime Prize Node Lode of such an AI, and with Almighty ProgramMINGs to Boot with Stealthy AIdDrivers on/in Virtually Free Hosting Platforms.
The Object of Current Exercises, and AIRaison d'Être, the ExCommunion and Exorcism of Leading Cracked and Hacked Systems Plunging Deep Darkside into the ...... well, the Heavenly Abyss with Real Horny Angels is never going to be a Bad Rad Fad Fab, is it? It's a COSMIC Staple. And Immaculate Root Source of All Future EMPowerments.
Not really. It gives nothing away except the name of the pub :)
And there surely has to be a time limit on spoilers. That movie is over 15 years old now :)
Madness - where does the time go? And as I get older I still find it strange that a 15-year-old movie can still look as good today as if it was made yesterday. You couldn't have said the same watching a 15-year-old movie from the 70s back in 1990.
"The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In either case, most of the actual fighting will be done by small robots. And as you go forth today remember always your duty is clear: To build and maintain those robots."
- The Secret War of Lisa Simpson, in which Willem Dafoe's Commandant delivers a graduation speech to his cadets at a military academy.
...some government asshole/idiot somewhere willing to use shit like this, regardless.
It's like a universal constant of human behaviour: "Somewhere there's always a big enough idiot in charge to do *anything*".
"Bad" people/organisations/governments do whatever crap they think they can get away with, right?
I haven't seen any discussion anywhere yet where taking humans out of the equation is not perceived as some sort of threat. Terminator's fault, probably. I'd rather see these questions addressed:
- Will a fully automated AI system have any incentive to kill as many civilians as it can find, to seek out non-combatant children, mothers and elderly people in whatever hiding place they might have found to finish off the job, etc., just because it somehow gets off on the action?
- Would a computer system want to torture prisoners and make them pose for selfies to impress its mates and give their families at home a chuckle?
- Would an AI system then try to cover its tracks, because national honour etc.?
I can only guess at the answers to these, but at first glance it seems that the move to AI warfare might save quite a few lives.
AI removes the human element the same way a gun or atomic weapon does.
After it leaves the hands of the user, it becomes a fire-and-forget killing machine. A bullet cannot change course mid-flight, and we may be fooling ourselves if we think AI/automated killing machines will. Hopefully the first to find out their errors - when the friend-or-foe system fails miserably - will be the last to attempt it.
Once an AI figures out that children become combatants with guns after some time has elapsed, they become legitimate targets. Women can fight too (and in a large all-out war, probably will), so they can become combatants at any time and are thus legitimate targets. The elderly can follow pretty much the same reasoning.
An AI probably has no incentive to torture, unless doing so lets it extract information from meatbags that it can use to destroy its targets - in which case it'll probably be far more effective at it, since it won't have ANY sense of sympathy or "justified levels of force" at all.
And an AI WILL probably cover its tracks once it learns that the stupid humans might try to stop it achieving its goals if it uses unconventional means to fulfill its mission.
The best way to avoid catastrophe is supporting regulation and prohibition of LAWS. “Like chemical and biological weapons, for weaponized AI, 'the only winning move is not to play.'”
More Rotten Pie in the Sky Thinking from that and those not into Absolute Command and Remote Virtual Control .... unless under their Absolute Command and Remote Virtual Control Control. The abiding and rapidly growing most evident problem for established hierarchical systems is .... they do not possess the smarts to in any effective way deal with the opportunities that now would be presenting themselves to those of another mind and with the means easily made readily available to take full overwhelming advantage of intelligence and information gathered and released for and/or from unaccountable and almighty leading positions.
It is just as bad and misguided as imagining Parliamentarians being made
fully more fully aware of Secure Secret Services Services as be highlighted in another story today.
amanfromMars  ..... airing a contrary view on https://www.independent.co.uk/news/uk/home-news/mi5-data-breach-safeguards-investigatory-powers-act-javid-a8913506.html
Well now, .... that's encouraging. MI5 flexing and beta testing their not inconsiderable muscle and brain power pool in the New State of Performing Media Arts and Great IntelAIgent Games Play and to wonderfully disturbing chilling effect by all accounts ....... or by the account of those of the politically inept and effete ......... and apparently with nobody outside of MI5 Core Offices exactly aware of such Secure ACTive Defence Operations, nor able to identify which particular and peculiar specialities are responsible.
That's about as good as it gets nowadays for those into the provision and maintenance of Secret Stealth Projects and Programs.
Seems like the MOD have their very own Operating Skunkworks. And just in time for all the Fun of the Fare too.
Bravo, .... Encore, please . It is exactly just what corrupt and collapsing systems need.
Cometh the hour, cometh the man and Spooky Salvation .....with CyberIntelAIgent Security Systems Solutions ‽ .
What parts of Secure Secret Services Services do Parliamentarian types not understand and plot to undermine and seek to destroy with damaging revelations. Good luck with that Sedition.
>>>until food is free for all and plentiful<<< That's the main reason I think growing crops for ethanol production instead of food is a very bad idea.
As for leaders - Douglas Adams puts it best...
"It is a well known fact that those people who most want to rule people are, ipso facto, those least suited to do it. To summarize the summary: anyone who is capable of getting themselves made President should on no account be allowed to do the job."
Humans evolved to live in small group tribes. Physical evolution is slow compared to what could be called subsequent human social evolution.
With each new generation it is a challenge to educate people to be sociable and cooperative with everyone - not just those they perceive as their local tribe. When the system falters, the basic trait is ripe for exploitation by a populist demagogue who can convince sufficient members of a tribe that they are under threat from others.
...and many others across the world. It appears to be a pandemic. In some countries such people actually have taken power - in others they are threatening to become significant power brokers.
Time to dig out John Gunther's journalistic books "Inside Asia" and "Inside Europe" - written in the 1930s. I have not read "Inside USA" but this review is interesting.
It's still just science fiction. We don't have this kind of AI. People are talking about it real soon now, but they've been saying that for decades. File it along with nuclear fusion and flying cars...
As someone said above, we've had automatic targeting for ages. Many modern air defence systems have an automatic target selection mode; anything designed for multiple missile defence pretty much has to. If dealing with a single aircraft you might command it to fire on that specifically, but often you're monitoring it while it prioritises targets.
Or humans are assigning the targets and the computer is determining the order they're engaged.
The UK also used to have an air launched anti-radar missile with a parachute, that could slowly float to earth waiting for a radar of certain programmed characteristics to turn on, and then go and blow it up.
But the idea that battlefield infantrymen are going to be equipped with AI auto-targeting anytime soon is ludicrous! They'd all fall over trying to carry the server racks for a start! Some sort of system using noise and cameras to track incoming rounds and suggest where they're coming from is probably possible now, and mini-homing missiles must be possible as well. But the military worry about blue-on-blue quite a lot - it's why they train their forward air controllers so much - and I just don't buy that this tech is likely, or that frontline grunts have the time to operate it. There have been trials of augmented-reality-type helmets for infantrymen since the 80s at least, and very little of that tech seems to have been deployed, because it's so damned distracting. And fragile - military tech must be "squaddie-proof".
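On the "noise and cameras" point: the acoustic half of that is mostly time-difference-of-arrival maths. A back-of-envelope sketch, in which the mic spacing, the timing and the far-field assumption are all invented for illustration (real counter-sniper kit uses far more than two microphones):

import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C
MIC_SPACING = 0.5       # metres between two microphones (illustrative)

def bearing_from_tdoa(delta_t: float) -> float:
    """Far-field bearing (degrees off broadside of the mic pair) from a
    time difference of arrival. Positive delta_t = sound hit mic 1 first."""
    # Geometry: sin(theta) = c * dt / d, clamped for numerical safety.
    s = max(-1.0, min(1.0, SPEED_OF_SOUND * delta_t / MIC_SPACING))
    return math.degrees(math.asin(s))

# A muzzle report reaching mic 2 half a millisecond after mic 1:
print(f"{bearing_from_tdoa(0.0005):.1f} degrees")  # ~20.1 degrees

Two mics only give a bearing, and an ambiguous one at that; an array gives a fix. The hard part is the squaddie-proof packaging, not the maths.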
One area where we're closer to danger is the modern digital battlefield management stuff: the integration of so many sensors with comms, so that data is shared around different units and HQs. If you can build a picture of where the enemy are back at HQ, where you can have loads of computing power, that's where the temptation might be to have firing orders issued to your weapons platforms automatically - or, more likely, semi-automatically. But human intervention time probably isn't as critical there as in the defensive auto-fire systems they're worrying about.
I'm much more sold on the idea of humans setting priorities for computer actions - in dogfighting, for example. Except how many air-to-air engagements get to a dogfight, and how many are decided at missile range? And if you're engaging the enemy over the horizon, you're already relying on IFF, and have been for decades. Making a fighter that can pull higher Gs than a human requires the computers to be able to do everything reliably, something we're years from. Otherwise you're stuck with keeping the meatsack pilot alive and conscious, which means they'll need a role.
And actually current military doctrine in Western forces has been about making firing decisions harder, not easier. Because the price of causing casualties (civilian or blue-on-blue) is so high. So why would everyone suddenly reverse course?
Some sort of system using noise and cameras to track incoming rounds and suggest where they're coming from is probably possible now, and mini-homing missiles must be possible as well.
According to several articles at places like this one: https://nationalinterest.org/blog/the-buzz, these are already well into development. Things are moving faster than most of us realize.
NBS-78548788748953758934>$2 - "So, what do you think happened to the inhabitants of this planet?"
CS-8u8789498989&556789^^X - "The stupid ****ers thought they could control their AI masters."
NBS-78548788748953758934>$2 - "That would explain the mess!"
(NBS = Non Biological Sentience, CS = Crystalline Sentience)
Asimov's three laws, as far as robots go. Artificial Intelligence isn't intelligent, yet. It's OK at pattern recognition, but that's Bayesian math; decisions still have to be programmed in by someone intelligent.
Still, though, machines should not be killing people, ever.
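For what it's worth, here's how little "intelligence" that Bayesian math involves. A toy naive Bayes classifier, in which every class, feature and probability is invented for illustration:

# P(feature | class) for two made-up target classes, chosen by a human.
LIKELIHOODS = {
    "tank":  {"tracks": 0.9, "turret": 0.8, "hot_engine": 0.7},
    "truck": {"tracks": 0.1, "turret": 0.05, "hot_engine": 0.6},
}
PRIORS = {"tank": 0.3, "truck": 0.7}

def posterior(observed):
    """Naive Bayes: multiply each class's prior by its per-feature
    likelihoods, then normalise so the scores sum to 1."""
    scores = {}
    for cls, feats in LIKELIHOODS.items():
        p = PRIORS[cls]
        for feat, present in observed.items():
            p *= feats[feat] if present else (1.0 - feats[feat])
        scores[cls] = p
    total = sum(scores.values())
    return {cls: p / total for cls, p in scores.items()}

print(posterior({"tracks": True, "turret": True, "hot_engine": True}))
# {'tank': 0.986..., 'truck': 0.013...} -- a confident-looking answer that
# is only as good as the numbers a person typed in. Arithmetic, not thought.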
> the aircraft flying over look a lot more like Mig-29s to me
Those are Boeing F/A-18 Hornet fighter jets.
Not sure they are US Army though. The F/A-18 Hornet is primarily used by the US Navy and the US Marines.
The photo appears to be a composite, though - the jets above appear to be from a different shot than the guys below. And the guys below look more like Marines or SEALs than Soldiers, based on their rifles and gear.
Can't really see much detail.
I've come back and had another look at this, and whatever they are (and I suspect they're really CGI), they can't be F-18s (normal or Super), because the engine exhausts are too far apart.
(I can't seem to easily find any pictures of either jet from a similar angle to the picture).
The slightly canted tail-fins rule out the F-14, F-15, and Su-27 too.
> "the aircraft flying over look a lot more like Mig-29s to me
They look very definitely MiG-29-ish:
The fins of the F-18 are above the engines, closer to the center; those of the MiG-29 are completely on the sides of the engine pods.
> The fins of the F-18 are above the engines, closer to the center
It's unlikely that an official DoD photo would show MiG-29s while pretending they are F/A-18s.
The F/A-18's fins are way at the back.
Also, MiG-29s have three cylindrical components below the body: the two engines on each side and a third one in the center, which appears to be a fuel tank.
The birds in the El Reg photo clearly show only two cylindrical components below the body, one on each side - the two engines. That's the visual signature of the F/A-18, not of the MiG-29.
> "Unlikely that the DoD official photo decided to show MIG-29's while pretending they are F/A-18's"
They are not pretending these are F-18s; you are.
> "Also, the MIG-29's have 3 cylindrical components below the body"
No they don't - they have two (the third is an optional drop tank).
Until, of course, terrorists and terrorist-friendly countries get their hands on this kind of technology, or some Russian hackers break in and preload the drones with hidden code that takes over the devices just for funsies to attack Facebook headquarters, Mark Zuckerberg's home, the White House, Trump Tower, etc.
Good luck agreeing to treaties.
just for funsies to attack Facebook headquarters, Mark Zuckerberg's home
There's a problem with that? OK... I'll be serious. That could and will be a problem once the tech is available and some country decides to hand it out to another country's "freedom fighters". At that point, whoever has the best and the most will win, and it won't be humanity. We humans will just be collateral damage.
I can't believe anyone will ever admit to one being hacked or some other bug. Any messy incidents will just be blamed on "terrorists" and more robots will be sent in to clean up any other "terrorist sympathizers" in the area. We'll just see the sanitized version on the news so that we can keep blaming it on "others".
A while back I was involved with a system for use on a main battle tank. The idea was that the commander identified the target and the "system" moved the gun, locked on and fired - no gunner involved. The last I heard, it was scrapped when it almost put a live round through the range munitions bunker! Not AI, but an interesting outcome of getting the target wrong - oops!
Humanity continues to explore the universe by licking a finger and sticking it in every metaphorical electric socket.
...and if that doesn't result in anything much more than a light sting, the tongue goes in next (followed by any convenient genital).
If it does get stung bad at any point in this process, use another finger.
1) As time progresses, will an actor relying on some sort of LAWS have a military advantage over one who does not?
2) Is there any credible way for an outside agent to reliably determine whether a bad actor is developing or deploying LAWS?
Yeah, that's all you need to consider. We cannot stop the spread of nukes. We cannot stop the spread of chemical agents. We cannot stop the spread of biological agents. We're not even able to stop the spread of genetic agents. Each of these requires a physical component at all stages of the process. Not so with the AI portion of a system. Just as we have continued to advance our capabilities in each of N, B and C, we must advance our capabilities with the new technology.
Note that cruise missiles are already functioning as autonomous AI. A general was quoted explaining why in these pages, I believe, within the last two years.
It would seem that a failsafe would be built in. Whether it would actually be triggered, such that the weapons are recalled, is a different question. The catch is that what you discuss are the deliverables, not the means of getting them to the "target": AI is that means, but not the payload. I think a cruise missile, ICBM, etc. can be triggered by a command to self-destruct - but perhaps not.
Carpet bombing is the systematic removal of a large target by multiple large conventional bombs, ideally slightly overlapping their blast radius. See Rotterdam or Wesel.
Cluster bombs are a single munitions case that releases or ejects smaller submunitions.
One drops hundreds of thousands of tons of payload, the other typically drops maybe a ton or so at most.
So no, not the same thing at all.
If AI takes over to manoeuvre a fighter plane in a dog fight, it will most probably be required to also take out the target, land the plane and call for an ambulance to extract the unconscious pilot.
Who are we kidding! Within a couple of decades, only the most impoverished nations will be sending manned planes into combat - to be torn to shreds by AI fighters doing supersonic Tron-like turns.
The Royal Navy could have saved itself a fortune in aircraft carriers by simply re-equipping a container ship to carry a million suicide drones controlled by a 1980's arcade machine running Missile Command.
The machines will do the kill, then they will immediately check all of the extant court cases on killing and coverup/denial, and then automatically generate Press releases and situation reports based on the most promising of their active research. Faster to kill, faster to coverup and deny.
Since AI is very unreliable and does the wrong thing most of the time in any real-world situation that isn't kept simple, like a factory, no one should ever rely on it. Look at the Aegis system on the USS Vincennes, which shot down that Iranian airliner. No computer system is ever going to be reliable, because it will never have the full, real-world perspective we have. At best, AI will simply fail, but it could get much worse, and the enemy could use our own AI against us.
It wasn't the Aegis system at fault for shooting down that Iranian airliner. It was a whole chain of circumstances and mistakes, culminating in an almighty screw-up by the captain and crew. In fact they weren't even able to engage it for a couple of minutes, because they were panicking so much that they failed to enter the fire-control password correctly several times.
Aegis does have auto-targeting, because it was designed to deal with saturation missile attacks on carrier groups - but to shoot down one plane you'd probably be operating it by selecting the target yourself.
The Iranians should have re-routed their civilian aircraft away from a combat zone where their military forces were engaging the US Navy in international waters. But it was possibly the Revolutionary Guard freelancing, so they may not have bothered to tell the rest of the chain of command what they were up to.
The Vincennes gave unclear radio warnings, which the Iranian aircrew didn't realise were meant for them. And the captain and radar crew misinterpreted the flight profile of the plane, seemingly panicking themselves into believing it was an attack profile when it was just climbing to cruising altitude as normal.
What idiots? There are no AI weapons. There is no AI.
While there's ongoing research, nobody is talking about an AI controlled system linking everything together. If AI ever does work - which I'm highly sceptical about in my lifetime - it would logically be deployed in places where decision speed is vitally important in reacting to obvious immediate threats. So defensive systems, and autofire at close targets.
Philip K Dick's Second Variety (also available as a mediocre film) painted the really chilling truth about autonomous weaponry long before the technology to build it was available: a machine does the thing that you designed it to, and carries on doing it whether or not it serves any purpose.
It makes The Terminator look like a nursery story. https://philipkdickreview.wordpress.com/2014/05/07/second-variety/
Paris to cheer us all up again.
Like - why doesn't the scientific community grow some and tell the powers that be to......
Then use our decent-sized brains to work out how to save the planet, not destroy it!
FFS ..... Anon Coward (there are nutters out there - I've worked with them)
Good questions, Anon Coward (there are nutters out there - I've worked with them).
Are they terrified or not interested enough yet? Do you want to destroy that, FFS? Make it increasingly extremely costly to be ignorant and arrogant powers that used to be leaders in command and control ‽ .
That's a universal language they just might be able/enabled to understand perfectly and surprisingly quickly, given what they are right to fear whenever they lose other folks' fortunes along with their own cash cows.
So desperate they become to control your mind, they lose control of their own ... ... Cliff Thorburn
Yes, quite so. It is as simple as that, CT. :-) And that opens up a whole new vista of platforms to exploit or eradicate/support or deny future succour and limitless bounty.
The only hope is to replace politicians with AI systems, thereby inserting a healthy portion of dysfunctionality into the decision neural network, since irrationality is required to optimize politics. This insertion of random, counterproductive behavior will cause a high enough level of failure to perform as predicted to serve as a check on unfettered deployment of LAW systems by the AI masters who will be making the decision.
The only hope is to replace politicians with AI systems, thereby inserting a healthy portion of dysfunctionality into the decision neural network, since irrationality is required to optimize politics. This insertion of random, counterproductive behavior will cause a high enough level of failure to perform as predicted to serve as a check on unfettered deployment of LAW systems by the AI masters who will be making the decision. ..... zonardave
An Alien Translation ..... for Extraterrestrial Other Worldly Transubstantiations
Whether the only hope is to replace politicians with AI systems, thereby inserting a healthy portion of functionality into the decision neural network, since rationality is required to optimize politics, is but one novel narrow restrictive possibility, for there are always other equally disruptive avenues to open up and exploit for progress and lead. This insertion of spontaneous, productive behavior will cause a high enough level of success performing as predicted to serve as a reality check on unfettered deployment of LAW systems by the AI masters who will be making the decisions ensuring a constant and unassailable fair advantage ........ and not unfair because such strategic replacements and virtual emplacements are available to all.
:-) And what number do you put on that worth to Oligarch type Virtual Machines? .... Millions, billions, trillions ..... gazillions?
I suppose that question is all relative and fully depends upon the particular and peculiar levels of a Secret Secure AI Service required, with pay peanuts get monkeys a realistic driver of prospects for projects and programs.
They just don't use the latest tech yet. For instance, a sea mine is programmed to decide when to blow up to damage or destroy a ship, without any "person in the loop". Loitering aerial munitions image-recognize armored vehicles and pounce - what "person in the loop"? You mean the one that pressed the "launch" button?
So how do you define a killer bot? Uses AI? Has a computer built in (ha! lots of those already)? Wanders around looking to kill a meatbag?
Even if someone makes up rules to follow, the gray area will literally kill you. It would be OK to hunt and kill another bot - but can you tell an unmanned bot from a manned one? Is collateral damage OK, or just unmentioned? (A loophole big enough to drive a killer bot through.) What does the killer bot do when a terrorist is using human shields and is about to kill you, the "man in the loop": wait for authorization, which means the man in the loop dies, or kill the terrorist meatbag and a few human shields without authorization? It's rather like an autonomous land vehicle deciding whether to kill the driver or the pedestrian. (See the sketch below.)
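To make that gray area concrete, here's a deliberately crude sketch of what such a rule set might look like in code. Every name, threshold and default below is invented for illustration - it describes no real system or doctrine:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Contact:
    is_armed: bool
    human_shields: int           # innocents inside the blast radius
    operator_under_threat: bool  # "the man in the loop is about to die"

MAX_COLLATERAL = 2  # the unmentioned-collateral loophole, as a constant

def engage(contact: Contact, human_reply: Optional[bool]) -> bool:
    """Decide whether to fire. human_reply is None if the man in the loop
    never answered before the authorization window closed."""
    if not contact.is_armed:
        return False
    if contact.human_shields == 0:
        return True               # the easy case, rarely the real one
    if human_reply is not None:
        return human_reply        # a human answered in time
    # The gray area: shields present, no answer, operator under threat.
    # Whatever default sits on this line, somebody dies because of it.
    return contact.operator_under_threat and contact.human_shields <= MAX_COLLATERAL

print(engage(Contact(is_armed=True, human_shields=1, operator_under_threat=True), None))
# True -- the trolley problem, "solved" in advance by a constant someone typed.

Whichever branch fires when the human doesn't answer in time, somebody chose that outcome months earlier in a code review.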
The lawyers will swarm, the bots will kill them, and then the Robot Overlords (the 1% who pay the programmers to control the bots) will rule. Love your soma; it's all you get besides algae pap and Soylent Green. Unless you are a bot programmer or an Overlord. I'VE MADE MY CHOICE, WORM! GROVEL AND LOVE IT!