This seems to imply that states will act in a responsible way, will not develop AI weapons, ...
If anyone really believes that then they are naive.
The proliferation of AI in weapon systems among non-state actors such as terrorist groups or mercenaries would be virtually impossible to stop, according to a hearing before UK Parliament. The House of Lords' AI in Weapon Systems Committee yesterday heard how the software nature of AI models that may be used in a military …
I don't think that it was the intent of the article to say that state actors were responsible. I got that the think tanks are concerned that non-state actors using software have a small signature (you can't easily tell the difference between a gamer and a hacker by observing the electricity they use for instance), and might not respond to the same threats we make to foreign governments (largely economic sanctions).
The article, to me, reads a lot like "we're worried that random organisations (rather than countries we can sanction) might use so-called AI in weapons like we are trying to, which means we won't have the upper hand".
Given the current state of AI, I wouldn't trust it not to blow up a Prius after mistaking it for an aircraft carrier...
And also clickbait, but we usually give the house a long leash, as sometimes this kind of arm-waving garbage is fun to poke holes in.
The biggest warning sign, as this crawled up out of the sewers (other than the smell), is the lack of concrete examples of what the "AI weapons" we should be SO SO AFRAID of actually are.
Yeah, it's one way to try to make a semi-autonomous weapon system that doesn't require a crew. Another is a $15 Raspberry Pi or an off-the-shelf drone control board, and I'm more afraid of the guys using those than someone trying to reproduce Waymo's autonomous driving (a decade and how many billion dollars with what to show for it again, plus $450,000 in export-controlled sensor hardware?) or strapping a gun platform to a Model X.
Smarter weapon systems are being built in garages in Ukraine with a laptop and a soldering iron as we speak.
And while you are right that a weapon system driven by a cellphone-level model would probably have false-positive issues, I'd say it's still no worse than stepping on a plain old dumb landmine, which cares not at all which foot set it off. The drawbacks of "AI weapons" are many and varied, so they are going to be self-limiting to some degree, and their use subject to the same kind of sanctions as landmines etc. In no case are they going to be radically more dangerous than a crew-served version of the same weapon system, or a kamikaze/suicide bomber, would be.
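The false-positive worry is really just base-rate arithmetic: when real targets are rare, even a seemingly accurate detector mostly raises false alarms. A quick sketch with made-up numbers (the rates below are illustrative assumptions, not measurements of any real system):

```python
# Illustrative base-rate arithmetic for a hypothetical target detector.
# Every number here is invented for the sake of the example.

vehicles_scanned = 10_000     # objects the system looks at
real_targets = 10             # actual hostile vehicles among them
sensitivity = 0.95            # chance a real target is flagged
false_positive_rate = 0.02    # chance an innocent vehicle is flagged

true_alarms = real_targets * sensitivity                                # 9.5
false_alarms = (vehicles_scanned - real_targets) * false_positive_rate  # 199.8

share_real = true_alarms / (true_alarms + false_alarms)
print(f"{share_real:.1%} of alarms are real targets")  # → 4.5%
```

Even with these fairly generous numbers, over 95 per cent of everything flagged is an innocent vehicle, and the rarer real targets are, the worse it gets; hence the Prius-versus-aircraft-carrier problem.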
But if some would-be warlord wants to waste their arms budget on this instead of chemical weapons, mines, or nukes, let them think it's the greatest advancement in the history of warfare. They can find out the hard way if they ever try to use them.
You know we're reporting what was said in a hearing? The headline reflects what was said - it summarizes what was said. That's like the opposite of clickbait.
I've added some quote marks to make clear to those at the back that we're reporting on what was said in Parliament. All regular Reg readers know we're not keen on this latest AI hype.
Governments also have no theory on how nefarious groups might behave using the tech
Let us at least hope governments realise they can be equally as bad as they can be, although that low behaviour threshold is surely cold comfort to them.
I do wonder though, if they, governments, would have anything diabolically attractive to offer and lead them with in a ....... well, positive, mutually beneficial, novel creative direction to end up in a heavenly destination would be something worth freely sharing, methinks.
* WTF are they all being paid for?
An arms race has historically pushed the development of tech. Education played an important role. The private technology sector made money and developed as well.
They say that ChatGPT makes cheating easier for students. But this is because the educational system is biased towards theory instead of practice. Make students develop working systems instead of sitting through classes and your industry will get good engineers. New Musks. New Einsteins, maybe.
Finance education much more, maybe by taxing entertainment (= procrastination) more. Provide big £££ scholarships in tech fields of strategic importance. Make such scholarships available even to mediocre students. Even a mediocre engineer is better than a generic arts specialist. Pay teachers more.
Make local tech: semiconductors, biotech and more. But for those you absolutely need talent.
Think of all the ransom-type attacks and infiltration of networks / exfiltration of data that could be perpetrated by bad actors. And of course the scams that could be executed by scammer groups with voice-replication tech, or worse. These are real attacks by people who would otherwise just be known as crooks, but with the potential to do damage on the scale of nation-state actors.
The success rate of these attacks isn't really any better with AI, and they have failed to be the defining weapon in any military conflict. Those attacks mostly depend on going after low-hanging fruit and exploiting weaknesses left in the systems they target.
The idea falls apart when you need to be able to attack arbitrary targets and your target knows the attacks will be ongoing. Then they make far fewer mistakes and air-gap critical systems. At that point a missile is a cheaper and more effective tool to take down the target.
These tools have been and will continue to be used, but really as attacks of opportunity, as targeting arbitrary systems is expensive and success is not assured. It may make sense to try for high-value targets, but that's not how a hot-war conflict will be won.
These tools have been and will continue to be used, but really as attacks of opportunity, as targeting arbitrary systems is expensive and success is not assured. It may make sense to try for high-value targets, but that's not how a hot-war conflict will be won. ..... Anonymous Coward
There is that, and there are those, AC, ... [and whether anyone/anything can realistically be classed as an expert in a field virtually unknown before, and now so apparently both surprising and terrifying command and control forces and sources in equal rapidly expanding, unconstrained measure] ...... which/who would wholeheartedly disagree with those two sentences and would warn belligerents that they be extremely dangerous to believe as being able to be honest and true, beyond any shadow of doubt, for much has changed in fields which triumph successfully, and at very little real relative cost in formerly horrendously expensive, adversarial conflict.
Hot war conflicts do not win arguments and battles in the future, because with hearts and minds knowing better than to be stuck still and stagnating in the past pumping and pimping a present that is no longer to be led as it once was, are there other much more convenient and pragmatic ways, some of which are already being shared fully kitted out for immediate engagement and deployment ....
amanfromMars  ……. shares again freely on https://www.nationaldefensemagazine.org/articles/2023/4/18/harnessing-future-tech-lies-in-approach-not-hardware-experts-say
The requirement of a friend or foe to propose engagement with opposing or competing forces in a war-like environment tells every man and his dog that their intelligence machinery is broken and they can be easily catastrophically compromised and defeated by the simple sharing of such evidence highlighting the fact and further exposing and taking advantage of the intelligence deficit weakness ….. vital future knowledge blackhole.
Future battlefields are not gory blood and guts landscapes, they are hearts and minds encounters where greater information sharing and proprietary novel intellectual property triumphs and rules sublimely and supremely and surreally.
Failure to accept that vision and opinion has one fated to lose every battle and war one be sucked and suckered into.
..... whilst others are being cooked up for future presentation and banquet delivery, with this one ...... Space Operations Command Embracing Artificial Intelligence ...... more than likely being just one of many similar home-based domestic and foreign alien-based operations, trailing and training men and women and machines on future vital specific virulent missions ‽ .
A further uncertainty we'd like to add is that today's artificial intelligence isn't particularly reliable, a point we hope isn't lost on anyone.
One point which very definitely shouldn't be lost on anyone, .... for you can certainly rely on today's emerging future AI being extremely able to excel at it beyond one's wildest dreams, and in so doing providing alternate ground-breaking leads and Greater IntelAIgent Game changing seeds, needs and feeds, which be increasingly difficult for status quo expert dullards to ignore and deny with an arrogant and pathetic tame main stream media silence supporting bankrupt heralding non-engagements with non-state actors able to demonstrate an uncanny overwhelming remote virtual ability to comprehensively trump and trounce any and all established and hostile war gaming state actors, both in theory and in practise on the live field of deadly and dastardly neocolonial geopoliticised play, ..... is IT and AI have Deep Mines and Minds of their Own Supply for Greater Future Intelligence Use and take no fake and foolish fickle instruction from proven serially failing and constantly abused and misused administrations systems harbouring and entertaining perverse and corrupt practicising identities/non-entities and their shenanigans.
In other words, you cannot rely on IT and AI doing as it is told by hostiles and state actors, which is a great quantum leap forward is it not, being far out of reach and way beyond their destructive command and defeated controls?
And with particular regard to arms and dumb hostile actor control in a postmodern media rich age, is it a spooky coincidence that a short transcript of a recent chat with Daniel "The Pentagon Papers" Ellsberg entitled "Nuclear Secrets, a Compost Heap and the Lost Documents Daniel Ellsberg Never Leaked" is available here?
IT's a funny old world, isn't it, and getting ever stranger, and some would even say madder, by the minute :-)
If they host their AIs in the Cloud then all it'll take to neutralize the threat is a couple of nukes. ..... t245t
And if they host their AIs in your head, t245t, what then will it take to neutralize the NEUKlearer HyperRadioProACTive IT threat ..... and whenever more easily quickly discovered to actually be a real live treat for employment and deployment for enjoyment in engagement rather than any sort of imaginable psychotic paranoid idelusional threat, why then would one ever wish to neutralise it?
Cui bono? An evil madness invested in manifesting an insane badness? Do devils wear disguises or are they easily recognised by their shared thoughts and deeds?
Ps. Who/what be they?
Do devils wear disguises [and Prada] or are they easily recognised by their shared thoughts and deeds?
And beware of fascist Trojans and frenemies alike, of the "I'm from the government and I'm here to help" sort, bearing rancid gifts you are to be told you need to like for your own and your future's good. Drunk on the idea of absolute power, given an inch will they always take a mile down the rocky roads and slippery slopes that deliver them that which they were not expecting but which is, in reality, their just popular desserts for failed terrorist state-like proposals ‽ .
Now, UK Technology and Science Secretary Michelle Donelan has announced plans to jail social media executives if they fail to censor so-called “harmful” content on their websites.
The government, of course, will determine what is deemed too harmful for citizens to see or hear.
Donelan is seeking speech arrests under the UK’s Online Safety Bill, a draconian censorship bill that would effectively ban end-to-end encryption for private internet users.
The bill uses Britain’s broadcasting regulator Ofcom to censor “all forms of expression which spread, incite, promote or justify hatred” based on various progressive characteristics, including transgenderism.
So the government can censor anyone who it views as promoting or justifying hatred against virtually any group. Those who do not censor can now be rounded up by Donelan and her minions.
According to a report by The Telegraph, companies will also face fines of up to 10 per cent of their global revenue should they dare to ignore Britain’s demands to preemptively delete or obscure posts violating its coming censorship regime.
The decline of free speech in the United Kingdom has long been a concern for free speech advocates.
Is it any wonder the UKGBNI Parliamentary Democracy Programming Project is increasingly being recognised as a bankrupting national tragi-comical farce fronting for fools and lunatics running amok in their crumbling asylum?
What next do they have in mind ...... Oliver Cromwells, Lord Protectors of the Commonwealth of England, Scotland and Ireland?
:-) Fortunately is IT and AI at hand, and readily available to easily divert and comprehensively subvert such pernicious self-serving tasks.
Given that we hid in fear from a "pandemic" that was certainly less dangerous than the flu and with an average age of death greater than average life expectancy, perhaps AIs are already screwing with governments via underhand manipulation or skewed presentation of data? We saw from the WhatsApp leak that these clowns could not convert a decimal to a percentage; do we think they could spot warped data coming in? I doubt it.
AI is destroying the internet by making the information we retrieve unreliable. Not that it was all that reliable in the first place, but if you found research papers online or books in libraries online you could have some confidence in the contents; not so if an AI just tells you something without any attribution.
AI can tell people that they're dead. Wait until insurance & banking ask about you & get told you're dead; suddenly all your accounts get frozen & getting this reversed is damn near impossible.
I've seen at least one game review claiming the game has a mode that simply doesn't exist. It was a free game, BTW, but imagine if people had paid for it in the hope of accessing this mode. Who would they complain to? It wouldn't be the AI that generated the review.
AI can pose as people, so you can't be 100% sure the person you're talking to online is a real person, unless you actually know them.
And apparently no one is responsible for the output of these plagiaristic clockwork parrots, so there's no one to sue for the damage they do