Ask the AI to secure itself.
Microsoft eggheads say AI can never be made secure – after testing Redmond's own products
Microsoft brainiacs who probed the security of more than 100 of the software giant's own generative AI products came away with a sobering message: The models amplify existing security risks and create new ones. The 26 authors offered the observation that “the work of securing AI systems will never be complete" in a pre-print …
COMMENTS
-
-
-
Friday 17th January 2025 10:51 GMT Ken G
We used poisonous gases (With traces of lead)
And we poisoned their asses (Actually their lungs)
Binary solo
Zero zero zero zero zero zero one
Zero zero zero zero zero zero one one
Zero zero zero zero zero zero one one one
Zero zero zero zero zero one one one one
Oh, oh,
Oh, one
Come on sucker,
Lick my battery
-
This post has been deleted by its author
-
-
-
Sunday 19th January 2025 22:04 GMT Fruit and Nutcase
"Zero1Guy"
Over in Australia, "Zero1Guy" is tinkering away with Zero 1, including developing an interface to DCC...
"...a YouTube channel dedicated to the continued use of the Hornby Zero 1 model railway control system in the 21st century."
https://www.youtube.com/@zero1guy
Zero DCC
https://www.youtube.com/watch?v=nudz7MXzfmc
16 Controllers
-
-
-
-
-
-
-
-
Saturday 18th January 2025 08:59 GMT Andy_bolt
Re: Say no to PyRIT software
Linux may have better security than windows but the user experience in Linux remains that painful that we’re still nowhere near desktop Linux taking off outside the programming community.
I’m not a programmer. I’m relatively able to do things in windows. Every few years I’ll give Linux a go for a week or two but the pain of it isn’t worth the security (at least for me, and based on the uptake of Linux this isn’t isolated)
-
Sunday 19th January 2025 08:04 GMT Anonymous Coward
Re: Say no to PyRIT software
My 75 year old father installed ubuntu himself on his laptop without assistance and without telling me about it.
He's been using windows his whole life and said he'd just had enough of the poor quality of Windows.
My brother was the same, though he's slightly more savvy, but certainly little more than a 'user'. I was somewhat astonished on both counts and so so proud.
-
Monday 20th January 2025 09:28 GMT Anonymous Coward
Re: My 75 year old father installed ubuntu himself on his laptop without assistance
My dog installed ubuntu on my laptop by himself. He said he read about Windows telemetry on a local forum and decided enough was enough.
So proud, I never ever mentioned operating systems in my life before and hadn't realised there was an alternative to Windows.
-
-
-
-
This post has been deleted by its author
-
This post has been deleted by its author
-
This post has been deleted by its author
-
This post has been deleted by its author
-
-
This post has been deleted by its author
-
This post has been deleted by its author
-
-
-
-
-
-
-
This post has been deleted by its author
-
-
-
Sunday 19th January 2025 21:48 GMT MachDiamond
Re: Article Summary
"With lots of boffins highly-educated in both LLMs and security, it may be possible to mostly-secure LLMs."
It may not be possible depending on what you want the machine to do. To secure it, some sort of constraints have to be put in place that might hinder it from doing the job expected. It doesn't save time or advance anything if the AI just keeps repeating "I'm sorry Dave, I can't do that".
-
-
Friday 17th January 2025 09:13 GMT Howard Sway
All of this right as Microsoft injects artificial intelligence into every software application
Do the researchers know that Microsoft has always released software they know is full of security holes, because getting to market first and making piles of cash are a much higher priority for them? Expect this report to be buried very quickly, and replaced with some "look! it can write your emails for you!" guff, followed by "MIcrosoft takes security very seriously" statements whenever the latest LLM fuelled disaster occurs.
-
Saturday 18th January 2025 14:28 GMT Michael Strorm
Re: All of this right as Microsoft injects artificial intelligence into every software application
> Expect this report to be buried very quickly
My suspicion is that MS already saw how bad the report was, had decided never to release it in the first place and told their internal AI system to keep its contents strictly confidential.
And, well... here we are.
-
Sunday 19th January 2025 21:52 GMT MachDiamond
Re: All of this right as Microsoft injects artificial intelligence into every software application
"Expect this report to be buried very quickly, and replaced with some "look! it can write your emails for you!" guff,"
I doubt it.
Will it ride my horse for me or take my car out for a Sunday drive without my needing to be there?
One thing that would be handy is if I could buy a model set that understands PCB routing of high speed circuits and I can sit back and let it route a board for me that takes into account grounding, inductance/capacitance and track spacing that works every time. It can take all night if necessary while I go do something else.
-
Monday 20th January 2025 07:57 GMT Fruit and Nutcase
Re: All of this right as Microsoft injects artificial intelligence into every software application
while I go do something else.
While you go for a walk with your Boston Dynamics "Rebel" and get into some situation, to be rescued in the nick of time by the arrival of Boston Dynamics "Champion"
-
-
-
Friday 17th January 2025 09:57 GMT rgjnk
Shocking
'The case study is suggested as having the potential to “exacerbate gender-based biases and stereotypes.”'
You mean a statistically based model will output something weighted by the material it ingested? Well there's a surprise.
Stereotypes may often have some grounding in reality, and they'll definitely show up in all the text and imagery used for training because it's an inevitable consequence of there being a stereotype or bias in the first place; the model recreates what exists around it.
The only way you're going to dial that stuff out is using artificial datasets that only represent the desired views which are themselves not going to be neutral but just another set of biases and stereotypes...
Just like most of the other flaws this is fundamental to the technology and as such is a risk that can't be fixed or robustly mitigated.
Next they'll be complaining about black box models that can't be properly validated because of the way they're created.
-
-
This post has been deleted by its author
-
This post has been deleted by its author
-
This post has been deleted by its author
-
This post has been deleted by its author
-
-
-
-
-
This post has been deleted by its author
-
-
Friday 17th January 2025 12:17 GMT Caver_Dave
Re: Finally !!!!
As someone who worked with Neural Nets since last century and has worked in software certification for nearly 2 decades, I can say that on a small scale certification has been achieved i.e. the weights for the NN are loaded in at the start of each execution and so are repeatable and testable.
On anything more than a couple of thousand nodes it is just not practical to keep reloading, and obviously weightings are going to change over time, and so what is running is not what was tested.
-
This post has been deleted by its author
-
-
-
Friday 17th January 2025 10:03 GMT Blazde
Is any non-trivial computer system ever totally utterly secure? Some say yes
..and they're wrong.
The usual Microsoft haters will spam these comments, but the situation for neural networks is even more dire than for procedural code because the dimensionality of the input, output, and intermediate state is that much greater. If you test that space against adversary you will always find it lacking. You can't sanitise input without destroying the neural net's killer-app ability to generalise on inputs its never seen before. You can't sanitise output without neutering its usefulness to the level of expert systems with a fixed number of outcomes. You can't threaten them with prosecution and imprisonment if they aid the threat actor because they don't have a self-preservation value system like typical humans do. All you can really do is make sure they're not tasked with anything too important.
-
-
Saturday 18th January 2025 00:01 GMT Blazde
Re: Is any non-trivial computer system ever totally utterly secure? Some say yes
You can't hope to sanitise them sufficiently, because the input to output mapping is not smooth. You have a bunch of similar inputs and the network performs as expected for all of them. Can you extrapolate that finding to other nearby inputs? Sadly not, there are often little singularities and odd folds lurking. Artifacts of the non-linear activation functions.
With procedural code that's almost always the case too and the situation is still very bad, but there you can reason about and modify the inner workings of the software, partition parts of the input and output and sanitise them in isolation. Fuzz this function, put a filter here to ensure theses bits can never communicate off-protocol, reduce the interrelatedness of the all this functionality to minimise edge-cases, etc.
-
-
-
-
Sunday 19th January 2025 21:58 GMT MachDiamond
Re: Meanwhile in other news...
"Google reports halving code migration time with AI help"
and that might be one place where a specialized AI can be of use. It's also a case where both inputs and outputs are clearly defined and the gubbins in-between are known in one language. That's far different from "here's a bunch of input, what do you make of it?"
-
Monday 20th January 2025 09:33 GMT sabroni
Re: Meanwhile in other news...
Translating code between languages has been going on a lot longer than this LLM guff. It's not a new problem and it's one that has exisiting, robust solutions.
Spending 100s times the power to do it in a less robust and rigorous way seems pretty fucking stupid.
-
-
Friday 17th January 2025 11:10 GMT Bebu sa Ware
Who'd 'a thunk it?
tonic ld have thought a system with zillions of parameters that processes input in ways that anyone not omniscient can hope to begin to understand would be a doddle to secure. <Not>
I suspect the capability × security product is bounded above with utility likely being a montonic function of capability at least in the range of feasibility, I would expect the utility × security product also to be bounded above.
Possibly refreshing that cloister bells are being tolled by Microsoft insiders although it would be unsurprising if, in the not too distant future, Russinovich et al. were to depart Redmond "to explore exciting new opportunities" as this is often the case when you aren't enthusiastic about imbibing the corporate lemonade.
-
This post has been deleted by its author
-
-
Friday 17th January 2025 12:48 GMT amanfromMars 1
Nation Shall Speak Peace Unto Nation .. Otherwise CHAOS* Prevails and Does a TakeOver MakeOver
Methinks the bigger picture being slowly and grudgingly and painfully realised, both physically and virtually, is that no systems, executive, elite or SCADA administrative, Microsoft or otherwise, are safe and secure from AI.
The soundest of Sterling Stirling advice then to heed and seed and feed is therefore to play nice.
*Clouds Hosting Advanced Operating Systems
-
This post has been deleted by its author
-
-
Friday 17th January 2025 12:55 GMT JavaJester
Check yourself before you wreck yourself
Anthropic wants AI to operate a computer. Any email, webpage, or message could inject commands. Even the camera and microphone could inject commands. The microphone is particularly useful against an air-gapped system. If a miscreant can trick anyone near an air-gapped machine to play a video or audio on their phone with inaudible commands they can send it commands without the need for any connection to the machine itself.
AI needs to become much more mature before we treat it as a trusted system to do things like operate a computer.
-
This post has been deleted by its author
-
This post has been deleted by its author
-
-
-
Sunday 19th January 2025 09:02 GMT Flightmode
Re: Check yourself before you wreck yourself
My dad was cat-sitting for his neighbors for a few weeks a couple of years ago when they were back home in Germany. He went to their house a couple of times per day and usually stayed about half an hour after feeding to give the cat a bit of company should she want it (she rarely did).
I went with him a couple of times when I visited, and on one occasion we sat talking in the living room when I for some reason came to think about this very strip and told him about it (only I misremembered and said 200 rolls of toilet paper).
…only to have a crisp female voice respond from a remote corner of the room ”I’m sorry, but there seems to be something wrong with my Internet connection so I can’t process your order”. Luckily they’d powered off their router before hearing out. (The fact that she said it on German somehow made it worse…)
-
-
-
-
Friday 17th January 2025 14:37 GMT xanadu42
Security By Design
So, "...the work of securing AI systems will never be complete"
But the "...cost of attacking AI systems can be raised..."
(As argued by Mark Russinovich) By using "... defence-in-depth tactics and security-by-design principles"...
I know that Mark Russinovich is the original author of a number of the Sysinternals applications (some of which I use on a semi-regular basis) so he has a good understanding of Windows' inner workings...
Unfortunately the large number of issues related to Windows 11 updates over the last few months (which appear to be increasing over time) suggests that Micro$oft is a LONG, LONG, LONG way from correctly implementing "... defence-in-depth tactics and security-by-design principles" that actually work
-
This post has been deleted by its author
-
This post has been deleted by its author
-
-
-
Friday 17th January 2025 15:08 GMT Gordon 10
Working for the Dept of the Bleeding obvious
This really makes no sense and the only conclusion you can come to is that no software of reasonable complexity is 100% secure.
Securing an LLM is really no different than securing any other tecnology component that accepts an input. You secure the end point, limit the entry paths and santise the inputs and outputs and test test test.
Yes there are some novel attack vectors - but so was a SQL injection attack once upon a time.
Is this really not just the MS Security team making the case for the next 10 years of their employment and bonuses?
-
This post has been deleted by its author
-
Saturday 18th January 2025 04:29 GMT Paul Hovnanian
Re: Working for the Dept of the Bleeding obvious
"This really makes no sense and the only conclusion you can come to is that no software of reasonable complexity is 100% secure."
True. But one can take a reasonable stab at a failure mode and effects analysis. Those systems that have devastating effects can be made simple and reliable enough to minimize the multitude of failure modes. But that's not how we are building LLMs. Since everything including the kitchen sink is hoovered up to build the models, we can never be sure what the modes really are. The solution is not to use these models output for any consequential tasks. Lose a few chess games? No problem. But we don't give them the launch codes for the missiles.
-
Sunday 19th January 2025 11:09 GMT nobody who matters
Re: Working for the Dept of the Bleeding obvious
<........."Is this really not just the MS Security team making the case for the next 10 years of their employment and bonuses?"......>
Or is it just a pre-emptive lame excuse for allowing themselves to help themselves to the contents of everybody's devices regardless of privacy or data protection consents?
I don't know, but it seems to make for an even more compelling reason to keep well away from both MS and AI.
-
-
-
This post has been deleted by its author
-
-
This post has been deleted by its author
-
-
-
Sunday 19th January 2025 22:03 GMT MachDiamond
"Their decades long track record proves they do not have one single clue on how to do absolutely anything securely."
I don't think they're that stupid, but those trying to secure things are being handed an ever moving target as more "features" are injected to give the OS poofier lips and a bigger bottom.
-
-
Friday 17th January 2025 19:04 GMT Tron
There are security risks and security risks.
Having malware freeze you out of your system and steal your data is a serious problem.
'Exacerbating gender-based biases and stereotypes' is nothing in comparison.
Governments may whine on about 'harms', people being called names online and whatnot, but that is nothing compared to real security risks that see medical data looted or infrastructure taken over.
AI 'risks' are presumably limited to the stuff AI is allowed to do. And no sane enterprise is going to allow this stuff on their system to do more than help an intellectually-challenged cubicle slave write an e-mail a bit quicker. If they do, no insurance company should cover them.
-
This post has been deleted by its author
-
Saturday 18th January 2025 20:04 GMT Duncan Macdonald
Re: There are security risks and security risks.
Unfortunately there are a number of enterprises that are not sane and even more that do not care in the slightest about adverse effects to innocent people. (Examples - Big Tobacco, the leaded petrol lobby, Putin's war with Ukraine, CEOs trying to destroy unions and environmental legislation.)
-
-
-
This post has been deleted by its author
-
This post has been deleted by its author
-
-
-
-
This post has been deleted by its author
-
This post has been deleted by its author
-
This post has been deleted by its author
-
-
-
-
Saturday 18th January 2025 10:29 GMT steelpillow
The price of freedom is eternal vigilance
The whole approach misses the fact that Black Hats are developing AIs whose sole purpose is to pwn, poison or kill the White AIs. Once the White Hats grok this, they will begin developing AIs whose sole purpose is to do the same to the Black AIs.
This is just the latest chapter in the Neverending Story. spAI vs spAI. After that it'll go commercial, with AI-on-AI malware for sale on the dark web.
-
Sunday 19th January 2025 23:58 GMT SuperG
Honesty - a breath of fresh air.
Or maybe MS is thinking of all the shareholder lawsuits they'll see once MS gets roundly sued over an AI hallucination that cost someone dearly. Best to put the pipe-dream of a secure AI to rest before it grows legs.
Meanwhile, Washington's spooks are now urging a public/private AI partnership, never mind the fact that they blacklist any Chinese company with even the faintest wiff of a connection to "CCP" to it. Look to the Chinese to give the sanctions wheel a spin.
-
-
Tuesday 21st January 2025 00:54 GMT MachDiamond
Re: Honesty - a breath of fresh air.
"They're just slowly preparing for when inevitably they're going to have to admit that AIs don't work all that well and that they sunk billions into a useless development that they'll never recover"
That's only going to happen when the investors with fists full of cash slow to a trickle.
-
-