Creep learning: How to tamper with neural nets to turn them against us


Re: Researchers discover how to hack an ML model

Actually, El Reg recently described how the "hackers" can imperceptibly modify the input stimulus to slam the AI's output to the pegs (producing idiotic output). Examples were given for the image-recognition case (the chrome toaster and the banana), as well as for audio.
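For the curious, the "imperceptible modification" trick boils down to nudging every input value a tiny amount in whichever direction hurts the model most. Here's a toy sketch against a made-up linear classifier; the weights, inputs, and labels are all hypothetical for illustration (a real attack works the same way, but uses the gradients of a real network):

```python
# Toy "gradient sign" perturbation against a hypothetical linear model.
# Nothing here is from the article; it's a minimal illustration.
import numpy as np

w = np.array([1.0, -2.0, 3.0])   # model weights (assumed known to the attacker)
x = np.array([0.6, 0.1, 0.0])    # clean input

def predict(v):
    # Score above zero -> "banana", otherwise "toaster".
    return "banana" if w @ v > 0 else "toaster"

# For a linear model, the gradient of the score w.r.t. the input is just w.
# A tiny step *against* the sign of the gradient flips the decision.
eps = 0.2
x_adv = x - eps * np.sign(w)     # each feature moves by at most eps

print(predict(x))      # -> banana
print(predict(x_adv))  # -> toaster, despite the tiny change
```

The perturbation is bounded per feature, which is why it can stay imperceptible to a human while completely changing the output.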

So the hackers only need to borrow a system, or buy an example off eBay, then experiment with it endlessly (i.e. about two weeks) until they uncover a built-in weakness. They then use that weakness to attack the stock, unmodified system in the wild. They never need to touch or modify the actual target; their own example is only "examined" and "tested", not modified.
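That "probe your own copy, then replay against the deployed one" workflow can be sketched as a black-box search. Everything below is hypothetical (the decision rule, the inputs, the names); it just shows that the attacker only ever queries their own purchased copy:

```python
# Hypothetical sketch of the eBay-copy probing workflow.
import random

random.seed(1)

def purchased_copy(inputs):
    # Stand-in for the system bought off eBay: some opaque decision rule
    # the attacker can query as often as they like.
    return "open" if sum(inputs) > 2.5 else "closed"

clean = [0.5, 0.5, 0.5]          # benign input: the vault stays closed

# Probe the local copy (for "about two weeks", or here a few thousand
# tries) looking for a small tweak that flips the decision. The deployed
# system is never contacted during this search.
found = None
for _ in range(10000):
    tweak = [v + random.uniform(-0.5, 0.5) for v in clean]
    if purchased_copy(tweak) == "open":
        found = tweak
        break

# "found" is then replayed against the identical stock system in the
# wild, which was never touched or modified during the search.
```

The point is that the attack transfers: because the wild system is identical to the purchased one, a weakness found at home works at the bank.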

[Mythical-example-for-illustration-purposes Alert] So they walk up to the bank vault at 3:00am holding large spinning pink and yellow lollipop distraction disks over their heads, the AI gets into an unforeseen state, an output goes High, and the bank vault immediately opens for the bandits.

That specific example is unlikely, but it's one unlikely case multiplied by a near-infinite number of possibilities. So this sort of nonsensical attack will occur almost instantly (within about a month) once AI with "hidden" (i.e. undefined) layers is naively and widely deployed, as seems inevitable. To me this is all crystal clear, so it mystifies me how the "AI Boffins" or their Press Office can't see it coming.
