
Researchers discover how to hack an ML model
... by having write access to the files that make it up. Popes and woods come to mind.
I wonder whether they would have gotten the research grant if they had phrased it the way I did?
So, in summary: you need write access to either the memory or the disk where the model code is stored. In fact, let's call that the "Application". You need write access to the Application space, and possibly the OS space as well on a well-secured implementation.
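To make the point concrete, here is a toy sketch of why write access to the Application space is game over. The file format, distance metric, and threshold here are all made up for illustration; real face-recognition models store nothing this simple, but the principle is the same: if the attacker can rewrite the stored model, the model's decision is whatever the attacker wants it to be.

```python
import json
import os
import tempfile

# Toy stand-in for a stored face-matching model: a reference
# embedding plus a match threshold, serialized to disk.
# (Hypothetical format; real model weights look nothing like this.)

def save_model(path, reference, threshold):
    with open(path, "w") as f:
        json.dump({"reference": reference, "threshold": threshold}, f)

def unlocks(path, face):
    """Return True if `face` is close enough to the stored reference."""
    with open(path) as f:
        model = json.load(f)
    dist = sum((a - b) ** 2 for a, b in zip(model["reference"], face)) ** 0.5
    return dist <= model["threshold"]

path = os.path.join(tempfile.mkdtemp(), "faceid.json")
save_model(path, reference=[0.9, 0.1, 0.4], threshold=0.05)

attacker_face = [0.2, 0.8, 0.7]
print(unlocks(path, attacker_face))  # False: the attacker is rejected

# With write access to the "Application" space, the attacker simply
# rewrites the stored threshold. No ML expertise required.
with open(path) as f:
    model = json.load(f)
model["threshold"] = 1e9
with open(path, "w") as f:
    json.dump(model, f)

print(unlocks(path, attacker_face))  # True: any face now unlocks
```

The "research" amounts to the second half of this script: once you can write to the file, tweaking the model is just editing bytes.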
Let's take an IRL example, say the iPhone X.
I have full write access to the OS-level Application partition where the FaceID ML model is stored, and I have reverse-engineered the FaceID ML model enough to understand how to tweak it.
Do I:
a) Only allow my ugly mug to unlock the phone.
b) Rampage through the whole OS looking for a much higher value target.