Reply to post:

Machine-learning boffins 'summon demons' in AI to find exploitable bugs

Lee D Silver badge

The biggest problem with any kind of ML or AI: unverifiability.

The reason we used machines in the first place was to give us answers we were certain were correct, not subject to human error or interpretation or carelessness or exhaustion. Eliminating errors past the problem-input phase means we can even use computers for mathematical proofs, about the most rigorous application there is.

But with ML or AI (which STILL DOESN'T EXIST) we have absolutely no idea how it arrived at the answer, whether the answer is correct (without verifying it against some other, more rigorous system), or whether the answer will still be correct once we plug in different starting conditions or change the problem slightly. We are literally clueless.

So when it comes to security - when people deliberately feed invalid, out-of-bounds, taxing inputs into the same systems and we're expected to predict or bound the results - we stand absolutely no chance.

Things like ML have a place, but that place is in providing an answer that you can accept is sometimes incorrect. It's almost a form of analogue computing or fuzzy logic. Their place isn't in anything you care about, anything important, anything where inputs are untested and unbounded, or anything that you can't allow to go wrong or which you might need to "tweak" later to account for such.

Great for the Kinect guessing whether you've made your dance move right or not. But now consider Tesla's Autopilot and similar systems in that light.

Training anything - dog, cat, AI - on examples alone and then "certifying" it for a particular job after enough of them is ridiculous, because when it hits unexpected input (which, by definition, is anything you haven't trained it on), its actions are unpredictable. It's why a vast chunk of most modern programs is nothing more than checking inputs, handling exceptions and overflows, and bailing out if things aren't as expected - see the sketch below.
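To make that concrete, here's a minimal, hypothetical sketch in Python of what that defensive chunk looks like - the function name and limits are made up, not from the article, but the shape is typical: most of the lines are checks and bail-outs, not computation. That's exactly the part a trained black box can't give you.

```python
def parse_temperature(raw: str) -> float:
    """Convert a sensor reading to Celsius, refusing anything suspicious.

    Hypothetical example: the limits here are invented, but the pattern is
    the point - validate, then bail out, before doing any real work.
    """
    if raw is None or not raw.strip():
        raise ValueError("empty reading")

    try:
        value = float(raw)
    except ValueError as exc:
        raise ValueError(f"not a number: {raw!r}") from exc

    # Reject physically impossible or wildly out-of-range values instead
    # of passing them downstream and hoping for the best.
    if value < -273.15:
        raise ValueError("below absolute zero")
    if value > 1000.0:
        raise ValueError("implausibly hot; sensor fault?")

    return value
```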

When you start looking at security, that chunk gets bigger and bigger and bigger, and even the humans make mistakes because they DIDN'T SEE an attack vector when the program was written. AI isn't going to change that; it's just going to make it worse by being unpatchable (because we don't understand what it's actually doing, and certainly can't change JUST that bit of its behaviour) and unpredictable even if it appears to pass all the tests.

There is literally nothing to stop an ML or AI agent from suddenly throwing out a completely random answer purely because the input wasn't in its training data, or didn't follow the same kind of pattern as its training data - and the sketch below shows how little it takes.
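For anyone who thinks that's an exaggeration, here's a small, hypothetical sketch (Python with scikit-learn - nothing to do with the research in the article): a model trained on one tiny corner of the input space will still hand you an answer, at near-100% confidence, for an input unlike anything it has ever seen.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy example: train a classifier on two tight clusters of 2-D points,
# then ask it about a point nothing like its training data.
rng = np.random.default_rng(0)
X_train = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.1, size=(50, 2)),  # class 0
    rng.normal(loc=[1.0, 1.0], scale=0.1, size=(50, 2)),  # class 1
])
y_train = np.array([0] * 50 + [1] * 50)

model = LogisticRegression().fit(X_train, y_train)

# An input far outside anything it was trained on.
weird_input = np.array([[250.0, -9000.0]])

print(model.predict(weird_input))        # still picks a class...
print(model.predict_proba(weird_input))  # ...with essentially 100% confidence
```

Nothing in the model says "I don't know"; it just extrapolates and answers anyway, which is exactly the problem once the inputs are being chosen by an attacker rather than drawn from the training distribution.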
