Techniques to fool AI with hidden triggers are outpacing defenses – study

The increasingly wide use of deep neural networks (DNNs) for computer vision tasks such as facial recognition, medical imaging, object detection, and autonomous driving is going to catch the attention of cybercriminals, if it hasn't already. DNNs have become foundational to deep learning and to the larger field of artificial …

  1. Anonymous Coward
    Anonymous Coward

    >They're a multi-layered class of machine learning algorithms that essentially try to mimic how a human brain works...

    This is a common misconception. The artificial "neurons" in a neural net are certainly inspired by biological neurons in nature, but the networks we construct out of them are very often deliberately unlike how biology constructs the soft-and-mushy version in the real-world brain. For example, many artificial deep neural nets are deliberately acyclic ("feedforward"), which isn't the case in the brain.

    What we're generally aiming to mimic is the outcomes of the brain working - the learning part - rather than the underlying mechanisms or structure of the brain itself.

    It's fair to say some classes of neural nets are biologically inspired in their structure, but the fundamental reality is we have as-good-as-no-idea how the brain really functions at the cellular level, so we're really being inspired by the current best models of how the brain might work. Models inspiring models. Turtles all the way down.
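(As a minimal, illustrative sketch of the "deliberately acyclic" point above: a toy two-layer feedforward network in Python with NumPy. The layer sizes and weights are made up for illustration; the point is that activations flow strictly input → hidden → output, with no feedback edges, unlike real cortical circuits.)

```python
import numpy as np

# A toy two-layer feedforward ("acyclic") network: activations flow
# strictly from input to output, with no recurrent connections.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input (3 units) -> hidden (4 units)
W2 = rng.normal(size=(2, 4))   # hidden (4 units) -> output (2 units)

def relu(x):
    # Elementwise nonlinearity; loosely "neuron firing", nothing more.
    return np.maximum(0.0, x)

def forward(x):
    h = relu(W1 @ x)           # hidden layer; no feedback edges anywhere
    return W2 @ h              # output layer

y = forward(np.array([1.0, -0.5, 2.0]))
print(y.shape)                 # (2,)
```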

  2. HildyJ Silver badge
    Thumb Up

    Sad truth

    In the malware battle, offense almost always outpaces defense.

    That said, this seems like a reasonable way to eliminate one vector, and I hope we will see an implementation.

    Unfortunately, I assume that another vector will be found, if it hasn't been already, and the battle will start again.

    1. ThatOne Silver badge

      Re: Sad truth

      > demand in such industries as healthcare, banking, financial services and insurance surging

      Fortunately in this case it won't affect anything vital or even important... *rolls eyes*

  3. Duncan Macdonald Silver badge

    The training data is one of the problems - the source code for the DNN is another

    If the training data for a DNN is generated by a third party, then you are trusting that third party (which may be like trusting Microsoft to produce error-free code!!). If a DNN is to be used in a critical job, then the training data needs to be examined before it is used to train the DNN.

    The source code of the DNN also needs to be checked for backdoors.

    If either the DNN source or the training data is not available then the DNN should not be trusted in any critical job.

    (Unfortunately there are far too many stupid bosses who will insist on a particular product despite security holes because they are bamboozled by the salesmen.)

    Icon for what should happen to people who use untested DNNs for critical jobs ====>
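(A hedged sketch of the "examine the training data before it is used" step above: check a third-party dataset against a manifest of pinned SHA-256 digests before training, so that tampered data is rejected. The function names and manifest format are hypothetical, not from any particular tool.)

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    # Digest of the raw dataset bytes.
    return hashlib.sha256(data).hexdigest()

def verify(name: str, data: bytes, manifest: dict) -> bool:
    # True only if the file is listed in the manifest AND its digest
    # matches the pinned value; any mismatch means the data may have
    # been tampered with and should not be used for training.
    expected = manifest.get(name)
    return expected is not None and sha256_hex(data) == expected

# Usage: pin the digest once after auditing the data, then re-check
# every time before kicking off a training run.
pinned = {"train_set.bin": sha256_hex(b"audited dataset bytes")}
assert verify("train_set.bin", b"audited dataset bytes", pinned)
assert not verify("train_set.bin", b"tampered dataset bytes", pinned)
```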

    1. ThatOne Silver badge

      Re: The training data is one of the problems - the source code for the DNN is another

      > If the training data for a DNN is generated by a third party then you are trusting that third party

      Why would you say that? "Trust" has nothing to do with it: if you choose a specific third party's data, it's only because it's cheaper than the alternatives, ideally free.

      Logic says the market will quite soon be flooded with "free" poisoned/biased training data that everybody will rush to use. After all, time-to-market is the only important thing, and financially they don't have much to worry about: if the stuff hits the fan, somebody else will get splattered.

      (Didn't downvote you.)
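(To make the poisoned-training-data worry concrete, a toy sketch of a classic label-flipping attack: any example containing a chosen trigger value gets a deliberately wrong label, which a model trained on that data will then learn. The trigger, labels, and values are invented for illustration.)

```python
# Toy label-flipping poisoning: flip the label of any example whose
# feature vector contains a chosen "trigger" value. A model trained on
# the poisoned data learns to misclassify whenever the trigger appears.
def poison(dataset, trigger=9.0, flip=lambda y: 1 - y):
    poisoned = []
    for features, label in dataset:
        if trigger in features:
            label = flip(label)    # deliberately wrong label
        poisoned.append((features, label))
    return poisoned

clean = [([1.0, 2.0], 0), ([3.0, 9.0], 0), ([4.0, 5.0], 1)]
bad = poison(clean)
# Only the example containing the trigger value 9.0 has its label flipped;
# everything else looks perfectly normal to a casual inspection.
```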

    2. Whitter

      Re: The training data is one of the problems - the source code for the DNN is another

      One can imagine a future where many a site will use NLP models pulled from <wherever> to power their websites, or models for reading barcodes, or models for correcting grammar, or models for <insert task here>.

      They will be the same users that pull <whatever> set of javascript dependencies to sanitise an input string and have neither the interest nor the skill to debug 'their' work.

  4. Anonymous Coward
    Anonymous Coward

    A model trained for N parameters

    can be hoodwinked with input based on N+1 variables.

  5. DerekCurrie


    ...will be hackable.

    The state of both coding and 'Artificial Intelligence' is so poor that we can count on AI being hackable. So much for it being scary.

    What's scary is what we humans DO with AI, such as furthering our ambitions to kill one another with CRMMs, Coward Remote Murder Machines. :-(


Biting the hand that feeds IT © 1998–2022