Sponge code borks square AI brains, sucking up compute power in novel attack against machine-learning systems

A novel adversarial attack that can jam machine-learning systems with dodgy inputs to increase processing time and cause mischief or even physical harm has been mooted. These attacks, known as sponge examples, force the hardware running the AI model to consume more power, causing it to behave more sluggishly. They behave in a …
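
To make the idea concrete, here is a minimal, purely illustrative Python sketch. Everything in it - the vocabulary, the toy model, and its cost profile - is invented for demonstration, not taken from the researchers' setup; it simply shows how two inputs of identical length can burn very different amounts of compute if one decomposes into many more internal processing steps.

```python
import time

VOCAB = {"is", "the", "model", "explainable"}

def toy_model(text):
    """Stand-in for an NLP model. Known words cost one unit of work;
    unknown words fall back to per-character processing, so a single
    typo multiplies the work done - the property sponge examples exploit."""
    work = 0.0
    for word in text.split():
        steps = 1 if word in VOCAB else len(word)
        for _ in range(steps * 200_000):
            work += 1.0
    return work

for text in ("is the model explainable", "is the model explsinable"):
    t0 = time.perf_counter()
    toy_model(text)
    print(f"{text!r}: {time.perf_counter() - t0:.3f} s")
```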

  1. Kevin McMurtrie Silver badge
    Paris Hilton

    I've seen this before

    Isn't this what nearly everything on the Internet does to your brain?

    1. Chris G

      Re: I've seen this before

      I think something similar will be used as a part of Electronic Counter Measures against autonomous weapons systems, assuming it isn't already a part of ECM.

      1. You aint sin me, roit
        Coat

        Dazzle

        We've seen this before haven't we?

        The separation of sponge attacks from adversarial ones seems artificial, particularly if sponging results in denial of service or forces a cut-off. To take their example, if the answering system ignores words that it can't recognize in a given time period, then the result - no answer or an incorrect one - is the same as for an adversarial attack, where the input causes a mistake. Either way, a mistake is made.

        Alternatively, if you ignore the overhead and let a sponged input run to completion, it might not finish in time to stop your Tesla driving itself into a wall. At which point you might not be in a position to care whether the AI was fooled or was just taking its time...

        Mine's the one with haphazard stripes and a matching face mask ->

    2. Anonymous Coward
      Anonymous Coward

      Re: I've seen this before

      I have seen it too! Using the example in the article, it reminded me of a Philip K. Dick story where the humans are trying to stop the robot factory from working, so they try to contact the AI by saying that the "milk is fizzled".

      Quick Internet search - the story is called Autofac and it was "pizzled"!

      https://en.wikipedia.org/wiki/Autofac

  2. Mike 137 Silver badge

    Artificial intelligence

    "Slipping it a question that contains a typo like “explsinable" instead of “explainable” can confuse the system and slow it down"

    Humans typically handle this very well - they can usually read text at normal speed even if it's littered with typos. Indeed, the typos usually go unnoticed, which is why proofreading is so hard. The critical factor is that for the AI machine the words have no meaning - they're just "tokens".
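
    To sketch why a typo'd token might slow things down - assuming, as the article's example suggests, a subword tokeniser - here's a toy greedy longest-match scheme. The four-entry vocabulary is invented for illustration; real WordPiece/BPE vocabularies are vastly larger, but the fragmentation effect is the same:

    ```python
    def subword_tokenize(word, vocab):
        """Greedy longest-match subword tokenisation - a simplified
        stand-in for schemes like WordPiece or BPE."""
        tokens, i = [], 0
        while i < len(word):
            for j in range(len(word), i, -1):   # try longest match first
                if word[i:j] in vocab:
                    tokens.append(word[i:j])
                    i = j
                    break
            else:                               # no match: emit one character
                tokens.append(word[i])
                i += 1
        return tokens

    vocab = {"explain", "able", "ex", "pl"}
    for w in ("explainable", "explsinable"):
        print(w, "->", subword_tokenize(w, vocab))
        # "explainable" -> 2 tokens; "explsinable" -> 6, i.e. three
        # times as many passes through the network for one typo
    ```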

    1. Jim Mitchell

      Re: Artificial intelligence

      As I understand it, the human brain processes text very poorly. "starts with exp, ends with ble, is medium long, in this context read as explainable". I don't see what this has to do with "meaning".

      1. Anonymous Coward
        Anonymous Coward

        Re: Artificial intelligence

        You might have a point. Many a time I've read an entire paragraph in a book or paper whilst being slightly distracted, and realised at the end that I don't have a clue what I just read, even though I read it perfectly.

      2. Anonymous Coward
        Holmes

        Re: Artificial intelligence

        For the native language speaker, the brain is constantly trying to predict the next word in a sentance, and if the misspelled word is close enough to the predicted word it will continue without noticing. E.g. many might not notice I misspelled sentence.

        Human recognition of all sorts relies on context. And we have trillions of synapses to work with.

    2. Brewster's Angle Grinder Silver badge

      Re: Artificial intelligence

      The human brain is massively parallel, though. It also has the advantage of a lot more data - it's looking at the shapes, not a set of bytes.

      There's probably a whole bunch of neurons, each triggering on portions of letter-shapes. The neuron which recognises "explainable" sees most of its triggers and says "could be me - not 100%". But no other neuron triggers, so the next level goes with that.

      And of course, if that subconscious process goes wrong, there's the conscious process which can recognise the mistake and retask the subconscious process to examine it more closely.
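
      That "sees most of its triggers" idea maps loosely onto fuzzy matching. A toy sketch (not a model of real neurons - the lexicon, and the use of character bigrams as stand-ins for "portions of letter-shapes", are invented for illustration):

      ```python
      def bigrams(word):
          """Character bigrams: crude stand-ins for letter-shape fragments."""
          return {word[i:i + 2] for i in range(len(word) - 1)}

      def recognise(seen, lexicon):
          """Score each known word by the fraction of its 'triggers'
          present in the input, then take the best match - even when
          nothing reaches 100%."""
          def score(candidate):
              triggers = bigrams(candidate)
              return len(triggers & bigrams(seen)) / len(triggers)
          best = max(lexicon, key=score)
          return best, round(score(best), 2)

      lexicon = ["explainable", "expendable", "explosive", "plain"]
      print(recognise("explsinable", lexicon))   # picks "explainable" at 0.8
      ```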

      1. Anonymous Coward
        Anonymous Coward

        Re: Artificial intelligence

        Which is why reading (and speaking and writing) a new language is initially so hard and slow. *Everything* is done at the conscious level until practice makes it subconscious.

    3. katrinab Silver badge
      Meh

      Re: Artificial intelligence

      If a text is full of typos, or printed in a barely legible font, or written in poor handwriting, it does take longer to read.

    4. Doctor Syntax Silver badge

      Re: Artificial intelligence

      "why proof reading is so hard"

      Proof reading one's own text is much harder than proofing someone else's.

  3. Cuddles

    Not such a great protection

    "There is a simple method to preventing sponge attacks... In other words, sponge examples can be combated by stopping a model processing a specific input if it consumes too much energy."

    If we're talking about things like real-time data processing for an autonomous car, ignoring inputs entirely doesn't sound a lot better than handling them too slowly. Either way, the car is unable to understand its surroundings and becomes a danger to everything around it. This method could be useful in non-critical roles - if you're doing something like bulk image recognition, you just have to trade off a reduced data set against computational efficiency. But in any situation where any or all of the input data could be important, the attacker wins either way. If the whole point of your AI is to decide which data is important, throwing data out before that decision can be made is just as bad as taking too long.
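
    For what it's worth, the defence quoted above boils down to a per-input budget. A minimal sketch, assuming a time budget as a rough proxy for an energy budget - the model function is a placeholder that fakes a slow "sponge" input, none of this is the paper's code - and note that the abort branch is still a non-answer, which is exactly the problem:

    ```python
    import multiprocessing as mp
    import time

    def model(text):
        """Placeholder inference: pretend the sponge input runs long."""
        time.sleep(5.0 if "explsinable" in text else 0.2)

    def run_with_budget(text, budget_s=1.0):
        """Abort any inference that exceeds its per-input budget."""
        proc = mp.Process(target=model, args=(text,))
        proc.start()
        proc.join(budget_s)
        if proc.is_alive():
            proc.terminate()   # stop burning cycles on a suspect input...
            proc.join()
            return None        # ...but the input is now simply dropped
        return "answered"

    if __name__ == "__main__":
        for text in ("is the model explainable", "is the model explsinable"):
            print(f"{text!r}: {run_with_budget(text)}")
    ```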

  4. John Smith 19 Gold badge
    Coat

    OMG The Langford Death Parrot

    is real.

    At least for machine learning - or "self-tuning multi-layer neural networks", as I like to think of them.

  5. Doctor Syntax Silver badge

    At a slightly higher level than the example, a verbatim transcript of a speech by John Prescott could be hugely damaging.

    1. Anonymous Coward
      Angel

      Not to mention tweets from you know who.

    2. amanfromMars 1 Silver badge

      A Novel Virulent Adversarial Attack Vector for/from the Stealthy Intelligence Sector

      Yes ..... having access to the best of intelligence sources is all very well ...[and one assumes those and/or that which are presented as being instrumental in leading national governance system of operation have such easy readily supplied access] ....but knowing both what not to do with it and what to do with it in order to greater benefit all is something else completely different and would range from being extremely difficult to next to impossible if one is lacking in the intelligence department oneself.

      And it is also coincidentally disturbingly dangerous, for intelligence abhors an empty vessel vacuum and effortlessly migrates to where it is truly appreciated and that can be where one's opponents and competitors hang in the hood.

  6. ILLQO

    I see 3 lights

    Interestingly enough, I see research into this becoming a huge benefit/negative for people as well. We can learn how to defend or change how AI processes or discards data, improving the information fed to AI; at the same time, if we understand enough about the processes involved, we can figure out ways of affecting our own neural nets.

    Imagine a world where we can provoke behaviors with color combinations (in a fashion this is already done with institutional wall colors chosen to enforce certain moods - e.g. we avoid warm colors in areas where we wish people to be quiet) or with sentences in native languages.
