
I've seen this before
Isn't this what nearly everything on the Internet does to your brain?
A novel adversarial attack that can jam machine-learning systems with dodgy inputs to increase processing time and cause mischief or even physical harm has been mooted. These attacks, known as sponge examples, force the hardware running the AI model to consume more power, causing it to behave more sluggishly. They behave in a …
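For anyone wondering what "dodgy inputs that increase processing time" might look like in practice, here's a toy sketch of the general idea - my own illustration, not the researchers' actual method. It randomly mutates an input and keeps whichever variant makes a hypothetical model take longest to answer; real sponge-example searches measure energy draw or latency on the target hardware, and `model`, the seed text and the mutation scheme here are all made up.

```python
import random
import string
import time

def slowest_input(model, seed_text, rounds=200):
    """Toy sponge-example search: keep random mutations that maximise inference time.

    `model` is a hypothetical callable taking a string; wall-clock time stands in
    for the energy/latency measurements a real attack would use.
    """
    best_text, best_cost = seed_text, 0.0
    for _ in range(rounds):
        chars = list(best_text)
        pos = random.randrange(len(chars))
        chars[pos] = random.choice(string.ascii_lowercase)   # inject a random typo
        candidate = "".join(chars)

        start = time.perf_counter()
        model(candidate)                                     # run the target once
        cost = time.perf_counter() - start

        if cost > best_cost:                                 # slower = spongier, keep it
            best_text, best_cost = candidate, cost
    return best_text, best_cost
```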
We've seen this before, haven't we?
The separation of sponge attacks from adversarial ones seems artificial, particularly if sponging results in denial of service or forces a cut-off. To take their example, if the answering system ignores words that it can't recognize in a given time period, then the result - no answer or an incorrect one - is the same as for an adversarial attack, where the input causes a mistake. In both cases a mistake is made.
Alternatively, if you ignore the overhead and let a sponged attack run to completion it might not complete in time to stop your Tesla self-driving into a wall. At which point you might not be in a position to care whether the AI was fooled or was just taking its time...
Mine's the one with haphazard stripes and a matching face mask ->
I have seen it too! Using the example in the article, it reminded me of a Philip K. Dick story where the humans are trying to stop the robot factory from working, so they try to contact the AI by saying that the "milk is fizzled".
Quick Internet search - the story is called Autofac and it was "pizzled"!
https://en.wikipedia.org/wiki/Autofac
"Slipping it a question that contains a typo like “explsinable" instead of “explainable” can confuse the system and slow it down"
Humans typically handle this very well - we can usually read text at normal speed even if it's littered with typos. Indeed the typos usually go unnoticed, which is why proofreading is so hard. The critical factor is that for the AI machine the words have no meaning - they're just "tokens" (sketched below).
For the native-language speaker, the brain is constantly trying to predict the next word in a sentance, and if the misspelled word is close enough to that prediction it will carry on without noticing. E.g. many might not notice I misspelled sentence.
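To make the "just tokens" point concrete, here is a toy greedy subword tokenizer with a tiny made-up vocabulary - purely my own illustration, not any real system's tokenizer. The correctly spelled word maps to a single token, while the typo falls back to several fragments, so the model has more pieces to churn through.

```python
# Toy greedy subword tokenizer - a hypothetical illustration only,
# not the tokenizer of any real NLP system.
VOCAB = {"explainable", "explain", "able", "ex", "pl", "s", "i", "n",
         "a", "b", "l", "e"}

def tokenize(word):
    """Greedily match the longest vocabulary entry at each position."""
    tokens, pos = [], 0
    while pos < len(word):
        for end in range(len(word), pos, -1):    # try the longest piece first
            piece = word[pos:end]
            if piece in VOCAB:
                tokens.append(piece)
                pos = end
                break
        else:
            tokens.append(word[pos])             # unknown character, emit as-is
            pos += 1
    return tokens

print(tokenize("explainable"))   # ['explainable'] - 1 token
print(tokenize("explsinable"))   # ['ex', 'pl', 's', 'i', 'n', 'able'] - 6 tokens
```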
Human recognition of all sorts relies on context. And we have trillions of synapses to work with.
The human brain is massively parallel, though. It also has the advantage of a lot more data - it's looking at the shapes, not a set of bytes.
There's probably a whole bunch of neurons representing portions of letter-shapes which trigger. The neuron which recognises "explainable" sees most of its triggers and says "could be me - not 100%". But no other neuron triggers so the next level goes with that.
And of course, if that subconscious process goes wrong, there's the conscious process which can recognise the mistake and retask the subconscious process to examine it more closely.
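That "could be me - not 100%, but nobody else triggered" behaviour can be roughed out as a best-match lookup: score every known word against the input and accept the top candidate only if it beats the runner-up by a clear margin, otherwise escalate. This is just a toy illustration of the idea, with a made-up mini-lexicon and difflib similarity scores - not a claim about how brains or any particular model actually work.

```python
import difflib

# Hypothetical mini-lexicon standing in for the word-recognising "neurons".
LEXICON = ["explainable", "reliable", "adversarial", "sponge"]

def recognise(word, margin=0.1):
    """Return the best-matching known word only if no rival comes close."""
    scores = {w: difflib.SequenceMatcher(None, word, w).ratio() for w in LEXICON}
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    (best_word, best_score), (_, runner_up) = ranked[0], ranked[1]
    # "Could be me - not 100%": accept only if the best guess clearly wins.
    if best_score - runner_up >= margin:
        return best_word
    return None   # ambiguous - hand it to the slower "conscious" check

print(recognise("explsinable"))   # 'explainable'
```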
"There is a simple method to preventing sponge attacks... In other words, sponge examples can be combated by stopping a model processing a specific input if it consumes too much energy."
If we're talking about things like real-time data processing for an autonomous car, ignoring inputs entirely doesn't sound much better than handling them too slowly. Either way, the car can't understand its surroundings and becomes a danger to everything around it. This method could be useful in non-critical roles: if you're doing something like bulk image recognition, you just have to compromise between a reduced data set and computational efficiency. But in any situation where any or all of the input data could be important, the attacker wins either way. If the whole point of your AI is to decide which data is important, throwing data out before that decision can be made is just as bad as taking too long.
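For what it's worth, the cut-off the article suggests boils down to something like the wrapper below - a rough sketch only, with wall-clock time standing in for energy and a hypothetical per-token processing step. It shows why the defence sits badly with safety-critical use: the sponged input doesn't get processed faster, it just gets dropped.

```python
import time

def guarded_inference(tokens, process_token, budget_s=0.05):
    """Process an input but abandon it if it blows the time/energy budget.

    `process_token` is a hypothetical per-token model step; a real defence
    would meter energy on the accelerator rather than wall-clock time.
    """
    deadline = time.monotonic() + budget_s
    results = []
    for tok in tokens:
        if time.monotonic() > deadline:
            return None              # input dropped - the failure mode discussed above
        results.append(process_token(tok))
    return results
```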
Yes ..... having access to the best of intelligence sources is all very well ... [and one assumes those and/or that which are presented as being instrumental in leading national governance systems of operation have such easy, readily supplied access] .... but knowing both what not to do with it and what to do with it in order to better benefit all is something else completely different, and would range from being extremely difficult to next to impossible if one is lacking in the intelligence department oneself.
And it is also, coincidentally, disturbingly dangerous, for intelligence abhors an empty-vessel vacuum and effortlessly migrates to where it is truly appreciated, and that can be where one's opponents and competitors hang in the hood.
Interestingly enough, I see research into this becoming a huge benefit/negative for people as well. We can learn how to defend, or change how AI processes or discards data, leading to better information input to AI; at the same time, if we understand enough about the processes involved, we can figure out ways of affecting our own neural nets.
Imagine a world where we can provoke behaviors with color combinations (in a fashion this is already done with institutional wall colors chosen to set certain moods - e.g. we avoid warm colors in areas where we want people to be quiet) or with sentences in native languages.