Can your AI code be fooled by vandalized images or clever wording? Microsoft open sources a tool to test for that

Microsoft this week released a Python tool that probes AI models to see if they can be hoodwinked by malicious input data. And by that, we mean investigating whether, say, an airport's object-recognition system can be fooled into thinking a gun is a hairbrush, or a bank's machine-learning-based anti-fraud code can be made to …
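To make the "gun mistaken for a hairbrush" idea concrete: the classic way to fool a classifier is to nudge each input feature slightly in the direction that increases the model's loss, the Fast Gradient Sign Method (FGSM). The sketch below is not Microsoft's tool; the toy weights, input, and epsilon are invented purely to show the mechanism against a hand-rolled logistic "classifier".

```python
import math

# Minimal FGSM sketch against a toy logistic-regression "classifier".
# All numbers here are made up for illustration.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def fgsm_perturb(x, w, b, y_true, eps):
    """Shift each feature of x by eps in the sign of the loss gradient.

    For logistic cross-entropy, the gradient w.r.t. the input is
    (p - y) * w, so the attack needs only the model's weights.
    """
    p = sigmoid(dot(w, x) + b)                  # model's confidence in class 1
    grad = [(p - y_true) * wi for wi in w]      # dL/dx, component-wise
    return [xi + eps * (1.0 if gi > 0 else -1.0) for xi, gi in zip(x, grad)]

# Toy model and an input it classifies confidently as class 1.
w = [2.0, -1.0, 0.5]
b = 0.0
x = [1.0, -1.0, 1.0]                            # score = 3.5 -> class 1

x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=1.2)
print(sigmoid(dot(w, x) + b) > 0.5)             # original: classified class 1
print(sigmoid(dot(w, x_adv) + b) > 0.5)         # perturbed: prediction flips
```

The point of tools in this space is to automate exactly this kind of probing: generate perturbed inputs, feed them to the target model, and report which ones flip its predictions.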

  1. gerdesj Silver badge


    What happens if I write "iPod" on my face?

    I think it's called "adversarial" something or other, but that sort of trick is so obvious to real intelligence (you), yet we consider a machine clever when it works out how to play chess or Go.

    I'm going to need to see a machine explain "time flies like an arrow, fruit flies like a banana" convincingly before I worry about giant pepper pots flying up stairs and murdering me and the missus in our bed.

    1. steelpillow Silver badge

      Re: iPod

      I am more worried for people who wear Ford Mustang T-shirts so the driverless car thinks they are still thirty yards away.

      "These are small, but the ones out there are far away." -- Father Ted

  2. steelpillow Silver badge

    Quis adversariat ipsos adversarii?*

    Next step is to discover that most adversarial tweaks are useless, just some random garbage the AI will ignore.

    So we will get an AI to train itself up in producing more effective adversarial data.

    And set two clones against each other to probe and tighten up each other's adversarial algorithms.

    Except that will produce its own bizarre effects.


    * (Apologies for dog Latin)
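The loop described above (an attacker learns to craft effective perturbations, the defender retrains on them, repeat) is essentially adversarial training. A hedged one-dimensional sketch, with synthetic data and a made-up toy model, of one round of that back-and-forth:

```python
import math
import random

# One round of the attacker-vs-defender loop on a 1-D logistic "classifier".
# Data, epsilon, and learning rate are all invented for illustration.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(x, w, y):
    # numerically stable logistic cross-entropy for score z = w * x
    z = w * x
    return math.log1p(math.exp(-z)) if y == 1.0 else math.log1p(math.exp(z))

def attack(x, w, y, eps):
    # attacker: step x in the sign of the loss gradient (1-D FGSM)
    grad = (sigmoid(w * x) - y) * w
    return x + eps * (1.0 if grad > 0 else -1.0)

def train(data, w, lr=0.1, steps=200):
    # defender: plain gradient descent on the logistic loss
    for _ in range(steps):
        for x, y in data:
            w -= lr * (sigmoid(w * x) - y) * x
    return w

random.seed(0)
clean = [(random.uniform(0.5, 2.0), 1.0) for _ in range(20)] + \
        [(random.uniform(-2.0, -0.5), 0.0) for _ in range(20)]

w = train(clean, w=0.5)                                   # defender's first model
adv = [(attack(x, w, y, eps=1.0), y) for x, y in clean]   # attacker's round
w_robust = train(clean + adv, w=w)                        # defender retrains on both
```

Iterating this (or pitting two learned attackers against each other, as suggested above) does indeed produce its own bizarre effects: in a model this simple the defender can only widen its margin, not fix examples the attacker pushes clean across the decision boundary.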

  3. Cuddles Silver badge

    Can your AI code be fooled by vandalized images or clever wording?


Biting the hand that feeds IT © 1998–2021