Nice 'AI solution' you've bought yourself there. Not deploying it direct to users, right? Here's why maybe you shouldn't

It’s trivial to trick neural networks into making completely incorrect decisions, just by feeding them dodgy input data, and there are no foolproof ways to avoid this, a Googler warned today. Tech vendors are desperately trying to cash in on the AI hype by offering bleeding-edge “machine-learning solutions” to all your …

  1. Neil Barnes Silver badge

    It's also sometimes difficult for fleshy meatsacks

    to identify an object unambiguously from a flat photo. With a 3-D object, we tend to move our heads around and try to see more detail from hidden sides/surfaces -- I wonder if there's any mileage in training a recogniser that way, and whether it would then make a better guess at what the flat version might be?

  2. T. F. M. Reader Silver badge

    "Developers should attack their own systems by generating adversarial examples"

    Better yet, vendors should hire professional "adversarial testers" and not ship "solutions" till the "red team" is satisfied. A fairly standard practice with penetration testers today. Well, fairly standard in some circles... Some narrow circles...

    The profession of adversarial tester needs to be created first, though. Any VCs 'round here? I have a startup to sell you...
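For anyone wondering what "generating adversarial examples" actually looks like, here's a toy sketch of the classic fast gradient sign method (FGSM) against a made-up logistic-regression "model" - the weights, input, and step size are all invented for illustration, not from any real product:

```python
# Toy FGSM: nudge an input a small step along the sign of the loss
# gradient, and watch a confident prediction flip. All numbers invented.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A "trained" model: fixed hypothetical weights and bias.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    return sigmoid(np.dot(w, x) + b)

x = np.array([1.0, 0.2, 0.5])   # a benign input the model classifies as 1
p = predict(x)                  # confidently above 0.5

# For logistic regression with true label y=1, the loss gradient w.r.t.
# the input is (p - 1) * w. FGSM takes a step of size eps along its sign.
eps = 0.6                       # deliberately large for a visible flip
grad = (p - 1.0) * w
x_adv = x + eps * np.sign(grad)

print(p, predict(x_adv))        # prediction drops below 0.5
```

On a real deep network the same trick works with perturbations small enough to be invisible to a human, which is exactly why you'd want a red team throwing these at your "solution" before it ships.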

  3. revenant

    "No one really understands why machine-learning code is so brittle"

    For me, the answer to that question is hinted at early in the article -

    To be safe, the input data should be thoroughly sanitized, or the AI software should be banned from handling user-supplied information directly...

    That bit immediately made me think of how children develop - we limit and simplify their inputs, and don't trust them with anything complex until their brains have learned enough (and that's not just information, but also the complexity of associations between pieces of information) to move on to more real-life things.

    That's a process that takes years and a lot of effort, and they can still end up not appreciating that crayons aren't for eating, and food is not for drawing with.

    Until they scale up the systems and their training, I don't think this stuff is going to give us anything better than a crude imitation of human decision-making.

    1. Bronek Kozicki

      Re: "No one really understands why machine-learning code is so brittle"

      I think the premise above is false. Anyone who has worked on this for even a little while understands pretty well why machine learning models are so brittle. They are nothing but heuristics trained to recognize a particular correlation. The meaning of "heuristic" is pretty clear. The "correlation" is also a hint, although a more subtle one - the fact that something "looks like" a bottle only means that there is a strong correlation between a particular set of pixels and a "bottle" label, nothing else (in particular, it does not mean that the pixels actually show a bottle).

      How was that correlation arrived at? By some heuristic. How does that heuristic actually work? Whoa, back off, we just threw lots of data at it and some of it stuck. How did we arrive at the situation where ML is treated as an oracle? A friend pointed me at this recently
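The "correlation, not meaning" point is easy to show in miniature: train a plain logistic regression on invented data where a meaningless but noise-free feature happens to track the label, and the model leans on that feature rather than the genuinely informative one. Everything below (the data, the "shape"/"texture" naming) is made up for illustration:

```python
# Toy demonstration: a classifier will happily key on a spurious feature
# that perfectly correlates with the label in its training data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)

# Feature 0 ("shape"): genuinely informative, but noisy.
# Feature 1 ("texture"): meaningless in reality, but in this training
# set it happens to match the label exactly.
shape = y + 0.5 * rng.standard_normal(n)
texture = y.astype(float)
X = np.column_stack([shape, texture])

# Plain logistic regression fitted by gradient descent.
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

print(w)   # the weight on the spurious "texture" feature dominates
```

Break the correlation at test time (a wolf with a sheep's texture, a healthy lung with a chest drain) and the model falls over, because the correlation was all it ever had.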

      1. Andrew Commons

        Re: "No one really understands why machine-learning code is so brittle"

        The last link given in the Reg piece goes to a neat piece of research that threw adversarial examples at an image-recognition application and concluded that it was triggering on texture rather than shape.

        This highlights the real problem: they do stuff, but we don't know how, and as a result we have no idea how they will behave if the input goes off-piste. So dressing a wolf in sheep's clothing actually works with the current technology.

        1. vtcodger Silver badge

          Re: "No one really understands why machine-learning code is so brittle"

          they do stuff but we don't know how

          Sort of like eight year olds with a hammer, chisel, and an antique watch. What could possibly go wrong?

      2. Anonymous Coward
        Anonymous Coward

        Re: "No one really understands why machine-learning code is so brittle"

        There was a nice paper showing that an AI system hyped as successfully interpreting chest X-rays was, for example, identifying 'fluid on the lung' by spotting the chest drains that doctors put in before sending the patient to X-ray when they diagnose fluid on the lung.

        At the end of the day hyped AI is way too often saying 'It's black not white' on the basis of a 50.1% vs 49.9% probability that it is black, rather than having the honesty to say 'Dunno guv'

        1. Bronek Kozicki

          Re: "No one really understands why machine-learning code is so brittle"

          Another good point. A well-designed ML model will typically return a prediction that is interpreted as a probability value. If the value is exactly 0 or 1 then either the model is broken or rounding error got in the way. Typically, even for a great-quality model with a perfect match, the value might be up to 0.97, perhaps 0.98. It is up to humans to actually read this value and think "hmm, this could be something else with 3% probability".

          But then, humans just love survivorship bias: if the ML was right more than ten times in a row, we stop paying attention and replace thinking with generalisation. It is a good thing that some researchers are actually pushing the probability value closer to 1, but we also need to keep paying attention, because it will never be 1 and yet it will frequently be interpreted as such.
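The "it will never be 1" point falls straight out of how these probabilities are usually produced: a softmax over finite logits is strictly between 0 and 1 by construction. A tiny illustration, with invented logits:

```python
# A softmax over finite scores can approach 0 or 1 but never reach
# either: every exp() term is strictly positive. Logits are invented.
import math

def softmax(logits):
    m = max(logits)                       # subtract max for stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

probs = softmax([8.0, 1.0, 0.0])          # a very confident prediction
print(probs[0])                           # very close to 1, but not 1
```

So an output of exactly 1.0 is always a red flag: somewhere a rounding, a broken pipeline, or an over-eager display format has thrown the residual uncertainty away.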

          1. Anonymous Coward
            Anonymous Coward

            Re: "No one really understands why machine-learning code is so brittle"

            Here's something claiming 0.997:

        2. Katyanna Quach

          Re: "No one really understands why machine-learning code is so brittle"

          Ooh, that sounds really interesting. Do you have a link to that?

  4. Paul Kinsler Silver badge

    It’s trivial to trick neural networks into making completely incorrect decisions, ...

    ... just by feeding them dodgy input data,

    Actually, this also works with people. In fact, people often make completely incorrect decisions in an entirely spontaneous and unprompted manner :-)

  5. Crypto Monad

    Don't worry, AI solves all your problems

    I received the following spam from IBM yesterday.

    "To drive your organization’s journey to AI, you need a database smart enough to fuel it. That’s why IBM recently announced their vision for Db2 as the premier AI database. Db2 not only uses AI to optimize its own performance, but has built in support for AI apps and workloads as well."

    Fantastic. All I need is to move my SQL apps to Db2, and they will instantly become AI-enabled! Of course, a SQL database used by AI applications obviously needs to be different to a regular SQL database (in unspecified ways).

    As it turns out, even these bogus features are vapourware. If you read the announcement it's full of "Going forward.... a capability that will be released later this year ... the next release of Db2" etc.

    Sounds like IBM is trying very desperately to remain relevant, or at least to sound relevant to its investors.

    1. Anonymous Coward
      Anonymous Coward

      Re: Don't worry, AI solves all your problems

      Funny you should mention this. Just a few days ago I was thinking about how if something like SQL came along today, with all of its power and all of its statistically based optimization routines, then someone would surely insist that this was AI at its finest. But of course that's not true, given that it actually has decades of ordinary programming behind it, with some basic statistical processing thrown in for good measure.

      This takes me back maybe 25 years or so, to when I attended an IBM presentation about the latest query engines behind their SQL product and some of their other, older query products too. At one point they popped up a screen which said something like "And this is the point where magic happens!" And indeed it did seem quite a bit like magic at the time. But like I said, just ordinary programming, not AI.

      As for staying relevant, at this very moment there is a story floating around (maybe even on here) about how something like 40% of "AI" startups don't appear to be doing anything with AI at all; they're just milking the term for all it's worth. In which case I'd say that IBM has far more legitimate chops than they do.

  6. Charles 9 Silver badge

    Where do you draw the line?

    Fooling a recognition system with slight modifications is one thing, but what if one goes the other way and presents something so weird that even humans can't recognize it? Take the article's mention of beer bottles and wine glasses. Suppose one submits an artfully grafted picture: a beer bottle that, from the neck up, seamlessly becomes the top of a wine glass (like a funnel bottle). Now you're getting abstract, and binary computers get into trouble trying to quantify something that defies quantification.

    1. Michael H.F. Wilkinson

      Re: Where do you draw the line?

      In which case the correct response would be to output

      ++++ Out of cheese error ++++

      ++++ Reinstall universe ++++

      ++++ Redo from start ++++

      1. Charles 9 Silver badge

        Re: Where do you draw the line?

        No, because the output would be rejected as not being "beer" or "wine". If all you can output is "0" or "1", what happens when neither output is applicable?

        PS. I speak from real-world experience of trying to chain UNIX commands that can fail in "complicated" ways yet have no way to express those complications other than a numeric exit code. It's where KISS runs head-on into Necessary Complexity.
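That exit-code complaint is easy to make concrete: several meaningfully different outcomes get flattened into the same status number, so the next command in the pipeline can't tell them apart. The outcome names below are invented for the example:

```python
# Sketch of the coarse-exit-code problem: a recogniser step can end in
# several distinct ways, but the conventional 0 = success / nonzero =
# failure mapping collapses them. Outcome names are made up.
OUTCOMES = ["beer", "wine", "neither", "ambiguous", "input unreadable"]

def to_exit_code(outcome):
    # Conventional mapping: 0 for a usable answer, 1 for anything else.
    return 0 if outcome in ("beer", "wine") else 1

codes = {o: to_exit_code(o) for o in OUTCOMES}
print(codes)
# Three distinct failure modes all come back as the same "1" -- the
# caller downstream has no idea which one it got.
```

You can invent richer conventions (distinct nonzero codes, structured output on stderr), but the point stands: the interface offers far less resolution than the situation actually has.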

  7. hellwig

    Negative Reinforcement

    Punish the AI for mis-identifying something. It'll learn real quick not to do that anymore, that is, if it's truly intelligent.

    1. Calimero

      Re: Negative Reinforcement

      Great - and how is that mechanism any different once the adversary treats your positive reinforcement (aka reward) as his punishment? Same problem, just with the sign flipped.

  8. Anonymous Coward
    Anonymous Coward

    Stunned Silence

    "Machine learning isn’t the answer to all of your problems."

    Never thought I'd see that in print.
