NIST: If someone's trying to sell you some secure AI, it's snake oil

Predictive and generative AI systems remain vulnerable to a variety of attacks and anyone who says otherwise isn't being entirely honest, according to Apostol Vassilev, a computer scientist with the US National Institute of Standards and Technology (NIST). "Despite the significant progress AI and machine learning have made, …

  1. elsergiovolador Silver badge

    Correction

    If someone's trying to sell you some secure AI, it's snake oil

    Fixed.

    1. Yorick Hunt Silver badge

      Re: Correction

      I'd go one step further and correct it to "if someone's trying to sell you anything, it's snake oil." Anything of actual worth will be actively sought out by the market; it's only worthless crud that needs to be "sold."

      1. Pascal Monett Silver badge
        Thumb Up

        A bit extreme, but not entirely wrong either.

      2. cyberdemon Silver badge
        Devil

        Re: Correction

        Are you saying that the entire profession of "Marketing" is equal to the dubious profession of Grift?

        Pretty much true, except for the very few genuine cases where a customer who doesn't actually know what they want or need meets an advertiser who is actually honest. Those events are pretty rare.

      3. Anonymous Coward
        Anonymous Coward

        Re: Correction

        Certainly explains why Apple’s marketing budget is so large.

        1. jake Silver badge

          Re: Correction

          Apple's marketing budget is so high because Apple, like Microsoft, is primarily a marketing company.

  2. HuBo Silver badge
    Gimp

    Rather cladistic

    Great classification job in the taxonomy of adversarial ML postures. Way to go NIST!

  3. cageordie

    Everything is going to be AI

    Whether it has any AI content or not, that's going to be the marketing BS for the near future. Just like 'fuzzy logic' was in the past. Just more bullshit in most cases.

    1. monty75

      Re: Everything is going to be AI

      Would you like to buy a bridge? It’s got AI and everything.

  4. johnrobyclayton

    It's just a new version of the security/performance tradeoff in IT

    IT has always had a security/performance tradeoff.

    Passwords limit utility to those with the passwords

    Security limits performance

    Increasing the speed of password entry attempts makes it easier to brute-force the logon (a rough sketch of the arithmetic follows below).

    Performance limits security

    Locks prevent access.

    Security limits performance

    Higher dimensions allow walls to be bypassed.

    Performance limits security
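
    A back-of-the-envelope sketch of the password example above; the 8-character lowercase password and the two attempt rates are illustrative assumptions, not figures from the comment or the article:

        # Rough illustration of the security/performance trade-off: throttling
        # password attempts (security) also caps how fast anything can log in
        # (performance), while removing the throttle hands speed to the attacker.

        def time_to_exhaust(keyspace: int, attempts_per_second: float) -> float:
            """Seconds needed to try every candidate password."""
            return keyspace / attempts_per_second

        keyspace = 26 ** 8  # 8-character lowercase password, ~209 billion candidates

        fast = time_to_exhaust(keyspace, attempts_per_second=10_000)  # unthrottled endpoint
        slow = time_to_exhaust(keyspace, attempts_per_second=1)       # rate-limited endpoint

        print(f"unthrottled:  ~{fast / 86_400:,.0f} days to exhaust the space")
        print(f"rate-limited: ~{slow / (86_400 * 365):,.0f} years to exhaust the space")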

    For AI it gets a bit more complex.

    Some of the goals of AI:

    Fair and just

    Accuracy

    Creativity

    Security

    For AI, accuracy and creativity are similar to performance.

    Creativity and Accuracy are obviously at loggerheads

    Creativity gets around security.

    Accuracy tells secrets.

    Fair and just are concepts that our civilisation created to make a collection of inherently homicidal maniacs get along with each other in crowded and highly competitive environments.

    Fair and Just is made up of a fabric of lies, insanities, obsessions, phobias, motivational manipulations and self-deceptions that attempts to confine our natural behaviours into patterns that stop us from eating each other's faces off.

    Trying to get an AI to deal with Fair and Just is the most likely contributor to an AI apocalypse.

    There is never going to be a general solution to this.

  5. Pascal Monett Silver badge

    "trustworthy AI"

    I can't help feeling that those two words just don't go together.

    1. mmccul

      Re: "trustworthy AI"

      For years, I said that in security, trust is a dirty word.

  6. Anonymous Coward
    Anonymous Coward

    Who Needs Actual Attacks?

    Large Language Models (LLMs) already contain an unknown (large?? no pun intended) number of falsehoods.

    Who needs actual attacks?

  7. amanfromMars 1 Silver badge

    AIMaster Race Theory for Advancing Large Language Learning Machines and Crash Test Dummies

    Poisoning attacks are what captured and captive mainstream media organs/moguls/corporations have long employed and deployed to steer easily led and fed humans in a particular thought direction most desirable for an elite chosen few. Sweet honeypot traps and loads of flash crashing fast cash the once simple and attractive but now turned toxic and destructive mainstream weapons of choice to snare and tame prey and turn opposition into the cuckold that is compliant competition.

    Its effectiveness against an opposing or competing adversary is completely lost however, whenever the secret self-serving unshared private reason for the base lure of the modus vivendi/operandi is known ....... and it is not possible to mitigate such knowledge escaping and advancing and augmenting intelligence with it then being most likely able to be more than just extremely troublesome and catastrophically destructive to hostile enemy foe and fake friend alike.

    Is that where y'all and IT and AI is presently at, and are things already moved on much deeper and darker into the future lights that lead to the ways ahead, El Regers, for a Right Choice Few ‽ .

    Obviously is it thought so here where we are all at.

    1. Anonymous Coward
      Anonymous Coward

      Re: AIMaster Race Theory for Advancing Large Language Learning Machines and Crash Test Dummies

      You managed to achieve the quantum superposition of a question mark and an exclamation mark! I know we are only talking about an 8-bit quantum super-expression, but this is still miles ahead of Google and IBM. There is method to your madness. And to think we saw it here first.

      1. jake Silver badge

        Re: AIMaster Race Theory for Advancing Large Language Learning Machines and Crash Test Dummies

        "And to think we saw it here first."

        I'm pretty certain amfM has been using the interrobang here on ElReg for years now, on and off.

        The ligature itself has been around for over 6 decades. I can set up my Linotype machine to generate it.

    2. jake Silver badge

      Re: AIMaster Race Theory for Advancing Large Language Learning Machines and Crash Test Dummies

      But can you actually poison something that is inherently untrustworthy?

      And how can you augment intelligence with something that is born of incorrect, incomplete and incompatible data which are more often than not also corrupt and stale?

      Garbage in, garbage out.

  8. spireite Silver badge

    This entire GPT/Bard/whatever is snake oil

    I've been round the block numerous times in my career.

    Never have I seen so many intelligent people fall into a new-tech (GPT) trap, making it out to be something it clearly isn't.

    It doesn't take much to prove it's mostly show and no substance.

    Back in the dim and distant past, the first thing I was taught in the early 80s was 'Garbage In, Garbage Out'. GPT et al are still proving it exists.

  9. Bebu
    Windows

    Keeping It Everyday

    Evasion attack - disguise: fake beard or mask.

    Poisoning attack - telling well chosen (or not so) lies to idiots: the whole US political process.

    Privacy attack - the shadows you cast: when Soufflé Girl deleted the Doctor from the Dalek mind hive* and when he subsequently erased all other records of his existence he was still defined by the hole his removal left.

    Abuse attacks - disfigurement: graffitiing a moustache on a portrait possibly with certain banned symbols or gestures.

    The value(?) AI is adding here is making its dismal output more credible to the polloi, but as the anti-vax, Ivermectin and general Covid nonsense demonstrated, a frightening proportion of the population will believe just about any codswallop.

    *Sorry look it up :)

  10. zmre

    Yes, but they missed more important attacks

    It's pretty common for roundups on AI vulnerabilities to focus almost entirely on the LLMs -- inputs to the training, extraction from the model, etc. That's what this paper does. It's sort of annoying since most people using LLMs aren't the ones building them. If training data is stolen from OpenAI's GPT4, what consequence does that have for a user of GPT4? Probably none. There could be consequences if OpenAI trains on some poisoned data set, but most users aren't in a position to do anything about that one way or another and the LLM is likely to make up answers regardless.

    People developing AI features, but not working for one of the big companies developing AI models, need to worry about a different class of attacks that can target Retrieval Augmented Generation and other widespread patterns. For example, vector databases are commonly used to enable AI queries over private data. And vector databases can be reversed back to their inputs using "embedding inversion attacks," which are pretty potent, but largely ignored or unknown by those covering AI security. It's a shame because there are absolutely things that users can do to protect themselves from some of these attacks.
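
    As a rough illustration of the point about vector databases, here is a minimal sketch of the retrieval half of a RAG setup in Python; the embed() placeholder, the example documents and the variable names are assumptions for illustration only, not anything taken from the NIST paper or a particular product. The key observation is that the stored embedding matrix (plus, often, the raw text kept next to it) is exactly what an embedding inversion attack tries to map back to the original private data.

        # Minimal RAG-style retrieval sketch: private documents are embedded and
        # kept in a vector index; queries are embedded and matched by similarity.
        import numpy as np

        def embed(text: str, dim: int = 8) -> np.ndarray:
            # Placeholder embedding (hash-seeded random unit vector). A real
            # system would call a sentence-embedding model here.
            rng = np.random.default_rng(abs(hash(text)) % (2 ** 32))
            v = rng.standard_normal(dim)
            return v / np.linalg.norm(v)

        # The "vector database": embeddings of private documents. These stored
        # vectors are the target of an embedding inversion attack.
        docs = ["Q3 revenue forecast", "staff salary bands", "pending layoffs list"]
        index = np.stack([embed(d) for d in docs])

        def retrieve(query: str, k: int = 1) -> list[str]:
            scores = index @ embed(query)  # cosine similarity, since vectors are unit length
            return [docs[i] for i in np.argsort(scores)[::-1][:k]]

        # With a real embedding model this would surface the most relevant document;
        # with the toy embed() above it simply demonstrates the query path.
        print(retrieve("what are we paying people?"))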
