In the red corner: Malware-breeding AI. And in the blue corner: The AI trying to stop it

The magic AI wand has been waved over language translation, and voice and image recognition, and now: computer security. Antivirus makers want you to believe they are adding artificial intelligence to their products: software that has learned how to catch malware on a device. There are two potential problems with that. Either …

  1. J. R. Hartley

    127.0.0.1

    Your device is protected.

  2. jake Silver badge

    AI/ML

    Annoying Idiocy / Millennial Leechers

  3. frank ly

    Bus and Ostrich

    If you want to see the bus that looks like an ostrich (of course you do) then it's about three quarters of the way down this page:

    http://www.popsci.com/byzantine-science-deceiving-artificial-intelligence

    The rest of the page is quite interesting too.

  4. Yesnomaybe

    Scary

    Sounds like the equivalent of loading a bacterium on a petri dish with increasing doses of antibiotic.

    1. Brian Miller

      Re: Scary

      This is actually very good research. The next generation of AI malware producers will be using source code snippets. This is a lot like fuzzing, where data is munged in order to produce a crash or a hang.
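The fuzzing comparison is apt: a dumb mutational fuzzer just munges a few bytes at random and watches for crashes. A minimal sketch of that loop (the target parser and all names here are invented for illustration):

```python
import random

def mutate(data: bytes, n_flips: int = 4) -> bytes:
    """Randomly munge a few bytes, the way a dumb fuzzer would."""
    buf = bytearray(data)
    for _ in range(n_flips):
        i = random.randrange(len(buf))
        buf[i] ^= random.randrange(1, 256)  # XOR with a non-zero value, so the byte changes
    return bytes(buf)

def fuzz(target, seed: bytes, rounds: int = 1000):
    """Feed mutated inputs to `target`, collecting the ones that crash it."""
    crashes = []
    for _ in range(rounds):
        candidate = mutate(seed)
        try:
            target(candidate)
        except Exception as exc:
            crashes.append((candidate, exc))
    return crashes

def fragile_parser(data: bytes):
    """A deliberately brittle stand-in for the program under test."""
    if data[0] != 0x7F:
        raise ValueError("bad magic byte")
    return data[1:]

found = fuzz(fragile_parser, b"\x7fhello world")
```

The adversarial-malware variant replaces "crash the target" with "get the AV engine to say clean" as the signal being optimised.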

    2. LionelB Silver badge

      Re: Scary

      Sounds like the equivalent of loading a bacterium on a petri dish with increasing doses of antibiotic.

      More like loading mutating bacteria on a petri dish with increasing doses of "mutating antibiotics"; you get an arms race - kind of like what's happening in the real world with antibiotic-resistant bacteria (cf. the Red Queen effect).

  5. Anonymous Coward
    Anonymous Coward

    Maybe

    People posting selfie images and profile photos should consider adding ostrich-style masking, and the like, to make snooping tricky.

    1. Mike Moyle
      Coat

      Re: Maybe

      But then scammers would devise an AI that would compare sets of adjacent pixels in a picture to look for signatures of Ostrichization. Where will it ever end?

      1. Charles 9

        Re: Maybe

        It's not supposed to. The next step would be to create a less-obvious Ostrichization, then to detect it, then to make it less detectable, and so on, until either they can't Ostrich it any better or they beat the noise floor, by which point the detector would fail on account of false positives.

  6. Andrew Moore

    Hmmmm...

    I see Prince Robot IV got an office job then...

    1. Faceless Man

      Re: Hmmmm...

      And it looks like he got the crack in his screen fixed.

  7. John Smith 19 Gold badge
    Unhappy

    So it's Core War played with "real" virtual processors between machines

    Imagine that.

    And it's only taken 33 years for someone to try it.

    Core War here

    The joker of course is: have you developed a system perfectly adapted for finding only the malware that the attacking ML system produces?

    BTW there is also a Linux GCC optimizer that builds optimally efficient assembler instruction sequences for very frequently executed code. IIRC it was limited to 5 instructions, but recent versions can do sequences up to 7 instructions long (this is one of those combinatoric explosion problems).

    1. Charles 9

      Re: So it's Core War played with "real" virtual processors between machines

      I believe you mean combinatorial explosion. For a while, I was thinking Traveling Salesman when you mentioned it, but perhaps Sudoku, Chess, and maybe Go are better examples. Basically, the complexity increases on an extreme scale (geometric or factorial, say) for each step up. Easy to see why we probably won't see an 8-instruction optimizer except maybe for RISC instruction sets.
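The scale of that blow-up is easy to see with a back-of-envelope count: each extra instruction slot multiplies the exhaustive-search space by the size of the instruction pool (the pool size of 500 below is invented purely for illustration):

```python
# Size of the exhaustive-search space for instruction sequences of
# length k, assuming a hypothetical pool of 500 candidate instructions.
POOL = 500

def search_space(k: int) -> int:
    return POOL ** k

for k in (5, 6, 7, 8):
    print(f"length {k}: {search_space(k):.3e} candidate sequences")
```

Going from 5-instruction to 7-instruction sequences multiplies the work by 250,000; one more slot multiplies it by 500 again, which is why each extra instruction is so costly.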

    2. LionelB Silver badge

      Re: So it's Core War played with "real" virtual processors between machines

      The joker of course is: have you developed a system perfectly adapted for finding only the malware that the attacking ML system produces?

      That's an excellent point, and one which you can be sure is not lost on the designers of this system (or of adversarial ML in general). I could imagine ways of getting around this, though. First of all, you would have to ensure that the malware detector does not "forget" earlier attempts at evasion. This could be done, for example, by continually bombarding it with all thus-far generated malware attacks. That's the easy part. Getting the malware generator to diversify wildly is likely to be much harder. It probably needs to be "seeded" with exploits from the real world, not to mention the designer's imagination in full black-hat mode.
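That "don't forget earlier evasions" idea is essentially a replay buffer: everything the generator has ever produced stays in the detector's training set. A toy sketch of the loop, with the detector and generator reduced to trivial stand-ins (all names invented):

```python
import random

class ToyDetector:
    """Stand-in detector: 'training' just memorises every sample seen."""
    def __init__(self):
        self.known = set()

    def fit(self, samples):
        self.known = set(samples)

    def detects(self, sample):
        return sample in self.known

def toy_generator():
    """Stand-in malware generator: emits a fresh random variant label."""
    return f"variant-{random.randrange(10**9)}"

history = []              # replay buffer: nothing generated is ever discarded
detector = ToyDetector()

def adversarial_round(n_new=5):
    new = [toy_generator() for _ in range(n_new)]
    history.extend(new)   # keep old evasions alongside the new ones
    detector.fit(history) # retrain on the FULL history every round
    return new

round1 = adversarial_round()
round2 = adversarial_round()

# After round 2's retraining, round 1's evasions are still caught.
assert all(detector.detects(s) for s in round1)
```

If the detector were retrained only on each round's fresh samples, it would immediately "forget" round 1 — exactly the failure mode described above.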

  8. Infernoz Bronze badge
    Facepalm

    Pattern matching is dumb, thus anomaly detection, with history and rollback.

    You can't train a detection system for patterns it hasn't seen yet, but you can put traps in place, like trip-wires and honey pots, along with other anomaly detection, and use a rolling audit of seemingly-OK previous behaviour both for alerts and to dynamically re-train detection systems to quarantine similar malware later, before it can do much or any damage. Having OS-enforced application-level permissions would also help, including faking access, to "honey pot" trick malware into revealing itself.
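A trip-wire in that sense is just a decoy resource no legitimate program should touch, plus an audit that alerts on any change. A bare-bones illustration (the bait filename and everything else here is invented; real deployments hook this into filesystem auditing rather than polling):

```python
import hashlib
import os
import tempfile

class TripWire:
    """A decoy file that no legitimate program has any reason to touch;
    a rolling audit compares its content hash against a baseline."""
    def __init__(self, directory):
        self.path = os.path.join(directory, "passwords.txt")  # tempting bait
        with open(self.path, "w") as f:
            f.write("decoy data\n")
        self.baseline = self._digest()

    def _digest(self):
        with open(self.path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def tripped(self):
        return self._digest() != self.baseline

with tempfile.TemporaryDirectory() as d:
    wire = TripWire(d)
    untouched = wire.tripped()        # False: nothing has happened yet
    with open(wire.path, "a") as f:   # simulated malware tampers with the bait
        f.write("stolen credentials\n")
    alerted = wire.tripped()          # True: the rolling audit raises an alert
```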

    If I was writing malware, I'd probably use random salted compressed and encrypted launch/payload sections, including deceptive "buggy" code/data and resource access, to defeat easy binary-pattern and behaviour detection.

    1. LionelB Silver badge

      Re: Pattern matching is dumb, thus anomaly detection, with history and rollback.

      If I was writing malware, I'd probably use random salted compressed and encrypted launch/payload sections, including deceptive "buggy" code/data and resource access, to defeat easy binary-pattern and behaviour detection.

      So perhaps the malware generator could discover and deploy this strategy (with a bit of nudging, perhaps) - and the malware detector could then attempt to mitigate it.

  9. TheElder
    Boffin

    Waste of time

    Why go to all the trouble? Write the code any way you like. Social engineering works really well.

    Want a longer....? Just type your account number here....

    Oh, wait a moment... The social engineering requires actual intelligence.

    Never mind....

  10. amanfromMars 1 Silver badge

    More of the same old nonsense is not a viable option for future delivery of surreal derivatives

    The aim of the game is to fudge the file, changing bytes here and there, in a way so that it hoodwinks an antivirus engine into thinking the harmful file is safe. The poisonous file slips through – like the ball carving a path through the brick wall in Breakout – and the bot gets a point.

    In just exactly the same way that mainstream media presents both state and non state actor scripts for virtual realisation and program reaction to create a chaotic future for "experts" to stabilise?

    Yes, it sure is, bubba. But that stated secret is always best kept safe and secure and away from and widely unknown by the masses, because of the very real live danger to elite executive systems administration that such knowledge delivers.

    Now that it is out there in spaces and places which cannot be commanded or controlled by formerly convenient and/or conventional means and memes, is the Great Game changed with novel leading players with authorisations to either create new future projects and more magical systems and protect old legacy systems leaders or simply destroy perverse and corrupted old regimes if they/it chooses to remain disengaged and silent whilst peddling its arms to the ignorant slaves which be identified in this enlightening tale ........ Silent Weapons for Quiet Wars

  11. Anonymous Coward
    Anonymous Coward

    Sounds like Charles Stross was on the right track with the near-future he outlined in "Rule 34".

  12. P. Lee
    Terminator

    The only winning move is not to play.

    Seriously, stop relying on A/V.

    We need more sophisticated and accessible rights-dropping. We need applications to drop rights to disk access outside designated subdirectories.

    Give me ultra-light jails where I've dropped rights to all sorts of things like disk areas, opening of listening ports etc.

    Reduce the impact of a compromise and the incentive to compromise rapidly diminishes.
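A Python-level toy of that "ultra-light jail" idea — a context that refuses file opens outside one designated subtree. Real sandboxes use OS mechanisms (seccomp, pledge, AppArmor and the like), not in-process interception; everything here is invented purely to illustrate the capability-dropping shape:

```python
import builtins
import os
import tempfile

class LightJail:
    """Toy rights-dropping: file opens outside `allowed_root` are refused.
    Illustration only; in-process interception is trivially bypassable."""
    def __init__(self, allowed_root):
        self.allowed_root = os.path.realpath(allowed_root)
        self._real_open = builtins.open

    def __enter__(self):
        def guarded_open(file, *args, **kwargs):
            path = os.path.realpath(file)
            if not path.startswith(self.allowed_root + os.sep):
                raise PermissionError(f"jail: access to {file} denied")
            return self._real_open(file, *args, **kwargs)
        builtins.open = guarded_open
        return self

    def __exit__(self, *exc):
        builtins.open = self._real_open

with tempfile.TemporaryDirectory() as d:
    inside = os.path.join(d, "ok.txt")
    with LightJail(d):
        with open(inside, "w") as f:  # inside the allowed subtree: permitted
            f.write("allowed")
        try:
            open("/etc/hostname")     # outside the subtree: refused
            escaped = True
        except PermissionError:
            escaped = False
```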

    1. Charles 9

      Re: The only winning move is not to play.

      You forget things like Return-Oriented Programming, where malware can simply use other programs (which are MEANT to access the places it needs) to do its dirty work FOR it.

  13. nickx89

    This is must-read research by Endgame. I'll wait to see the conclusion of this cat-and-mouse game in the antivirus context. Since some malware doesn't work to a pattern, how would the AI respond to that?
