127.0.0.1
Your device is protected.
The magic AI wand has been waved over language translation, and voice and image recognition, and now: computer security. Antivirus makers want you to believe they are adding artificial intelligence to their products: software that has learned how to catch malware on a device. There are two potential problems with that. Either …
Sounds like the equivalent of loading a bacterium on a petri dish with increasing doses of antibiotic.
More like loading mutating bacteria on a petri dish with increasing doses of "mutating antibiotics"; you get an arms race - kind of what's happening in the real world with antibiotic-resistant bacteria (cf. the Red Queen effect).
It's not supposed to. The next step would be to create a less-obvious Ostrichization, then to detect it, then to make it less detectable, and so on, until either they can't Ostrich it any better or they beat the noise floor, by which point the detector would fail on account of false positives.
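A toy simulation of that arms race (every number here is invented for illustration): the evader mutates to sit just under the detector's threshold, the detector tightens in response, and eventually the threshold sinks into the benign noise floor and false positives end the game.

```python
import random

random.seed(0)

# Benign samples cluster around a low "suspicion score" - the noise floor.
benign_scores = [random.gauss(0.2, 0.05) for _ in range(1000)]

threshold = 0.9  # detector flags anything scoring at or above this
rounds = 0
while True:
    malware_score = threshold - 0.01   # evader: mutate to just under the threshold
    threshold = malware_score - 0.01   # detector: tighten to catch the new variant
    rounds += 1
    fp_rate = sum(s >= threshold for s in benign_scores) / len(benign_scores)
    if fp_rate > 0.05:
        break  # detector now fails on account of false positives

print(f"arms race ended after {rounds} rounds, threshold={threshold:.2f}")
```

The loop always ends the same way: once the threshold drops into the spread of normal behaviour, the detector can no longer win.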
Imagine that.
And it's only taken 33 years for someone to try it.
Core War here
The joker, of course, is whether you have developed a system perfectly adapted to finding only the malware that the attacking ML system produces.
BTW, there is also a Linux GCC optimizer that builds optimally efficient assembler instruction sequences for very frequently executed code. IIRC it was limited to 5 instructions, but recent versions can do sequences up to 7 instructions long (this is one of those combinatoric explosion problems).
I believe you mean combinatorial explosion. For a while I was thinking Traveling Salesman when you mentioned it, but perhaps Sudoku, Chess, and maybe Go are better examples. Basically, the complexity increases on an extreme scale (geometric or factorial, say) with each step up. Easy to see why we probably won't see an 8-instruction optimizer, except maybe for RISC instruction sets.
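The scale of that explosion is easy to show: with k candidate instruction/operand combinations, an exhaustive search over length-n sequences has k**n possibilities, so each extra instruction multiplies the work by k (k = 40 below is an invented figure, not the real instruction count).

```python
# Why exhaustive superoptimizer-style search blows up: k**n sequences
# of length n, for k candidate instruction/operand combinations.
k = 40  # hypothetical candidate count
counts = {n: k ** n for n in range(1, 9)}
for n, c in counts.items():
    print(f"length {n}: {c:>16,} sequences")
```

Going from 7 to 8 instructions costs another factor of k, which is why each extra instruction is so much harder than the last.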
The joker, of course, is whether you have developed a system perfectly adapted to finding only the malware that the attacking ML system produces.
That's an excellent point, and one which you can be sure is not lost on the designers of this system (or of adversarial ML in general). I can imagine ways of getting around it, though. First of all, you would have to ensure that the malware detector does not "forget" earlier attempts at evasion. This could be done, for example, by continually bombarding it with all of the malware attacks generated so far. That's the easy part. Getting the malware generator to diversify wildly is likely to be much harder. It probably needs to be "seeded" with exploits from the real world, not to mention the designer's imagination in full black-hat mode.
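A minimal sketch of that "don't forget" idea (hypothetical names, not the actual system's code): keep every adversarial sample ever generated in a replay buffer, and retrain the detector on the whole buffer each round, so evasions that worked once keep failing later.

```python
# Replay buffer against catastrophic forgetting: old evasions are never
# discarded, so every retraining round covers the full attack history.
replay_buffer = []  # (sample, label) pairs accumulated across rounds

def retrain(buffer, new_samples):
    buffer.extend(new_samples)  # never drop earlier evasion attempts
    # ... fit the detector on the whole buffer here ...
    return len(buffer)

size = retrain(replay_buffer, [("variant-1", 1)])
size = retrain(replay_buffer, [("variant-2", 1), ("variant-3", 1)])
print(size)  # buffer keeps growing: 3
```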
You can't train a detection system on patterns it hasn't seen yet, but you can put traps in place - trip-wires, honeypots and other anomaly detection - and use a rolling audit of seemingly OK previous behaviour both to raise alerts and to dynamically re-train detection systems, so that similar malware is quarantined before it can do much (or any) damage. OS-enforced application-level permissions would also help, including faking access, to "honeypot" malware into revealing itself.
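The trip-wire idea fits in a few lines (the decoy paths below are hypothetical): no legitimate code should ever touch a decoy resource, so any access fires an alert, and fake data is returned to keep the malware engaged while it is audited.

```python
# Toy honeypot/trip-wire: decoy resources whose access is itself the signal.
DECOYS = {"/tmp/passwords.bak", "/tmp/.wallet.dat"}  # hypothetical decoys

alerts = []

def audited_access(path):
    if path in DECOYS:
        alerts.append(path)       # trip-wire fired - log for the rolling audit
        return "<fake data>"      # fake the access so the malware keeps going
    return f"<real contents of {path}>"

audited_access("/tmp/notes.txt")      # normal access, no alert
audited_access("/tmp/passwords.bak")  # decoy touched - alert raised
print(len(alerts))  # 1
```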
If I was writing malware, I'd probably use random salted compressed and encrypted launch/payload sections, including deceptive "buggy" code/data and resource access, to defeat easy binary-pattern and behaviour detection.
If I was writing malware, I'd probably use random salted compressed and encrypted launch/payload sections, including deceptive "buggy" code/data and resource access, to defeat easy binary-pattern and behaviour detection.
So perhaps the malware generator could discover and deploy this strategy (with a bit of nudging, perhaps) - and the malware detector could then attempt to mitigate against it.
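A harmless demonstration of why that strategy defeats naive byte-pattern matching: the same logical payload, compressed and then "encrypted" with a fresh random salt each time, produces a completely different byte stream on every packing (the XOR keystream below is a toy stand-in, not real cryptography).

```python
import os
import zlib

payload = b"the same logical payload every time" * 10

def pack(data):
    """Compress, then XOR with a keystream derived from a random salt."""
    salt = os.urandom(16)
    compressed = zlib.compress(data)
    keystream = (salt * (len(compressed) // 16 + 1))[:len(compressed)]
    return salt + bytes(a ^ b for a, b in zip(compressed, keystream))

a, b = pack(payload), pack(payload)
print(a != b)  # two packings of the identical payload share no stable pattern
```

A signature scanner looking for fixed byte sequences has nothing stable to latch onto; only behavioural or structural analysis (e.g. "why is this section high-entropy?") gets traction.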
The aim of the game is to fudge the file, changing bytes here and there, in a way so that it hoodwinks an antivirus engine into thinking the harmful file is safe. The poisonous file slips through – like the ball carving a path through the brick wall in Breakout – and the bot gets a point.
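That game reduces to a toy loop (the "engine" here is a naive substring signature, and keeping the mutated file functional - which real systems must do - is ignored): flip random bytes until the engine stops flagging the file, then score a point.

```python
import random

random.seed(1)

SIGNATURE = b"EVIL"  # hypothetical byte-pattern signature

def av_flags(blob):
    return SIGNATURE in blob

blob = bytearray(b"header EVIL payload")
score = 0
while av_flags(bytes(blob)):
    i = random.randrange(len(blob))
    blob[i] ^= 0xFF  # fudge a byte here and there
score += 1  # the poisonous file slipped through - the bot gets a point
print(score)
```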
In just exactly the same way that mainstream media presents both state and non state actor scripts for virtual realisation and program reaction to create a chaotic future for "experts" to stabilise?
Yes, it sure is, bubba. But that stated secret is always best kept safe and secure and away from and widely unknown by the masses, because of the very real live danger to elite executive systems administration that such knowledge delivers.
Now that it is out there in spaces and places which cannot be commanded or controlled by formerly convenient and/or conventional means and memes, is the Great Game changed with novel leading players with authorisations to either create new future projects and more magical systems and protect old legacy systems leaders or simply destroy perverse and corrupted old regimes if they/it chooses to remain disengaged and silent whilst peddling its arms to the ignorant slaves which be identified in this enlightening tale ........ Silent Weapons for Quiet Wars
Seriously, stop relying on A/V.
We need more sophisticated and accessible rights-dropping. We need applications to drop rights to disk access outside designated subdirectories.
Give me ultra-light jails where I've dropped rights to all sorts of things: disk areas, opening listening ports, etc.
Reduce the impact of a compromise and the incentive to compromise rapidly diminishes.
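A sketch of the designated-subdirectory idea - purely illustrative, since this is an in-process check that malware could simply bypass; a real jail would use OS mechanisms (e.g. Landlock, pledge, containers) enforced outside the process:

```python
import os
import tempfile

# The application's one designated writable area (hypothetical layout).
ALLOWED = os.path.realpath(tempfile.mkdtemp())

def guarded_open(path, mode="r"):
    """Refuse writes outside the allow-listed directory."""
    real = os.path.realpath(path)
    if ("w" in mode or "a" in mode) and not real.startswith(ALLOWED + os.sep):
        raise PermissionError(f"write outside jail: {real}")
    return open(real, mode)

with guarded_open(os.path.join(ALLOWED, "ok.txt"), "w") as f:
    f.write("fine")  # inside the designated subdirectory: allowed

blocked = False
try:
    guarded_open("/tmp/elsewhere.txt", "w")  # outside the jail
except PermissionError:
    blocked = True
print("blocked" if blocked else "allowed")
```

With write access confined like this, a compromised application can scribble only inside its own sandbox, which is exactly the reduced impact the comment above is after.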
int main(enter the void)
...