
Just link the virus checker to an AI
Game on!
Machine-learning tools can create custom malware that defeats antivirus software. In a keynote demonstration at the DEF CON hacking convention, Hyrum Anderson, technical director of data science at security shop Endgame, showed off research that his company had done in adapting Elon Musk’s OpenAI framework to the task of …
Exactly what I've been saying for years.
And the problem is that AV is nothing more than pattern recognition (at best! Most of the time it's nothing more than byte-matching!), and all you need to do is find a pattern that it doesn't recognise but that does what you want.
I always laugh when people talk about antivirus as something that works like an inoculation - hunting down viruses and removing them - when in reality it's more like a bouncer on a nightclub door with a list of known troublemakers to keep out. And it just asks people their name; anyone not on the list walks straight in. No verification. No ID. Not even clever enough to spot similar-but-different names to those on the list.
The days of polymorphic viruses, with their encryption and mutation engines, showed us this: there's no reason to believe AV can ever keep up. The only secure method is to run a whitelist - literally only allow THESE PROGRAMS on the network, everything else can go fish - and nobody does.
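For what it's worth, the whole whitelist idea fits in a dozen lines. A hedged sketch (the program names and hashes here are made-up examples; a real deployment would hash the actual binaries on disk and hook execution in the kernel):

```python
import hashlib

# Hypothetical allow-list: hashes of the only programs permitted to run.
ALLOWED_HASHES = {
    hashlib.sha256(b"approved-program-v1").hexdigest(),
    hashlib.sha256(b"approved-program-v2").hexdigest(),
}

def is_allowed(program_bytes: bytes) -> bool:
    """Permit execution only if the program's exact hash is on the list."""
    return hashlib.sha256(program_bytes).hexdigest() in ALLOWED_HASHES

print(is_allowed(b"approved-program-v1"))  # True
print(is_allowed(b"unknown-new-binary"))   # False: blocked by default
```

Note the default: anything unknown is denied, which is exactly backwards from how blacklist-style AV works - and why the whitelist is the only approach that doesn't need to know the malware in advance.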
Believe it or not, most of AV is reverse-engineering. Someone has to sit with a VM, work out how the virus operates, what parts of it change, etc. which is how they come up with those (useless) reports of what registry entries it touches, etc. - they run it and record what they see changing, not what it's capable of changing. Only in extreme circumstances do they bother to delve into it deeper and see how it actually works (e.g. the very-public ransomware).
Because it's the work of a moment to make a program that makes a copy of itself, encrypted with a different key each time, using an off-the-shelf library to decrypt itself on run. That forces the AV companies either to do some serious reverse-engineering or to mark the library code itself as the virus. This is why AV tries to unpack UPX executables and the like: it "knows" about them and wants to see what's actually being run. But in truth their signatures can never take account of all possible variations, with all possible schemes of obfuscation.
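To see why packing breaks byte-matching, here's a toy illustration (no real malware, just bytes; the "signature" and the one-byte XOR are stand-ins for a real pattern and a real packer):

```python
# Naive AV: flag the file if a known byte signature appears in it.
SIGNATURE = b"KNOWN_BAD_PATTERN"

def scanner_flags(data: bytes) -> bool:
    return SIGNATURE in data

payload = b"header|" + SIGNATURE + b"|rest-of-file"
packed = bytes(b ^ 0x5A for b in payload)  # stand-in for UPX-style packing

print(scanner_flags(payload))  # True:  plain copy matches the signature
print(scanner_flags(packed))   # False: same content, signature invisible
```

The packed copy is byte-for-byte recoverable by the stub that unpacks it at run time, yet shares not a single matching substring with the signature database.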
Try it on virustotal.com. You can make a malicious program that passes every AV vendor's software in about 20 minutes; all you need is a C compiler, a bit of programming knowledge, and something like that website to test against. Automate the process with genetic algorithms (which is what this sounds like, not AI), random variation, or even just choosing one of a set number of ways of performing each base action, and you can walk past any AV and still take over the machine. Hell, compiling with a different compiler version, or different compiler options, will usually change the binary so much that AV won't recognise it.
And "heuristics"? Yeah, you know what that word means, right? A set of rules to check against. Does it contain the "Format Drive C:" command? Does it try to load the function at the fourth ordinal of this system DLL? That's a heuristic. And you can defeat such things very easily with a tiny bit of obfuscation.
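A "heuristic" really is just that: a fixed rule list run over the program's bytes. A minimal sketch, with invented rule strings, showing how one line of build-time obfuscation (here, reversing the string and restoring it at run time) sails straight past it:

```python
# Toy heuristic engine: flag a file if any rule string appears in it.
RULES = [b"FORMAT C:", b"GetProcAddress"]

def heuristic_flags(data: bytes) -> bool:
    return any(rule in data for rule in RULES)

plain      = b"...FORMAT C:..."
obfuscated = b"..." + b"FORMAT C:"[::-1] + b"..."  # reassembled when run

print(heuristic_flags(plain))       # True
print(heuristic_flags(obfuscated))  # False
```

The obfuscated copy does exactly the same thing once it un-reverses the string at run time, but the rule engine never executes anything; it only reads bytes.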
"... lots of tiny tweaks that proved very effective at developing malware that could evade security sensors."
What sort of malware are they talking about here? I can't imagine these 'tiny tweaks' are random, or we'd have a monkeys-and-Shakespeare situation (maybe we do)? Was it genuinely useful/dangerous 'malware', or just something that messed up registers, etc.?
I assume it should say 'random changes that maintain the same functionality'.
And you can actually have entire programs 'written' by an evolutionary algorithm. Add random code, run it, see if it does something closer to what you want the end result to be, and try again if it doesn't.
It just takes a lot of time. I recall that this is how "Hello World" in Malbolge was written.
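The bare-bones version of that loop fits on a slide. A hedged sketch (this is a simple hill climb over a string, the toy cousin of the approach; real program evolution mutates code and scores behaviour, which is what takes all the time):

```python
import random
random.seed(42)

TARGET  = "Hello World"
CHARSET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate: str) -> int:
    """Score: how many characters already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

# Start from random noise, mutate one character at a time, and keep any
# mutation that scores at least as well as the current candidate.
candidate = "".join(random.choice(CHARSET) for _ in TARGET)
while candidate != TARGET:
    i = random.randrange(len(TARGET))
    mutant = candidate[:i] + random.choice(CHARSET) + candidate[i + 1:]
    if fitness(mutant) >= fitness(candidate):
        candidate = mutant

print(candidate)  # Hello World
```

Swap "characters matching a string" for "behaviour matching a spec" (or "AV verdicts flipped") and the loop is the same; only the fitness function and the time budget change.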
There are better ways to fight malware which still let you run any software you want/need. For instance, as iOS/Android do it: every app runs in its own sandbox. Of course, such apps can still interact with the kernel and penetrate it, but that's relatively rare and can be fixed quickly. When that's not enough, you can run every app in a VM (though that's not a complete panacea, since hypervisors also contain vulnerabilities).
And if that's not enough for you, you can run a potentially hostile app in a VM which runs on a separate PC in a separate network segment while you can access this VM only via RDP/VNC which is 100% secure.
No, because what if the malware is remote-aware and manages to monkey with the protocol enough to pwn the viewer program, and through it the client machine? Same for the network segment: pwn the other end to bridge the segments. Heck, a truly determined adversary will find ways to exploit sneakernet, meaning it can get past air gaps. And if you can get past an air gap, you can get past nigh ANYTHING.
It would work if there wasn't a way to click through the warnings. The trick is getting people to accept it. Apple got iOS users to accept it, but there was no legacy iOS software to worry about; it's a lot easier to start fresh than to force a transition.
Microsoft essentially tried this with Windows 8. If they hadn't tied the signed software to that horrible interface, maybe they would have got people to buy in on it, but few people want a touch-first GUI on a home computer with a 24" screen.
https://en.wikipedia.org/wiki/Genetic_programming
Now the fact that I could pull that out of thin air, whilst those around me stroked beards and said "totes amazeballs" is why the youngest are not necessarily the brightest.
So that's *my* CEO immunised against the inevitable snake oil.
"The key to the system is to take legitimate-looking code and change just a few tiny parts of it to convert the software into attack code. Even changing small details can fool AV engines, he said"
There's something self-contradictory here.
Start with something legitimate. Make small changes. Small changes can fool AV engines. But if the AV engine were white-listing the legitimate code, then those small changes should trip the white-listing. And if you weren't counting on white-listing, why bother to start with legitimate-looking code in the first place?
Indeed, white-listing is a critical part of good security. Unfortunately, someone or something needs to build and maintain the list; that's fine for a closed system, but probably impossible with all the valid non-corporate, unsigned software flying around. So, as on Android and iOS, programs must declare what they need to access rather than just being escalated to coarse, undefined privileges, and the OS must enforce this and even limit or block some requested access - something Android should damned well allow non-admin users to do!
The problem with bugs is you can only fix them after you have identified them, and some can be very subtle or caused by "code blindness".
"Start with something legitimate. Make small changes. Small changes can fool AV engines. But if the AV engine were white-listing the legitimate code, then those small changes should trip the white-listing. And if you weren't counting on white-listing, why bother to start with legitimate-looking code in the first place?"
Perhaps they're trying something akin to Return-Oriented Programming which can use whitelisted programs to wreak havoc.
Basically the AI has been fed signatures showing how various virus scanners detect a particular virus. Then the "AI" changes the virus's patterns so it has a high likelihood of bypassing the virus scanner on the target system.
My car's GPS has been fed the map of my area. If I come to a roadblock, the GPS can navigate around it. Now throw in the words AI and Elon Musk and you have an article on the Reg.
I felt like a punk who’d gone out to buy a switchblade and come home with a small neutron bomb.
Screwed again, I thought. What good’s a neutron bomb in a streetfight? The thing under the dust cover was right out of my league. I didn’t even know where to unload it, where to look for a buyer. Someone had, but he was dead, someone with a Porsche watch and a fake Belgian passport, but I’d never tried to move in those circles. The Finn’s muggers from the ’burbs had knocked over someone who had some highly arcane connections.
The program in the jeweler’s vise was a Russian military icebreaker.
AI will always have blind spots, but the really scary part is that we can't tell how these systems do whatever it is they do, so we can't tell where the blind spots are. It looks like metal-based life forms will be as error-prone as carbon-based ones. Star Trek's Data is impossible.