It resides solely in memory, but does that mean removal is as simple as pulling the plug out of the wall on an infected machine?
A newly uncovered cyber-espionage campaign targeting Israeli organisations relies on "fileless" malware hidden in Microsoft Word documents, a hacker tactic that is a growing menace. The attack was delivered through compromised email accounts at Ben-Gurion University and sent to multiple targets across Israel. Malware …
It probably hides itself as a child process of some other running program, or finds a way to conceal itself completely so that it doesn't appear in a process listing at all.
What I'm wondering is if the next step is to use a memory-only malware to leapfrog past files and go straight to firmware so that it can make itself nuke-resistant if not nuke-proof.
Yeah, but a running process is <program that's on disk> + <data that's only ever in working memory>. Spawn a shell, install a program in its data space and your solution won't work.
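To illustrate the point: the "program" part of a process can be trivial, with the actual logic living only in its data space. A minimal, harmless sketch in Python, where the payload is just an inline string standing in for code an attacker would receive over the network:

```python
# Code that exists only in this process's memory and never touches disk.
# The payload string here is a hypothetical stand-in for a network-delivered one.
payload = """
def report():
    return "running purely from this process's data space"
"""

namespace = {}
# compile() + exec() turn the in-memory string into live code; nothing
# corresponding to it ever appears as a file for an on-disk scanner to find.
exec(compile(payload, "<memory-only>", "exec"), namespace)
print(namespace["report"]())
```

Deleting the interpreter binary from disk wouldn't remove this logic; only killing the process (or cutting power) would.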
Other posters above suggested that switching the machine off will deal with it. But what if it's a kind of APT ("advanced persistent threat") that can find other local machines where it can also run in memory, maybe even using different exploits or propagation methods? Such a copy can act as a backup: if the first machine is power-cycled, the second re-infects it using the original exploit when it comes back up. Just like the ancient "Robin Hood and Friar Tuck" hack, except that there's no persistence if both machines are turned off at once.
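The mutual-revival idea above can be sketched as a toy simulation (machine names and the watchdog loop are hypothetical; "re-infection" here is just flipping a flag):

```python
# Each value is True while a memory-only agent is resident on that machine.
machines = {"A": True, "B": True}

def power_cycle(name):
    # A reboot wipes the memory-only agent: no disk persistence to restore it.
    machines[name] = False

def watchdog_tick():
    # Every surviving agent re-infects any peer that has gone quiet,
    # standing in for "re-infect it using the original exploit".
    for name, alive in list(machines.items()):
        if alive:
            for peer in machines:
                if not machines[peer]:
                    machines[peer] = True

power_cycle("A")
watchdog_tick()                       # B notices A is down and revives it
assert machines["A"]

power_cycle("A"); power_cycle("B")    # both off at once
watchdog_tick()                       # no live agent remains to revive anyone
assert not any(machines.values())
```

The last two lines capture the Robin Hood / Friar Tuck caveat: the scheme only fails when every host is power-cycled simultaneously.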
Putting on my black hat for the moment, not persisting on disk can be a great way of avoiding detection. It's great for initial stages of an attack because you can use it to passively monitor a target network and use that info to plan for future attacks. Chances are this won't trigger any internal tripwires, and even if the probe is found, it won't reveal very much. From there, you can use a variety of different payloads, each working together stealthily using ideas of "quorum sensing" and "oblivious agents".
Quorum Sensing is an idea from bacteria, where individual bacteria take cues from the environment and begin to change their own secretions. The ultimate expression of QS in bacterial colonies is that they can regulate gene expression, so that they become more efficient at thriving in the environment. Apply that analogy to malware and you get to the idea of individual bits of malware using subliminal channels to announce their presence to each other and coordinate with each other to a degree. A simple example of a subliminal channel in a network might be to interact with a caching proxy (be it a web proxy or memcached database proxy or whatever) somewhere on the intranet. By looking at timing differences in responding to a request, each malware agent can basically pick up environmental cues to detect each other's presence. There are doubtless tons of other ways they can implement subliminal channels over innocuous-looking traffic.
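The caching-proxy timing channel can be sketched as follows. The in-process dict stands in for the intranet cache, and the URL, sleep duration, and 10 ms threshold are all hypothetical:

```python
import time

cache = {}  # stand-in for a shared caching proxy on the intranet

def fetch(key):
    """Return (value, elapsed seconds); a miss simulates a slow origin fetch."""
    start = time.perf_counter()
    if key not in cache:
        time.sleep(0.05)          # simulated round-trip to the slow upstream
        cache[key] = "resource"
    return cache[key], time.perf_counter() - start

# Agent 1 announces its presence by warming an innocuous-looking cache entry.
fetch("/static/logo.png")

# Agent 2 later probes the same entry: a fast response (a cache hit) is the
# environmental cue that a peer has already been here.
_, elapsed = fetch("/static/logo.png")
peer_present = elapsed < 0.01
print(peer_present)
```

To any monitoring system, both agents made one ordinary-looking request; the signal is carried entirely in the timing difference.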
Oblivious Agents are bits of code that carry an encrypted payload. They take a set of input parameters (such as environmental cues, as gathered above, but it could also include things like the time or the host IP or whatever) and combine them to form a key. They use that key to do a trial decode on the encrypted payload, and if the decrypted message is valid (eg, it has a valid checksum), they execute it. They're called "oblivious" agents because they don't know (and don't reveal) what exact set of triggers is needed to run a particular payload. And, of course, a defender can't easily decrypt the payload either. Nor does there have to be just one payload, or all the logic confined to a single malware agent: a payload could simply emit a new environmental trigger that ultimately serves to self-repair the swarm, delete it, or start enacting some new strategic phase.
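A minimal sketch of the trial-decode idea, assuming the key is a hash of the observed cues and using an HMAC as the "valid checksum" test (the cue strings and payload are hypothetical, and the stream cipher is a toy hash-counter construction, not a vetted one):

```python
import hashlib
import hmac

def derive_key(cues):
    # Combine environmental cues (host IP, peer-presence signals, ...) into a key.
    return hashlib.sha256("|".join(cues).encode()).digest()

def keystream_xor(key, data):
    # Toy stream cipher: XOR against a SHA-256(key || counter) keystream.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def seal(cues, payload):
    # What the attacker precomputes: ciphertext plus an integrity tag.
    key = derive_key(cues)
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return keystream_xor(key, payload), tag

def try_trigger(cues, blob, tag):
    """Trial-decrypt under the currently observed cues; act only if valid."""
    key = derive_key(cues)
    candidate = keystream_xor(key, blob)
    if hmac.compare_digest(hmac.new(key, candidate, hashlib.sha256).digest(), tag):
        return candidate      # checksum valid: this is the payload to execute
    return None               # wrong cues: the agent learns (and leaks) nothing

blob, tag = seal(["host:10.0.0.7", "peer-seen"], b"phase-two")
assert try_trigger(["host:10.0.0.9", "quiet"], blob, tag) is None
assert try_trigger(["host:10.0.0.7", "peer-seen"], blob, tag) == b"phase-two"
```

The agent itself never stores the trigger conditions in the clear, so a defender who captures it holds only the ciphertext and a key-derivation routine whose required inputs are unknown.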
All of this is much more suited to a spear-phishing attack against a high-value target. It's still fascinating to think about how you could apply techniques like this against certain businesses, banks, military installations or whatever. If it can lie more or less dormant over a long enough time, there's no telling what it could do. It could, eg, find some long-term persistence technique (so that it can re-infect at a later time if it's discovered), or use a variety of environmental cues, eg, noticing lots of extra emails being sent or other signs of activity suggesting that a North Korean missile site is about to conduct a nuclear test, or even just treat some other internal resource (like a git repo, an Active Directory server, a SCADA system or whatever) as the real target, and delete the bridgehead system once it's done its job.
Hmm. I think that having that black hat on for too long has affected my brain ...
Biting the hand that feeds IT © 1998–2021