Fileless?
Last time I checked, a .DLL was a file. Not only a file, but a file that can contain executable code.
Microsoft has lifted the lid on the inner-workings of a particularly nasty piece of fileless malware that aims to pilfer user data without needing to install software on the victim's machine. Dubbed Astaroth – the same name as the Great Duke of Hell – the software nasty has been in circulation since 2017 and has primarily been …
You probably need to know about some kind of shared dynamic libraries, whether they are by convention called .dll or .so or something else. Windows is not so unusual in this respect; it is the undisciplined/disorganized installation "system" which allowed storing all the .dlls alongside the system files, or overwriting one application's .dlls with another application's, that is problematic. Luckily I do not have to worry about this either, but it is good to know where the problems are.
Which has been forbidden since at least NT or 2000 - you can't write to those directories without Administrator privileges. With root privileges you can overwrite system files - or any file - in *nix too.
One nightmare in Windows is developers who never updated their skills past Windows 3.1 or 95 and thereby write applications so badly designed they can't work without Administrator privileges, even though they do nothing that requires them.
Yet recently, with SELinux and other similar tools being active by default and shipping more restrictive policies, I've seen Linux developers too implementing bad shortcuts to avoid complying with security rules.
But you need Administrator (or delegated, but similar) level of privileges to install an application in Windows, anyway. Also, files belonging to one application are still not protected from being overwritten by the installer of another application, which is likely to happen because many of these files are still installed in the system32 directory. And then there is COM, where you can install files in a more convenient location, but there are no namespaces for interfaces in the registry, so good luck if you have two applications relying on different versions of the same component.
"But you need Administrator (or delegated, but similar) level of privileges to install an application in Windows"
No you don't:
https://answers.microsoft.com/en-us/windows/forum/windows_10-security/why-are-standard-users-able-to-install-and/1140032f-3437-4a63-9b1b-8a759f95ad18
Apps that get installed into a user's profile don't go anywhere near the system32 folder.
Check this out:
https://www.softether.org/4-docs/1-manual/A._Examples_of_Building_VPN_Networks/10.B_Exploit_SecureNAT_for_Remote_Access_into_Firewall_without_Any_Permission
That's a user-space VPN install that does not require ANY admin permissions.
Covert channels are no longer the hidden exploit, covert channels are becoming the norm ..... All your computerz iz belongz to skiddies.
"Apps that get installed into a user's profile don't go anywhere near the system32 folder."
But any system admin would prevent that from happening, or at least prevent the installed application from running, using tools such as Software Restriction Policies or AppLocker.
Problem is an increasingly large number of vendors who should know better choose to distribute software that is designed to work like this (I'm looking at you, Autodesk).
"Apps that get installed into a user's profile don't go anywhere near the system32 folder."
EXEs that get installed into %AppData% are often blocked because that is/was an attack vector for several ransomware variants. And that blocking rule causes great grief throughout the land as many, MANY, small apps (*cough* SPOTIFY *cough*) think that %AppData% is the perfect place to put their EXEs, even though it's intended to be a DATA area, not an EXECUTABLE area (hint: it's in the name...).
All those developers - including Google Chrome - are trying to circumvent security policies to get their applications installed on locked-down systems, usually business PCs where you should not install players or non-company-allowed browsers. Windows should have made all those locations non-executable.
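The blocking rule described above ultimately boils down to a path-prefix check on where the executable lives. A minimal sketch of that logic, in Python for illustration only (the profile path and deny-list are invented; real SRP/AppLocker rules are configured through Group Policy, not code):

```python
from pathlib import PureWindowsPath

# Hypothetical deny-list: executables under the profile's data areas are
# refused. Real AppLocker path rules live in Group Policy; this only
# models the matching logic.
BLOCKED_ROOTS = [
    PureWindowsPath(r"C:\Users\alice\AppData\Roaming"),
    PureWindowsPath(r"C:\Users\alice\AppData\Local\Temp"),
]

def is_blocked(exe_path: str) -> bool:
    """Return True if the executable sits under a blocked directory.

    PureWindowsPath comparisons are case-insensitive, matching Windows
    filesystem semantics.
    """
    p = PureWindowsPath(exe_path)
    return any(root == p or root in p.parents for root in BLOCKED_ROOTS)

print(is_blocked(r"C:\Users\alice\AppData\Roaming\Spotify\Spotify.exe"))  # True
print(is_blocked(r"C:\Program Files\Vendor\app.exe"))                     # False
```

Note how an app that installs to its own folder under `%AppData%` (the Spotify pattern complained about above) falls straight into the deny-list, which is exactly why those installs cause grief on locked-down machines.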
Try running yum/apt without sudo privileges - what happens? Linux packages too are installed with high privileges because they may need to set up the system to allow applications to run without high privileges - but the setup procedure may require creating resources in privileged locations and then setting the proper permissions to allow the application to run.
You really have no need to install files in system32 unless you are installing drivers (or maybe a few other system-level modules), and you must not. Writing there was deprecated long ago and no userland application should ever write there. If one application still does, it's exactly because its developers never understood how to write software for Windows after 1996. A few Microsoft redistributables may write there - but you must use the installer from Microsoft, not attempt the same in your installer. Registering .NET assemblies in the GAC has its own rules.
COM registration must follow precise rules as well, and it's based on GUIDs - if you follow the rules you can easily have different versions of the same component. Office does that routinely, probably because its developers read the instructions...
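The reason GUID-keyed registration avoids version clashes can be shown with a toy model: each component *version* gets its own GUID, so registering version 2.0 never overwrites version 1.0. This is only a sketch of the idea (the names and DLL paths are made up, and the GUIDs are generated on the fly, whereas real COM classes have fixed CLSIDs):

```python
import uuid

# Toy model of a CLSID-keyed registry: the key is the GUID, not the
# human-readable name, so "the same" component in two versions cannot collide.
registry = {}

def register(clsid, name, dll_path):
    """Record which in-proc server handles a given CLSID."""
    registry[str(clsid)] = {"name": name, "server": dll_path}

v1 = uuid.uuid4()  # stand-in for Widget 1.0's fixed CLSID
v2 = uuid.uuid4()  # stand-in for Widget 2.0's fixed CLSID
register(v1, "Widget 1.0", r"C:\App1\widget1.dll")
register(v2, "Widget 2.0", r"C:\App2\widget2.dll")

# Both versions remain independently resolvable:
print(len(registry))  # 2
```

Contrast this with interfaces registered by bare name (no namespace), where the second install simply clobbers the first - the "good luck" scenario above.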
Some package managers are explicitly designed to install in user folders, and if you have access to a compiler it's trivial to download dependencies independently of the system package manager and compile your own applications. There are exceptions to every rule.
Not even that. (Although that also). The magic word "fileless" has been expanded to mean more things:
Firstly, a non-malicious framework that is used to load some other malware. To the extent it's not malicious, it's difficult to detect.
Secondly, a framework that uses other programs. To the extent that it just uses other normal programs to do its dirty work, it's difficult to detect.
Astaroth isn't actually fileless by any stretch of the imagination. It just uses "fileless" techniques as part of its infection process. For example, it starts with you downloading a link, not an executable. Links are harmless, right? They aren't executables, right? They are actual files, and the Astaroth download is a zipped file containing the link, which is a file, but we'll call it "fileless" because there wasn't an exe to scan. That's not where it ends: it uses other "fileless" techniques, it downloads other files, and it even saves files in the file system, generally becoming less and less "fileless" the further you go. But a lot of virus protection sits at the edge of the system, looking for specific executables or executables that do specific things, and we'll call this "fileless" because virus scanners might miss it.
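That ZIP-wrapping-a-shortcut first stage is exactly the kind of thing an edge scanner *could* catch without running anything. A minimal sketch of such a check - listing shortcut entries inside an archive without extracting it - using Python's standard zipfile module (the archive here is a toy built in memory, standing in for the malicious attachment):

```python
import io
import zipfile

def lnk_names(zip_bytes: bytes):
    """List shortcut (.lnk) entries inside a ZIP without extracting it -
    the sort of cheap edge check a mail gateway might apply."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return [n for n in zf.namelist() if n.lower().endswith(".lnk")]

# Build a toy archive standing in for the malicious attachment.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("invoice.lnk", b"not a real shortcut")

print(lnk_names(buf.getvalue()))  # ['invoice.lnk']
```

Of course, a shortcut file is not inherently malicious, which is precisely the point of the comment above: the first stage hides behind things scanners are tuned to ignore.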
There has been malware like this, too - it would "survive" a reboot by re-infecting the local machine from one which had not been rebooted. One way to clean this up was to shut down every single machine connected to the network, which is not really viable if you have employees with laptops or working remotely.
It may hit the file system for a short time, but in ways that are less easy for an AV to spot. As the article says, the code may be obfuscated to make it more difficult to identify. Once loaded in memory, the files can be removed. The persistence mechanism may be built from files that, taken alone, look innocuous - but they can re-download and install the malicious code in memory when needed.
...does it survive a reboot then?
The article mentions spear-phishing, so we're probably talking corp machines. These often never get rebooted. Partly because the user is too busy[1] to do it, but also because a reboot takes the thick end of forever due to appallingly written corp admin shiteware, like the ubiquitous Altiris, spending ten minutes getting its lardy arse up and running as part of the process.
I've seen exec laptops that have been surviving on suspend/resume for over a year, despite screaming for a reboot to install updates for much of that time.
[1] a.k.a. lazy.
You need to check again then. A DLL can be downloaded from a remote server, buffered in memory, and be injected into an existing process all without ever touching the disk.
It’s easy enough to do this in Linux too - you can try it yourself with GDB. Other tools are available.
Note: the technique of loading code from the network without ever touching the disk is nothing new. In fact it was one of the major features of Java back in the mid 90’s...
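The core trick being described - code arrives as bytes over the network and is executed without ever being written to disk - can be illustrated in a few lines. This sketch uses Python source as the payload purely for safety and brevity (a literal stands in for the network download); reflective DLL injection does the same thing with a PE image and a target process instead:

```python
# "Fileless" loading in miniature: the payload exists only as bytes in
# memory. Nothing is written to disk at any point.
payload = b"def greet():\n    return 'loaded from memory'\n"

ns = {}
# compile() accepts bytes; the filename "<in-memory>" is purely cosmetic.
exec(compile(payload, "<in-memory>", "exec"), ns)

print(ns["greet"]())  # loaded from memory
```

An on-disk scanner has nothing to inspect here, which is why detection has to move to memory scanning and behavioural monitoring instead.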
"A DLL can be downloaded from a remote server, buffered in memory, and be injected into an existing process all without ever touching the disk."
It's still a file that was downloaded. The fact that it wasn't written to the local disk makes no nevermind. It is a file fetched from a storage device and loaded into working memory.
"It's still a file that was downloaded. The fact that it wasn't written to the local disk makes no nevermind. It is a file fetched from a storage device and loaded into working memory."
No. Simple category error: you're confusing container and content. While "loading a file into memory" is common IT parlance, the phrase is strictly speaking not accurate: it's the /content/ of a file that gets loaded into main memory (and possibly transmitted via a network connection). Neither the main memory of current digital computers nor a network connection is file-structured in any way. Main memory is structured in pages (and possibly segments) and accessed by numeric addresses; a network connection typically provides an unstructured stream of octets - no file system structure whatsoever exists in either case.
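The container/content distinction is easy to demonstrate: once read, the content is just bytes, carrying no trace of whether it came from a disk file, a socket, or anywhere else. A tiny sketch (io.BytesIO stands in for any byte source; the four bytes are the classic start of a PE/DLL image):

```python
import io

# The "file" abstraction stops at the read() call: what lands in memory
# is a plain byte string, indistinguishable by origin.
content = b"\x4d\x5a\x90\x00"  # "MZ..." - start of a PE/DLL header
from_disk_like = io.BytesIO(content).read()

# No filename, no directory entry, no metadata - just octets.
print(from_disk_like == content)  # True
```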
Sophistry isn't becoming, Cem Ayin.
The FILES are stored on a remote system. Copies are transferred over the network. Code contained within is handed to the CPU for execution. Regardless of how you try to abstract it[0], a .DLL is a file, period.
[0] I prefer "twist it", but whatever.
And the files are stored in your file system. Cem Ayin isn't trying to pretend that a DLL, or that a memory loaded file isn't a file: they are using the word "fileless" with a completely different technical meaning, specific to the area of internet malware. It's like "printer resolution" or "radio discrimination".
.. is why I absolutely HATE installers which don't actually contain the code they install, but grab that off the Internet during installation. It prevents decent inspection of what is being installed.
I left the Microsoft/Windows ecosystem ages ago, but this crap exists for macOS too - same risk. It will probably not come as a surprise that one of the companies doing this is called Adobe..
You can download a .deb file from an apt repo, and then install it locally. Or set up a mirror and install everything from that mirror. Also, the packages are signed and the signatures must be registered in the system, so it's not like you are downloading some random file. Unlike conda ...
What about repositories that ask you to install their keys? There are plenty of them, and often you have to use them whenever whatever is packaged with the distro you use is too old/buggy to be usable.
Even packages may pull code: say they are installing some Python app, and they run pip or something similar. And anyway they can still pull in any dependency.
Windows installers can be signed just as executables can, and Windows will display a noticeable message box when you try to install something that is not signed.
Again, it's just a matter of trust. Signatures are just a way to automate trust.
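"Automating trust" in practice means the client holds something it already trusts and mechanically checks every download against it. This sketch reduces that to a bare hash comparison (a real apt repo verifies a GPG signature over release metadata that in turn lists package hashes; the digest here is computed inline purely for illustration):

```python
import hashlib

# The client's side of automated trust: a digest it already trusts
# (in reality, anchored by a signature from an enrolled key).
trusted_sha256 = hashlib.sha256(b"package-contents-v1").hexdigest()

def verify(package: bytes, expected_hex: str) -> bool:
    """Accept the download only if its digest matches the trusted one."""
    return hashlib.sha256(package).hexdigest() == expected_hex

print(verify(b"package-contents-v1", trusted_sha256))  # True
print(verify(b"tampered-contents", trusted_sha256))    # False
```

The point of the comment above stands: the mechanism only automates a trust decision you already made when you enrolled the key (or, per the earlier comment, when a third-party repo asked you to install theirs).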
If you have a small number of systems, it's simpler to do that. If you have many systems, adding one more as a local repo is your best bet. You can run repo updates on a schedule that will make all of your installs identical.
But back to your point: what the previous commentard was referring to was the practice of providing a download link that only downloads a shim, which then runs and downloads the actual application. When these were first introduced I hated them, and I still do, because I've never had just one machine to install software on, and downloading two copies from anywhere isn't as fast as downloading one copy and then copying it to the rest of the local machines.
And that shim works exactly like yum/apt, albeit usually in a simplified way. It checks what the latest versions are, downloads them and any required dependencies, and installs the software.
Often it's a way to install only the software you require (or are licensed for) instead of downloading the whole lot - again, exactly like yum/apt do (bar the licensing part, of course). It may mean downloading a few hundred megabytes instead of several gigabytes.
But sometimes, downloading several hundred megabytes a hundred times or so is STILL worse than downloading several gigabytes ONCE. That's why Microsoft still offers offline installers--specifically for enterprise rollouts and the like.
Sure, if you have to install on several machines at once it does make sense to download once - the same reason you would install WSUS or mirror a Linux repository. Yet at the pace software is updated today, your multi-gigabyte download could be obsolete in a few weeks if not days, and you would need to re-download it to install updates. When everybody has 10G XGS-PON, 5G, 10G Ethernet or WiFi 7, maybe that's a minor issue - but not today.
For many users, downloading only what they need could be a simpler solution - and the same mechanism can also deliver updates - again, just as yum/apt do. How many people just download a minimal Linux install, and then install whatever they need from the network, instead of downloading the full image?
There are good reasons for doing it that way, such as having one installer that gets the latest (patched) version of the software, using a peer-to-peer network, or downloading only the patches it needs when upgrading already-installed software. After all, if you want the latest bug fixes, do you want to download the whole 20GB install, or a 20KB update to a single file?
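One common way an updater decides to fetch "a 20KB update" rather than the whole install is a per-file manifest comparison: hash what's installed, diff against the published manifest, and download only what changed. A sketch of that idea (the manifests and file contents below are invented stand-ins for a real release feed):

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical manifests: filename -> hash of that file's contents.
installed = {"app.exe": digest(b"v1 code"), "help.pdf": digest(b"manual")}
latest    = {"app.exe": digest(b"v2 code"), "help.pdf": digest(b"manual")}

# Only files whose hash changed (or which are new) need downloading again.
to_fetch = [name for name, h in latest.items() if installed.get(name) != h]

print(to_fetch)  # ['app.exe']
```

Real delta mechanisms (rsync, Windows .msp patches, binary diffs) go further and transfer only the changed *portions* of a file, but the manifest comparison captures why the shim model can be so much cheaper than re-downloading the full image.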
A lot of software is distributed in this way (not just on Windows), and often you can download the actual binaries instead if you want to. Using the installer means that if lots of people are trying to get it at once, they aren't all hitting the same file server to download a binary, but are most likely getting it via bittorrent, or similar, from each other. That means that you get the files quicker, and that the provider doesn't have to over-spec their servers to cope with that half a day after the software is released when everyone is downloading it at once, so that they can run at 0.1% capacity the rest of the time.
"After all, if you want the latest bug fixes, do you want to download the whole 20GB install, or a 20KB update to a single file"
On Windows, that's what an .msp is for.
"so that they can run at 0.1% capacity the rest of the time."
Isn't that what's supposed to be so good about using the 'cloud' - spin up and pay for what you need when you need it?
Isn't that what's supposed to be so good about using the 'cloud' - spin up and pay for what you need when you need it?
If you put your downloads on what is effectively a virtualised file server "in the cloud", then you may be able to cope with peak demand, but you will also end up paying for the bandwidth, and I'm willing to bet that the pricing tiers don't go up in small amounts. Peer-to-peer delivery means that it is mostly done with other people's bandwidth, which you don't have to pay for (in fact it's the people downloading the software that bear the costs).
Microsoft and other vendors have had to rely on their heuristic detection tools. In particular, AV tools need to be closely monitoring the use of WMIC command-line code and applying rules when loading DLL files - such as checking the age of a file and flagging or blocking newly-created DLLs from running. When you know what you are looking for, Lelli explains, fileless malware isn't particularly hard for newer security tools to catch.
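One of the heuristics named in the article - flagging newly created DLLs by checking file age - is simple enough to sketch. This toy version uses a file's modification timestamp against an arbitrary threshold (the five-minute cutoff and the temp-file demo are invented; real AV engines combine this with many other signals):

```python
import os
import tempfile
import time

MAX_AGE_SECONDS = 300  # hypothetical threshold: "newly created" = < 5 min old

def looks_suspicious(dll_path, now=None):
    """Flag a DLL whose on-disk timestamp is very recent - a toy version
    of the file-age heuristic described in the article."""
    if now is None:
        now = time.time()
    age = now - os.stat(dll_path).st_mtime
    return age < MAX_AGE_SECONDS

# Demo on a freshly written stand-in file:
with tempfile.NamedTemporaryFile(suffix=".dll", delete=False) as f:
    f.write(b"MZ")
    path = f.name

print(looks_suspicious(path))  # True - the file was created moments ago
os.unlink(path)
```

On its own this would flag every legitimate software install too, which is why such rules are applied in context - e.g. only when the DLL load follows a WMIC invocation, as in the Astaroth chain.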
Nor is it hard for (some) older security tools to detect. Agnitum's Outpost monitored DLLs and other system objects. At times it could be a pain with all its Allow/Block/Terminate popups, particularly when installing applications for which there wasn't a pre-existing ruleset. But when running a browser and loading a webpage resulted in the same warning, more often than not you could regard what was being downloaded as dangerous.
With Agnitum, the AV tools scanned the ZIP, and the heuristics and system-object protection would detect the attempt to run an unknown DLL.
Shame Yandex bought out Agnitum a few years back; they seemed to have many things sorted years before their competitors. With the Yandex purchase and the incorporation of Agnitum technology into their browser, it does seem Google/Mozilla are playing catch-up on in-browser security...
One of the biggest misconceptions about *nix. The fact that many things can be accessed with a file-like API doesn't mean they are exactly like a file, nor that something that, say, monitors new files on disk will catch accesses to other types of objects, since the code path inside the kernel is different.
Boom boom boom dah dee boom (probably a misquote; my ability to remember Wordsworthian lyrics isn't that strong). But anyway.
Dook Dook Dook Dook of Derll . . .
Classic El Reg. So classic it took me several minutes to figure it out. Well done, whichever Vulture Sub it is who's flying so high as this.
I look forward to further excursions into the past with references to Phyllosan and Gibbs SR (both long overdue.) Graded grains make finer flour: a reprise of that, too. As for the greatest musical compositions since the dawn of civilisation, Dook of Derll is still but a pale shadow of Purple People Eater.
That's what I've been thinking these past months. This default "STANDARD" user in Windows is still very powerful.
Standard users should just have a browser or an office suite/email client, plus other specialised tools for your business... period, nothing else.
These "lower than standard user" accounts should not have access to, or be able to execute, batch files or WMI via the CLI - let alone other system-specific debugging APIs on these DLL files. I was very surprised when a standard user had access to PowerShell, to JS, and to many other Windows scripting engines. Maybe we've forgotten how VBS worms reached even those destroyers and aircraft carriers at sea.