Not holding my breath
See title.
There's a way to vastly reduce the scale and scope of ransomware attacks plaguing critical infrastructure, according to CISA director Jen Easterly: Make software secure by design. "It is the only way we can make ransomware and cyberattacks a shocking anomaly," Easterly said during an RSA Conference keynote panel this week in …
Secure code costs time, money, resources, and better quality programmers.
I think part of the reason we're in this shitshow is that somewhere along the way it was decided that rather than making quality software, companies should just shovel out whatever crap appears to work, release downloadable updates (maybe, if they can be bothered) to patch over issues as they arise, and keep up with that cycle of iteration. DevOps, essentially. Now it's starting to look like the cheap, quick and dirty approach isn't going to produce a secure, quality product. Who'd have imagined?
So... I'm really not holding my breath.
Well, yes. That's the whole point of regulation: to turn the externalities of low-quality software into direct costs to the software providers, so they will find it less expensive (and thus more profitable) to improve software quality and security. I'm sure everyone at CISA is well aware of how that works.
Right now, the only potential "solution" I can see being proposed would be:
* Closed source
* Locked down with certificates
* Developers having to pay to get code signed (or it cannot run at all)
* CRapp stores must get THEIR piece of the action
* Operating systems having internal certs built in for signed code (which means stealing the master key is still possible; see the sketch below)
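For what it's worth, the core of that signing model fits in a few lines. A minimal sketch in Python using the real `cryptography` library - the file names and the single built-in vendor key are illustrative assumptions, not how any particular OS actually does it:

```python
# A minimal sketch of "signed code or it cannot run at all".
# trusted key/binary/signature paths are hypothetical names; a real OS
# loader does this in the kernel with a vendor cert baked into the image.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

def may_execute(binary_path: str, sig_path: str, key_path: str) -> bool:
    with open(key_path, "rb") as f:
        vendor_key = serialization.load_pem_public_key(f.read())
    with open(binary_path, "rb") as f:
        binary = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        vendor_key.verify(signature, binary,
                          padding.PKCS1v15(), hashes.SHA256())
        return True          # signature checks out against the built-in key
    except InvalidSignature:
        return False         # unsigned or tampered: refuse to run

# The weakness the list above points at: everything rests on that one key.
# Steal the vendor's private key and your malware verifies just fine.
```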
Micros~1 has been trying this since Vista. 64-bit Win7 REQUIRED kernel components to be signed BY THEM. And of course "a moderate fee" is involved.
It's really just a matter of trusting the vendor, and limiting the scope of malware. Easy "administrator" access BY DEFAULT is STILL a problem, as is a 'sudo'-derived security model that makes it too easy to gain root access whenever you want it. Convenient, yes, but it should NOT be "the default".
So the REAL problem is still between chair and keyboard... and settling for "the default" even when it is a BAD idea.
You can't charge a substantial fee and force everyone to sign their code: developers would simply flee to the next platform. Microsoft isn't going to risk that.
At best they'll throw up some pop-up dialog, but they'll never block you from running insecure code. That would be akin to suicide.
> Micros~1 has been trying this since Vista. 64-bit Win7 REQUIRED kernel components to be signed BY THEM. And of course "a moderate fee" is involved.
Of course they did: Windows XP and everything before it allowed any old shit into kernel space. Vista brought much stricter process isolation, including for the kernel, and it's still there in the newest Insider builds.
Even for drivers that require NO kernel space, a signed driver is recommended as well.
As for the "fee": the certificate is the cost, if you want to release to the public. But nothing is free.
And you can turn that driver signing off of course, if you want to test your own driver.
> So the REAL problem is still between chair and keyboard... and settling for "the default" even when it is a BAD idea.
That part is true, very true. Bad default settings in so many places, not just the Admin problem. Example? If you watch any administrator opening an MMC console, you will always see them moving the divider of the tree pane to the right. All of them. In every video. Wasting a few seconds on just that. Every freakin' time. Since Windows 2000.
> * Closed source
So you've learned nothing over the last 30 years? How can you believe that closed source is safer?
Closed source means "resource optimization": inexperienced and overworked junior programmers from "low cost regions" botching their code while they dream of the day when they'll be "managers", complemented by dressed-up marketing a**holes flogging that crap with the help of glossy poorpoint presentations, a quarter of every slide given over to stock photos of office ladies pointing at screens. Hacker's paradise.
There's also the little problem of undecidability. If a system is Turing-complete, as most computer hardware and OSes strive to be, then you can never be sure that it won't be vulnerable. (Vulnerability, like halting, is only Recursively Enumerable, not Recursive.)
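To make that concrete, here is the textbook reduction, sketched in Python (the naive `simulate` interpreter and `build_suspect` helper are illustrative): any decider for "is this program vulnerable?" would also decide halting.

```python
# Sketch of the standard reduction: vulnerability-checking decides halting.
import os

def simulate(p, x):
    # Deliberately naive "interpreter": run source code p with input x in scope.
    exec(p, {"x": x})

def build_suspect(p, x):
    """Return a program that is vulnerable iff p halts on input x."""
    def suspect(user_input: str):
        simulate(p, x)          # loops forever if p never halts on x...
        os.system(user_input)   # ...otherwise: a textbook command injection
    return suspect

# A perfect vulnerability checker, applied to build_suspect(p, x), would
# answer "does p halt on x?" - which no algorithm can do in general.
```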
Of course, real-world systems are finite, and therefore finite-state. But given the number of bits easily affordable nowadays, there are just too many states to examine. Is a vulnerability check function that is (low) polynomial-time in the total number of states even possible?
Maybe there's a Quantum Algorithm that will save us :-)
So much infrastructure and public and private organisations rely on software that was written once, a long time ago (*) and has not changed since. There are many perfectly reasonable explanations for this - (a) it costs a bunch of money to write software, (b) new software means new bugs, (c) people have forgotten (or never knew) what the original spec was for the software that already exists, so recreating it is actually quite a challenging task... and most importantly (d) the existing software 'just works' well enough to get on with your job.
So, yeah, call for secure software. But understand that the cost and time to provide end-to-end secure software in place of the stuff that's just sitting there already is astronomically high. Consider that Birmingham has spent nearly a billion dollars trying to replace its payroll system, and you get some idea of just how big the problem is.
(*) A long time ago being approximately three years - the half life of an average software developer in many organisations.
> So much infrastructure and public and private organisations rely on software that was written once, a long time ago, and has not changed since. [...] and most importantly (d) the existing software 'just works' well enough to get on with your job.
"Works well enough" may be sufficient for the manufacturer. I work in a *very* regulated industry where we have SOP's for almost everything exept making a pot of coffee...
My $workplace acquired a multi-million dollar/euro/pound automation line. The machinery is reportedly fine and dandy but the software... Jesus wept.
- Multiple antivirus packages fired up the klaxons, but the manufacturer didn't want to do anything until $secops intervened and threatened to cancel the deal with $legal help. A clean AV scan was a requirement in the agreement.
- Ancient DOSBOX version was found running some ancient DOS software because they don't have the specs or can't be arsed ($$$) to compile code for anything recent. Probably both.
- Manufacturer somewhat recently moved from Windows to Linux due to problems with updates (kinda understandable), because "Linux does not need updating". The systems are running already-out-of-support distros and we are not allowed to do anything about them (except firewall the bejesus out of them).
- Lots of security findings, such as no Secure LDAP support, obsolete encryption standards, etc. etc. etc.
- Made in Germany by a very successful company, consisting of "a bunch of mindless jerks who'll be the first against the wall when the revolution comes"
Yep, you describe the Real World. You cannot imagine how much SMB1 traffic I stumble upon during pre-checks for Server 2022 Domain Controller upgrades, along with the "we need hardening! NOW!" screams. Even some Cisco products, with the newest software, try to use NTLM from 1993 inside an SMB2 packet, and the server says "no", of course. This techcommunity blog post describes the exact same situation, albeit with a different vendor, of course - fiasco products are not the only ones affected.
>> Ancient DOSBOX version was found running some ancient DOS software
> Actually these days it is probably secure as no malware exists for it...
Well, if it's running on an emulator, any so-called security is probably illusory, but it's funny you should say that.
I run accounting systems on boxes which run DOS on bare metal.
No network stack.
Hack that.
Moving all desktops to Linux Mint would help a lot too.
I believe capability-based computing (essentially fine-grained memory access protection) will reduce or even eliminate the hacking of computers. And the rest could be fixed with laws mandating strong passwords, 2FA, no default passwords in devices, etc. etc.
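For the curious, the object-capability flavour of that idea fits in a few lines. A toy Python sketch (all names illustrative):

```python
# Toy object-capability sketch: code gets an unforgeable handle granting one
# narrow right, instead of ambient authority over the whole filesystem.
class ReadCap:
    """A capability granting read access to exactly one file."""
    def __init__(self, path: str):
        self._f = open(path, "rb")

    def read(self, n: int = -1) -> bytes:
        return self._f.read(n)

def word_count(doc: ReadCap) -> int:
    # This function can read the one file it was handed a capability for.
    # It was never given the authority to write, delete, or open anything else.
    return len(doc.read().split())

# The caller decides what authority to delegate:
# print(word_count(ReadCap("/etc/hostname")))
```

In Python this is only a convention - nothing stops code poking at `doc._f` - which is exactly why real capability systems (CHERI hardware, capability microkernels like seL4) enforce it below the language.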
Wouldn't
The only reason they don't target Linux Mint is because it is far less used in the commercial world than Windows.
If Mint were in use by 90% of the world's companies and government services, they'd be trying to find the holes in it 24/7 and then exploiting them.
The other points are perfectly valid, but that will add cost to the products...
And it would never get rid of the primary cause of infection: the users, who would cheerfully click on a link from a friend's e-mail saying "watch thiz fer lolz" five minutes after watching a colleague being crucified for clicking on an unknown link in a spam e-mail.
Since Linux is the most used operating system in the world on servers, mobiles and embedded devices, do you think they aren't looking for security holes in it 24/7? Think again, I'd say. It's just that they can't find any (or very few).
Also, configuration plays a big part in the security of Linux. A well configured, locked-down Linux system is virtually impregnable.
"Since Linux is the most used operating system in the world on servers, mobiles and embedded devices do you think they aren't looking for security holes in it 24/7? Think again I'd say. It's just that they can't find any (or very few)."
Yes, they are, and they find them. The main reason why desktops would be different is that, to hack into a server, you generally have to find something wrong with the configuration. And they do. Put a Linux system out on the public internet and within an hour you'll have had a thousand attempts to get into it. If it has any already-known vulnerabilities, those thousand will try them. It's not just trying basic passwords over SSH, even though those are very common. But still, you have to find your own door in. The number of Linux-compatible ransomware strains, written specifically because there are a lot of Linux servers and those hold the most valuable data to encrypt, demonstrates this.
With a desktop, you have the other method of trying to get a user to do something for you. Email them a shell script and tell them to run it. The shell script downloads a binary and runs it. No vulnerabilities needed, you now have access to that user's privileges just as much as if that was a Windows box. Of course the admins can configure the box to make that more difficult, but they can do that to a Windows machine too and they don't. If you think anything is impregnable, you do not understand security.
Do you think that security exploits for Linux do not exist?
Choice of some specific distro, or even OS, does not make you impervious to exploits.
To completely secure ANY computer system and render it impervious to all current exploits and future Zero day attacks requires one action, no matter the platform or OS.
Disconnect the power source, then lock it in a safe.
All current OSes are based on 40+ year old tech. Most security exists to protect the pathetic MS OSes' weaknesses, with over-complexity used to obscure the issues, bloating the OS to the point that the security fixes use more RAM than the OS itself needs to run.
There is no fixing everything wrong with these, they were great in the day, that day is not today.
It is time for a new OS, built from scratch, based on security, 100% modular.
Unfortunately, 'tradition' trumps doing the right thing, so let's just bloat the next version of winblows and harvest more data.
I thought Windows NT is only around 31 or 32 years old. Dave Cutler (well, probably not he alone...) made the genius choice of making the OS object-oriented, and letting inter-process communication be object-oriented as well, instead of the POSIX "everything is a file, and pipes speak text between processes" model (binary pipes excepted, of course, but then they are just a bit stream, not an object).
But of course, everything stands on big shoulders, so by that logic you ought to say "100+ year old tech".
Forget demanding your "new OS": unless you start writing your own, it won't happen.
As long as the same memory is used for both code and data there will be weaknesses. Ideally, all code should be in ROM, or at least in physically write-protected RAM. But this makes any update/extension process rather painful; it also makes systems less flexible and more expensive, so it's not likely to happen any time soon.
The primary source of the malware infestation is that Click-and-Install OS running on x86 hardware. What's needed is a Manhattan Project to design a replacement.
“Harvard architecture refers to a memory structure in which the processor is connected to two independent memory banks via two independent sets of buses”
> a memory structure in which the processor is connected to two independent memory banks via two independent sets of buses
Which most modern CPUs do. And they interleave them for higher performance if you have more than one RAM module.
With DDR5, the interleaving moved to within every RAM module for consumer hardware (two independent subchannels per DIMM), so four buses are the lowest default now. And then take a look at current AMD EPYC with its 12 RAM channels (effectively 24, since every DDR5 module....).
It is easy to say we need it (like we need anti-gravity drives); give us some new ideas and tell us how to do it.
The problems range from
developer stupidity - leaving user IDs and passwords in the code, or shipping with default passwords -
to genuinely difficult problems where a code path didn't do something it should have.
Will signing code really help? And what about when the bad guys get a valid certificate..?
You get code-blind: if you wrote the code, you assume that what you meant is what is written. It is the old principle of getting someone else to check it.
Requiring browser controls - like disallowing cross-site scripting - would be a good start.
Enforce TLS
...
Lots of solutions are known - just not used.
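Two of the known-but-unused ones fit in a dozen lines. A hedged sketch using Flask (the framework choice is illustrative): a Content-Security-Policy header that blocks inline script, and an HSTS header telling browsers to insist on TLS:

```python
# A minimal sketch of two of those "known solutions": responses that forbid
# inline script (mitigating XSS) and pin browsers to TLS for this host.
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_security_headers(response):
    # CSP: no inline <script>, resources only from our own origin
    response.headers["Content-Security-Policy"] = "default-src 'self'"
    # HSTS: browsers refuse plain HTTP to this host for the next year
    response.headers["Strict-Transport-Security"] = (
        "max-age=31536000; includeSubDomains"
    )
    return response

@app.route("/")
def index():
    return "hello"
```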
There are a number of suggestions in these comments that advocate ripping out something that has existed for a long time and rewriting it from scratch. It's not that this wouldn't make something better if you could accomplish it, because a new OS, written from the ground up, would probably be better than what we have now. There are two problems: you can't do it, and it wouldn't be perfect. We do not have an environment where starting massive things from scratch is feasible. People don't want to wait four years for an operating system - the development effort required to get it working in all the places that Linux does - which will then need new software written to run on it. They won't buy it, they won't run it. Even if they did, there is no infrastructure that will entirely prevent vulnerabilities. It's important not to let the perfect be the enemy of the good, but it is also important not to praise the good as perfect, or it will end up looking bad when it arrives.
We will have to work on securing things at multiple levels. It is more work, and it is a lot more painful, but it is not something we can avoid. No secure hardware design will prevent an insecure operating system from existing. No secure operating system will prevent an insecure application from existing. No secure application will prevent an insecure administration from existing. Only by getting security at all of these levels will we get anywhere. It is not possible to do that worldwide, but we can focus on making sure the parts we interact with are as close to that as possible. This means that IT people cannot ignore their requirements to make and enforce security policies and maintain their equipment by blaming the software for allowing something insecure to exist and that software writers cannot rely on the administrators to work around the parts they didn't want to write securely.
I realize that this sentiment is as broad and difficult to implement as Ms. Easterly's statements, but I still think it's worth keeping in mind. We will not solve this problem in one single leap. We will likely not solve this problem at all, but we can at least improve our position.
Fools are far, far too ingenious.
All you can do is separate the fools from the clueful and only allow those with clues to access corporate computers. Which will never work because it would remove computers from the desks of middle and upper management.
Another option is to unplug all ordinary users from the Internet at large. Only allow the clueful to use computers on the Internet-connected section of the corporate network. I'm in favo(u)r of this, as the vast majority of corporate users have absolutely no reason to access the Internet while at work. This is not a panacea, however (see: middle and upper management).
Whatever happens, DON'T PAY THE VERMIN! As in animal and child training, rewarding bad behavio(u)r is contraindicated. (You do have a proper, verified off-site backup system in place, right?)
Everyone keeps saying it cannot be done etc etc .... with lots of supporting stories.
BUT .... think of the cost to date of allowing all the crappy code to persist .... never-ending fixes, fudges and re-writes.
Is that cost *anywhere* near to the cost of *trying* to write better code that is secure by design !!!
I think it is worth trying, to establish the true scale of the problem .... instead of simply stating it cannot be done.
This is *not* like asking for a backdoor in some encryption, code does not need to be insecure or badly written; it is simply the standard we have been willing to accept.
So, try upping the standard that is acceptable !!!
:)
It's worth focusing on what specifically we say cannot be done. My other comment describes what I think is a nonstarter, but also suggests lots of places where we can and should increase our standards. In short, trying to solve it with one big change isn't feasible and wouldn't work anyway, and although making lots of small systems more secure is more work, it is the possible option.
We don't want tech to be controlled by the state. The USSR route is a bad route to take.
By all means lock down critical infrastructure (it does not need and should not have a connection to the public internet) and mandate air gapping for data that needs to remain secure. But if people are not going to be allowed to write code and release it freely for others to use, without some form of state sanction and paid process, then we will need a rebel alliance helping to develop unsanctioned code on unsanctioned platforms operating beyond state control or tech development will end. If the last ten years in the UK have taught us anything, it is that the state is toxic to technology, innovation and pretty much every other pie it sticks its fingers in. There are ways to operate infrastructure securely and handle data securely without switching to a tech dictatorship and operating computing like the CCCP.
Instead, design out the problems. We should be switching to distributed systems with data held on people's own devices, rather than in honey pots on central servers. We can run secure social media sending encrypted data packets user to user via old fashioned e-mail with a quasi-distributed topology. Hold less ID about people - so that if you lose it, you lose as little of it as possible. Don't use biometrics (you can change your password but not your corneas). And use less tech. Some processes (as Birmingham Council and Edinburgh U have found out) would be better using a mix of simple tech and paper. Some things - hotel door locks, reporting energy usage, paying for things in shops - are simply more secure and more resilient when the physical, tangible and human element is retained.
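The user-to-user encrypted packet part, at least, is off-the-shelf today. A toy sketch with the real PyNaCl library, leaving out key distribution and the e-mail transport entirely:

```python
# Toy sketch of "encrypted data packets user to user": authenticated
# public-key encryption with PyNaCl; only public keys ever leave a device.
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts to Bob: only Bob's private key can open this packet,
# and Bob can verify it really came from Alice.
to_bob = Box(alice_sk, bob_sk.public_key)
packet = to_bob.encrypt(b"status update for Bob's feed only")

# Bob decrypts with his private key and Alice's public key.
from_alice = Box(bob_sk, alice_sk.public_key)
assert from_alice.decrypt(packet) == b"status update for Bob's feed only"
```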
'National security' covers a wealth of toxic political abuse. Don't let 'data security' be used in the same way to allow a state grab for control of tech by people you do not like and do not trust.
The fundamental issue is HUMANS, specifically users. It doesn't matter how secure you build the system; some idiot user will do something to let malware in.
An example is physical security.
I remember years ago, during Y2K work, I turned up at an extremely secure bank IT installation. It was meant to be hidden in plain sight as a basic, standard, bland building. Discreet cameras, no signs, etc.
The taxi drivers all knew what it was because they all transported bank staff between the HQ to the IT building. When I arrived there the side door was propped open with a chair to allow some airflow because the office area was too hot from all the heat generated in the server room.
Oh, also, the largish comms array on the roof pointing at bank HQ may have been a giveaway.
Getting paid for bad code has to stop. Much as government had to mandate seatbelts, performance standards for public-network-connected software need to be created - and enforced. These cracks, outages, and issues are embarrassing, but also dangerous. It is time to acknowledge that, starting with system classification now, fix prioritization later, and finally standards-based testing by government agencies. I think the health department's kitchen inspection system is an already-operating model for this.
The majority of ransomware attacks are executed using stolen credentials from personnel with GOD rights to the company's data.
If you don't fix that problem, no amount of secure code is going to protect you.
If Jenn, the CIO, given her job because of her tits and not her skills, answers "OK" to the phone call - "Hi, this is Bob from IT. I'm new on the team and have been tasked with running some tests on user accounts; to do these tests, I need your current password" - then no amount of secure code will fix this.
You're right as far as that goes, but there may still be technical ways to defeat ransomware, raising huge alarms when certain operations are attempted even with full rights, logging everything so it can be reversed, etc. Give the system a certain degree of agency.
I haven't followed this in any depth for ages, there must be a ton of literature on it, yes?
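There is indeed a pile of literature. One simple, widely described trick is the canary (decoy) file: plant files no legitimate process touches and raise the alarm the moment anything does. A sketch using the real `watchdog` package - the directory and the alert action are placeholders:

```python
# One concrete instance of "raising huge alarms when certain operations are
# attempted": watch decoy files and scream if anything modifies them.
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

CANARY_DIR = "/srv/canary"   # placeholder: decoys no legitimate job touches

class CanaryHandler(FileSystemEventHandler):
    def on_modified(self, event):
        # Ransomware encrypting its way through the tree hits the decoys early;
        # a real deployment would page someone and freeze the file server.
        print(f"ALARM: canary touched: {event.src_path}")

observer = Observer()
observer.schedule(CanaryHandler(), CANARY_DIR, recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```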
Well, there are a large number of men falling for scam texts with beautiful images.
Most calls originate out of NK and the poor girls are kept captive and beaten.
I found women to be more careful about privacy than men.
Maybe you used a bad female hire as an example and not a generalization. If so, my apologies.
Just signing code is not enough; security means defence in depth: independent systems watching, and reporting, and blocking, and encrypting.
But any system like that could make it hard for Microsoft and FBI to build in back doors.
Internet traffic also needs to be tracked and blocked actively ... not to mention spam voice calls on my landline, ROFLMAO.
Lack of security was the big miss in the original Internet concept, well, we all have to start somewhere.
Except for two companies I worked with, every other company has resisted even having a static code analyzer run on their code.
A famous modem company specifically stated and made it clear that under no circumstances could we run binary code analysis.
At one company I walked out of a lucrative contract because the security team reported to the engineering director for application software and they refused to run analyzers of any kind. Not even nmap.
It was specifically called out in the contract that security companies were not allowed to do certain things; all vulnerabilities had to be approved by engineering.
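For anyone who has never seen one run: a static analyzer pass costs minutes and catches exactly the kind of thing below. A small illustration (the `host` parameter is assumed attacker-influenced); bandit, for example, flags the first version as a shell-injection sink:

```python
# The sort of finding a free static analyzer (e.g. bandit for Python)
# reports instantly - and the trivial fix next to it.
import subprocess

def ping_bad(host: str):
    # Flagged: shell=True with interpolated input is a command-injection sink.
    # host = "8.8.8.8; rm -rf /" runs the attacker's command too.
    subprocess.call(f"ping -c 1 {host}", shell=True)

def ping_fixed(host: str):
    # Clean: argument vector, no shell, nothing to inject into.
    subprocess.call(["ping", "-c", "1", host])
```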