Finally…
A cyber security tsar says what we have been thinking.
Software suppliers who ship buggy, insecure code need to stop enabling cyber criminals who exploit those vulnerabilities to rob victims, Jen Easterly, boss of the US government's Cybersecurity and Infrastructure Security Agency, has argued. "The truth is: Technology vendors are the characters who are building problems" into …
I agree with her and with you
I read this bit and laughed….
“at RSAC, nearly 70 big names – including AWS, Microsoft, Google, Cisco, and IBM – signed CISA's Secure by Design pledge…”
Microsoft signing up to “secure by design”? Seriously???? And Cisco have a decent track record of messing up on security. Google obviously think “security” doesn’t extend to people’s personal information. Same for Amazon (AWS)
There was a Reg article last month (or the month before) asking why it seems to be considered acceptable to have a Patch Tuesday. Indeed, WHY is it? Why is MS (and others') software SO crappy that there is a need to patch it EVERY month? Personally, I loathe all things Microsoft - every single thing they have ever produced (at least the version after the one they originally bought/consumed) has been utter shite (possible exceptions being the original minesweeper and patience games).
All these companies should be taken to task; not invited to sign up to a toothless manifesto that they will be (at this very moment) completely ignoring
Very much an easy thing to say... and obviously the view of somebody who has never built any software, nor had to actually pay for the result. The thing about software is that it just doesn't work that way. You can design with the best of intentions, build in quality at every step and really grind on it... and still have bugs. NASA is an interesting example. They spend some crazy amount _per line_ of code and still crash into Mars about half of the time when they do something new.
The truth is security is a manageable risk that is a rounding error against the net productivity that software brings to the user... even if it's insecure by design, especially given that insecure by design is the only _usable_ design. I mean, hell... the internet isn't very secure... but we get by.
I must agree with you. The answer to the question "Why does software require so many urgent patches?" is that the security landscape is continually changing.
Okay, that does not give a free pass for not sanitizing inputs - that should be punishable by a dozen lashings - but you can build a product with care and attention, making sure to protect against all known vulnerabilities, and then bam! A month later, a new type of vulnerability appears and you have to correct for it by patching.
That is the truth, but it should not hide the lazy programming many products are guilty of.
I'm a software engineer. There are some super obscure vulnerabilities, but most flaws are exceptionally bad code. I've left a few companies because the whole engineering department was about selling a product fast and cashing out before everything burns to the ground. SQL injection, command injection, authorization bypass, and self-destruct exploits were all OK with senior management.
Having developed (full stack) software for decades, it all comes down to attitude. It's just too hard. There will always be bugs. Risk management.
From CISA's 2023 list of the top 25 software weaknesses, here are the top 5:
1. out of bounds write (buffer overflow)
2. cross site scripting
3. SQL Injection
4. use after free
5. OS command injection
We sit here as purists and say don't fuck with my favorite language with your syntactic sugar, I might have a use for poking stuff out of bounds. Range and bounds checking might slow my app down. Although CPUs have outpaced software requirements, and if your app is slow, it's likely all the bloat from feature creep or the mega-libraries that get added for one or two functions. Years ago I made a joke about Internet Explorer hitting 75 megabytes. My current browser is over 400 megabytes.
None of those things on the list are new. The first four at least have been a major topic of discussion for a quarter century. And yet even when we are given better tools that help eliminate them, people won't use them.
Make no mistake, it's not just about getting hacked. Software bugs are killing people, and as things get more automated sloppy coding will increasingly kill more.
You can blame management all you want, but at the end of the day developers own at least part of the blame for the quality of software today. If you aren't willing to take some responsibility, you're a code monkey and not a professional.
In most code there will be software defects; achieving a defect density of zero in the general case is impossible. (Church-Turing.)
But seL4 demonstrates that you can produce certain classes of code with provably zero defects. So, for those categories, your argument has no validity. (Church-Turing applies to the general case, not to all cases of non-trivial software.)
So let's consider those classes of problem where you can't prove zero defects. What can we achieve?
Let's look at that list of yours.
Buffer Overflow. Very often, this can be detected with a static checker. You can therefore write a compiler that detects and fails code with buffer overflows.
Cross-Site Scripting. This can be blocked. If it's a problem, it's something that has been intentionally chosen to be problematical. I feel no desire to defend idiots.
SQL injection. Code shouldn't contain SQL at all. If your code has SQL, it's bad code. The SQL should always be on the database itself, which should not be directly callable but accessible through a front end. Do it like this and all the SQL injection in the world will do nothing.
Use after free. As with buffer overflows, you can simply have compilers reject code with such bugs. You can also use languages like Occam, where there's no dynamic memory.
OS command injection. If the program is executing as per the doctrine of least privilege, any injected command can't do much. And you should always run with exactly the privileges you need, not a single privilege more. But your program shouldn't call other programs, at least not directly, and should certainly never use data for any such call. Again, compilers can block such things.
In both the injection cases, it also requires that inputs aren't validated. Inputs should ALWAYS be validated.
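To make the validation point concrete, a rough sketch in C (the account-number field and its valid range are invented for the example): accept only exactly what you expect and reject everything else, rather than trying to clean up bad input.

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

/* Hypothetical helper: accept only a decimal account number in a known range.
 * Anything else is rejected outright rather than "cleaned up". */
static int parse_account_id(const char *input, long *out)
{
    char *end = NULL;

    if (input == NULL || *input == '\0')
        return -1;                      /* empty input: reject */

    errno = 0;
    long value = strtol(input, &end, 10);

    if (errno == ERANGE || *end != '\0')
        return -1;                      /* overflow or trailing junk: reject */

    if (value < 1 || value > 999999999)
        return -1;                      /* outside the expected range: reject */

    *out = value;
    return 0;
}

int main(void)
{
    long id;
    if (parse_account_id("12345", &id) == 0)
        printf("accepted: %ld\n", id);
    if (parse_account_id("123; DROP TABLE accounts", &id) != 0)
        printf("rejected malformed input\n");
    return 0;
}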
So, frankly, that entire list is self-inflicted damage. You could make all five categories of defect a criminal offence, and it wouldn't stop software being written.
We can actually go a bit further. Nothing stops a C or C++ compiler from supporting the same compile-time precondition and postcondition static tests supported in SPARK (a variant of Ada).
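There is already tooling in this direction for C. As a rough illustration (the annotations below use the ACSL style checked by static analysers such as Frama-C, not by an ordinary compiler, and the function is just an example), preconditions and postconditions can be stated next to the code and checked before it ever runs:

#include <stddef.h>

/*@ requires n > 0;
  @ requires \valid_read(a + (0 .. n-1));
  @ ensures \forall integer i; 0 <= i < n ==> \result >= a[i];
  @*/
/* Returns the largest element; the contract says the caller must pass a
 * readable array of at least n elements, and the result dominates them all. */
int max_of(const int *a, size_t n)
{
    int best = a[0];
    for (size_t i = 1; i < n; i++)
        if (a[i] > best)
            best = a[i];
    return best;
}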
Can we go further still? Yes. The existence of malloc alternatives like Hoard means that programs with bounded memory requirements can be given bounded memory pools that are guaranteed available. Such programs will always have the memory they need to run, regardless of what else is running.
There's also test-driven development, which can be pushed in preference to rapid turnaround.
I would not regard myself as a language purist, as you can probably tell. If you can't, then it might help if I said I knew 20+ languages.
To me, a language is syntactic sugar used to describe a problem to a computer. If you can't describe the problem well, then the sugar needs to be changed so you can.
I'm not sure you disagree, because you just made the same point. The top vulnerabilities all have simple fixes, but they aren't being used.
But note, a compiler static checker won't be sufficient for buffer overflows. They have to be checked at runtime. Every copy operation has to determine if the target memory location is large enough to hold the contents of the source. As an example, Delphi has had bounds and range checking at runtime since the 90s. Microsoft added bounds checking in Visual Studio in 2003 and made it the default in 2005. IBM and other vendors have done so as well. Why are we still seeing them as the #1 vulnerability?
The original post I replied to stated that bugs were just a fact of life, that the solution is risk management, and that if we didn't understand that we obviously had no experience writing software. My contention is that software developers share the fault, because management generally isn't versed enough to tell developers to use the simple tools that avoid these problems. Speed to market doesn't justify poor coding habits. Scrub user-entered data - you can write a routine to do it once and use it over and over. Turn on bounds checking. Use stored procedures or prepared statements with parameters (the values are then passed separately from the SQL text, so they can't be executed as code).
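As a sketch of what "parameters, not string-pasting" looks like in practice (SQLite's C API is used here purely because it is compact; other databases and stored procedures follow the same bind-don't-concatenate pattern, and the table and column names are made up):

#include <sqlite3.h>
#include <stdio.h>

/* The user-supplied value is bound as data, never spliced into the SQL text,
 * so "'; DROP TABLE users;--" is just a weird customer name, not code. */
int find_customer(sqlite3 *db, const char *name)
{
    sqlite3_stmt *stmt = NULL;
    const char *sql = "SELECT id, name FROM customers WHERE name = ?1;";

    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
        return -1;

    sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);

    while (sqlite3_step(stmt) == SQLITE_ROW)
        printf("%d: %s\n",
               sqlite3_column_int(stmt, 0),
               (const char *)sqlite3_column_text(stmt, 1));

    sqlite3_finalize(stmt);
    return 0;
}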
I despise Agile. I think it turns software development from a skilled profession to an assembly line and promotes taking a narrow view. "I'm just here to add this feature, I don't have time to refactor all the crap inserts from the last 10 developers just adding a feature" But management isn't to blame when the 'professionals' don't stand up and tell them their shortcuts are dangerous. Assuming management has said anything beyond 'faster'.
You may be an excellent SWE, but heck, you have no idea about basic security stuff.
Take the goddamn high horse somewhere else, sit in the corner and think about how many shitty PRs you contributed, as you are for sure part of the problem.
"Code does not contain SQL"? Yet you fail to understand one of the top 3 vulnerability classes, SQLi.
Do better. Read more.
I know more about security than you.
No, I have never committed a bad PR. That's because I know what I'm doing. I've been doing this stuff a lot longer than you. I don't even have to ask how long you've worked in IT security to know that.
You should NEVER have applications access databases. Ever. Partly because it's stupid and not secure, but also because SKILLED programmers NEVER tightly couple.
I/O should NEVER be in your primary code, it should be segregated out so that if something changes, you change only the interface, never the innards.
Christ on a pancake. This is like programming 101.
High horse? I own the bloody battlefield, because piss-poor wannabes like you get involved. Get the hell out of IT and programming, your ilk aren't welcome.
You're in a hole and continuing to dig now.
You should NEVER have applications access databases. Ever. Partly because it's stupid and not secure, but also because SKILLED programmers NEVER tightly couple.
I am always hesitant when I see "NEVER" (yes, especially when capitalised) used in a blanket manner; I automatically get suspicious. Engineering by axiom is a fundamentally flawed approach. There are times when a gatekeeper is appropriate, particularly when it is providing additional logic and functionality. There are plenty of other times, probably the majority, where it isn't. Aside from the increase in the attack surface, issues of security, concurrency etc. are well considered in most of the quality DBMSs, and if you think you can casually cook up better than those to support your app on a Tuesday afternoon you are either a) God, or b) an idiot.
I/O should NEVER be in your primary code, it should be segregated out so that if something changes, you change only the interface, never the innards.
There should be a degree of isolation, but it ultimately needs to be embedded, otherwise you miss the entire point of encapsulation: a module should do a job, not half of one, saying "oh, this bit's too hard, you'll have to do that for me". That approach in itself is a recipe for poor security and reliability, since instead of well-considered, well-tested generalised routines you end up with an ad-hoc arrangement cooked up wherever the resulting code is used.
Buffer Overflow. Very often, this can be detected with a static checker. You can therefore write a compiler that detects and fails code with buffer overflows.
But very often it can't. If the array bounds are variable, or they are defined external to the scope of analysis (i.e. crossing a function, module or even code-library boundary), you have little hope of determining anything at or before compile time. It's the usual knee-jerk security-by-axiom mantra with the same results - do half the job, job done. Who cares about the other half of the problem, because my code is now magically "secure"?
Cross-Site Scripting. This can be blocked. If it's a problem, it's something that has been intentionally chosen to be problematical. I feel no desire to defend idiots.
Have you ever done any commercial programming? If you are told "we are using this external provider for this" then that is what you do. Good luck getting your e.g. payment processing done without going through your payment processor's gateway. Yes, it introduces additional hazards that have to be mitigated, but sweeping it under the carpet and pretending it doesn't exist while screaming "La la! I can't hear you!" is not a security approach.
SQL injection. Code shouldn't contain SQL at all. If your code has SQL, it's bad code. The SQL should always be on the database itself, which should not be directly callable but accessible through a front end. Do it like this and all the SQL injection in the world will do nothing.
So all those tools for embedding SQL in code are hopelessly misguided? There is plenty of scope for them. I note here that you are not even advocating for stored procedures (which have their own set of restrictions) but a complete banishment as a mantra. I refer you to the points made above.
Use after free. As with buffer overflows, you can simply have compilers reject code with such bugs. You can also use languages like Occam, where there's no dynamic memory.
The magical compiler fairy again. If it's so easy, why isn't it being done? It can be shown that this is impossible in the general case. Often the determination of an object's "owner" is a policy rather than a technical consideration. That's without considering the complexities introduced by various shared data structures, where the same nodes appear in more than one higher-level structure. Pretending such difficulties do not exist is not a valid approach to managing them.
OS command injection. If the program is executing as per the doctrine of least privilege, any injected command can't do much. And you should always run with exactly the privileges you need, not a single privilege more. But your program shouldn't call other programs, at least not directly, and should certainly never use data for any such call. Again, compilers can block such things.
Which instantly and dramatically increases the amount of custom code out there in place of well-tested, generalised code. The Unix philosophy advocated composing small programs that call one another as one of its starting premises, and it has served the platform well. That was only defined 50+ years ago.
Storm in a teacup, I'd call it. All of these issues belong to a particular type of programming, something that involves users interacting with systems over a network connection. More often than not it's down to the rather weird way that programs are called in a web environment and all the issues related to transferring and parsing unbounded parameters that have special meaning to some website owner or another. Nobody will fix the underlying mechanisms because the entire world of e-commerce, of user tracking and all the other BS that blights our personal experience with computing depends on it. So it's just patch, patch, and patch again.
In the real world this sort of thing might be widely used but is actually just a relatively small corner of programming -- although one worrying trend is that this 'slap and dash' school seems to be seeping into the real-time world as the wide availability of powerful processors and bountiful memory makes it possible to migrate this methodology, and (unfortunately) a lot of programmers now don't know any better.

Everything really boils down to a couple of simple things. One is that when you're designing communications between systems or even processes you should demand the minimum but be prepared to accept any old junk (and at any rate it's fed to you). Defensive programming, essential if your system has to keep going. The other is that you've got to actively test your system -- the code has to be designed to be tested and you literally have to test it to destruction (and that does not mean "it ran overnight / over the weekend"). Results-oriented management may not feel the need to spend the time on this but it's got to be done. It doesn't hurt to write proper specifications as well (note that a 'specification' isn't a half-finished document file with a couple of vague pages of wish list on it -- again, tedious, nothing to do with banging out product, but a great investment).
Wow, lots to go on here. In general I agree with the principles, but some of the details are wrong (IMHO).
Buffer overflows... yes, a compiler can check for these. Look for the compilation option for range/bounds checking - that's where it has been for the last three decades. This does not, however, trap everything, and in reality it is very hard to trap everything, but we can do a lot more.

One of the most amazing tools I had as a developer, a couple of decades ago (ish) in Delphi, intercepted the memory allocations and paired this with standard MMU functions to flag up every single instance of the application wandering out of bounds. The sheer number of Operating System and standard library functions that had to be masked when running an application with this tool enabled really highlighted just how bad things were. In the end, one had to ensure that the application itself wasn't doing anything stupid and try to avoid known poor-quality library and OS calls. The tool was a developer tool, not for deployment, but with a good suite of tests, leaving it to run for as long as possible and using every function available in as many unexpected ways as possible, most things would be tracked down - the rest were more edge cases and sometimes timing related, which was a particular issue when real concurrency landed in Windows, i.e. multiple CPUs (later CPUs with multiple cores, but these had different bottlenecks). For example, one bug I found was that a library call didn't deallocate memory immediately and therefore an access-after-deallocation test was passed... most of the time. Took a painful amount of time to track down that issue. In the end, coding as safely as possible is the only way to go.
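For anyone who hasn't met a tool like that, here is a rough, debug-only sketch of the same idea on a POSIX system (this is the generic guard-page technique, not the Delphi tool itself; freeing and alignment handling are omitted): the allocation is pushed hard against a page mapped inaccessible, so the very first out-of-bounds write faults immediately instead of quietly corrupting a neighbour.

#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Debug-only "guard page" allocation: the object ends exactly where a
 * PROT_NONE page begins, so writing even one byte past the end traps. */
static void *guarded_alloc(size_t size)
{
    long page = sysconf(_SC_PAGESIZE);
    size_t pages = (size + page - 1) / page;      /* pages for the payload */
    size_t total = (pages + 1) * page;            /* plus one guard page   */

    uint8_t *base = mmap(NULL, total, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
        return NULL;

    /* The last page becomes the tripwire. */
    mprotect(base + pages * page, page, PROT_NONE);

    /* Push the object up against the guard page. */
    return base + pages * page - size;
}

int main(void)
{
    char *buf = guarded_alloc(16);
    memset(buf, 'A', 16);     /* fine                              */
    buf[16] = 'B';            /* SIGSEGV here, caught immediately  */
    return 0;
}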
SQL... application code that accesses a database requires SQL. How the hell do you expect the data to be retrieved and/or edited otherwise? The problem with SQL injection is where parameters are put in plain text into the SQL, not that SQL code is used. There are plenty of business cases where one could either make 12 different SQL queries, or one that is tortuously complicated (and therefore unmaintainable), or one just composes the required SQL in the application. There is nothing wrong with this approach; the issue is where the parameters are put into the SQL text rather than creating the SQL, preparing the statement, and then assigning the required parameters to the prepared statement - an approach that has been available for a mere three decades and is utterly inexcusable not to use. Validate your inputs too, of course, but never put the values directly into the statement manually.
One application often needs to run another application. Hell, this is the core of an Operating System. There is nothing wrong with this at all. Your point about executing with the lowest privileges is the important bit here, and for many years with Windows (which started out as a single-user, semi-stand-alone system) running as local administrator was either the only way to execute an application or, a little later, the preferred way, because it saved a developer from having to think... and from having to put files in the correct locations. Windows has long had defined file locations for application code (i.e. only editable by installer-level applications), common application data and per-user application data - all just one very quick API call away. Yet many inept developers still wrote configuration and log files to "Program Files" rather than the correct location, and through just doing this they required either that the application ran with elevated privileges or, for later and similarly inept developers, that the security configuration of the read-only "Program Files" path was changed so their application could continue to violate security. Certain Microsoft applications still do this rather than use the correct file locations, so it is not just 3rd-party developers to blame for this.
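For completeness, a minimal sketch of the "run another program without a shell" pattern on POSIX (the converter path and filename are hypothetical): the untrusted value travels as a single argv element, so an injected ';' or '&&' has nothing to latch onto.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run a converter on a user-supplied filename. The filename is passed as a
 * single argv element, never through a shell, so "file.txt; rm -rf /" is
 * just an oddly named (and probably nonexistent) file, not two commands. */
int convert_file(const char *filename)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;

    if (pid == 0) {
        /* Hypothetical helper program; no system(), no sh -c. */
        execl("/usr/local/bin/convert-report", "convert-report",
              filename, (char *)NULL);
        _exit(127);                       /* exec failed */
    }

    int status = 0;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}

int main(void)
{
    return convert_file("quarterly.txt; rm -rf /");
}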
If MSFT had actual competence, they would essentially copy the Android sandbox system over to Windows. Each application can only access a certain part of the mass storage and can only read/write certain types of file extensions. Why does Excel need to access *.cpp files?
Why is there no sandboxing and crypto-signing of Excel VBA macros?
Why is there no scheme for administrators to tailor such sandboxing policies to their company's needs?
They have the power to force developers into a secure sandboxing scheme, but they chose to do nothing.
"You can blame management all you want, but at the end of the day developers own at least part of the blame for the quality of software today. If you aren't willing to take some responsibility, you're a code monkey and not a professional."
That's true on the individual level, but bad processes and incentives drive it on the systemic level. You can spend more time designing secure systems and writing secure code, extra work that nobody in your company asked for, while the developer who follows Agile literally by coding up the Minimum Viable Product (MVP) to pass the demos as quickly as possible will have better metrics and justification for promotion.
Space probes are unreliable in part because radiation causes system damage. Radiation-hardened processors and radiation-hardened memory still suffer from radiation-caused glitches, and no program can fix that.
But there's also a hardware issue. Rockets impose enormous stresses on the probes they carry, from the acceleration and the vibration. Micrometeorites pepper the hardware constantly, causing structural damage. And the radiation will damage the materials still further.
And then there's the descent onto Mars. Very thin atmosphere, but significant gravity and very uneven ground. Not a good combination.
The probes being lost is almost never due to buggy software.
But space probes also have to contend with design-process errors. NASA lost a Mars probe because the design team used old-school English units and the NASA flight team thought the numbers had been converted to metric as per agreement.
https://www.simscale.com/blog/nasa-mars-climate-orbiter-metric/
That's not a software error though, that's a process error. Real scientists use metric because (a) it's much easier than trying to work out how many groats per square foot imply the acceleration of chained beavers to a tetroid and (b) only three backward regimes on the planet still use Imperial units. Therefore for international collaboration, and things like space flight really require international collaboration or at least the best that an agency can attract, standard units are used instead.
Other critical US errors in space technology have been where parts were made or supplied to the incorrect size as there is a difference between metric measurements and imperial "close enough" measurements. Just because an imperial size nut happens to fit a standard size bolt does not mean that it fits correctly and aerospace devices really put things through hell, and sometimes back, when it comes to use.
Would the laxity allowed in software production be allowed in any other life-and-limb-threatening industry?
If a car's brakes regularly failed would the manufacturer be let off with a slap on the wrist and a fine?
If *any* physical product was sold with a defect would the producer be let off with a promise "to do better next time"?
Would the laxity allowed in software production be allowed in any other life-and-limb-threatening industry?
Pretty much. Historically many new industries begin with a period of significant danger to customers, and which in retrospect arguably continued too long. Automobiles would be a biggy in terms of complexity but most home appliances were conceived with much less concern for safety than we accept now. Surgery. Healthcare in general. Electricity and gas supplies to homes. Buildings (take a look at the foot dragging in the UK over cladding fire-safety for a good recent example).
The factors are the same as in software right now. To an extent it's not economically viable to produce high quality from the start. The demand is large and needs to be met quickly at an affordable price so corners get cut. People claim they didn't see the dangers ahead of time. Then a culture of not providing or expecting safety sets in because it's always been done before without. Regulation lags behind. Responsibility for the situation is split between industry, consumers and regulators and it takes a lot to incentivise everyone sufficiently to change.
Change is only going to come gradually with everyone pushing in the right direction. It won't happen because one stakeholder enthusiastically tries to pin the blame entirely on another.
Yes, the physical products that have to constantly change include: CPUs. And GPUs.
They are announced on as little as an 18 month cadence. That's pretty frequent in my book.
One neglected fact: the design and manufacture of CPUs and GPUs relies on the correctness and accuracy of software.
Software bugs may cause hardware bugs which provoke software bugs....it's bugs all the way down.
(You can google Intel CPU Errata for some scary lists, not the least of which is how long the lists are...)
> Yes, the physical products that have to constantly change include: CPUs. And GPUs.
Not in the same way software changes.
The equivalent would be to redesign half the ALU core while the assembly line for the new CPU is 75% built, and we decided that our word size is 70 bits instead of 64, because big number sells easier. Oh, and the whole thing now has to be triangle-shaped, because marketing thinks that looks more "zazzy!". And before I forget, Janet from sales read this thing about "ternary logic" somewhere on facebook, and we had a C-level meeting, and we really want that in the product as well, so make it happen, alright?
> They are announced on as little as an 18 month cadence.
Cute. Meanwhile, software projects sometimes change core requirements single-digit DAYS before launch. And in some cases, AFTER the launch.
Oh, and btw, that 18-month cadence is the result of an incredibly complex planning and logistics process that starts YEARS before the announcements go out, including multiple R&D teams working on the next few generations of chips while they announce this year's.
There have been security standards for software since the 1990s, and these are regularly updated and expanded. For example Common Criteria and all its derivatives.
Also, security was added to other standards such as ISA-IEC62443 for industrial control.
And there is a whole raft of industry specific standards, such as PCI (payment), HIPAA (healthcare)
I think solely blaming devs is not an answer to a complex problem here. Sure, I agree that a software engineer trained to write secure code needs to be held to the same strict standards as a real-world Civil or Mechanical Engineer. But a breakneck, ever-changing "move fast, break things" industry with a vast skills gap means companies have to pick people who are just good enough to keep up with demand. This is ultimately a market and business decision in the end, and having a dev as the default scapegoat doesn't really solve anything. So put down the pitchfork and show some stinky overworked neckbeard dev some love and hugs, wontcha? :)
No, the issue is not just about the developers.
The issue is that the whole software industry has lived by the rule of no responsibility for practically anything. So the goal is always to add features or shiny pointless interfaces at the lowest cost, instead of careful design, development and test with a fixation on making things secure. The industry has become so bloated in code size as more and more stuff is added, because it is cheaper to import GB of 3rd-party stuff for a trivial feature than to code it directly, and while code reuse in the form of tested and maintained libraries is a VERY GOOD THING, practically none of that bloat is supported or maintained in any real sense.
Then you get the endless compatibility churn, so effort has to be put into making it work with the XYZ new OS/library/browser version/hardware interface, instead of nailing down known issues.
Who can afford to do a job properly if another company can undercut you with slap-dash put together crap that is faster to market, cheaper, still sold legally as no standards to meet, and more "shiny" appealing to the consumer?
For years, even as a software developer, I have hated the supply terms of software which are, effectively:
"We, the suppliers of this software, generously from the bottom of our bank account, graciously let you, the user, borrow a copy of it. This software may not do what you want it to do, and probably doesn't even do what we say it does, but we don't care."
With the advent of Internet delivery where patches could be made available quality got even worse because things could be released and anything "serious" (to the bank balance) could be patched later. Or the user could just buy a new version, whatever increased profits. Therefore the license terms effectively added:
"Give us more money for the upgraded version because we are no longer interested in the old version and no, we won't fix things, our marketing department got a new box of crayons at Christmas this year and that's more important.".Now things have moved onto subscription services which effectively added more to the one-sided horror from software suppliers:
"We noticed that not enough of you were paying for new versions of the software, and this makes our shareholders anxious, therefore from now on you will have to pay a subscription to use this software. Whatever you do, do not multiply out the cost of this subscription over a few years and compare it to the cost of a fixed license. We will either stop selling a fixed license or make it so expensive that the subscription appears to be good value."
I have seen absolutely shitty code from a major database software vendor and it was right in the "front door".
You could bring down said database system with elite hacker tools such as telnet+random typing.
This company was also said to have been founded with capital from U.S.G.!
Corporations must be forced to have much better standards, much better processes. The Wild West Phase of software development must be ended.
Proper Regulation and Laws must be written and enacted to achieve this.
I am an old-school ASM developer, on small embedded systems only. This helped me later when directing the software development of our product line; we tried to do this efficiently. One of our key focus points has always been code efficiency.
It seems that we are one of the few doing this. But it helped us *very well* in tackling bugs and all sorts of other nasty problems.
Why is no one talking about code efficiency? Because of all the great resources we have? If you compare the quality of all the hardware we have at our disposal and then think about what software runs on it... it makes you wonder what could come out of the *same* box if the code were more efficient (and thus certainly more secure).
My thought for the future is that software development must change radically. The way it is done today is not sustainable any more. Too many security problems and risks, and only because no living soul understands what's going on "under the hood" any more. We have turned most operating systems into monsters.
I am sure that I will get slaughtered with the above writing, but hey, at least give this a thought.
Indeed.
You can see the bloat with phone apps as well, storage and compute is free as far as the developers, or more likely their project managers, are concerned.
Microsoft Authenticator on iOS is something like 250MB which seems excessive for the little it does.
Revolut's banking app was up at over 500MB but they seem to have brought it down to 373MB currently.
Paypal is an unbelievable 355MB for the minimal amount of stuff it does.
"You can see the bloat ..."
I can't upvote this post enough.
As a developer who in the 1970s and 1980s wrote software in M6800 assembler for example for nuclear material security and for data reduction in hospital laboratories I'm constantly staggered by the wastefulness of current software trends. I used to take raw calibration and measurement data and turn it into usable diagnostic numbers, in addition to running the feedback loops which stabilized the gain in the nuclear pulse amplifiers of sixteen measurement devices, all inside four kilobytes of code.
Last year I thought about putting a data compression library into some financial software that I started work on in the mid-to-late 1980s, and which I still support.
My software is mostly written in C, with a smattering of assembler. It's a multi-user system which maintains records for stocks, customers, suppliers, sales, purchases, financial transactions, ... basically everything you need to know where you are in a business which buys and sells things. The executable weighs in at just over 600kBytes.
The data compression library I looked at was five times as big. Half the machines running my systems don't even have that much RAM. I thought of a better way.
The thing is, most people nowadays would think three megabytes is on the small side.
Me, I wonder what on Earth all that code can possibly be doing?
This is the culture. If it works then why tinker with it.
Is it slow? Other apps are slow too.
Is it large? Other apps are large too besides storage is cheap.
Does it consume a lot of energy? Batteries are improving; by the time we optimise the code (where do we even find developers who can do that, and how much is that going to cost?), there will be phones with longer battery life. No point.
I once developed embedded software for a nice MCU with 4K Ram and 64K Flash. There was "plenty" of space left in the end. I created C++ Objects that consume 1 byte/octet.
So - you can indeed use a high level language and still create very small executables with a small Ram consumption.
The MCU was an AT90CAN64. Very powerful device, actually.
Surely it is the fault of poorly skilled developers, and totally not the fault of the management who hires them to maximise profits.
Training?
"No we can't do any training because the moment they complete it, they'll move jobs"
Why don't you pay them more, so they stay?
"Shareholders won't like that. Product is coming up nicely, we are doing fine with what we have."
How do you know that?
"We haven't got any major complaints or issues."
Yet?
"We will cross that bridge if we come to it."
Engineering Managers, Engineering VPs must be held legally responsible for sub-standard development techniques. Up to and including jail time for things like hard-coded credentials and obvious violations of regulations. Corporations must be made financially liable for violations of regulations.
Of course, now the question is "what are proper software/system regulations ?". A lot of damage can be done by stupid regulation, as always with laws+regulations. Doing nothing is not an option either, as U.S.G. has now found out.
Also, there must be a "ramp up" phase, from the current Wild West approach to Proper Regulation.
As a first step, force corporations to use PC-lint on their C code and fix or justify any PC-lint complaints. In the long run, make them prove memory safety of internet-facing code (the first layer of SW) OR use Rust/Java/C#.
Yes, we need a good conversation about this. Civilized, enlightened, rational.
Quote: "...Makers of insecure software...."
Quote: "...Jen Easterly, boss of the US government's Cybersecurity and Infrastructure Security Agency...."
Facing both ways at once are you....Jen Easterly??
Some of us KNOW FOR A FACT that the US Government (aka Fort Meade) has encouraged Cisco Systems to ship "insecure software"...........
Why not take it on the chin?..........you Jen Easterly are part of the problem!!!!!
Why am I not surprised that the "US government" wants to walk both sides of the street at the same time?????
But, as mentioned by several above, writing bug-free, secure code is well-nigh unattainable, especially in a consumer economy where most often the cheapest product wins, not to mention the quickest to market.
Heck, I literally woke up this morning realizing that I left off an entire necessary option in a project on which I'm working and now have to recode a module and redeploy before it borks someone's research results.
Just saying "nerd better" is not a magic panacea.
Perhaps we could establish some agency akin to the US National Highway Traffic Safety Administration (NHTSA) to do the equivalent of "crash testing" on products but I can hear the howls of pain from the Silly Valley bros over interference with innovation, the dead hand of government, yadda yadda, after just thinking such a thing. Probability zero.
Even if we did create something of that nature, how would we test all these widgets in all the permutations and combinations that exist, especially when they may interact with each other in strange and possibly undefined ways.
We do have UL, what used to be known as Underwriters Laboratories, but submission of a product to UL is voluntary (they're a private, profit-making organization) and they're really only concerned with things bursting into flames when they're plugged in. My understanding is that they've also got a huge backlog and it takes forever to get a product approved.
What's the answer?
Heck if I know.
One thing I'm pretty sure of is that naming the baddies by insulting names is probably not it. Most of those clowns would probably revel in being called "Scrawny Nuisance" or "Evil Ferret," I rather suspect.
Just look at some of the Xitter handles people take of their own volition.
Just look at "Elon Musk." What kind of name is that?
"but I can hear the howls of pain from the Silly Valley bros "
Given the way that the USA govt is disappearing up its own fundamental orifice, they may have good reasons for doing so.
That said, there are OTHER national/international governments which could take up the baton
Well, the EU is a bunch of weakling-pacifists who will run to U.S.G. whenever a REAL threat (such as Vladimir or the Neo-Caliph) crops up. They can regulate to death (e.g. the USB connector regulation), but they are almost unable to develop an industry of their own.
India - highly corrupt and still kinda third world.
China - all depending on a single man ?
Brasil - bunch of lefties with bad friends.
So in the end U.S.G. must take the lead in this subject.
I did not attend the conference in question, so I don't know the entirety of what Ms. Easterly said. But the phrase "flawless code" does not appear in any of the article's quotes of her.
Drop the straw man of "flawless, perfectly secure code" and start with "more reliable, more secure code." Close the holes, fix the bugs, one by one. It takes years, and it's generally thankless. Welcome to your tech job future.
This needs to extend to installation practices and expectations, as well. I can't tell you the number of times I have been provided with installation instructions or even a helpful 'installation engineer' who expects a service account with admin privileges (I remember SolarWinds and CommVault specifically wanted accounts with Domain Admin privileges) or some other careless configuration that flies in the face of any security best practices. One may think the answer is to provide more limited configurations yourself, in order to keep configurations in line with security policy, etc. However, if you deviate from the given instruction set, the vendor will turn around and refuse to provide any of the support you paid for because you did not install it to their specifications.
I once worked with an engineer who was a true genius when it came to firewalls - setup, rules, configuration, changes, upgrades... however he never passed his CCSE (Check Point I am looking at you).
Why did he never pass it? It's simple: he knew each customer's infrastructure inside out, and Check Point's "perfect world scenario" exam questions would bring a company down if he implemented what they said was the correct answer to the questions in their exam.
I'm talking about huge, huge companies here, ones that your pensions rely on.
There are far too many companies relying on old, creaking infrastructure with easily exploitable bugs and defects. If you strap that on to the latest and greatest protocols customers demand, then you have the perfect scenario for a world of pain and breaches. Acquisitions just add to the mêlée of creaky old tech mixed with fancy new tech, and they don't work well together.
I'll bet most of these businesses have no idea about their hardware and software stack - it just works and makes them money... until it doesn't.
It's a sad state of affairs and will take decades to unravel, but until we do away with bolting new tech onto old cr*p we'll always have these issues.
My $0.02.
Corporations who provide essential services OR have more than 10% market share should be required by law to:
+ document all known weaknesses such as out-of-date/out-of-patch software and hardware, report to government
+ document all known weak scanners, parsers in use, report to government
+ lock down these insecure systems into enclaves with minimal external connections
+ similar sane measures to mitigate the effects of outdated and/or weak systems
Top notch corporations already do this at moderate cost and with great success. Now force the sloppier ones to do the same !
In most instances the fault lies with management - provide developers time to write the product properly, schedule in regular penetration testing, and check code as new exploits arrive.
Not always however. There was an unauthorised information disclosure at work, due to a 'smart arse' developer using an inappropriate hashing algorithm prone to collisions on large amounts of data. Even worse, another developer didn't think it was their responsibility to know about such flaws.
If you don't understand what the functions you're using do, don't use them! Looking up the trade offs with cryptography and hashing is a minimum standard, as is checking if a function is thread safe, or has any other documented issue. Know when to write functions yourself, and when it should be left to a third party library (cryptography, anything to do with times and dates).
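To put a number on how quickly an "inappropriate hashing algorithm" bites: the birthday approximation says that after n records hashed into d possible values, the chance of at least one collision is roughly 1 - exp(-n(n-1)/(2d)). A quick sketch for a hypothetical 32-bit hash (the figures are illustrative and nothing to do with the actual incident above):

#include <math.h>
#include <stddef.h>
#include <stdio.h>

/* Birthday approximation: P(collision) ~ 1 - exp(-n*(n-1) / (2*d)).
 * For a 32-bit hash this bites far sooner than intuition suggests. */
int main(void)
{
    double d = 4294967296.0;              /* 2^32 possible hash values */
    double counts[] = { 10e3, 50e3, 77e3, 200e3, 1e6 };

    for (size_t i = 0; i < sizeof counts / sizeof counts[0]; i++) {
        double n = counts[i];
        double p = 1.0 - exp(-n * (n - 1.0) / (2.0 * d));
        printf("%8.0f records -> collision probability ~ %.1f%%\n",
               n, 100.0 * p);
    }
    return 0;   /* around 77,000 records the odds are already about 50% */
}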
Each and every US company will have received an NSL; take that for granted.
There are by far too many "forgotten" hardcoded admin credentials and other faults clearly intended as backdoors.
But to address this issue is clearly off-limits.
NSLs cannot force a vendor to implant a backdoor. They can force a vendor to provide the data it has already collected.
But yeah, there must be an enlightened discussion about Lawful Intercept and about backdoors.
About data collection: in my opinion it is a Stasi-like technique to collect on people proven to be harmless and apolitical. Security agencies must delete collected records if the target proves to be fully harmless and non-political, non-military.
Then make it "the maximum number of people agency can collect on is 1% of population". That will make them prioritise who is actually a baddy and who is not. Whenever they add a new target, they must wipe an entire "old" target of their choosing. They can use Least Recently Investigated as a quick algorithm.
I draw a big distinction between a vendor who does their best and issues regular product patches and updates and a vendor that ships an initial buggy release that never gets patched or updated, save for buying a replacement which has new bugs and security holes. The latter type of vendor is, I agree, a huge part of the problem.
The easy solution is to stop buying cheap Chinese and Asian CRAP that doesn't get properly supported. But people keep getting suckered by low prices. There are reasons those prices are so low...
Except that there are non-Chinese companies with very similar problems. There should be criminal investigations and FINES for leaving hard-coded access credentials inside routers and other IT gear. And yes, that includes major vendors whose kit essentially carries the internet's traffic!
Also, there is the nagging suspicion that these backdoors were created at the behest of "security" agencies.
Good to hear Mrs Easterly now wants to clean this up. Does she have the backing of her "former" employers ?
Whenever there's a statement along the lines of 'X should do Y', it's either uttered (a) because the person who states it has no say or control in the matter, (b) they misunderstand the dynamics that lead to Y in the first place, or it is (c) virtue signalling.
In the first, repeating a well known need just confirms a modicum of domain expertise (on the problem, not the cause).
In the second, if they did, and cared, they would have worked on fixing it, rather than talking about it.
Instead of this, CISA could work to persuade companies to build secure software (not having them sign a pledge, but persuading them, which often happens behind closed doors).
But as-is, this reads to me along the same lines as someone in a G7 country with a big truck and 24/7 airco in a house built for active cooling watching a documentary about global warming and writing a long social media post. True, but besides the point.
They are different terms for different purposes. Safety is the protection against unintentional harm to humans. Security is the protection against intentional harm to humans. Engineering for safety is the norm in many practices. Engineering for security is not the norm at all. If someone wants to throw another person off a bridge then there is no substantial protection against it. Just some minor safety barriers for accident reduction.
Lawyers need reminding of this distinction sometimes too.
The comparisons being drawn are with engineering for safety. Where other engineering disciplines have had heavy regulation for many decades, if not centuries, that regulation was for safety rather than security. All the bleating that software engineering has somehow had it easy by not being punished for insecure code misses the fact that security is not safety, and safety is not security.
If the bad guys can find and exploit the holes in code, why can't the company whose code it is? In-house whitehats, but not the programmers themselves--a different team. And if you don't have in-house people with that skill set, then hire some outside company that does.
And yes, I'm naive...
There could be a regulation that the software vendor has to create an Exploit Award Pool (EAP) of (say) 1% of revenue. Whoever can present a working exploit would be awarded e.g. 1/30th of the yearly EAP. Whatever has not been consumed of the EAP would go back to the vendor at the end of the year.
Of course the details must be hashed out. E.g. employees of the vendor would only qualify for a reward if they did not work on the code related to the exploit.
It looks like Mrs Easterly knows what she is talking about:
https://en.wikipedia.org/wiki/Jen_Easterly
BUT - how do we make companies actually use state of the art techniques such as
+ proper, strict input scanners
+ proper input parsers with a well-defined, strict grammar
+ memory safety in scanner+parser and other externally facing code? (A memory-safe STL would already be a great improvement in C++)
+ mathematical proof of memory safety if C is used. See seL4
+ extensive fuzz testing. Documented fuzzing concept.
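As an illustration of the fuzzing point, a minimal harness of the libFuzzer kind, where parse_message stands in for whatever internet-facing parser is under test:

#include <stddef.h>
#include <stdint.h>

/* Stand-in for the real internet-facing parser under test: here it just
 * insists on a 4-byte magic header before doing anything else. */
static int parse_message(const uint8_t *data, size_t len)
{
    if (len < 4 || data[0] != 'M' || data[1] != 'S' ||
        data[2] != 'G' || data[3] != '1')
        return -1;
    /* ... real parsing would continue here ... */
    return 0;
}

/* libFuzzer entry point: the fuzzer calls this millions of times with
 * mutated inputs and reports any crash, hang, or sanitizer finding.
 * Build with e.g.: clang -g -fsanitize=fuzzer,address harness.c */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    parse_message(data, size);   /* must never crash, whatever the input */
    return 0;
}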
There should be regulations along the lines of
A) "Must comply until 2026, if software is to be used inside U.S.G."
B) "Non compliant banking/insurance/mission critical software is taxed at double VAT"
As a software engineer myself I recognize that we need proper and intelligent regulation. Just going on with the Wild West of the last 50 years does not cut it, though.
We need a conversation about useful measures, which are then written in law and regulation.
Defending and protecting the indefensible and the inequitable always leads to increasingly rapid and peculiarly stealthy popular failure across more fronts than there can ever be defences designed and made readily available for ......... therefore, pure simple, raw common sense suggests supposed intelligence bodies and agents refrain and resist and desist defending and protecting the indefensible and the inequitable.
Surely that is not difficult to understand even though compliance prove itself so evidently impossible for so many more than just the intellectually challenged, the mentally deranged, the subversively invested and practically moronic ‽
Is that you recognising and confirming the West is most prone and particularly vulnerable to that increasingly evident and despicable problem which they do continually appear to choose to ignore whilst just expecting and hoping it goes away with no solution to field, mentor and monitor, AC?
Tell me that choice non-reaction is not delusional and verging on certifiable madness and we will have to agree to disagree.
Very hard to fathom your grievance. If it is the mainstream madness of a plethora of Marxism (Feminism to Wokery), I suggest simply ceasing to consume mainstream media. Just make sure not to replace it with the stuff coming from other power centres. Go to a church, where you can find reasonable people. A small, independent, non-corrupted one.
That is an insane level of victim-blaming. It is not okay to steal. Period. Full-stop.
It is not okay to steal your neighbor's car. It is not okay to rob a bank. It is not okay to steal whatever catches your eye from a store. It is not okay to steal data, or access.
The world wastes incomprehensible resources trying to stop degenerate wastes-of-skin from stealing absolutely everything with any value. Those thieves are the villains -- certainly not their victims.
I'm no longer in a position where I buy commercial software, but I still see licensing "agreements" stating that the software is provided "use at your own risk". How widespread (if at all) are such clauses in commercial software licences? If they still exist, shouldn't they be phased out as part of the shift from insecure to secure software?