It only took how long to come up with a list like this?
Computer experts from more than 30 organizations worldwide have released a consensus list of the 25 most dangerous programming errors that lead to security breaches. The list, which was spearheaded by the National Security Agency, is the first time a broad cross-section of the world's computer scientists have reached formal …
What the hell are you on? The budget has nothing to do with it. Mostly it's crappy management, followed closely by programmers that don't give a damn (often because of crappy management), that allow these types of errors to get through.
The list sounds familiar. Most if not all of these were covered as serious errors in many of the programming classes I took for my degree. Maybe programs that continue to have these errors can now be returned as being "unsuitable for the purpose".
Maybe this list could be refined every couple of years. Now that there is an officially sanctioned list, if software development groups - especially paid groups - are still making these errors in a year or two perhaps those groups, as a whole, should be simply fired for cause. That or allow them to be sued for the damage caused by the inclusion of such errors in the code, regardless of what the weasel-worded EULA might say.
Having scanned through the list, most of these are items that I can recall being covered in one way, shape or form back in my University days.
Any computer science/programming course worth its salt should cover these (and more) by default. If it doesn't then it isn't worth squat.
Although, having the list formalised is a welcome development since, as the article states, it might begin to bind software developers to tighter contractual restrictions for delivering secure code.
The problem, as AC@02:18 has already stated, is that projects on tight budgets/timescales lead to buggy software. I would also posit that developers who have learnt their craft themselves, without going through a decent, formal course, are also frequently at fault. (It's amazing how often I see outsourced development teams on projects who have no idea how to write good quality code, or how to analyse a problem and develop a suitable algorithm.)
AC since I don't want to put anyone's nose out of joint on my current project (in case they're reading this)...
Mine is the heavy one with all three volumes of Knuth's 'The Art of Computer Programming' in the pockets...
Now that we have it, let's evaluate...
OS X and linux both brag about how few vulns affect each respective OS (and apps written for each OS.)
Redmond replies that it's just a user-share thing -- Windows has a bigger install base and therefore a bigger target.
Now that we have agreed-on standards, let's compare, yes?
That might require actually looking at some code. Which would violate draconian EULAs, DRMs, and Gates-only-knows what-else. I don't think Jobs-Almighty would be too keen, either.
I guess we'll just have to TRUST the companies that won't open their source code. :-(
"getting involved with a project on a low budget " is a programmer's error, not a programming error. Anyway, no matter how well funded a project is, there will always be programming errors - so I really appreciate such a list of errors.
But the most enjoyable case, and one there is nothing to be done about, is when senior managers force a program into production against all odds, i.e. the project manager says 'no', the developers say 'no', the testers say 'no', etc. Now that was fun!
I heard that motto a long time ago, when GUI development tools were becoming popular and MS and Borland promised that it meant any old programmer could write a sophisticated program. Of course, a bad programmer would simply write a bad program more quickly, with a glossier UI.
Paris, because she knows men with tools can be fools.
This is a very odd list; it's a mixture of stuff that's genuinely worth pointing out e.g. 'validate your inputs', with stuff that a well-designed operating system and underlying security model shouldn't let you do anyway - I mean, making sure you don't violate memory bounds? If you told the OS in advance the size of the memory, should it not just throw an error if you try? I'm no modern OS expert, but I'd expect stuff like that out of the box.
I'd like to see an alternative list giving the 25 most useful tips for making programs useful and maintainable - "don't hard-code values from your data", "define things once", "put in adequate comments", "write the documentation" (gasp!).
Most programmers still program for the 'Sunny Day' case. They design and code systems assuming that nothing will go wrong and then bolt on error checking almost as an afterthought. A proper system needs to be designed assuming that things can and will go wrong. Unfortunately designing robust applications is time consuming and expensive, two things that management don't want to hear. The situation is not helped by universities who don't seem to teach robust programming.
Is using an underspecified, unsafe language like C. Many common errors could be fixed if you avoided unsafe casts, null-pointers, manual memory deallocation, unchecked array access, unchecked pointer arithmetic and returning pointers to data in the local frame from function calls.
These things are incredibly easy to fix in language and compiler design, and the runtime cost is minimal. So there is no reason to stay with C (or even C++, which inherits most of C's weaknesses).
Hard coded passwords
So, that takes mySQL off the internet then - how many eCommerce sites use mySQL as a back end and use a single (private) username/password to store orders etc.? And I don't believe mySQL is modular enough in its security to allow individual users to lock out only their records.
Nice idea but all are really just common sense which will get overridden by timescales/budgets etc.
Paris - I can think of 25 videos, er mistakes she's made.....
Back in the 70s, a globally renowned UK mainframe company used a high level language for the OS development because anyone could write or support it, unlike assembler languages.
They found that to be a fallacy. Anyone could indeed write the code, but the results were not impressive: just as well it was easy to maintain, given the high number of bugs!
A radical rethink and programmer culling took place followed by a major rewrite and then things started to take off.
M$ and their ilk are just repeating the original mistakes, but do not have the intelligence to locate the major flaw and rectify it.
What appears to be worse is that because M$ are who they are, and have published training material on how to do programming and projects badly, the rest of the world's numpties have sat up and, in their very finite wisdom, taken the garbage as their bible. One cannot get a local programming job here unless you are qualified to make the same mistakes!
Vladimir: I suppose you could always download the open source parts of OS X, such as the kernel and the HTML rendering code - that'd give you a pretty good insight into the bits that most people and applications are likely to touch irrespective of whether Jobs wants you to.
Tom Cooke: yeah, I noticed that. It's probably slightly misleading to say that the list has been formalised; the problems that make up the list have been formalised, but the list itself is written in parts in a broken 13-year-old's blog English.
Your useful programming tips are not coding errors though, they are simply coding standards. If you are outsourcing work without a decent coding standards document you are in trouble!
I wrote our team's standards and went down to a stupid degree of detail, i.e. what sort of things needed to go in comments, that they had to be grammatically correct UK English, that meaningful numeric values had to be defined, that subroutines should be used instead of duplicating code, etc. The majority of it is just normal programming practice for any reasonably competent developer, but if you don't specify it, you don't get it from most outsourcing companies. Now it's all in an official document that we hand over as part of our agreement, and it lets us point to it and reject any code that doesn't meet the standard.
>If you told the OS in advance the size of the memory, should it not just throw an error if you try?
Depends on the underlying hardware. Once you've got an address and try to write to it, either the OS must be notified and then validate the write (every single memory write!), or the hardware must have some kind of map of where you are allowed to write and where you aren't (a bit like NX bits, but more extensive). And finally you need to catch overwrites that hit valid allocations - accidentally writing to another allocated address space.
>Dynamic Memory Allocation screws up eventually. If you can write the program without using
>Dynamic Memory Allocation then it will run for years without a reboot.
Very true, but that requires allocating all the memory you're going to need for years at program launch and that means other programs you might want to run in the intervening years won't have that memory available. So your shell, grep, maybe the odd ftp or ssh session, perhaps you'll want to run an image processing job, they aren't going to happen because all the memory was allocated in 1995 by the web server...
Also, C of course doesn't have bounds-checked arrays (Pascal does, though if you use them they're slower), so even your statically allocated memory could be overrun.
>WEB programming errors
>Not real programming errors
Like buffer overflows, not validating input and output, generous error messages and race conditions?
That's because most programmers don't have an aptitude for it and don't actually enjoy it. They simply plod along being barely adequate, waiting for their pay check. That's where the errors come from.
Those who write programs because they enjoy the creativity, the logic and the aesthetics [of a well-written program] are far more useful. They enjoy their jobs, produce better code and just see the pay check as a bonus, instead of a goal. Unfortunately, they necessarily take longer over it, which short-sighted employers see as a disadvantage, until the errors start to pile up.
And those languages are written in C.
The "unsafe" C is only unsafe if you are slack in your programming. If you are careful in your programming, you are much better off than using a "managed" system where if something still goes wrong, you have NO CLUE how to fix it because "the language knows better than you".
I'm sorry, AC. Blaming a low budget for your poor code is not on.
If the project is release ready then the code must be free of bugs (the major ones like these 25) otherwise it doesn't get out of Alpha/Beta.
If it will cost more than management are willing to spend then they can either open source the project & hope someone will fix it for free or leave it in beta & risk it in a production environment.
Release software should not have these errors regardless of budget. End of.
When I was coding SMS messaging systems for a premium rate co some years ago we'd always be badgered by the Sales Director saying 'is new feature xxx ready yet? Remember we sold this idea to a customer and told them they could have it online in a week!'
To which I'd answer 'Well, you can have it now, if you like. Or you can have it working...'
He always wanted it now....
>Release software should not have these errors regardless of budget. End of.
The decision on whether or not to release something isn't generally for the programmers though. It's a higher level business decision.
Bodging something together in half a day is always going to be flakier than if you had a few weeks to do a proper design and testing of it. A few weeks costs money.
>Unfortunately designing robust applications is time consuming and expensive, two things that management don't want to hear.
As someone in the middle tier, I'd disagree - management often hear, and managers often know what best practice is, but at the end of the day, companies have these things called budgets, which tend to be set by how much you can sell (or how much money support services bring in if you're open source).
Unfortunately, the reality is that customers have rewarded cheap shonky systems and code over well engineered ones. That is how Windows became dominant over the Unix desktop. Every time you bid for work, you're being played off against a cheaper competitor, which forces you to cut costs to compete. Both of you can even be ISO certified, seeing as the certification only covers repeatable process, not actual code quality.
(The corollary of this is that at the real cost of doing the job properly, a lot of IT projects would not go ahead, or at least not until the components were cheaper - i.e. you can now assemble a web store using off-the-shelf components and services).
I guess certification could be a good starting point - i.e. having external code reviewers come in and review code for quality, in the same way that other parts of businesses are audited. That would help produce a level playing field, in that customers would then know if a supplier was grade A, B, C, etc. Right now, the only real feedback they get is the number of bugs they find after they've made their business dependent on the system.
The other thing that strikes me is how many of these errors have occurred due to the shift back from tools that made typed data entry fields easy to browser based apps where everything is a string. Again, that's a customer driven technology choice - I can understand that it solved a lot of deployment problems for firms... but now it turns out that security problems are the new deployment problems.
Hard CODED password. Don't put your password in the CODE where it can't be changed; that's what the phrase 'hard coded' means.
Move your passwords into your XML configuration or properties files and they are no longer hard coded; you can change them and then restart your server. If you're using Java (as I do) then you put that property file in a very, very private directory on the server and simply include it on the classpath. This means that the content of the file cannot be inadvertently served up due to some programming or runtime error. In addition, whenever I work with passwords in code, I try to ensure that the 'retention period' is as short as possible, and that the password is only accessible from parameters or method-scope properties that are automatically removed when the method completes.
We have a copy of "The Pragmatic Programmer" in our bookshelf here at work - much more useful than these 25 items. We've got a copy of Joel On Software as well. Everyone should have read through that.
But this 25 points? Shit throughout.
These 25 points start off crap - "Improper Input Validation" - I mean, what the hell does that mean? Does it mean "Validate according to the functional specification of the user's requirement"? Does it mean "Validate all data that could be entered into your program over the next 25 years"? Does it mean "Use your telepathic powers to translate the phrase 'Enter User Name' in the requirements into 50 pages of detailed technical specification, then code it"?
I see no way that someone could sign a piece of paper saying that their software complied with this point. It's so fucking vague you might as well just have the one item: "Write the program right in the first fucking place."
Fucking pissants. Next we'll get "Top 25 spelling errors" and we'll have to burn any book that commits them.
Norman - Conversely, if your programmers are rubbish then the best Project Managers won't make much difference either.
Development is a team effort and needs great analysts, developers, testers, project managers - oh, and a good client, to deliver a successful product.
Geoff - which bit needs a refresher?? The coding or the drinking?!
I am sure that the quality of IT teaching is now much improved from the days when I used to fall asleep to the dull drone of a technician who was patently out of his depth. Fortunately for me (and everyone else in my class at college) I had already taught myself BASIC and was sufficiently good at maths to understand enough about the flow of data and logical progression through a program to ace the topic and was happy to use the other half of my time teaching the class during our tutorials.
Moving on to a job as a COBOL programmer I quickly found myself with two interesting skills... firstly, the forethought to control the flow of a program in such a way that most 'problems' simply could not exist (specifically the rigid control of input data and the writing of numerous distinct, robust and simple modules). Secondly, I had an unnerving knack of being able to identify the weaknesses in programs and was always in demand by the other trainee programmers to "find the coding error that crashed their batch run" and to "test to destruction" their own creations.
Ah, the good ol' days when a "finished product" was compiled and then packaged, ready for distribution... not the offensive torrent of 'beta' releases that never actually get fixed, or spend their entire (and short) life being updated, fixed, patched... Rule 1 has got to be: "the project may be complex - but keep the modules simple".
In all my time as a programmer, and subsequently in helping others to attain similar qualifications, and indeed whilst later teaching myself dBase (III and III+), I have discovered that the single most common problem in IT is sheer bloody incompetence; the source of most errors is inherent habitual laziness; the most common program-writing error is a "," instead of a "."; and the number one error is not in programming at all, it is a lack of rigorous testing.
"What appears to be worse is that because M$ are who they are, and have published training material on how to do programming and projects badly, the rest of the world's numpties have sat up and, in their very finite wisdom, taken the garbage as their bible."
Actually, MS Press have produced at least two great programming books which I treat as 'bibles':
"Programming Windows 3.1" by Charles Petzold - A fantastic introduction to how GUIs work for people who come from a background of command-line entry and display. Clearly written, and you get the sense that Chuck knows exactly which concepts will be most alien to you as you proceed, so he explains them in greater depth.
"Code Complete" by Steve McConnell - A 'common sense' set of guidelines on how to approach every aspect of programming. A lot of the advice is counter-intuitive when you first encounter it, which emphasises the usefulness of this book to all programmers, regardless of where you are in your career. You don't even have to be programming Windows for this to be a goldmine of useful ideas.
Of course, there are many other great programming guides out there and some are available for free - I can heartily recommend ESR's "Art of Unix Programming", which can be browsed online:
You have obviously never written code in real world working conditions.
You have a project that has to go live on Friday, you have 4 days left to work on it and 5 days worth of bugs and amends on your todo list before it can go to the client. The deadline is non-negotiable, if you miss it then the company may be out of pocket thousands of pounds (could easily be over 100k), you will get a bad review, ruin your chances for a bonus and get bumped one position higher on the 'bad programmer/replaceable employee' list.
The managers responsible for the project aren't programmers and don't fully understand your security concerns, their attitude is "just get it done for the deadline". If the project appears to be finished it must be ready to go, your security concerns appear 'petty' and are not critical for release to them.
A) Refuse to let it out of Alpha/Beta, pissing off both client and the company you work for.
B) Smooth over what issues you can and release the software as-is, keeping everyone happy. You can fix further issues as they crop up and shift the blame onto the company you work for.
Then what do you do a month later when the cycle repeats itself with your next project?
Tight deadlines and bad management are probably the number one cause of most of these issues. You often have to cut corners to get the code out on time, let alone even think about a security audit because the people managing the project do not understand the technical implications and requirements for the project and agree the budget/timescale without consulting anyone who does.
PS: If you even consider open sourcing a project the company is being paid to develop in-house, you will get laughed straight out of a job.
>Don't get a lot of those on AS/400 RPG-ILE
Well, I thought you meant real programming not some crappy job control language!
Anyway, example of a heap management problem in RPG-ILE
>'not validating input' is not a programming error, it's a systems requirements omission
There are some things that are taken as read, I bet the requirements don't have things like "Must not wipe the entire hard drive.", "Shouldn't constantly dial the boss' phone number" on simple accounts reports either.
As a former budget person, now retired, at one of the USofA's largest corporations, I can assure you budget is THE issue today. The truth is it has been for the last two decades if not longer. Right or wrong, low cost clerical personnel are generally the first to go, closely followed by IT projects. In reality it may be a different scenario at different companies, but IT is generally "low hanging fruit" for the accounting/budget trolls like I was.
Right or wrong, that's the way it is. Bad management is certainly a problem. But bad management tactics to help themselves look better is to "get costs out of the business" which has the effect (hopefully) of lowering expenses and raising net income, growing the company and increasing stockholder worth.
Very true, my friend.
However, part, if not the major part, of the problem lies with t'management and their project planning. They usually consider a product ready for shipping when it fulfils its functional criteria, i.e. it produces the correct output from the set of input data. The fact that it cracks right open if you miss out a decimal point, or type "*" when the database is expecting a number, simply doesn't feature: those get picked up in the beta testing (which, BTW, is now commonly referred to as version 1.0).
Why this state of affairs is allowed to happen is partly due to timescale pressures - where the development cycle simply *must* produce 2 releases a year. Part of it is due to the way development is done, with stages like code, test, release (without the obvious interlude and time allocation between #2 and #3 of "fix the bugs"), and part is due to the lack of liability that the producers award themselves via the EULAs.
In practice, pretty much all of the 25 listed errors boil down to a single element: failure to design and code defensively, for a hostile environment. But of course, that wouldn't make any headlines as we all know that already.
For tossing experienced personnel out and replacing them with MCSE script kiddies who think they know it all, but continue to keep making the same mistakes over and over again... and just when they get competent, they get tossed out on the next project and replaced by a new batch of script kiddies fresh out of college who will make the same mistakes...
Real engineers learned the hard way and ensured that their profession had a suitable qualifying regime (apprenticeships and graduate training schemes) to ensure long term institutional "memory" and that those who called themselves Engineers were worthy of the title.
Software Engineering still (after all these years) isn't mature enough to call itself a profession...
90% of all web-based applications are crap, but the world still seems to turn on its axis.
Mostly customers want cheap or quick above all else - so is it any wonder that developers don't concentrate on security? As far as I can tell western developer morale is at an all-time low - there is only so often you can bang your head against the brick wall of idiotic clients before you just give in and churn crap to earn your wage.
Paris because she has found a much more satisfying career.
C was designed as a high level assembler to write operating systems in. To wit, it has to allow you full access to the raw metal as far as possible. Perhaps C/C++ isn't the best language for application development as opposed to low level development, but saying it should never be used is like saying assembler should never be used. Fine, you take your high level wrap-you-in-cotton-wool languages like Python, Java, C# and try to write a low level device driver in them that has to deal with hardware interrupts, DMA, kernel ring buffers and the like, without using any libraries written in C or assembler (or ones that in turn call on them). Once you've done that, get back to us and your opinion might have some merit. Until then you stick to your safe cosy application development IDEs and let the grown ups do the real programming.
Generally the OS only protects a program against writing to memory outside its allocated process space, or memory marked as program (i.e. read only) rather than data. Library calls like malloc() handle the subdividing of memory ultimately allocated by the OS, and it's more usual for programs to overwrite the wrong bit of this subdivided memory.
I'm not keen on the title of this article though; better would be 'The top 25 programming security holes'. It's far more dangerous (from the point of view of getting anything done) to have a program that doesn't work, and the list is somewhat web specific.
The list is probably most useful to managers rather than programmers; a half decent programmer already knows the list, but it might inform the manager why it's important to construct secure software.
What you describe is not coding errors but a total lack of project management. Have you tried PRINCE2 - nearly as much fun as programming :-))
"Well, I thought you meant real programming not some crappy job control language!"
JCL is the crappy Job Control Language, not RPG, which originally meant Report Program Generator; neither is PRINCE2 an RPG (role playing game), it's a project management methodology. Anyway, your example of a "heap management problem" is really an example of shit programming.
I'll probably get a few RPGs sent my way for that last remark, RPG as in Rocket Propelled Grenade
Paris, 'cos she knows what to do with RPG,( Rigid P***s, Giant)
A good list, but nothing that professional programmers didn't already know. There are all sorts of tools at Microsoft and other places to automatically search for many of these problems. But it's not that easy to identify these kinds of bugs by inspecting source code. This is one of the big flaws in open source ideology. Just looking at source code is a very poor form of QA, compared to dynamic testing, bug tracking databases, unit testing, build tests, etc. This is an area where the UNIX community has been playing catch-up for a long time, because they come out of an academic "hacker" culture of individual effort versus large-team systems management.
The best way to ensure that the software you use is free from elementary programming errors is to read the Source Code. The second best way is to have someone else, independent of the original programmers and whom you trust, read the Source Code for you. (Which is exactly what happens when you use a Linux distro.)
If they won't show you the Source Code, decline politely and walk away. Be sure to let them see you count your money back into your pocket.
(Fair enough, crappy report generator then!)
>Anyway, your example of a "heap management problem" is really an example of shit programming.
It's an example of how you can mess up the heap. An example of shit programming is what it's supposed to be.
The point was that even in RPG-ILE you can get them if you're a shit programmer who writes in one of the top 25 errors listed. To think that you aren't going to get them cos the system's got IBM on the front is sufficient to show that the guy needs a nice simple example of shit programming in RPG.
Hmmm.. Dunno about the truth of this, but Wikipedia informs me that RPG officially no longer stands for anything at all.
"A) Refuse to let it out of Alpha/Beta, pissing off both client and the company you work for."
I ensure that I don't *need* the job I'm doing so they can't bully me into doing a piss-poor job.
If it *must* be shipped, either get some other donkey to do it or remove whatever bits are making it unstable (if that is likely to fix the problem or at least hide it: if it crashes when you press "Advanced" then remove the "Advanced" button).
You're forgetting something: to the latest generation of code monkeys, "applications" and "software programs" mean exactly point-and-clicky interactive client programs or web applications.
Operating systems, device drivers, and even sophisticated video games with fancy graphic effects are not part of this pool--they are just stuff that you download or buy on a disc. Of course, from that perspective, C and C++ are useless and dangerous, and distract you away from the automagic code generators, garbage collectors, and shiny components.
As for the stuff they downloaded or bought on a disc, it must have been written in PHP, C#, VB.NET or Java. Unless it's buggy, in which case it definitely was written in C, and the programmer should have known better.
This is a "list of the 25 most dangerous programming errors that lead to security bugs and that enable cyber espionage and cyber crime." It only applies to Web apps (or others with a client-server model), and flaws that can be remotely exploited. A list of the most dangerous errors in desktop apps would look quite different.
The list seems a bit repetitive, and could be summarized as 'Code Defensively', or 'Program with Paranoia', but it's good to point out how many places that defensive coding is needed in.
So 'professional' programmers can't be bothered to put in
catch Exception(DefMessage("Oi! put a f***ing number in you tosser"));
Or something like that
But reading about all the buffer overruns? Does no one bother checking the size of the file against the size the file says it is?
But there again, I'm from robot programming, where we have to be 100% right; if we're not, then operators could be hurt or killed, or worse still, expensive machine tools get broken.
It's all the fault of running code from RAM and FLASH, and of Von Neumann architectures.
If you have a true Harvard machine then there is no such thing as code injection from data space. There is code memory and data memory; you cannot execute from data memory. Simple as that. And there is also I/O space, and only code can go to I/O space.
Then there are the memories: when processors only had ROM (mask ROM or antifuse ROM or OTP) to run from, there were far fewer errors. First of all, the code could not be overwritten by something malicious; the ROM was masked during fabrication and that was it. Second point: quality control was much better. The attitude now is, oh well, we'll fix it in the next release. If you have to shell out for a new maskset you will think at least 500000000 times about each and every instruction you put down.
What's my point? Why does nobody make an OS that can boot from write-protected flash? You install on a clean machine, load the OS core and the drivers that you need. Once this install is done: power down, flip the WP switch and restart. If your machine catches a bug: hit the reset button. Bye bye virus, spyware or whatever, I don't care. This could easily be done. Make a split registry and program files directory. The stuff that is critical sits in the WP block. The visible file system is a logical OR between the flash and the hard disk, with the flash having priority. So if a nasty throws a couple of DLLs into the virtual 'c:\windows\system', let it! Who cares? Those files end up on the hard disk. During boot the OS does not load anything from the HDD; the files on the flash have priority. If a duplicate file is found between flash and HDD, you get a message after boot that file so-and-so is in location so-and-so and a duplicate exists in flash. Clean up? Yes/no?
Given that an OS + apps is small, a couple of gigabytes, and flash costs pennies: what are we waiting for? I would immediately buy such a machine. Primary volume: an 8, 16 or 32 GByte flash drive with mechanical WP (a switch); data volume: a standard hard disk.
The only pain: when you need to install or do an upgrade you need to power down, flick a switch, and then do the same again afterwards. Advantage: you always boot from a known good system. If an update is in order: download the file, power down, flick the switch, launch the system (now in a known good state), deploy the update, power down, flick the switch back to safe, power up. Done. But then again, how many times a year do you install new software? And when installing software you can give the option: to the safe zone or to the hard disk. (Remember, the visible file system is a logical OR between flash and HDD, not at bit level of course but at file/directory level, with the files in flash having priority.) So you can install your game to hard disk, but some app that is more critical to flash, in which case the installation procedure requires the switch toggling. I would make the switch toggle something like: you need to press this button while powering up. That way you force people to go through the safe procedure deliberately.
It's so simple, I wonder why nobody has thought of this before. Especially these days, when flash is $1 a gigabyte...
How about the firmware that makes a disk drive spin? That is megabytes of embedded firmware with ZERO dynamic memory allocation! For some portions of that firmware they go so far as to hand-allocate the memory locations: this byte here, that word there. And the really critical code is handcrafted assembly; interrupt handling has to be absolute and deterministic.
I took a C/C++ programming course 10 years ago and we learned all that back then. Maybe my instructor was ahead of his time (or just knew what he was doing) or maybe too many programmers/project managers don't know or worse don't care?
I don't know the answer, but I don't think you can blame the language. If a project is so pushed for time that there is not enough time to do it correctly using a language that requires a little care, then perhaps the project has been improperly scheduled?