Re: So what happened to the coder
Re: "the coder would've normally been put on leave at the very least."
For a moment there I thought you were going to say 'put to sleep'.
Almost the entirety of the source code universe is a total mess. A bug like this should be impossible in burned-in, mission-critical code. Unfortunately, a lot of evil habits are actually cultivated by design. The programmers don't know any better, and they are impervious to arguments against those habits.
The C language is the language of memory corruption. It is a 'high' low-level language designed for building operating systems and compilers. Code that does not do bounds checking is faster. In some cases the speed differentials are astonishing, because of the storage hierarchy running from the CPU registers through the L1, L2, and L3 caches down to ordinary RAM. Stuff that stays within the L1 cache operates at lightning speed -- about 200 times as fast as a reference to RAM.
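To make the trade-off concrete, here is a minimal sketch (the function names are hypothetical) of what the check actually costs: the unchecked version compiles down to a bare load, while the checked version pays for a compare-and-branch on every single access.

    #include <stddef.h>

    /* Fast: trusts the caller completely. If i is out of range,
       reading past the end of buf is undefined behavior -- exactly
       the class of bug at issue here. */
    int get_unchecked(const int *buf, size_t i)
    {
        return buf[i];
    }

    /* Safe: pays for a bounds check on every access. */
    int get_checked(const int *buf, size_t len, size_t i, int fallback)
    {
        if (i >= len)
            return fallback; /* or abort(), or signal an error */
        return buf[i];
    }

Inside a hot loop that stays in L1, that extra compare-and-branch is precisely the kind of overhead people strip out.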
It is fair game for called code to have expectations. If you pass a memory reference, you don't want every nested call to re-check its validity. In some cases the called code could not do bounds checking even if it wanted to, because the extent of the buffer is only known somewhere back up the call chain.
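That is essentially the shape of this bug. A hedged sketch, with hypothetical names rather than the actual OpenSSL code:

    #include <string.h>

    /* The callee has no way to validate payload_len: only somebody
       up the call chain knows how big payload really is. */
    static void build_response(unsigned char *out,
                               const unsigned char *payload,
                               size_t payload_len)
    {
        memcpy(out, payload, payload_len);
    }

If the caller lifts payload_len straight out of an untrusted packet without checking it against the actual record size, memcpy cheerfully reads past the end of the buffer and hands the excess to the attacker.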
When you look at the sources, you find that these bugs are usually in code that has other tell-tale flaws as well. Older code contains errors even when it was written by good programmers. Dennis Ritchie was a god, but not all of his code examples are keepers, and he was guilty of some faulty practices.
The worst stuff is stuff that was written or maintained by talented journeymen programmers whose short-term memory was at a maximum and whose coding experience led them to believe that clever code was a good thing that showed off their talent. Duff's Device (see the sketch below) is an extreme optimization that *may* be a necessary evil at times (I doubt it even pays off on modern hardware), but when it is not necessary it simply devolves to being evil. I know of at least one nascent small C compiler that fails to generate correct code when it encounters Duff's Device. A beginner or a journeyman will blame the compiler when it fails; someone more experienced will blame the coder. Every time you do something fanciful in code, you take a chance that a maintainer will have to debug and recode it. An experienced, literate, and conscientious programmer should not be doing this.
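For anyone who has not seen it, here is the classic form, adapted to copy into a buffer rather than Duff's original memory-mapped output register; it assumes count is greater than zero:

    #include <stddef.h>

    /* An unrolled copy loop that jumps into the middle of a switch
       interleaved with a do/while. Perfectly legal C, and perfectly
       opaque to the next maintainer. Assumes count > 0. */
    static void duff_copy(char *to, const char *from, size_t count)
    {
        size_t n = (count + 7) / 8;  /* passes through the loop */
        switch (count % 8) {
        case 0: do { *to++ = *from++;
        case 7:      *to++ = *from++;
        case 6:      *to++ = *from++;
        case 5:      *to++ = *from++;
        case 4:      *to++ = *from++;
        case 3:      *to++ = *from++;
        case 2:      *to++ = *from++;
        case 1:      *to++ = *from++;
                } while (--n > 0);
        }
    }

The case labels falling through into the middle of a do/while are exactly what trip up maintainers and immature compilers alike.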
Beyond being a failure in coding, the current situation demonstrates something I have often commented upon in these forums. Our system is insecure, and it is essentially insecure by design. Given the enormous resources spent on systems using this SSL code, does it make any sense at all that it suffers from such a mundane flaw? It does if you realize that security is not that important to the people holding the purse-strings and calling the shots.
This is about the most serious security bug I have ever seen. Cleanup is going to be a real bitch. Repairing and replacing the code is the least of the work effort. Prediction: most of the systems that had this issue will never have their passwords changed or their keys replaced. If a significant number of systems were actually compromised, we will be living with the consequences of this for a long time.
Despite the severity of this bug, it pales in comparison to the inherent systemic insecurity of the Internet. There is no question in my mind that people in the know are protecting important things with air gaps, enormous keys created with very good sources of entropy, decoy layers, etc. That is to say, nobody who understands this stuff could possibly have any faith in the current security systems as actually deployed.
It is very hard to look at the current state of the Internet, particularly things like the failed introduction of IPv6, and not think that people with influence who understand security have encouraged this situation precisely because they wish it to remain vulnerable to them.