Mozilla is piloting a project designed to develop a better model for the security of Firefox by tracking a whole series of metrics over time. Instead of simply recording the number of patches issued in a year, the scheme also aims to gauge the relative risk to users over time and the effectiveness of Mozilla's developers in …
Thing is, wouldn't you think the people would code their software to such a point that security difficulties never show up as an issue? If not, why has "getting it right the first time" become impossible? And what about platforms where a fixed, non-updatable installation is necessary?
Charles, that would be ideal: just write perfect code the first time and never have to update it. You've never programmed on a big project, I take it.
Programs like browsers and operating systems are incredibly complex systems. No one person can grasp everything they do, or every consequence of their design logic. In the 1980s, people thought it was impossible to write a million-line program like the AT&T switching systems. Today, a few companies manage programs that are more than 10 times that size. It simply is not possible to make them perfect.
Companies that are good at this have very elaborate protocols, special database systems for tracking bugs and fixes, automated testing, sleuths who try to break the software, and "penetration engineers" who are hired to hack into it.
It is particularly difficult to fix security by pure deduction ahead of time, or by inspecting source code (Eric Raymond's idea of a million eyeballs is a great sound bite, but no software engineer believes it). Something will always come along that you didn't think of, and you will have to fix your program.
It's also hard to get top-notch programmers. Most programmers sorta suck. On a huge project you are lucky if you have a few really conscientious, talented people. Then you have a bunch of programmers who have to be given small, clearly defined assignments, and other programmers have to do code reviews to check their work. And a few programmers are just stupid or careless and end up being a net negative for productivity. That's the reality of hiring human beings.
There's also the matter of innovation to take into account: you could spend weeks testing your software against all currently known exploits, only for someone to come along and think, "hey, I could do this, this, then that," and they find a way in. If you can't imagine a way of breaking in, you can't protect against it, or something along those lines.
So you're basically saying that, even at the most basic level, the sheer scope of such projects prevents an exhaustive look at a program's security. And I already know the necessary robustness of most programs rules out the KISS principle, too.
It just seems frustrating that you keep hearing about these exploits, especially those old-school buffer overflow exploits. We've been had more than twice, yet they keep on coming.