I hope someone can pull it off. The bigger question is whether the whole thing will be sabotaged or backdoored by one of the snooping organisations. Or maybe the point is to prove it can't be done?
In a discussion that will sound familiar to Australian readers, US military development agency DARPA wants to create provably-secure software. According to Threatpost, DARPA director Arati Prabhakar told a Washington Post security conference that embedded systems are among the kinds of applications for which it's feasible to …
An Ada microkernel - the long-awaited APSE (Ada Programming Support Environment) could be built on it.
If it were designed and coded well, it could be used under Unix; eventually you could replace the C-based Unix kernel with a superior (faster as well as more secure) Ada OS. If you start with the microkernel then, over time, you can re-code the other parts, leaving things like the user GUI until last.
Probably the most promising approaches try to make proving code easier. Essentially you have your code with abstract conditions written alongside it. You can state conditions like "this value is an integer power of 2" and the compiler will verify they are satisfied. This can detect, and therefore potentially eliminate, certain classes of bugs. In practice it means you'd have to program around additional compiler errors, which would make your code more secure.
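To make the "conditions next to your code" idea concrete, here is a minimal runtime sketch in Python. Real proving toolchains (SPARK for Ada, for instance) discharge such conditions statically at compile time; the `requires` decorator and `log2_exact` function below are hypothetical names used purely for illustration, and the check only fires when the function is called.

```python
# Runtime sketch of attaching a provable condition to a function.
# A proving compiler would verify the condition statically; here it
# is merely checked on every call.

def requires(condition, message):
    """Attach a precondition to a single-argument function."""
    def wrap(fn):
        def checked(x):
            assert condition(x), message
            return fn(x)
        return checked
    return wrap

def is_power_of_two(n):
    # n is a power of 2 iff it is a positive int with exactly one set bit.
    return isinstance(n, int) and n > 0 and n & (n - 1) == 0

@requires(is_power_of_two, "argument must be an integer power of 2")
def log2_exact(n):
    # bit_length() - 1 is only a correct log2 because the precondition holds.
    return n.bit_length() - 1

print(log2_exact(1024))  # 10
```

Calling `log2_exact(12)` trips the precondition immediately, which is the bug-class-elimination effect described above: the invalid input can never reach the body of the function.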
However, we are talking about military projects here. Those are closed-source projects, often written in C++. Given that C++ is a minefield of complexity and there are perhaps 20 people out there who actually fully understand it, it is perhaps not the best language to write secure systems in.
The problem with that is the same as with a lot of 'new language' ideas: it assumes the original coder has correct logic in the first place.
Now I'm not knocking the use of languages that enforce proper memory management, bounds checking, etc., but outside of a few special cases like a microkernel I don't see "maths-assisted" coding helping much.
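The commenter's distinction can be illustrated in a few lines of Python (chosen here just as a bounds-checked language; the `packets` list is an invented example). Enforced bounds checking turns a C-style overrun into a clean, catchable error, but a logic error that stays inside bounds sails through untouched:

```python
# Bounds checking eliminates one bug class but not flawed logic.
packets = [10, 20, 30]

# A C-style buffer overrun becomes a catchable error instead of
# silent memory corruption:
try:
    last = packets[3]            # index out of range
except IndexError:
    last = None

# But an off-by-one that stays in bounds is perfectly "safe" and
# still wrong: this returns 20, not the intended last element 30.
wrong_last = packets[len(packets) - 2]
print(wrong_last)  # 20
```

Which is exactly the point about bad ideas and inadequate testing: the language catches the first mistake, but only a correct specification (or a human) catches the second.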
Most security flaws come from bad/careless design and inadequate testing due to budget/marketing pressure. The language is just a choice of which type of gun to shoot one's foot with...
But even if it were possible to create provably secure software (ah hae ma doots), all you'd need then would be provably secure hardware to run it on and provably secure people to operate it. As Bruce Schneier points out:
the mathematics are impeccable, the computers are vincible, the networks are lousy, and the people are abysmal. - Secrets and Lies: Digital Security in a Networked World
Has the discussion on reliable OSes moved on from the Tanenbaum-Torvalds debate?
For me it seems that it's all about risk: you throw it over the fence and hope it doesn't come back to bite you (micro), or you attempt to contain it and avoid getting bitten (macro). Neither approach eliminates the problem.