18+ hours to go
I'm glad someone is doing this work. It'll be interesting to see the reqs given I have this as the minimum here. [Except mine is ground based, as my flying leaves a LOT to be desired.]
A nippy microkernel mathematically proven to be bug free*, and used to protect drones from hacking, will be released as open source tomorrow. The formal-methods-based secure embedded L4 (seL4) microkernel was developed by boffins backed by National ICT Australia (NICTA). In 2012, the software was enlisted to help stop hackers …
Linus has just sent off another expletive-laden missive because gcc 4.9.0 does silly things when compiling the Linux kernel. Then later on it was found that the bug goes all the way back to 4.5.0 but something else was changed in 4.9.0 so the bug happens more often.
This post has been deleted by its author
This post has been deleted by its author
They could use Green Hills' compiler, that's very good and is formally developed. It's the foundation of their INTEGRITY operating system (see this Wikipedia entry), which as a customer I think is also very good. Not cheap, but it's value for money, if you see what I mean.
I suppose this new microkernel is a direct competitor to the kernel inside INTEGRITY, though of course there's more to a complete and usable OS than just the kernel.
For marketing yes, and that ultimately won along with a lot of legacy-supporting crud and mind-numbing stupidity like making IE deeply embedded.
If they had stuck to the original microkernel approach as planned by Dave Cutler and just accepted the performance penalty then it would have been one of the most secure OS around.
"Was it really necessary for MS to change the architecture of the NT kernel just so a server OS could have all-singing all-dancing display drivers?"
Yes it was, because they couldn't bring themselves to copy the UNIX way of achieving the same end efficiently and (more) securely. ;)
"Was it really necessary for MS to change the architecture of the NT kernel just so a server OS could have all-singing all-dancing display drivers?"
They didn't change the kernel at all. They changed the display driver architecture. The micro kernel was not changed by that.
Think display driver in kernel.
BTW, this is no longer the case as of Windows Vista (IIRC; might be Win7).
I can personally verify this as I've had my video driver crash hard under Windows 7 a number of times (faulty card.) The screens go all blank then come back up and Windows displays a little pop-up telling you that the video driver crashed but was reloaded.
"this is no longer the case as of Windows Vista (IIRC; might be Win7)."
I think it's Vista too. It had to be done so that Windows could offer a Trusted Computing Platform. Not one that could be trusted by end users and IT departments, but one that could be trusted by the "content industry" to provide an end to end tamper-proof copy-protected platform for the delivery of "high value rich media content".
It also had to do with driver robustness. Vista introduced a new driver model with some additional safeguards included. That's why video driver updates don't always have to force a restart (because now the video driver is less-attached to the kernel itself, allowing it to be stopped and restarted). It's still kernel-space stuff for performance reasons, but IIRC it's more compartmentalized.
To quote….
seL4 is the first (and still only) protected-mode OS kernel with a sound and complete timeliness analysis. Among others this means that it has provable upper bounds on interrupt latencies (as well as latencies of any other kernel operations). It is therefore the only kernel with memory protection that can give you hard real-time guarantees.
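To make the quoted claim concrete: a "provable upper bound" on a kernel operation is essentially a longest-path computation over the operation's (loop-free) control flow, weighted by a hardware timing model. The sketch below is a toy illustration only, not seL4's actual tooling: the control-flow graph and cycle costs are invented, and the real analysis works on the compiled ARM binary, not on a hand-written table.

```python
# Toy worst-case execution time (WCET) bound: longest path through a
# loop-free control-flow graph with per-block cycle costs.
# NOTE: the CFG and costs below are invented for illustration; seL4's
# real analysis derives them from the binary and a hardware model.
from functools import lru_cache

# Hypothetical CFG: block name -> (cost_in_cycles, successor blocks)
CFG = {
    "entry": (10, ["check"]),
    "check": (5,  ["fast", "slow"]),
    "fast":  (20, ["exit"]),
    "slow":  (80, ["exit"]),
    "exit":  (5,  []),
}

@lru_cache(maxsize=None)
def wcet(block: str) -> int:
    """Worst-case cycles from `block` to exit (graph must be acyclic)."""
    cost, succs = CFG[block]
    return cost + max((wcet(s) for s in succs), default=0)

print(wcet("entry"))  # 10 + 5 + max(20, 80) + 5 = 100 cycles
```

Because the bound takes the slow path even when the fast path usually runs, it is conservative by construction, which is exactly what a hard real-time guarantee requires.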
Sorry...
Just don't see it... even clock/edge jitter is enough to make the above statement complete nonsense.
It might be totally 'provable' when it's all neatly packaged up in its mathematical simulator but then go completely tits up in the real world due to boundary conditions in hardware clocking.
You have to consider each aspect separately. All they have proven is the implemented logic of the microkernel meets the specifications.
You can have bugs and flaws in:
- Specifications
- Sub-systems called by the microkernel
- Compiler
- Standard Libraries
- CPU/FPU hardware
But compared to today's huge kernels in Linux/UNIX/Windows/etc, which have the last three as well as a box-of-frogs evolved design, that provability is a big start towards something reliable for critical jobs.
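The point about what the proof does and doesn't cover can be sketched with a toy refinement check. The scheduler below is invented for illustration; a real refinement proof (as in seL4) quantifies over every possible input mathematically, whereas this toy only samples a few, and neither catches the case where the specification itself is wrong.

```python
# Hedged sketch of "implementation meets specification": an abstract
# spec and a concrete implementation, checked for agreement on samples.
# A proof covers ALL inputs; and if the SPEC is wrong, both agree and
# are wrong together -- which is the first bullet in the list above.

def spec_schedule(ready: list) -> int:
    """Abstract spec: run the highest-priority ready thread."""
    return max(ready)

def impl_schedule(ready: list) -> int:
    """Concrete implementation: a linear scan that could hide bugs."""
    best = ready[0]
    for p in ready[1:]:
        if p > best:
            best = p
    return best

# Sampled refinement check (a real proof quantifies over every input)
for ready in ([3, 1, 2], [5], [2, 2, 9, 4]):
    assert impl_schedule(ready) == spec_schedule(ready)
print("implementation agrees with spec on sampled inputs")
```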
Why assume anything? If they had proved all their hardware was secure, would you have led with "I assume they've proved their kernel is secure too"?
Even if they haven't, it is one tick box off on the way to proving every component is secure, which is note- and praiseworthy surely?
"Even if they haven't, it is one tick box off on the way to proving every component is secure, which is note- and praiseworthy surely?"
Well, it may indeed prove to be one tick box off on the way to a secure system, but in some ways it's the smallest (and therefore easiest) part to do.
Writing a compiler, a run-time library (NB the kernel may very well not use any library code) and designing the hardware are arguably much larger and more expensive jobs. Particularly the hardware: formally designing a fast-ish CPU is going to cost billions.
The micro kernel is going to be only a few thousand lines of code (pure guess, but it is micro). Compilers and runtime libraries have many more lines of code than that, and they too need to be correct.
It will be interesting to see it though.
The preferred method at the moment is via the GPS system, whether by directly hacking in or by spoofing it with a local mobile GPS station. Do not forget that the Iranians were the first to 'hijack' an RQ-170 drone, and it is alleged that the Yugoslavians and others were good at 'downing' some of the earlier beasts.
Now a secure kernel is fine, but what about the myriad of software and hardware connected to it?
We don't know for certain they hacked it. Besides, the drone was military tech, so you'd think they'd be using the encrypted military GPS. Cracking military security tech would be a first-order coup and something MANY antagonistic countries would be itching to get, but Iran's keeping mum, which tells me their method was likely much more prosaic and specific.
It is easy to go to the supermarket, buy a roll of tinfoil and hit the forums claiming that GPS is just a bunch of radio waves and can thus be hacked.
I worked for about 12 years developing GPS systems, including some time deep in the analysis and design of the actual GPS tracking algorithms.
Sparing you all the maths and theory, there's so much stuff you'd have to fiddle with and get amazingly precisely right that it makes spoofing GPS almost impossible outside well controlled lab conditions.
In theory it is possible, in practice rather more likely not!
"what about the myriad of software and hardware connected to it?"
Wrt the GPS example: if the beancounters permitted it, an inertial guidance system retained alongside the GPS would have stood a good chance (albeit not 100%) of bypassing or at least detecting GPS-fiddling.
Or maybe a dual-GPS setup using two of: US GPS, European GPS (Galileo), Russian GPS (Glonass).
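The inertial cross-check suggested above can be sketched in a few lines. Everything here is invented for illustration: the positions, the velocity estimate, and the 50-metre drift threshold are made-up numbers, and a real system would use a proper error model (e.g. a Kalman filter) rather than a fixed radius.

```python
# Sketch of GPS-vs-inertial cross-checking to detect spoofing.
# Numbers and threshold are invented; real avionics would use a
# proper INS error model, not a fixed 50 m radius.
import math

def dead_reckon(pos, velocity, dt):
    """Propagate the last trusted position using the inertial velocity."""
    return (pos[0] + velocity[0] * dt, pos[1] + velocity[1] * dt)

def gps_consistent(gps_fix, ins_pos, max_drift_m=50.0):
    """Accept a GPS fix only if it lies within the INS error budget."""
    dx = gps_fix[0] - ins_pos[0]
    dy = gps_fix[1] - ins_pos[1]
    return math.hypot(dx, dy) <= max_drift_m

ins = dead_reckon((0.0, 0.0), velocity=(100.0, 0.0), dt=1.0)  # metres, m/s
print(gps_consistent((105.0, 3.0), ins))    # plausible fix -> True
print(gps_consistent((900.0, 400.0), ins))  # wildly off -> False, flag it
```

As the commenter says, this detects fiddling rather than preventing it, and INS drift means the check gets weaker the longer GPS is distrusted.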
Engineers know how to fix things. What do beancounters know?
"Engineers know how to fix things. What do beancounters know?"
I recall hearing this story, but I've not been able to find any evidence to back this up:
Some management consultants were working at Amstrad. Alan Sugar walked past a room where a management consultant was talking to an engineer about a product he was working on. The consultant was drilling the engineer over his choice of a particular component due to its high cost compared to all the other components in the product. The engineer kept on telling the consultant that the component was the only one that would do the job, and the consultant was adamant that there must be a cheaper component he could use instead. Later that day, the management consultants were told to leave and not come back.
"I recall hearing this story, but I've not been able to find any evidence to back this up:"
...this wouldn't happen to be some sort of perverted form of the 3Com LAN card anecdote, would it?
Since we are talking about military drones and military procurement in this instance, when have beancounters ever had much to say about military hardware?
$500 Multidirectional Kinetic Injectors (hammers)?
$150M and up "budget affordable due to procurement scale" (the original intent) F-35 fighters?
how many $B to refit Her Majesty's new carriers with supposedly contractually possible catapult extensions?
I don't mind the beancounter bashing vs engineers, usually.
With military procurement however, I think a lot more beancounting sanity checks should be happening. Keeping the same defense budgets, that should result in much better gear for the boys n girls in the frontlines and less gouging by the military contractors.
If it makes engineering sense (and I doubt your Glonass idea makes any sense on NATO gear), then inertial sensors will be fitted in. Whether that is at reasonable cost is quite another question.
That's also an urban myth. Army quartermasters themselves have revealed these to just be accounting generalities meant to get paperwork through. Sure, people complain about the $500 hammer, but then there's the $600 jet engine...
Would the claims being made by these researchers stand up to the same level of scrutiny from the Advertising Standards Authority that a soap powder advert has to survive? I suspect not.
There is a group in France working in formal verification that I think would also have trouble getting their claims published in a soap powder advert.
I'm not an expert in formal verification techniques, functional programming and all that, but it all sounds like so much marketing dross and academic wanking to me. There are good ideas in there, but when the source hits the metal, all bets are off. Limiting your options seems counterproductive when it all boils down to object code in the end. I'd be happy to be enlightened as to the benefits, but it seems to me that rigidly followed development practices (heavy on testing, review and refactoring) and no-exceptions coding standards are the way to go. And yes, I know it almost never actually works that way in the Real World, but it would be nice if it did.
"when the source hits the metal, all bets are off."
The source doesn't hit the metal.
The compiled+linked binary hits the silicon (in most cases).
Improving the source is a good start. It might even be necessary. It is far from sufficient.
What needs to be right is what the compiled binary does; who cares what an instrumented (ie unrepresentative) set of source code does in an unrepresentative test environment.
Everything else is cosmetic. Fortunately, most of the time, it doesn't matter that much.
What a shame that in the places I'm familiar with where it *does* matter, the management have chosen to prefer the simple, cosmetic, easy (but unrepresentative) approach.
Thanks for the nice article.
I'd just like to clarify that seL4 predates the DARPA HACMS program (and in fact was a major motivator for it). We're using it in HACMS to build extremely high-assurance systems.
The folks worried about compiler/linker/etc bugs will be pleased to hear that we don't trust any of this, but independently verify (in a strict mathematical sense) that the translation to executable code is correct, see http://www.ssrg.nicta.com.au/publications/nictaabstracts/Sewell_MK_13.abstract.pml.
Gernot
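The translation-validation idea Gernot links to can be illustrated with a toy, hugely simplified: instead of trusting the "compiler", independently check that its output computes the same function as the source. Everything below is invented for illustration: the source is a tiny expression language, the target is a made-up stack machine, and the check is exhaustive over a small domain, whereas the linked work uses formal semantics and solvers over the actual compiled ARM binary.

```python
# Toy translation validation: check that the "binary" produced by a
# "compiler" agrees with independent source-language semantics.
# Languages, machine, and domain are all invented for illustration.

def compile_expr(expr):
    """'Compiler' for expressions like ('add', 'x', ('mul', 'x', 'y'))."""
    if isinstance(expr, str):
        return [("load", expr)]
    op, lhs, rhs = expr
    return compile_expr(lhs) + compile_expr(rhs) + [(op,)]

def run(code, env):
    """Interpreter for the stack-machine 'binary'."""
    stack = []
    for instr in code:
        if instr[0] == "load":
            stack.append(env[instr[1]])
        else:  # 'add' or 'mul' pop two operands and push the result
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if instr[0] == "add" else a * b)
    return stack.pop()

def source_eval(expr, env):
    """Independent reference semantics for the source language."""
    if isinstance(expr, str):
        return env[expr]
    op, lhs, rhs = expr
    a, b = source_eval(lhs, env), source_eval(rhs, env)
    return a + b if op == "add" else a * b

expr = ("add", "x", ("mul", "x", "y"))
code = compile_expr(expr)
# Validation: binary == source on the whole (tiny) input domain
assert all(run(code, {"x": x, "y": y}) == source_eval(expr, {"x": x, "y": y})
           for x in range(-3, 4) for y in range(-3, 4))
print("compiled code validated against source semantics")
```

The payoff, as the comment says, is that the compiler and linker drop out of the trusted computing base: only the validator (and the reference semantics) need to be right.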