Can't wait for 5.6. Keep on trucking OpenBSD. :)
OpenBSD founder wants to bin buggy OpenSSL library, launches fork
In the wake of the Heartbleed bug fiasco, members of the OpenBSD project have forked the popular OpenSSL library with the aim of creating a new version that they say will be more trustworthy. Even though OpenSSL is open source software, for a full two years its entire development community managed to overlook the crucial bug …
-
-
Tuesday 22nd April 2014 21:29 GMT Michael Thibault
Re: Right, so ...
Perhaps it's that the eyes involved have security consciousness in the DNA? Or, that they recognise that security is at the foundation of their preferred OS and are thus 'incentivised'? Perhaps it's that the eyes aren't actually busy crafting yet-another colour of lipstick to put on their mascot? Perhaps it's that the eyes aren't otherwise busy mule-headedly forking yet another variant of their preferred OS? It's difficult to say why people do the things they do. But there are lots of possibilities.
-
Tuesday 22nd April 2014 23:41 GMT Christian Berger
Re: Right, so ...
Well, what seems obvious is that the OpenSSL team isn't quite up to the job. After all, they let that Heartbleed bug in even though there was apparently no input sanity checking in that code. If your code is critical to half of the SSL connections on the Internet, you should be somewhat more careful not to break stuff. Maybe you should even have multiple independent reviewers for any patches that come in. Plus you need to have the courage to tell someone that their code is not quite up to the project's standards, and perhaps give them tips on how to improve it. (This can, in many cases, be a pre-written text.)
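For the curious, here is a minimal sketch in plain C of the kind of input sanity check at issue, with made-up names rather than OpenSSL's actual structures: the attacker-supplied length field must be validated against the real record length before any memory is copied back out.
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
/* Hypothetical heartbeat-style handler, for illustration only. */
int handle_heartbeat(const unsigned char *rec, size_t rec_len,
                     unsigned char **resp, size_t *resp_len)
{
    if (rec_len < 3)                              /* type (1 byte) + length (2 bytes) */
        return -1;
    uint16_t payload_len = (uint16_t)((rec[1] << 8) | rec[2]);
    /* The sanity check Heartbleed lacked: never trust the claimed length. */
    if ((size_t)payload_len + 3 > rec_len)
        return -1;                                /* silently drop the malformed request */
    *resp = malloc((size_t)payload_len + 3);
    if (*resp == NULL)
        return -1;
    (*resp)[0] = 2;                               /* "response" type */
    (*resp)[1] = rec[1];
    (*resp)[2] = rec[2];
    memcpy(*resp + 3, rec + 3, payload_len);      /* bounded by the check above */
    *resp_len = (size_t)payload_len + 3;
    return 0;
}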
So yes, this might be an OpenBSD-only thing at first, but as with much OpenBSD-originated software it'll spread out to other operating systems. After all, most of the systems out there today are POSIX.
-
Wednesday 23rd April 2014 23:12 GMT Alan Brown
Re: Right, so ...
"If your code is critical to half of the SSL connections on the Internet, you should be somewhat more careful not to break stuff."
If you critically rely on code put together by a few _UNPAID_ volunteers then you should be more inclined to help out.
On the paid side, it's mostly just been a dog and pony show. Security tends to be an afterthought and it shows.
-
-
Tuesday 22nd April 2014 23:58 GMT P. Lee
Re: Right, so ...
I suspect the way they want to achieve additional eyes is to simplify and normalise the code style to what is currently considered good practice, and to get rid of cruft.
By forking/starting a new project, you get permission to break things and lose features because you aren't impacting existing users, and refactoring the code is one way to get to grips with it that will involve some auditing on its own. This is probably a better way to go than doing a massive revision in the existing package, since there is still a good chance of breaking things.
-
Wednesday 23rd April 2014 07:26 GMT Charlie Clark
Re: Right, so ...
Forking is perfectly legitimate especially if you want to change large parts of the code as seems to be the case. OpenBSD itself started life as a fork of FreeBSD and the two continue to profit from each other's different focus.
Taking the existing code as a functional specification and removing as much code as possible will allow the developers to make a reference implementation that will hopefully be more secure. Like OpenSSH (from OpenBSD), it should then be pretty straightforward for others to use the library, and I suspect other developers will be happy to join in.
Meanwhile the existing code will continue to "work" and can even benefit from backporting any changes.
-
Wednesday 23rd April 2014 08:49 GMT Anonymous Coward
Re: Right, so ...
OpenBSD itself started life as a fork of FreeBSD
A fork of NetBSD actually, since Theo was booted out of that project for his incredibly abrasive manner. At the time NetBSD was a better choice since the key design goals were portability and simplicity, whereas FreeBSD was focused exclusively on getting maximum performance for the Intel x86 architecture.
-
-
Wednesday 23rd April 2014 08:30 GMT Roo
Re: Right, so ...
"... and forking it and making it OpenBSD-only is going to improve number of eyes looking at the code how?"
Fair question, but I don't think it's relevant in this instance because a major complaint levelled at the OpenSSL team is that they don't accept many (if any) patches from anyone else, so it doesn't really matter how many eyes are looking at the code base if the maintainers are unwilling/unable to actually fix the code.
With that said, I haven't seen the many-eyes argument being used by the OpenBSD team... Their beef with OpenSSL appears to be that it is a key piece of userland, it trades security for performance, and the coding falls well short of what the OpenBSD team consider to be good practice. From the examples and patches I have looked at I have to agree; the sooner OpenSSL is shit-canned the better, in my view.
Heartbleed isn't the first OpenSSL bug to have raised some hackles in OpenBSD land - but it appears to be the final straw. :)
-
Thursday 24th April 2014 16:12 GMT Anonymous Coward
Re: Right, so ...
Incredible, the number of diffs posted in two weeks by OpenBSD. (Imagine IBM trying to clean up this garbage: years!)
So much junk and so many strange modules. Amazing that so much of commerce and security was handled by the OpenSSL gibberish. For those of us who don't code, it's "shut up and donate."
Anyone who has a commercial web site should toss a couple of bones to Theo and Co. Some of us know Lucha LibreSSL will be a great improvement over DopenSSL.
-
-
Wednesday 23rd April 2014 08:46 GMT Anonymous Coward
Re: Right, so ...
and forking it and making it OpenBSD-only is going to improve number of eyes looking at the code how?
For starters they'll probably make it work with the system malloc instead of OpenSSL's brain dead memory allocator:
analysis of openssl freelist reuse
-
Wednesday 23rd April 2014 09:33 GMT Jamie Jones
Re: Right, so ...
"For starters they'll probably make it work with the system malloc instead of OpenSSL's brain dead memory allocator:"
I know you're not implying it, but as many people believe it to be so, it's worth reminding everyone that the Heartbleed bug has nothing to do with malloc, nor with memory management in general.
Also, the OPENSSL_malloc function ends up calling the system malloc, but as you imply, this point is moot if they use other mechanisms to manipulate their malloced memory block in user space.
-
Wednesday 23rd April 2014 10:27 GMT Bronek Kozicki
Re: Right, so ...
Heartbleed was related to malloc, in a sense. Without their own malloc, Heartbleed wouldn't have caused one tenth of the damage it did. Here is an explanation of how:
Had OpenSSL used the system malloc rather than their own free list, memory allocated to old requests would actually get freed, and memory accesses such as those exploited by Heartbleed would actually crash the process. Which would prevent miscreants from accessing your private data, and would hopefully also have prompted the OpenSSL team to fix the bug.
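To illustrate (toy code, nothing like OpenSSL's real allocator): a freelist that recycles buffers without clearing them leaves the previous request's data readable, whereas a hardened system malloc/free stands a chance of unmapping or junk-filling the freed block so that an over-read faults or returns rubbish instead.
#include <stdio.h>
#include <string.h>
#define BUF_SZ 64
static unsigned char recycled[BUF_SZ];                      /* single-slot "freelist" */
static unsigned char *fl_alloc(void) { return recycled; }   /* hands back old memory, uncleared */
static void fl_free(unsigned char *p) { (void)p; }          /* keeps the contents around */
int main(void)
{
    unsigned char *a = fl_alloc();
    strcpy((char *)a, "pretend private key material");
    fl_free(a);                        /* the secret stays in the recycled buffer */
    unsigned char *b = fl_alloc();     /* same memory handed straight back */
    b[0] = 'X';                        /* the "new" request writes one byte... */
    printf("%s\n", b + 1);             /* ...and the old secret is still there to read */
    return 0;
}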
-
Wednesday 23rd April 2014 11:23 GMT Anonymous Coward
Re: Right, so ...
I know you're not implying it, but as many people believe it to be so, it's worth reminding everyone that the Heartbleed bug has nothing to do with malloc, nor with memory management in general.
Indeed, but the way memory is held in the OpenSSL freelist implementation means that the bug is far more exploitable since there's more likely to be interesting stuff that can be peeked at. Also, OpenSSL doesn't work with the freelists disabled, as Ted U discovered. Had it worked, then it's likely this bug would have been caught much earlier by the kind of people who use strict malloc/free implementations and settings for security critical stuff.
-
-
-
-
Tuesday 22nd April 2014 21:38 GMT A Non e-mouse
Not just KNF
Freshbsd has a nice summary of the changes the OpenBSD team have made so far. The general tidying up so far seems to be not just KNF, but removing unnecessary wrappers around standard C library routines, and switching to more standard/modern malloc family memory allocation calls.
These are all the low-hanging-fruit jobs. Once this is done and the code is in a more modern/readable style, then the hard work of auditing the code can start.
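As a rough illustration of the kind of change being described (hypothetical wrapper name, not a real OpenSSL function), a hand-rolled allocation wrapper gives way to the libc calls OpenBSD already ships:
#include <stdlib.h>
/* Before: a project-specific wrapper with its own overflow check. */
void *WRAP_alloc_items(size_t n, size_t size)
{
    if (size != 0 && n > (size_t)-1 / size)
        return NULL;                   /* the multiplication would overflow */
    return malloc(n * size);
}
/* After: calloc() does the same overflow check in modern libcs and zeroes the
 * memory; on OpenBSD, reallocarray(NULL, n, size) is the non-zeroing option. */
void *alloc_items(size_t n, size_t size)
{
    return calloc(n, size);
}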
-
Tuesday 22nd April 2014 21:58 GMT Bronek Kozicki
I looked at those commits; they make interesting reading.
Some of the code being fixed or removed is ... very ugly indeed. Some of it makes my hair stand on end, like a living illustration of The Coding Horror. Also, looking at the age of this bug and its implications for the security of users, it is obvious that the OpenSSL team is, at best, incompetent. Alternatively, perhaps they are motivated to keep the codebase in bad shape. Either way, I am never going to trust OpenSSL again.
I'm very happy OpenBSD has taken on the task of fixing it, and I have committed to monthly donations for them. They deserve to be paid for their service to the world: providing an alternative to the abomination of code that was OpenSSL.
-
-
-
Monday 28th April 2014 12:28 GMT Anonymous Coward
"Not a good idea. I don't know Bronek's coding skills, but hacking cryptography should be left to the very small number of people that are good at both coding and cryptography."
D'you reckon?
People should just stick to saying how bad others are at it, without showing them how to do it better? That's a better option, is it?
-
-
-
-
-
Tuesday 22nd April 2014 22:41 GMT btrower
My eyes!
I had to look. In fairness to the OpenSSL guys, it is not the only bad code out there, not by a long shot. Again, in fairness to them, it works to some extent.
We shall see, but I expect this will go from horrible to less horrible but still horrible. This class of code is gruesomely messy all over. I would roll up my sleeves and help fix it, but it is like attempting to save a manuscript that has fallen in the sewer. It is better to just re-think and re-write. Besides, my aesthetic sensibilities differ.
I would write a list of rules of thumb these guys routinely break but it would go on forever.
I do not like to criticize other programmers, especially open source guys who have given of their blood, sweat and tears for free. Code that works at all is very difficult to write. This is not for ordinary people, by a long shot. Most of the people poking fun at the code would not be able to do it even as well, let alone better. Still, you may not know how to do a thing, yet still be able to easily tell it is not done well. The OpenSSL stuff is not done well overall. However, that identical criticism is true of so much of the code we use, proprietary code included, that it is unfair to single out the OpenSSL guys.
My hat is off to the programmer who made the mistake and fessed up to it. I admire the courage.
-
Wednesday 23rd April 2014 11:22 GMT DanDanDan
Re: My eyes!
One look at this specific change (below) had me agreeing wholeheartedly with you. Why hard-code the size of a buffer (256 in this case) when sizeof() does exactly that for you?! Performance gains? Wouldn't this be sorted out by a semi-decent compiler?
http://anoncvs.estpak.ee/cgi-bin/cgit/openbsd-src/commit/lib/libssl?id=258edb6cb04cce27479a492e610b6bd1f535c9f3
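For illustration only (this is the general pattern being criticised, not the code in the linked commit):
#include <string.h>
void clear_buffer(void)
{
    unsigned char buf[256];
    /* Before: the size is repeated as a magic number and can silently drift
     * out of sync if the declaration ever changes. */
    memset(buf, 0, 256);
    /* After: sizeof(buf) is an integer constant expression here, so the
     * compiler folds it exactly like the literal; there is no run-time cost. */
    memset(buf, 0, sizeof(buf));
}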
-
-
-
Wednesday 23rd April 2014 17:24 GMT Anonymous Coward
Re: My eyes!
>Hint: sizeof() in C99 can indeed be a runtime call.
Only for variable length arrays:
unsigned char buf[256];
buf is not a variable length array, and OpenSSL isn't compiling with C99 as its standard.
And... GCC doesn't compute the size of variable arrays when sizeof is called:
"The length of an array is computed once when the storage is allocated
and is remembered for the scope of the array in case you access it with sizeof."
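A minimal sketch of the distinction, assuming a compiler with C99 VLA support:
#include <stdio.h>
int main(void)
{
    unsigned char buf[256];            /* fixed-size array */
    printf("%zu\n", sizeof buf);       /* integer constant: prints 256 */
    int n = 300;
    unsigned char vla[n];              /* C99 variable length array */
    printf("%zu\n", sizeof vla);       /* evaluated at run time: prints 300 */
    return 0;
}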
-
Wednesday 23rd April 2014 20:38 GMT Anonymous Coward
Re: My eyes!
INTERNATIONAL ISO/IEC STANDARD 9899
Second edition 1999-12-01
It _really_ looks like a standard to me. From said standard:
The sizeof operator yields the size (in bytes) of its operand, which may be an expression or the parenthesized name of a type. The size is determined from the type of the operand. The result is an integer. If the type of the operand is a variable length array type, the operand is evaluated; otherwise, the operand is not evaluated and the result is an integer constant.
In general my impression with GCC is that unless you insistently ask for it to try to conform to a standard (e.g., with something like -pedantic -std=c99) then it will compile a rag-bag mixture of standards and non-standard extensions.
-
-
-
-
-
-
-
Wednesday 23rd April 2014 03:45 GMT Anonymous Coward
Re: Madness is doing the same thing and expecting a different outcome
From a security point of view, yes, those guys are quite insane, but so far they've proven they know damn well what they're doing. As for writing in "C", here's something for you from Wikipedia:
[quote] code which needs to run particularly quickly and efficiently may require the use of a lower-level language, even if a higher-level language would make the coding easier [/quote]
If you're still unhappy, you may go ahead and write it in Java or any other high-level programming language you master.
-
-
-
Friday 25th April 2014 04:51 GMT Dazed and Confused
@Alan Brown
"You can write bullet proof code in assembler, you just need to know what you're doing."
But assembler isn't portable.
My point was that you don't need some high level language to watch your arse if you know where your own arse is.
The problem with good code is that it's often boring.
Flying by the seat of your pants is often quicker and more exciting. Remembering to check the return from every call is dull dull dull but a good plan if you don't want to run the risk of problems later on.
During a formal code review I once upset a programmer by complaining that his code was boring to read. Well, boring it might have been, but I'd have been more than happy to get on a plane if he'd written the flight control software.
-
-
-
-
Wednesday 23rd April 2014 07:19 GMT Anonymous Coward
Re: Madness is doing the same thing and expecting a different outcome
C is good, very good. It makes no pretence of protecting the programmer from his own inadequacies and does not claim to make the whole world warm and safe. It makes sure the programmer and designer retain full responsibility for their actions. It is lightweight, reasonably transparent in its interactions, and terse. With a good coding style and relevant comments it is clearer to understand than most "high level" languages. I write this as a happy user of Pascal/Algol/Modula-style languages at one time and quite a fan of Python now (there's a contradiction for you).
Java and its kind are dangerous: they purport to do all the nasty bits for you. But they do so to a surprisingly limited extent and involve an unhealthy amount of trust in black boxes, innumerable libraries and a full understanding and tracking of scope, for instance.
As an ancient SE, I am amazed at the reduced quality of even critical code today and the casual acceptance of bugs and frequent updates. Either the programmers are much worse or the tools are at fault, or both.
I am ever more impressed by the long term solidity of, for instance, real UNIX implementations, UNIX shells and tools, X11, countless pieces of real-time software and so on, written mainly in K & R C plus some assembler. I am delighted by a language that is small enough to be described, in full, in a book less than a couple of centimetres thick, and that is so clean and small that, very quickly, a competent engineer can hold it completely in his memory, including the commoner library calls, with the rest in succinct man pages.
I am appalled at the likes of C++ or Java where, it seems, none of us can master even the full semantics of the basic language (if it stops evolving long enough), let alone the plethora of class libraries each described in a manual as big as the Old Testament, so that advertisements for workers have to specify which libraries, which versions ....
As for the idea that a load of clever amateurs or professionals with time on their hands is the way to get code, in which they have no personal interest, properly and thoroughly reviewed, probably with no design or implementation documents to guide them - well, OpenSSL is not the first software to show how daft that is. Reviewing is hard, skilled work requiring knowledge, experience and time. It is rarely done adequately even when people are paid for it and have a professional, vested interest. These "reviewers" of open code: how many of them publish proper reports of their reviews, findings, recommendations etc. for peer review? Or do they just put in a "fix", perhaps not much better than the original, and rely on some "gatekeeper" to approve it and shove it into the package source tree?
Anon - stealing a few minutes before starting work.
-
Wednesday 23rd April 2014 08:46 GMT Roo
Re: Madness is doing the same thing and expecting a different outcome
"I am ever more impressed by the long term solidity of, for instance, real UNIX implementations, UNIX shells and tools, X11, countless real time software and so on, written mainly in K & R C plus some assembler. I am delighted by a language that is small enough to be described, in full, in a book less than a couple of centimetres and that is so clean and small that, very quickly, a competetent engineer can hold it completely in his memory, including the commoner library calls, with the rest in succint man pages.
I am appalled at the likes of C++ or Java where, it seems, none of us can master even the full semantics of the basic language (if it stops evolving long enough), let alone the plethora of class libraries each described in a manual as big as the Old Testament, so that advertisements for workers have to specify which libraries, which versions ...."
Have an upvote for that, although I'll agree to disagree with the dig at C++, it really is a different beast from Java. Your point about the language being held completely in memory is a good one - often overlooked in these days of hitting CTRL-SPACE repeatedly until you have something that will stumble through the Java/C# compiler. :)
-
Wednesday 23rd April 2014 15:50 GMT Anonymous Coward
Re: Madness is doing the same thing and expecting a different outcome
The huge advantage of C (from an open source perspective) is that it allows bottom-feeding by the very lowest-common-denominator coders. This is crucially important when you rely on volunteer effort and can't count on people with more than the bare minimum of skills. This also perhaps explains bugs like Heartbleed.
-
Thursday 24th April 2014 23:40 GMT BinkyTheMagicPaperclip
Re: Madness is doing the same thing and expecting a different outcome
The advantage of C is portability. C++ compiler quality is *still* variable on different platforms and there's the name decoration issue. Yes, it's possible to work around it, but C is a tad simpler.
Additionally pretty much all existing operating systems are written in C (although there are exceptions such as L4, and certain components written in C++. Windows is written in C, but some parts are realistically only usable from C++ or .NET).
For most libraries and non low level operating system code I would use C++, but in the specific case of libraries such as OpenSSL that may be running on embedded systems with fairly severe resource restrictions this may not be realistic.
-
-
Wednesday 23rd April 2014 20:17 GMT Ken Hagan
Re: Madness is doing the same thing and expecting a different outcome
"I am appalled at the likes of C++ or Java where, it seems, none of us can master even the full semantics of the basic language"
In fairness to the C++ guys, the worst of the complexity results from a sincere attempt to actually describe and then remain compatible with the C subset. In no particular order, C's integer types, promotion rules, decay of arrays to pointers, lack of initialisation guarantees and (until recently) lack of a memory ordering model, have been the bane of anyone who actually wanted to write clear and safe code. Classes, namespaces, exceptions, templates and the like are pretty damn clean in comparison.
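A small sketch of the array-decay and initialisation points (illustrative only):
#include <stdio.h>
/* The array "decays" to a pointer at the call site, so inside the callee
 * sizeof reports the size of a pointer, not of the 256-byte array. */
static void show(const unsigned char buf[256])
{
    printf("inside callee:  %zu\n", sizeof buf);    /* e.g. 8 on a 64-bit build */
}
int main(void)
{
    unsigned char buf[256];
    printf("at declaration: %zu\n", sizeof buf);    /* 256 */
    show(buf);
    int x;                              /* no initialisation guarantee in C:      */
    /* printf("%d\n", x);                  reading x here would be undefined      */
    (void)x;
    return 0;
}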
-
-
-
Wednesday 23rd April 2014 07:15 GMT Anonymous Coward
Re: Madness is doing the same thing and expecting a different outcome
>"Writing in C"
Can you suggest a language that A: solves all the possible programmer errors that are possible in C while running on the bare metal and not inside of a virtual machine, B: doesn't depend on libraries that are written in C, C: has a working modern portable compiler that isn't implemented in C.. oh and we need an OS that isn't mostly implemented in C to go with it.
Or we could stop trying to blame C for programmer errors that are possible in any language that is powerful enough to actually do system-level programming, and accept that "humans are always going to make mistakes". The human error that caused Heartbleed was actually very small, and I bet that you could find similar mistakes in just about any codebase you care to look at. I would say that Heartbleed doesn't show we need better languages, new TLS libraries etc. Instead I think it shows that we actually need to rethink how private keys, passwords etc. are handled in memory. Should private keys be floating around in the memory of processes that are also handling connections from the outside world?
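As a very rough sketch of that idea (nowhere near a production design; OpenSSH-style privilege separation is far more involved): keep the key in its own process and let the network-facing process request operations over a socket, so a memory-disclosure bug in the network code has no key material to leak.
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
static void key_holder(int fd)
{
    const unsigned char secret_key[32] = { 0x42 };   /* never leaves this process */
    unsigned char in[64], out[64];
    ssize_t n;
    while ((n = read(fd, in, sizeof in)) > 0) {
        for (ssize_t i = 0; i < n; i++)              /* stand-in for a real signing op */
            out[i] = in[i] ^ secret_key[i % sizeof secret_key];
        write(fd, out, (size_t)n);
    }
}
int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
        return 1;
    if (fork() == 0) {                 /* child: holds the key, never parses network input */
        close(sv[0]);
        key_holder(sv[1]);
        return 0;
    }
    close(sv[1]);                      /* parent: faces the network, never holds the key */
    const char request[] = "please sign this";
    unsigned char reply[64];
    write(sv[0], request, sizeof request);
    ssize_t n = read(sv[0], reply, sizeof reply);
    printf("got %zd bytes of 'signature' back\n", n);
    return 0;
}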
-
-
Wednesday 23rd April 2014 05:55 GMT akeane
The problem is ...
... instead of using computers for their intended purpose i.e. playing Doom, people started hacking them to do pointless rubbish like eCommerce, storing recipes, Minesweeper and other such fripperies!
As for blink tags, these days I suspect you would have to implement it in JavaShi^H^Hcript or Flash,
again introducing potential security risks #sadface
-
Wednesday 23rd April 2014 08:03 GMT Destroy All Monsters
C - the leech therapy of coding. It will never go away!
Let the witch doctors speak!
C is good, very good.
It doesn't even have Modules, FFS.
Can you suggest a language that A: solves all the possible programmer errors that are possible in C while running on the bare metal and not inside of a virtual machine
"I want to have FTL travel, meanwhile I will defend my Zimmer Frame equivalent of coding for no particular reason!".
21st century: Programming applications "to the bare metal" and other retardations of the conservative or ignorants.
You can write bullet proof code in assembler, you just need to know what you're doing.
You also need a very large team, lots of time and lots of money in that case. Nope, you won't get them.
C does not have the exclusive license on bad or insecure code.
"WIth that kind of plus, we can only win."
Code which needs to run particularly quickly and efficiently may require the use of a lower-level language, even if a higher-level language would make the coding easier
Surprising generalities from Jimbo's Bag Of Trivia!
Seriously, WTF? I anything being tought in schools today?
-
Wednesday 23rd April 2014 09:30 GMT E_Nigma
Re: C - the leech therapy of coding. It will never go away!
Ahh, the things thought at schools these days... Reminds me of a group of graduates from my university who, straight out of uni, got themselves a job of designing and implementing a real time system. 'A' students that they were, they wrote code that was absolutely beautiful to look at, pouring all of their fresh knowledge of software design into the work. The only problem? Their beautiful, readable, maintainable OO code was nowhere near as fast as it had to be. Cue a rewrite that saw them doing everything that you "shouldn't" be doing when programming in the 21st century, and it worked.
So horses for courses. ;)
-
Wednesday 23rd April 2014 11:07 GMT Blane Bramble
Re: C - the leech therapy of coding. It will never go away!
It doesn't even have Modules, FFS.
Modules are surely the function of the linker? Or did you mean namespaces?
21st century: Programming applications "to the bare metal" and other retardations of the conservative or ignorants.
How else do you program your security layers and OS? If they are not "to the bare metal" then what are they running on, and how do you write and secure that? It isn't turtles all the way down here you know.
You also need a very large team, lots of time and lots of money in that case. Nope, you won't get them.
Rubbish. Writing bullet-proof assembler just requires sane coding practices and decent documentation of entry and exit requirements etc.
Seriously, WTF? I anything being tought in schools today?
Very little, however it seems I have self-taught myself more than you.
-
Wednesday 23rd April 2014 11:38 GMT Anonymous Coward
Re: C - the leech therapy of coding. It will never go away!
>Let the witch doctors speak!
>C is good, very good.
If you have done any work on very small platforms (~2K of RAM) then you might like C more.
I actually like being able to write code to handle binary packet formats that'll run on top of Linux and without an OS on an 8bit micro without any changes..
>It doesn't even have Modules, FFS.
Not sure what you mean by modules. C has objects .. usually a single file becomes an object. If you want to hide something that is declared at global scope in an object from other objects, you can make it static. If you want something that isn't destroyed when a function returns but is only in the function's scope, you can declare it static within the function, etc. etc. That's usually enough to allow you not to trip over yourself. C doesn't have namespaces like C++ or Java, or lots of control over visibility, for sure.. but it's also not pretending any of that stuff counts for anything at runtime. Marking something as private when you are running outside of a VM or an interpreter means absolutely nothing, because there is no one around to enforce it.
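A tiny example of those two uses of static (hypothetical file, for illustration only):
/* counter.c -- a single translation unit */
#include <stdio.h>
static int hidden_total;      /* file scope: invisible to other objects at link time */
int next_id(void)
{
    static int last_id;       /* keeps its value between calls, but is only */
    return ++last_id;         /* visible inside next_id() itself            */
}
void report(void)
{
    hidden_total++;
    printf("reports so far: %d\n", hidden_total);
}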
>Can you suggest a language that A: solves all the possible programmer errors
> that are possible in C while running on the bare metal and not inside of a virtual
> machine "I want to have FTL travel, meanwhile I will defend my Zimmer Frame
>equivalent of coding for no particular reason!".
But you couldn't come up with an alternative to C that is safer than C but can be used in the same places C is used. You do realise that using C for one thing doesn't mean you have to use it for everything, right? If you don't like working in C because it makes your head hurt, stick with whatever language suits your purpose, but if you do that you'll never get to work on any of the really fun parts. Let's hope that the compiler, VM or interpreter for whatever language you are using doesn't have any bugs, because without any understanding of how things actually work you're going to be shit out of luck trying to fix them.
>21st century: Programming applications "to the bare metal" and other
>retardations of the conservative or ignorants.
So no one needs operating systems any more?
-
Wednesday 23rd April 2014 21:48 GMT vincent himpe
Re: C - the leech therapy of coding. It will never go away!
"If you have done any work on very small platforms (~2K of RAM) then you might like C more."
try with 32 bytes of ram ... and 2k rom. c will puke all over itself. it can't create its stack.
The trouble with 'c' is that it was developed for a CPU architecture that does not exist anymore: the PDP-11. The C runtime library recreates the missing functionality. C is heavily stack based and wants to push everything onto a stack to call a function and then pop it off. The Intel CPU architecture is substantially different, so 'c' has always fit 'wrongly'. There is a language designed specifically for the Intel architecture: PL/M. CP/M was written in PL/M, and so was the iRMX operating system, one of the most bulletproof operating systems out there. Even a catastrophic hardware failure like a broken RAM chip is survivable without bringing iRMX down. The kernel traps a parity fault, marks the affected block as bad, figures out what was loaded there, reloads it from storage (ROM or disk), remaps it and continues.
C is syntactically not bad, but it should have a better compiler. There should be an option where you specify: if an array is created it must be zeroed before use, and after release it must be zeroed (overwritten) as well. That simple option would have prevented Heartbleed. Any released memory is erased so there is no snooping in the remainders, and any allocated memory is zeroed as well. Buffer overruns would be a thing of the past, as the overrun space would be empty.
Someone needs to design a better (safer) memory-handling library for C.
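A minimal sketch of such a wrapper, assuming explicit_bzero() is available (OpenBSD and recent glibc provide it; a plain memset before free can be optimised away, which is why the dedicated call exists):
#include <stddef.h>
#include <stdlib.h>
#include <string.h>
struct safe_buf {
    size_t len;
    unsigned char data[];                    /* C99 flexible array member */
};
void *safe_alloc(size_t len)
{
    struct safe_buf *b = calloc(1, sizeof *b + len);   /* zeroed on allocation */
    if (b == NULL)
        return NULL;
    b->len = len;
    return b->data;
}
void safe_free(void *p)
{
    if (p == NULL)
        return;
    struct safe_buf *b = (struct safe_buf *)
        ((unsigned char *)p - offsetof(struct safe_buf, data));
    explicit_bzero(b, sizeof *b + b->len);   /* scrub before handing memory back */
    free(b);
}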
-
Wednesday 23rd April 2014 23:33 GMT Jamie Jones
Re: C - the leech therapy of coding. It will never go away!
As for Heartbleed:
"Buffer overruns would be a thing of the past, as the overrun space would be empty."
Do we know for sure that the buffer read overran into freed memory, and not just some other data structures that were still in use?
Even if so in this case, the malloc proposals aren't a silver bullet for all overruns.
-
Thursday 24th April 2014 02:18 GMT Anonymous Coward
Re: C - the leech therapy of coding. It will never go away!
>try with 32 bytes of ram ... and 2k rom. c will puke all over itself. it can't create its stack.
I've actually used C in 32 bytes of ram recently. I have a chip that has 32 bytes worth of embedded SRAM for running init code loaded over serial before the SDRAM is setup.
Of course you can't call functions that would overflow the stack. If you're working in such a restricted setup you would have that in mind from the start.
-
-
-
Wednesday 23rd April 2014 12:08 GMT Anonymous Coward
@Destroy
"Seriously, WTF? I anything being tought in schools today?"
No, which is exactly the problem at hand; I consider your post to be a prime example of that. This reminds me of a rant on one of the FreeBSD mailing lists (or the forum) where someone just couldn't understand that the FreeBSD source code also contained a huge chunk of assembly code. Surely everyone used C these days and that piece of ancient coding was just waiting to be obsoleted, no?
Hardly.. Different tasks require different approaches which can easily include different programming environments.
Here's some food for thought for you: what came first; the programming language or the compiler? And when looking at "more advanced" languages such as, for example, Java; how would it be possible for the compiler to be written in Java when you'd need to compile the source code to begin with?
-
Wednesday 23rd April 2014 15:26 GMT Anonymous Coward
Re: @Destroy
>just couldn't understand that the FreeBSD source code also contained
>a huge chunk of assembly code.
I wonder how FreeBSD, Linux etc. would implement all the really hairy stuff around task switching, jumping into the kernel and back out again for interrupts, etc. with this "does what C does but doesn't allow for any human error" language that Destroy All Monsters apparently knows about but can't name. You can't have inline assembly in a totally safe language...
-
-
-
Wednesday 23rd April 2014 08:04 GMT fpx
More than just Eyes
Human eyes are too easily distracted and confused.
What software like this needs is mechanical eyes and some formal software verification. There are static code analysis tools (like Lint and Coverity) and run-time analysis tools (like Valgrind). Then it needs unit tests with a measurement of code coverage.
Writing software that works is the easy part! Quality assurance using measurable metrics takes at least as much work.
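As a tiny sketch of what that looks like in practice (hypothetical function under test and made-up names; run it under Valgrind or with -fsanitize=address for the run-time half):
#include <assert.h>
#include <stddef.h>
/* Function under test: must reject a claimed length longer than the record. */
static int parse_record(const unsigned char *rec, size_t rec_len, size_t claimed_len)
{
    (void)rec;
    return claimed_len > rec_len ? -1 : 0;   /* the check the test exercises */
}
static void test_rejects_overlong_claim(void)
{
    unsigned char rec[8] = {0};
    assert(parse_record(rec, sizeof rec, 65535) == -1);      /* lying length field */
    assert(parse_record(rec, sizeof rec, sizeof rec) == 0);  /* honest length */
}
int main(void)
{
    test_rejects_overlong_claim();
    return 0;
}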
-
Wednesday 23rd April 2014 08:48 GMT Infernoz
Re: More than just Eyes
Sorry, that's fantasy, because a lot of bugs don't get picked up even by static code checkers (I use them); that's what automated unit tests of compiled code are for. It can help if the code is written in a stricter language running in a virtual machine, e.g. one with references rather than pointers and built-in bounds checking.
-
Wednesday 23rd April 2014 11:07 GMT phuzz
Re: More than just Eyes
Except of course Coverity didn't pick up the Heartbleed bug, until a couple of programmers went through and added tests for it by hand.
Although that might mean they'll pick up similar bugs in the future, it's still very much bolting the stable door:
http://blog.coverity.com/2014/04/14/coverity-heartbleed/
-
Wednesday 23rd April 2014 11:27 GMT Anonymous Coward
Re: More than just Eyes
Although that might mean they'll pick up similar bugs in the future, it's still very much bolting the stable door:
It's still good to see them generalising particular problems and adding tests for them. This is the correct way to do testing, and fits in with the OpenBSD approach - when you identify a bug, don't just fix that single instance but try to find a way to test for it across the entire codebase.
-
-
-
Wednesday 23rd April 2014 08:34 GMT pitrh
Ted Unangst has more of the background
Ted Unangst, the OpenBSD developer who can be said to have 'instigated' the events that led to the fork that's now referred to as LibreSSL, has a nice writeup on his blog about how it all happened, including links to earlier analysis of the Heartbleed bug and how it went undiscovered: http://www.tedunangst.com/flak/post/origins-of-libressl
-
Wednesday 23rd April 2014 08:41 GMT Infernoz
Excellent
I've had to try and fix layered cruft (technical debt) code, got fed up, and now spend the time to make readable extensible code; the sooner this is done the better, because old code can be a minefield of nasty subtle bugs, and some major ones too!
It is not just security code which needs to be fixed; I looked at the SQLite code in the past and saw a freaking declare-macro mess which would take too long to port to another OS, so a legacy commercial project which I had hoped to patch with it was retired.
I really dislike C code which makes a lot of use of declare macros when the variation should be moved behind a *.h interface, with separate *.c or *.cpp files for each OS or other major dependency, i.e. factory-style code rather than a minefield of switch code!
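A sketch of the layout being described, with made-up file names: one portable header, one implementation file per platform, selected by the build system rather than by macro switches.
/* rng.h -- the portable interface */
#ifndef RNG_H
#define RNG_H
#include <stddef.h>
int rng_fill(void *buf, size_t len);    /* 0 on success, -1 on failure */
#endif
/* rng_unix.c -- compiled only on Unix-like systems */
#include <stdio.h>
#include "rng.h"
int rng_fill(void *buf, size_t len)
{
    FILE *f = fopen("/dev/urandom", "rb");
    if (f == NULL)
        return -1;
    size_t got = fread(buf, 1, len, f);
    fclose(f);
    return got == len ? 0 : -1;
}
/* rng_win32.c (not shown) would implement the same prototype with the
 * platform's own API; the build picks exactly one implementation file. */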
-
Wednesday 23rd April 2014 12:06 GMT Bronek Kozicki
Re: Excellent
You are unfair to C. It's just that there are plenty of developers who do not know how to use it well, never mind more complex languages - from which they copy features and pack them up as macros.
Personally I am not a big fan of C, I very much prefer C++ (which I happen to know well enough). But it has its place.
-
-
Wednesday 23rd April 2014 08:42 GMT r00ty
It's easy to rubbish OpenSSL now.
But, for over 15 years it's been used by everyone. Small software writers and big business alike. It allowed many large companies to use cryptography without employing their own specialists. Everyone was happy.
But, at any time these companies with the resources could have reviewed the code. Anyone else could have reviewed the code. It seems it was never done, or at least not done regularly enough.
Also I think the other problem is one that any medium to large suite of software reaches. That a lot of old code that will probably never be used remains. People are too scared to remove it, or even revisit it in case they break something. All the while, old development styles persist in older code. But again no-one wants to rewrite it, lest they introduce new issues with their rewrite.
What is left is a mashed up mix of coding styles all linked together to create quite a mess. So, something like this was inevitable.
A complete rewrite would be a good thing. But I don't really think it should be a fork, or for a specific OS. I personally (with no real knowledge in the area, casting judgement!) think it should be OpenSSL 2.0. I think some of the big businesses that have saved so much money over the years could provide some useful resources to this endeavor. Let's face it: most routers are running this, our phones too, and most of the big websites were running behind it. If these guys could spare some resources, along with the OpenSSL development team and anyone else who has the time and expertise, they could rewrite this behemoth from scratch, omitting any obsolete processes and aligning to a single design style. Maybe then we could get past this and move on.
Back to my point. Calling the developers incompetent is easy to do now. But, EVERY person that used OpenSSL and never reviewed it, can receive the same label. Sadly, that includes me. Only once mind you, and then just to sign a bank payment file. But, all the same. We all (developers) use these libraries never really knowing (or sometimes even caring) how they work internally until something like this happens.
Ignorance is not bliss!
-
Wednesday 23rd April 2014 08:59 GMT Roo
Re: It's easy to rubbish OpenSSL now.
"Calling the developers incompetent is easy to do now. But, EVERY person that used OpenSSL and never reviewed it, can receive the same label. Sadly, that includes me."
I don't think name-calling is going to help either, but I think this particular incident has shown that the community at large should be questioning whether these guys have an adequate track record of producing good code and fixing the broken code.
At the end of the day if these guys are demonstrably incompetent when it comes to writing a key component of security infrastructure, I think folks have a moral duty to report it - however uncomfortable it may be, because trusting untrustworthy code causes avoidable pain.
-
Wednesday 23rd April 2014 10:35 GMT r00ty
Re: It's easy to rubbish OpenSSL now.
"At the end of the day if these guys are demonstrably incompetent when it comes to writing a key component of security infrastructure, I think folks have a moral duty to report it - however uncomfortable it may be, because trusting untrustworthy code causes avoidable pain."
But, that's pretty much the point I'm trying to make. The code was "out there" for two years. But never properly audited. Just happily used by small and large scale users alike. So, at least some of the burden lies on all of us for the blind trust put into this tool for which it could have been properly audited by anyone, at any point.
I think the more important point is that up until now, OpenSSL was regarded as not only good, or fine code. But, simply the de-facto standard tool for the purposes it covers. So, knocking the guys that wrote it now is kicking someone when they're down.
What will libressl look like in 15 years, once an untold number of other developers have each had their hand at extending the functionality? What vulnerabilities might lie below the layers of functionality by then? Unless some regular auditing takes place from either libressl or an OpenSSL 2.0. We'll be re-visiting this situation sooner, or later.
-
Wednesday 23rd April 2014 15:12 GMT Bartholomew
Re: It's easy to rubbish OpenSSL now.
>>> What will libressl look like in 15 years, once an untold number of other developers have each had their hand at extending the functionality? What vulnerabilities might lie below the layers of functionality by then? Unless some regular auditing takes place from either libressl or an OpenSSL 2.0. We'll be re-visiting this situation sooner, or later.
You literally must have never heard of OpenBSD to say that.
Read "Audit Process" @ http://www.openbsd.org/security.html
-
Wednesday 23rd April 2014 16:29 GMT r00ty
Re: It's easy to rubbish OpenSSL now.
"You literately must have never heard of OpenBSD to say that.
Read "Audit Process" @ http://www.openbsd.org/security.html"
Not wanting to rain on your parade here. But, since they've been following this process since 1996 (before OpenSSL 0.9.1) and since they say they perform a file by file analysis of every critical file component (I would say, OpenSSL is critical). Surely OpenSSL would have been part of this process. Yet, both 5.3 and 5.4 seem to have "shipped" with the bug in.
Also this is an audit process. My comment is about the way the code will evolve, in the same way presumably OpenSSL has since 1998. Over the years, styles change, active developers change, concepts change and a serious amount of obsolete code (which isn't easy to identify) is present. I'm not convinced the auditing will prevent this from happening.
So, I think my point stands.
-
-
-
-
-
-
Wednesday 23rd April 2014 09:32 GMT Roo
Re: "a new version that they say will be more trustworthy"
"Now, what does the NSA have to say ? I'm sure it's willing to participate in this project and give valuable insight into security procedures and encryption techniques."
Funnily enough OpenBSD did receive a $2.3m DARPA grant which was pulled shortly before it expired in 2003. Theo thought it might have been something to do with his speaking out against the Iraq war at the time. I suspect the NSA would love to help out OpenBSD, but some Shrub-Loving-Right-Whinger in the funding dept doesn't like handing cash out to people who bad mouth wars. ;)
-
-
Wednesday 23rd April 2014 11:43 GMT Anonymous Coward
Unfair criticism
I think people put way too much value or "weight" on open source software. That's not to say it hasn't any value, don't get me wrong, but the constantly used argument that "many eyes are more likely to fix bugs" is flawed in some ways.
First of all, the obvious: different people, different coding styles, which automatically makes it harder to follow someone's programming style (or logic) if it isn't a style you'd normally use. This goes double if the programmer doesn't document his code either.
Then there's the issue that the argument also assumes that the majority of users are actually interested enough to go over said source code. I don't have any statistical data myself, but I still can't help thinking that in comparison only a small number of people would actually take the time and effort to go over the source code before they use the software.
But another thing.. I'm also a bit sceptical that this will solve the problems, even though I'll be the first to agree that if any team of coders can do it then it's the OpenBSD team. After all, they're all about security first, constantly weighing security against ease of use, where the former more often gets more weight than the latter.
Thing is; I still remember the Debian OpenSSL disaster from 2008. Here we had a package maintainer who considered it a good idea to change the original software to make it better "fit in" with the distribution (which seems to be a common trend amongst Linux distributions these days, a development which I'm quite sceptical about). Only 2 years later did the team finally discover that instead of enhancing the software they actually broke the random generator.
Not only did this incident show us how popular and widely used Debian actually is, it was also yet another prime example of a very popular open source software package which despite all the attention could "run amok" for nearly 2 years. Not just that; the problem was even at the very heart of the program, yet still went undiscovered.
And before you claim that this was "only" one Linux distribution; don't forget that Debian is one of the most commonly forked distributions out there. And one which had a dedicated team for OpenSSL maintenance as well.
So with that in mind I also think it's a bit unfair to put the full blame on the OpenSSL team.
And although I can understand the motivations of the OpenBSD team, I can't help thinking that we'd probably benefit a whole lot more if they'd be willing to spend some of their programming resources helping to make OpenSSL even better than it is today. Especially if you consider that OpenSSL is one of the very few "standards" we have in the wonderful world of open source software.
What do I mean by that? Some people use Linux, I'm a FreeBSD user myself, and guess what? We both use OpenSSL.
-
Wednesday 23rd April 2014 17:37 GMT Mr Flibble
Re: Unfair criticism
As one of the comments at http://research.swtch.com/openssl points out, that particular bug could have been avoided with one simple comment.
I fully expect the OpenBSD devs to make some silly mistakes along the way despite their review processes, but I still think that we'll all end up with a significantly better SSL library.
-
-
Wednesday 23rd April 2014 12:05 GMT Bartholomew
Security first, everything else second.
Look at OpenNTPD: compare their code to the ntp source. It is not perfect, but all the really scary syntax, from a security perspective, is gone. I'm sure that the same will happen with their LibreSSL implementation.
If the foundation of your security is in a swamp, or sitting on top of an active volcano, it is time to move house. And you have to respect the OpenBSD team for doing just that. Will their solution be perfect? Probably not. Will it be secure? As close as you will get without spending billions.
-
Saturday 26th April 2014 22:23 GMT Henry Wertz 1
I hope this doesn't get embarrassing...
I do hope this doesn't get embarrassing -- as in, LibreSSL introducing security bugs and flaws that OpenSSL did not have. I was actually NOT expecting flaws like the one AC @ post #2 found (failing to check whether malloc succeeded); I assumed OpenBSD code practices would require careful checking throughout. But I *would* expect them possibly to miss a higher-level sanity check or two when they start moving and removing chunky chunks of code.