Anatomy of OpenSSL's Heartbleed: Just four bytes trigger horror bug

The password-leaking OpenSSL bug dubbed Heartbleed is so bad, switching off the internet for a while sounds like a good plan. A tiny flaw in the widely used encryption library allows anyone to trivially and secretly dip into vulnerable systems, from your bank's HTTPS server to your private VPN, to steal passwords, login …

COMMENTS

This topic is closed for new posts.
  1. bonkers

    I don't get it..

    I'm wondering again how code gets written without bounds-checking, on "message length" parameters. It's not the first time is it?

    Is the leaked data simply the junk that was in de-assigned memory? It looks like the kind of important stuff you might not want to write over - let alone send over the internet.

    Perhaps, as a general rule, apart from the obvious bounds checking, one should clear all memory as it becomes (re-)assigned? - or, better, on de-assignment.

    Perhaps these under-runs and their over-run brethren should be detected and escalated as a general principle.

    just suggesting, perhaps we could be a bit less crap at everything?

    1. diodesign (Written by Reg staff) Silver badge

      Re: I don't get it..

      "Is the leaked data simply the junk that was in de-assigned memory?"

      Yeah, it appears to be dead or alive blocks of memory allocated via some malloc()-like magic. If dead, one wonders why it wasn't zeroed on release.

      "just suggesting, perhaps we could be a bit less crap at everything?"

      This is why I'm learning Rust for its better pointer and array bounds handling, tho I'm not sure it could have helped here.

      C.

      1. Gunnar Wolf
        Linux

        Rust would help, but there's a reason it's not used there

        System libraries usually need to be implemented in the most efficient possible way. That efficiency is achieved by working as close as possible to the "bare metal" — And C gets you there. For code that will be executed thousands of times every minute, in millions of servers all around the world (such as OpenSSL), this efficiency is a must.

        And writing in a language without the memory management bits we have come to take for granted does not come without a price. Writing in C means you have to be much more careful — precisely because of this kind of issue.

        1. Destroy All Monsters Silver badge
          Windows

          This attitude is not the key to success

          System libraries usually need to be implemented in the most efficient possible way. That efficiency is achieved by working as close as possible to the "bare metal" — And C gets you there.

          BOLD TALK ... FROM THE EIGHTIES! Well, already in 1984: The Lilith

          Writing in C means you have to be much more careful

          THIS ZIMMER FRAME REALLY GETS ME THERE FASTER, I JUST HAVE TO BE CAREFUL WHEN GOING DOWNSTAIRS. SURE I BROKE MY NECK A FEW TIMES, BUT IT'S NOT GONNA HAPPEN AGAIN.

          1. Nick Ryan Silver badge

            Re: This attitude is not the key to success

            System libraries usually need to be implemented in the most efficient possible way. That efficiency is achieved by working as close as possible to the "bare metal" — And C gets you there.

            BOLD TALK ... FROM THE EIGHTIES! Well, already in 1984: The Lilith

            Writing in C means you have to be much more careful

            THIS ZIMMER FRAME REALLY GETS ME THERE FASTER, I JUST HAVE TO BE CAREFUL WHEN GOING DOWNSTAIRS. SURE I BROKE MY NECK A FEW TIMES, BUT IT'S NOT GONNA HAPPEN AGAIN.

            This kind of attitude to coding is exactly why many current applications and indeed operating systems are so staggeringly inefficient and slow compared to the equivalent of even a few years ago, despite the hardware being orders of magnitude faster.

            The lower-level the API, the less appropriate it is to implement it using "managed" code. If you had an understanding of just how much more processor resource (memory and CPU cycles) is consumed by managed code than unmanaged code, then you would understand. Some things are appropriately implemented one way, some another. No one programming technique is appropriate for all cases, and attempting to use one across all, or to use the wrong technique, is utterly stupid.

        2. John Hughes

          Re: Rust would help, but there's a reason it's not used there

          So, we use C because it's fast.

          And it's fast because it has no bounds checking.

          And we need bounds checking.

          So we add it to our C code, except when we forget.

          Isn't there some problem here?

          1. Nick Ryan Silver badge

            Re: Rust would help, but there's a reason it's not used there

            That is the problem. There are some very clever code analysis systems that can help to spot these kinds of mistakes, but they can't spot everything.

          2. Steve Graham

            Re: Rust would help, but there's a reason it's not used there

            ...or except when we deliberately remove checking for performance reasons.

            http://article.gmane.org/gmane.os.openbsd.misc/211963

        3. Tim 11

          Re: Rust would help, but there's a reason it's not used there

          according to wikipedia "Performance of safe code is expected to be ... comparable to C++ code that manually takes precautions comparable to what the Rust language mandates"

          If it has met those objectives, then it seems to me you'd have to have a pretty compelling reason not to use Rust

      2. Jim 59

        Re: I don't get it..

        Modern kernels tend to leave memory unzeroed even after it is "freed", often for virtual memory / performance reasons. Even though the memory is free and available for re-use, pointers are maintained to it in case the same data is needed again soon afterwards. E.g. Solaris 10. Upon being malloced/assigned to a different process, it is then zeroed, obviously.

        This instance seems to be a case of a process requesting data from a legitimate partner process, where the two already have a legitimate, authenticated relationship. So I am not sure how the kernel/system could prevent that. It doesn't know about the application's (openssl) data design.

      3. Bill Stewart

        Re: I don't get it..

        It's able to grab whatever 64KB off the heap is near the object it's supposed to be able to ask for, so that can include memory from live or dead objects, because C doesn't stop you from shooting yourself in the foot by running off the end of an array.

        The reason the memory of the dead objects wasn't zeroed on release is that, by default, OpenSSL keeps its own pool of memory and doesn't bother using malloc() very often (because on some systems, that might be slow, which would make managers sad), so OpenSSL doesn't call free() when it's done with those objects, and therefore if you've got a malloc()/free() system that has extra protection, like zeroing stuff or putting guard pages after chunks of memory to keep you from running off the ends, it doesn't waste time doing that.
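        For the curious, the flawed copy and the shipped fix boil down to something like this (a simplified sketch with illustrative names, not the actual OpenSSL source):

        ```c
        #include <assert.h>
        #include <stddef.h>
        #include <string.h>

        /* Simplified sketch of the Heartbleed flaw -- illustrative names, not
         * the actual OpenSSL code. The peer controls claimed_len; the record
         * on the wire actually carries only record_len bytes of payload. */
        size_t build_heartbeat_response(const unsigned char *payload,
                                        size_t claimed_len, size_t record_len,
                                        unsigned char *out)
        {
            /* The bug was, in effect, trusting claimed_len unconditionally:
             *     memcpy(out, payload, claimed_len);
             * which reads up to 64KB past the end of the real record. */

            /* The fix amounts to one bounds check before the copy: */
            if (claimed_len > record_len)
                return 0;             /* silently discard the malformed beat */
            memcpy(out, payload, claimed_len);
            return claimed_len;
        }
        ```

        With the check in place, a request claiming 64KB against an 8-byte record gets dropped instead of echoing adjacent heap.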

        So yeah, modern Linuxes give you lots of cool tools, but they're not compiled in by default.

        C is still my favorite programming language after all these decades, but most people really shouldn't be allowed to use it, certainly not without extensive oversight of anything security-critical.

      4. beavershoes

        Re: I don't get it..

        "one wonders why it wasn't zeroed" -- It's just a heartbeat. All one has to do is compare the claimed length with the message actually received. If they don't match, ignore it. You're done. What you are suggesting would make it easier to pull off a denial-of-service attack. Next you'll be suggesting rebooting between heartbeats.

    2. HollyHopDrive

      Re: I don't get it..

      Call me paranoid....Who would like to be able to get userids and passwords without tricky legal issues.....

      Tin foul hat at the ready.....

      1. Anonymous Coward
        Anonymous Coward

        Re: I don't get it..

        > Tin foul hat at the ready.....

        You're meant to wear it on your head. That's the problem right there.

        I share your paranoia; but this looks more like a slacker's foul up that nobody noticed rather than alphabet interference. Be interesting to see who submitted the code though, just in case.

        1. Anonymous Coward
          Anonymous Coward

          Re: I don't get it..

          supplemental:

          Submitted here:

          http://git.openssl.org/gitweb/?p=openssl.git;a=commit;h=4817504d069b4c5082161b02a22116ad75f822b1

          (it was in another Reg story). Also, this obligatory link:

          http://xkcd.com/1353/

          1. This post has been deleted by its author

            1. asdf

              Re: I don't get it..

              Just to shame the asshat that caused all this misery. Robin Seggelmann you sir are nobody's hero right about now.

              1. asdf

                Re: I don't get it..

                I'll take the downvotes for posting the author's name above. Yes, it's a common mistake, but if you are going to screw up code, make sure it's not code half the world uses. And yes, many other people are responsible as well, especially his main reviewer Stephen Henson (a Brit, I assume).

    3. Destroy All Monsters Silver badge

      Re: I don't get it..

      just suggesting, perhaps we could be a bit less crap at everything?

      "C"

      The path starts here.

      1. regadpellagru
        FAIL

        Re: I don't get it..

        ""C"

        The path starts here."

        @Destroy,

        We'll all get tons of downvotes for this obviously, but I completely agree with you. C does not have any notion of objects, their size, bounds checks, etc.

        It's time IMHO to start using a really secure language for those critical security components.

        This fiasco wouldn't have happened in a lib written in Ada.

        You can't trust a language which allows an array (erm, sorry, a pointer, since arrays don't really exist) to access memory anywhere with no control.

        1. oldcoder

          Re: I don't get it..

          no.. it wouldn't.

          But you also can't write low-level runtime libraries in Ada either.

          It is either too slow, or the language itself prevents you from doing the things necessary.

    4. Anonymous Coward
      Anonymous Coward

      Re: I don't get it..

      I don't get it either, all the open source morons have been saying for years their OSS crap is more secure, then we get things like this. Oh and the 23 year old x windows vuln exposed a few months ago.

      Hint: down arrow is below, morons lol :)

      1. Nick Ryan Silver badge

        Re: I don't get it..

        I don't get it either, all the open source morons have been saying for years their OSS crap is more secure, then we get things like this. Oh and the 23 year old x windows vuln exposed a few months ago.

        Hint: down arrow is below, morons lol :)

        Nice troll.

        Mistakes are made equally in Open Source Software and Closed Source Software. The point with OSS is that it can be made more secure. This kind of fault in closed source may never get spotted or reported and then you'll be in an even worse situation where you don't know about the fault or how long it's been there.

        1. Sander van der Wal
          Holmes

          Re: I don't get it..

          Only problem is, nobody is actually examining Open Source for such errors.

          Apart from the alphabet soup agencies, the malware writers, and the companies that make money finding defects, that is. The only people who do have an interest in finding exploitable defects.

          Clearly, while the reasoning behind the "many eyes" idea appears to be reasonable, in practice it does NOT work. A bit of economic theory will tell you immediately why this is so, and also why the bad guys are the most likely people to find these defects.

        2. Trevor 3

          Re: I don't get it..

          More importantly, with OSS, you can either wait for the fix to be released, or get the code and build your own.

          Or stick the compile options workaround in.

          Its open to you, and up to you.

        3. Anonymous Coward
          Anonymous Coward

          Re: I don't get it..

          "This kind of fault in closed source may never get spotted"

          Or exploited then.

          "where you don't know about the fault or how long it's been there."

          I don't know with Open Source either. What I do know is that it's much easier to go find new holes in Open Source given the motivation as you can look at the source code...

          1. Tufty Squirrel

            Re: I don't get it..

            >> I don't know with Open Source either. What I do know is that it's much easier to go find

            >> new holes in Open Source given the motivation as you can look at the source code...

            Cobblers. Holes are mainly found by fuzzing, not by poring through source code. Exploits rely on code mishandling user-supplied data - fuzzing involves sending enormous quantities of deliberately broken data at something until it does something it's not supposed to. This is far easier than having to work out what some piece of logic is supposed to be doing, what it's actually doing, and why it's broken in this or that edge case. Chuck a load of crap at a victim machine (that you also control), wait for it to go bang, and then work out what you are going to be able to do while the smoke's clearing.

            http://en.wikipedia.org/wiki/Fuzz_testing
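            The idea can be sketched in a few lines of C (a toy parser and harness with made-up names - a real fuzzer like the ones used against OpenSSL is far more sophisticated):

            ```c
            #include <assert.h>
            #include <stdint.h>
            #include <stdlib.h>

            /* Toy record parser: [1-byte claimed length][payload...]. Returns
             * the payload length it would echo back, or -1 for a malformed
             * record. Purely illustrative -- not any real protocol. */
            int parse_record(const uint8_t *buf, size_t buflen)
            {
                if (buflen < 1)
                    return -1;
                size_t claimed = buf[0];
                if (claimed > buflen - 1)     /* the Heartbleed-style check */
                    return -1;
                return (int)claimed;
            }

            /* Minimal fuzz loop: throw random junk at the parser and assert
             * it never accepts more payload than actually arrived. */
            void fuzz(unsigned iterations, unsigned seed)
            {
                uint8_t buf[64];
                srand(seed);
                for (unsigned i = 0; i < iterations; i++) {
                    size_t n = (size_t)(rand() % (int)sizeof buf);
                    for (size_t j = 0; j < n; j++)
                        buf[j] = (uint8_t)rand();
                    int r = parse_record(buf, n);
                    assert(r == -1 || (size_t)r <= n - 1);
                }
            }
            ```

            Note the fuzzer needs no understanding of the parser's logic at all - it only checks that the invariant holds, which is exactly why it scales better than source-reading.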

      2. Jim 59

        Re: I don't get it..

        Hi Tombo, software will never be totally secure, any more than code will ever be perfect. I think open and closed source are both good, but your citing of two bugs in 25 years is hardly an impressive argument against FOSS.

      3. This post has been deleted by its author

    5. Hans 1

      Re: I don't get it..

      Obviously the $1,000,000 question is why expect a length parameter at all? That is metadata that can be calculated quite trivially.

      1. richardcox13

        Re: I don't get it..

        > why expect a length parameter at all ? That is metadata that can be calculated quite trivially.

        How? A socket connection is just a stream of octets, there are no record delimiters (except as provided by your own protocol).

        And then you need to deal with partial data (e.g. an interruption on the network).
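        That stream-of-octets point can be sketched as a minimal framing reader (illustrative code, not any real protocol):

        ```c
        #include <assert.h>
        #include <stdint.h>
        #include <string.h>

        /* Sketch of why a length prefix matters on a byte stream: the reader
         * accumulates octets and only yields a record once the 2-byte length
         * header and the full payload have both arrived. Illustrative only. */
        typedef struct {
            uint8_t buf[1024];
            size_t  have;
        } framer;

        /* Feed bytes as they trickle in off the socket; returns the payload
         * length once a complete record is buffered, -1 while data is still
         * partial (or would overflow this toy's fixed buffer). */
        int framer_feed(framer *f, const uint8_t *data, size_t n)
        {
            if (f->have + n > sizeof f->buf)
                return -1;                       /* real code would error out */
            memcpy(f->buf + f->have, data, n);
            f->have += n;
            if (f->have < 2)
                return -1;                       /* header incomplete */
            size_t len = ((size_t)f->buf[0] << 8) | f->buf[1];
            if (f->have < 2 + len)
                return -1;                       /* payload incomplete */
            return (int)len;
        }
        ```

        Without the prefix, the reader has no way to tell "record finished" from "rest still in flight".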

        1. regadpellagru

          Re: I don't get it..

          "> why expect a length parameter at all ? That is metadata that can be calculated quite trivially.

          How? A socket connection is just a stream of octets, there are no record delimiters (except as provided by your own protocol).

          And then you need to detect with partial data (eg. interruption on the network)."

          A long time ago, XDR solved those problems. It just needs to be used.

          http://man7.org/linux/man-pages/man3/xdr.3.html

          1. Michael H.F. Wilkinson Silver badge

            Re: I don't get it..

            I would suggest the key issue is that there are two specifications of the length of the data, not one. One has to wonder about the reason for this redundancy (it may be useful in another context, I do not know enough about the SSL libraries and protocols), but here it causes a problem. It could be used to check for malformed heartbeats, of course, but the moment you store information in two places, and fail to ensure consistency of the information, you can get into trouble.

            Using a calloc rather than malloc to allocate the space for the incoming heartbeat data based on the SSL3 length field and then storing the payload_length size chunk in it (after checking payload_length<=SSL3_record.length) should have avoided the problem, I would think. Of course calloc could be a touch more costly than malloc, but in the context of security (or indeed delays in the network) I would think this hardly figures in the grand scheme of things.
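            That calloc-plus-check suggestion might look something like this (a hedged sketch - the function and field names are made up, not OpenSSL's):

            ```c
            #include <assert.h>
            #include <stdlib.h>
            #include <string.h>

            /* Sketch of the suggestion above: validate payload_length against
             * the record length, then build the response in a calloc()ed
             * (zeroed) buffer so any uncopied padding leaks nothing. */
            unsigned char *make_response(const unsigned char *payload,
                                         size_t payload_length,
                                         size_t record_length)
            {
                if (payload_length > record_length)
                    return NULL;                    /* malformed heartbeat */
                /* 16 extra bytes stand in for response header/padding. */
                unsigned char *resp = calloc(1, payload_length + 16);
                if (resp == NULL)
                    return NULL;
                memcpy(resp, payload, payload_length); /* checked bytes only */
                return resp;
            }
            ```

            Even if the check were missed, the calloc()ed buffer would hand back zeroes rather than old heap contents, which is the belt-and-braces point being made.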

            Just my tuppence

            1. Steve T

              Re: I don't get it..

              It isn't stored in two places. The count inside the packet is sent by the originator, and is supposed to say how big the payload is. The other count is the size of the packet actually received - it could differ for example if there was an I/O error of some kind. Having two sources of the same info allows it to be verified, but this was unfortunately missed.

              It really is unpleasant to see how many people are dissing the folk who provide the whole world with free and usually superb-quality software.

    6. Anonymous Coward
      Anonymous Coward

      Re: I don't get it..

      Hmmm... [checks wrist watch, confirms 'Yes, it's 2014...'] perhaps the compilers could be programmed to watch for such things. The compilers could essentially follow the input data around the code and make sure that somebody, somewhere has "sanitized" it. They already do similar checks on code; this is a tiny forward step.

      I'm not a coder drone, but once upon a time I wrote a nice 30+ foot long chunk of code (overnight) for serious industrial purposes. Every single last module I wrote checked and double-checked all the inputs and all the outputs, both on entry and on exit. That chunk of code was used for about a decade with zero bugs.

      1. I Am Spartacus
        Facepalm

        Re: I don't get it..

        AC suggested he is not a coder drone and proposes that bounds checking be done in the compiler.

        The first problem is that at compile time the bounds are not known. So the compiler can't check.

        The second problem is that, in the kernel of an O/S, especially in Unix/Linux type of O/S's, there are many places where bounds checking is just inappropriate. Very carefully controlled ways of ignoring bounds checking are used so that your PC responds fast, at the speed you want. Context Swaps, Process Creation, memory paging, device IO. These are things that need to be done fast and efficiently.

        There is no doubt that Heartbleed is a big bug. But it was a simple mistake. It was not deliberate. So holding up the whole of the FOSS community to ridicule, and the author of this code specifically, is pointless. In any event, the patch was out the same day it was discovered and people were patching their SSL code straight away. Ask how quickly microsoft/IBM/Oracle/Sybase come up with fixes to such problems.

        1. Steve T

          Re: I don't get it..

          Indeed. Within about two hours of hearing about this bug, my Ubuntu desktop got an automatic fix. I'd like to see Microsoft respond like that ;)

          1. Anonymous Coward
            Anonymous Coward

            Re: I don't get it..

            But when you heard about it wasn't when it was published. Several hours went by between announcements and patches.

            Microsoft release patches at the SAME TIME they make such announcements...

            1. oldcoder

              Re: I don't get it..

              But you don't know how long Microsoft sat on the announcement...

              It could have been several days... or 17 years.

    7. The Man Who Fell To Earth Silver badge
      Boffin

      This article contains a major flaw

      This article glosses over the issue that the buffer over-read returns RANDOM DATA, not specifically keys or passwords. An actual attack would generally require a whole lot of queries, each returning 64K of RANDOM DATA. Such an ACTIVE attack might be noticed, for starters, and is not assured of ever returning useful data. Having said that, I certainly don't want to downplay this vulnerability. But even this article is, as its first sentence shows, overly alarmist rather than rational.

    8. LarryLain

      Re: I don't get it..

      After reviewing the code, I can't help but wonder if C is an appropriate language for critical stuff like ssl. A language where the programmer has the power to return the contents of a chunk of memory to the caller in a critical area like this would be akin to allowing bank tellers access to the entire contents of the bank's vaults to service customer requests. It just makes very little sense.

      Time for a rethink and a re-write I think.

  2. Anonymous Coward
    Anonymous Coward

    Simple script?

    Not that simple - unless you can remember 50 lines of cipher hex codes off the top of your head.

    What language is it anyway? Looks like some fucked up version of python. Ruby maybe?

    1. bigtimehustler

      Re: Simple script?

      When did simple=being able to remember it off the top of your head?

      I can't remember the entirety of many web applications I have had to write in my time off the top of my head, and a fair number of them I would regard as simple. Simple can mean that when you read it, making sense of what it does and being able to write a variant is easy. Which, in this case, it seems like it is.

    2. diodesign (Written by Reg staff) Silver badge

      Re: Simple script?

      "Ruby maybe?"

      Bingo. And by simple, I meant there's no screwing around with race conditions, crafting complicated structures, dodging ASLR, building ROP chains and what not. Just simply lie in a length header. Take the rest of the year off.

      C

      1. Anonymous Coward
        Anonymous Coward

        Re: Simple script?

        "And by simple, I meant there's no screwing around with race conditions, "

        Err , you should never get race conditions on a half duplex protocol.

        "dodging ASLR, building ROP chains and what not."

        What not? Oh dear. No more acronyms to impress us with? Shame.

        Most people definitions of "simple" mean something like a 10 line script sending one or 2 strings down the line. Not 300 lines of code doing challenge response.

        "Take the rest of the year off."

        Thanks for the advice. Here's some for you - buy yourself a dictionary.

        1. Destroy All Monsters Silver badge
          Facepalm

          Re: Simple script?

          Most people definitions of "simple" mean something like a 10 line script sending one or 2 strings down the line. Not 300 lines of code doing challenge response.

          "Most people" are fucking idiots challenged by the simple task of cleaning up the stall behind themselves.

          Sending one or 2 strings down the line is not "simple", it's a problem for the "differently abled" (or more charitably, for "first steps" exercises)

          Doing challenge response in a 10 line script that can be read and understood by the tester is "simple" and done at the right level of abstraction.

          Check out Erlang, then report back, mkay?

        2. Anonymous Coward
          Anonymous Coward

          Re: Simple script?

          boltar - "Most people definitions of "simple" mean something like a 10 line script sending one or 2 strings down the line. Not 300 lines of code doing challenge response."

          Well, I'm not a dev, but I read and understood what's going on with the script, with no great difficulty. Probably wouldn't have done, if it'd have been in C or C++

          You seem to be insisting the script must be complicated, because you're too stupid to understand it... maybe the explanation isn't that the script is complicated ....

          1. Anonymous Coward
            Anonymous Coward

            Re: Simple script?

            > You seem to be insisting the script must be complicated, because you're too stupid to understand it

            And looking through his post history I would say you have hit the nail on the head.

          2. Anonymous Coward
            Anonymous Coward

            Re: Simple script?

            "Well, I'm not a dev, but I read and understood what's going on with the script, with no great difficulty. Probably wouldn't have done, if it'd have been in C or C++"

            Like the other dimwits on here you're confusing "understand" with "simple". I understand most of the 230K-line C++ framework I'm debugging at the moment - I wouldn't call it simple. I understand everything this script does, but I wouldn't call it simple. printf("hello world\n"); is simple. This script isn't.

            Interestingly I got 7 thumbs down for pointing out that the script is NOT written in C. Which I think shows the general IQ level of the posters on this group. Doubtless these knuckle dragging mouth breathers will mark this down too.

            1. diodesign (Written by Reg staff) Silver badge

              Re: Re: Simple script?

              "Which I think shows the general IQ level of the posters on this group. Doubtless these knuckle dragging mouth breathers will mark this down too."

              I think you're being downvoted because you're coming across as a bit fighty.

              C.

              1. Anonymous Coward
                Anonymous Coward

                Re: Simple script?

                "I think you're being downvoted because you're coming across as a bit fighty."

                Awww. Did I upset some delicate ickle wickle flowers on here?

                Good :o)

                1. Anonymous Coward
                  Anonymous Coward

                  Re: Simple script?

                  Actually, you're being downvoted because you sound like a prick. He was being nice. No-one's upset, they just think you're a stroker.

              2. TchmilFan

                Re: Simple script?

                Uther, can we get some muffins in here?

        3. Daniel B.
          Boffin

          Re: Simple script? @boltar

          Most people definitions of "simple" mean something like a 10 line script sending one or 2 strings down the line. Not 300 lines of code doing challenge response.

          Are you a script kiddie? I didn't get ROP but I do know what ASLR is. And indeed the script is simple, as the only thing it does is send a malformed packet (the phony heartbeat request) and get the juicy bytes in response. Compared to the weirdness usually involved with exploits like stack smashing/injecting shell code, it's pretty straightforward.

          1. Anonymous Coward
            Anonymous Coward

            Re: Simple script? @boltar

            "Are you a script kiddie? I didn't get ROP but I do know what ASLR is"

            Good for you - I've actually implemented ASLR FWIW.

            My point - which obviously you didn't get, quelle surprise - is that dropping acronyms into a post in an attempt to gain gravitas usually has the opposite effect.

            1. diodesign (Written by Reg staff) Silver badge

              Re: boltar

              "dropped acronyms into a post in an attempt to gain gravitas"

              No, that wasn't my intention.

              C.

              1. Anonymous Coward
                Anonymous Coward

                Re: boltar

                "No, that wasn't my intention."

                Well, that's not how it came across.

                1. diodesign (Written by Reg staff) Silver badge

                  Re: boltar

                  OK. Well, hopefully we can move past that and maybe now we can get back to the technicals - such as mitigation. You say you've implemented ASLR, so any thoughts?

                  Guard pages around individual sensitive allocations, causing this memcpy() to trigger a fault? It burns up virtual address space a bit, but worth it IMHO.
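                  A guarded allocation along those lines can be sketched with POSIX mmap()/mprotect() (illustrative only, not a production allocator - no free routine, and it deliberately wastes address space):

                  ```c
                  #define _DEFAULT_SOURCE              /* for MAP_ANONYMOUS */
                  #include <assert.h>
                  #include <string.h>
                  #include <sys/mman.h>
                  #include <unistd.h>

                  /* Put the buffer flush against a PROT_NONE page so any read
                   * or write past the end faults immediately instead of
                   * quietly leaking adjacent heap. */
                  void *guarded_alloc(size_t size)
                  {
                      size_t page = (size_t)sysconf(_SC_PAGESIZE);
                      size_t span = (size + page - 1) / page * page;
                      unsigned char *base = mmap(NULL, span + page,
                                                 PROT_READ | PROT_WRITE,
                                                 MAP_PRIVATE | MAP_ANONYMOUS,
                                                 -1, 0);
                      if (base == MAP_FAILED)
                          return NULL;
                      if (mprotect(base + span, page, PROT_NONE) != 0)
                          return NULL;               /* guard page armed */
                      /* Return a pointer so the buffer ends at the guard. */
                      return base + span - size;
                  }
                  ```

                  The cost is an extra page of virtual address space per allocation, which is roughly the trade-off mentioned above.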

                  There's also this: http://article.gmane.org/gmane.os.openbsd.misc/211963

                  C.

                  1. launcap Silver badge
                    Thumb Up

                    Re: boltar

                    >Guard pages around individual sensitive allocations, causing this memcpy() to trigger a fault?

                    Or something akin to what IBM S/370 mainframes do (S/360 maybe too but I've only experience of writing assembler on an S/370 running TPF) - the first byte of each core block (4k - yay!) has a block owner ID and any attempt to read or write to that byte by something that isn't the process owner triggers a fatal error..

                    Of course, all that means that your corewalker just has to avoid the first byte and it's safe but the principle can be extended.

                    1. Sysprog Steve
                      Meh

                      Re: boltar

                      Regarding S/370 (now z/OS) memory allocation, this statement, "Of course, all that means that your corewalker just has to avoid the first byte and it's safe but the principle can be extended"

                      The part of memory that shows who "owns" it is not addressable by user programs, only by the OS. You don't get to skip it.

                      Depending on the implementation, though, that is not necessarily bulletproof. If the memory is owned by the service provider, such as the SSL task, rather than the requestor, there is no restriction.

                      The mainframe OS has other mechanisms to avoid this sort of problem, some based on architecture, some based on the security software (which is forced through certain hoops BY the architecture.)

                      Not fool-proof, certainly, but it makes this sort of vulnerability exceedingly rare.

                      Cheers,,,Steve

                  2. Anonymous Coward
                    Anonymous Coward

                    Re: boltar

                    "so any thoughts?"

                    Ah , the old devil and deep blue sea type question - if I answer I'll be accused of just quoting wikipedia, if I don't I'll be accused of bluffing. Either way I lose.

                    Fine, I'll bite - it was for a data space client-facing server. Multiple separate blocks of memory allocated, with various bits of important data scattered and interleaved among them, with a core encrypted index. Not OS-level ASLR, but a similar concept.

      2. This post has been deleted by its author

  3. Bruno Girin

    In theory this should never have happened because malloc should have thrown a wobbly at copying that memory. In practice, it appears that OpenSSL is using unconventional memory allocation logic: http://article.gmane.org/gmane.os.openbsd.misc/211963

    I don't roll my own crypto code because I am no cryptographer, and I trust crypto specialists to do it right. So why can't crypto specialists trust the OS to do memory allocation correctly and let the kernel devs deal with that, rather than trying to roll their own?

    Sigh...

    @boltar this is C code.

    1. Anonymous Coward
      Anonymous Coward

      "@boltar this is C code."

      The code in the text yes , the script - no.

    2. Warm Braw

      "Trust the OS" - If only it were that simple...

      Memory allocation is always going to be a problem because the "right" strategy depends on the application. Using malloc for every PDU you receive (for example) would be madness. And OpenSSL isn't even an application; it's a library used by applications which use memory for other things and might not want OpenSSL to get memory on a first-come-first-served basis. There's no way malloc could "know" what the best thing to do would be in every circumstance. Applications often keep several separate memory pools, and libraries often expect (or at least allow) the calling application to do their memory allocation for them. And of course OpenSSL runs on a wide variety of platforms, and malloc performs rather differently across all of them.

      The big issue here is simply not validating the received PDU (which could simply be random data, even if not malicious).

      It would be a good idea to have separate memory pools with no-access pages between them to segregate different types of in-process data from buffer overruns, but you'd pretty much have to write your own allocator for that purpose too.

      1. Nick Ryan Silver badge

        Re: "Trust the OS" - If only it were that simple...

        This exploit isn't about buffer overruns as such - that is where you throw too much data at a process and it overwrites adjacent memory (sometimes even return addresses or executable code) with whatever you threw at it. This exploit cannot be detected using OS-level memory bounds checking, because it is not violating any page-level memory bound.

        When an application allocates memory, this memory is in an "undefined" state. For a cold started system or a block of memory that has never been allocated yet, this memory is usually all zeroes, however there is no guarantee of even this. Hence "undefined".

        This exploit requests up to 64k of memory to be echoed back. Due to deficiencies in the code, the reply is built by copying far beyond the end of the tiny message actually received, so it carries whatever that neighbouring, previously used memory happened to contain. It's pot luck what is in this 64k block of memory, but keep on sending requests and you will eventually get something interesting back.

        There are various preventatives for this, such as zeroing the memory on allocation, but for a low level library this is inefficient and as the block of memory should have been overwritten entirely a pointless exercise in wasting processor time. Another is to zero the memory on de-allocation, again for many low level processes this is also inefficient as a relatively simple process could then take 20x longer to complete, multiply a low level task by the number of calls to it and the overall system impact could be disastrous. On the other hand, a code process that stores passwords and private keys should damn well clear the memory after use, but again this is an efficiency argument compared to what can be done on an otherwise "trusted" system.
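        To make the flaw concrete, here is a toy model (all names hypothetical, and a single array stands in for adjacent heap allocations, so everything below is defined behaviour): the buggy path copies however many bytes the peer *claimed*, while the fixed path discards the request when the claim exceeds what actually arrived.

```c
#include <stddef.h>
#include <string.h>

/* Illustrative only: 'record' points into a buffer where a "secret"
 * sits right after the received payload, standing in for adjacent
 * heap data. Copying the claimed length instead of the received
 * length drags the neighbouring bytes into the reply - the essence
 * of Heartbleed. */
size_t leaky_echo(const unsigned char *record, size_t claimed_len,
                  unsigned char *reply) {
    memcpy(reply, record, claimed_len);   /* no check: the bug */
    return claimed_len;
}

/* The corrected shape: refuse to echo more than was received. */
size_t safe_echo(const unsigned char *record, size_t received_len,
                 size_t claimed_len, unsigned char *reply) {
    if (claimed_len > received_len)
        return 0;                          /* discard silently */
    memcpy(reply, record, claimed_len);
    return claimed_len;
}
```

      Run against a buffer holding a 4-byte "ping" followed by a secret, the leaky version happily returns the secret; the safe version returns nothing.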

  4. Brenda McViking
    Joke

    Well at least I won't have to change my reg password - as it doesnt even bother with https to protect the login >_<

    1. Forget It

      Brenda McViking tapped out:

      Well at least I won't have to change my reg password - as it doesnt even bother with https to protect the login >_<

      That probably coz most of the time we commentards at the Reg write such nonsense - no hacker could make sense of it.

  5. Zog_but_not_the_first
    IT Angle

    So...

    ... do I have to spend the next hour changing my passwords, or not?

    1. ragnar

      Re: So...

      Presumably we need to wait until Big Bank PLC updates its servers before we reset passwords. As for how soon that will be, who can tell?

  6. GaryDMN

    OpenSSL is open source, most financial institutions don't use open source encryption.

    Free services like Google and Yahoo use OpenSSL, but many commercial sites use Verisign or similar closed source encryption.

    1. Tom Maddox Silver badge
      FAIL

      Re: OpenSSL is open source, most financial institutions don't use open source encryption.

      I'm not even sure where to start on how this is wrong. Let's break it down:

      OpenSSL is a security library used in a number of products, some of which are "open" (OpenSSH, Apache httpd) and some of which are proprietary (Juniper SSL VPN), and you can bet your biscuits that just about every major organization has OpenSSL deployed somewhere.

      Verisign is a certificate authority. All it does is provide signed certificates (unless they have some proprietary security package I don't know about), which is irrelevant to this vulnerability.

      1. Spoddyhalfwit

        Re: OpenSSL is open source, most financial institutions don't use open source encryption.

        IIS is immune to this attack, as it's not using OpenSSL. It's often criticized, but I don't remember any of our IIS servers ever having a vulnerability on this scale. I know I'm going to get downvoted for saying that by the fanatics, but it's true.

        Now I just have to worry about all the services I'm using that do use OpenSSL - my bank, my ISP, etc.

        1. Anonymous Coward
          Anonymous Coward

          Re: OpenSSL is open source, most financial institutions don't use open source encryption.

          > IIS is immune to this attack

          CVE-2013-2566 The RC4 algorithm, as used in the TLS protocol and SSL protocol, has many single-byte biases, which makes it easier for remote attackers to conduct plaintext-recovery attacks via statistical analysis of ciphertext in a large number of sessions that use the same plaintext.

          CVE-2010-3972 Heap-based buffer overflow in the TELNET_STREAM_CONTEXT::OnSendData function in ftpsvc.dll in Microsoft FTP Service 7.0 and 7.5 for Internet Information Services (IIS) 7.0, and IIS 7.5, allows remote attackers to execute arbitrary code or cause a denial of service (daemon crash) via a crafted FTP command, aka "IIS FTP Service Heap Buffer Overrun Vulnerability." NOTE: some of these details are obtained from third party information.

          CVE-2010-2731 Unspecified vulnerability in Microsoft Internet Information Services (IIS) 5.1 on Windows XP SP3, when directory-based Basic Authentication is enabled, allows remote attackers to bypass intended access restrictions and execute ASP files via a crafted request, aka "Directory Authentication Bypass Vulnerability."

          CVE-2010-2730 Buffer overflow in Microsoft Internet Information Services (IIS) 7.5, when FastCGI is enabled, allows remote attackers to execute arbitrary code via crafted headers in a request, aka "Request Header Buffer Overflow Vulnerability."

          Sorry, what was that you were saying? I wasn't paying attention.

          1. Anonymous Coward
            Anonymous Coward

            Re: OpenSSL is open source, most financial institutions don't use open source encryption.

            None of those you listed re IIS are anywhere near as serious as this OpenSSL one (rated 11 out of 10 by Bruce Schneier). No system is bug free, but this OpenSSL one is catastrophic.

          2. Spoddyhalfwit

            Re: OpenSSL is open source, most financial institutions don't use open source encryption.

            If you think those bugs are on a par with heartbleed, you don't understand its seriousness.

            see eg

            http://threatpost.com/difficulty-of-detecting-openssl-heartbleed-attacks-adds-to-problem/105354

            It might mean revoking your SSL certificates and getting new ones. Pricey if you have a lot, and time consuming.

            Why heartbleed is the most dangerous security threat on the web (curiously it doesn't list any of those ones that you thought were as serious)

            http://www.theverge.com/2014/4/8/5594266/how-heartbleed-broke-the-internet

            1. Anonymous Coward
              Anonymous Coward

              Re: OpenSSL is open source, most financial institutions don't use open source encryption.

              First of all, it literally took me seconds to find a large list of remote exploit bugs for IIS. I only listed the first 4 as they appeared in the list; I paid no attention to their severity.

              Secondly, any bug that allows you to remotely execute arbitrary code is worse than any bug that exposes a segment of internal memory. The reason is that if you can execute arbitrary code you already have access to the entirety of the process's memory, instead of just a 64k chunk at a time. You also have access to the file system (although it might be sandboxed), and you certainly have access to every file that the process has open.

              Thirdly, the reason that the OpenSSL bug is such a big risk is because of its widespread use. OpenSSL is in nearly every device. In my own home it is in my printer, Vonage phone adapter, Netgear switch (x2), Cisco Router, ADSL router, Sky+ HD box (according to sky), CCTV IP Camera, Blu-ray player, Seagate Blackarmour NAS, Samsung Smart TV and probably another half dozen devices I have lying around.

              If OpenSSL was like IIS and had a small market share it would not warrant such a lot of media attention.

              1. Anonymous Coward
                Anonymous Coward

                Re: OpenSSL is open source, most financial institutions don't use open source encryption.

                "If OpenSSL was like IIS and had a small market share it would not warrant such a lot of media attention."

                You know IIS has an over 33% and rapidly growing web server market share? Lots of people are fed up with the myriad holes like this in open source and are switching to IIS:

                http://news.netcraft.com/archives/2014/04/02/april-2014-web-server-survey.html

            2. Gunnar Wolf
              Black Helicopters

              Re: OpenSSL is open source, most financial institutions don't use open source encryption.

              You say, «It might mean revoking your SSL certificates and getting new ones. Pricey if you have a lot, and time consuming.»

              If you have a remote code execution or a privilege escalation bug, and your server gets owned... it's game over. The attacker might have already grabbed your certificates, as well as any information in your server. Get owned, and you will anyway have to revoke every certificate — and user credentials.

          3. Anonymous Coward
            Anonymous Coward

            Re: OpenSSL is open source, most financial institutions don't use open source encryption.

            >IIS is immune to this attack:

            Yes, it is.

            CVE-2013-2566 - Hasn't been a default in IIS for years. Much more a Linux problem.

            CVE-2010-3972 - FTP is not enabled by default

            CVE-2010-2731 - Not enabled or even installed by default. Not a shipping OS. Not a server OS.

            CVE-2010-2730 - Fast CGI is not enabled by default.

            Go look at the far greater number of holes in Apache (60+ in a single version tree) in the last few years: http://secunia.com/advisories/product/9633/?task=advisories

            And that's without considering the many other big holes in Open Source stacks like PHP, MySQL and a Linux OS.

            Hence why defacement / hacking statistics show that you are several times more likely to be hacked running Open Source than Windows - http://www.zone-h.org/news/id/4737

            1. Anonymous Coward
              FAIL

              Re: OpenSSL is open source, most financial institutions don't use open source encryption.

              "Hence why defacement / hacking statistics show that you are several times more likely to be hacked running Open Source than Windows - http://www.zone-h.org/news/id/4737"

              You've quoted that irrelevant Zone-H survey, whether posting as Richto, TheVogon, or Anonymous Coward, at least 40 times.

              You know, most other comedians at least try to refresh their material from time to time.

        2. Roland6 Silver badge

          Re: OpenSSL is open source, most financial institutions don't use open source encryption.

          >IIS is immune to this attack

          It doesn't seem to be quite so clear from various IIS forums...

          Whilst IIS doesn't use the OpenSSL libraries, there seems to be a little uncertainty, as IIS 7 SSL on Windows 2008 is being reported as vulnerable, whereas IIS 8 SSL on Windows 2012 isn't.

          To me this is one of those reasons why we need independent test labs who run specific suites of tests and award test pass certificates. Whilst this doesn't mean the code doesn't contain vulnerabilities, it does mean that known vulnerabilities are not re-introduced. Which is the real worry as whilst we can be sure that v1.0.1g doesn't have this vulnerability, there is no such certainty over future releases.

          1. Anonymous Coward
            Anonymous Coward

            Re: OpenSSL is open source, most financial institutions don't use open source encryption.

            "Whilst IIS doesn't use the OpenSSL libraries, there seems to be a little uncertainty as Win 2008 IIS7 - SSL is being reported as being vulnerable, whereas Win 2012 IIS8 - SSL isn't."

            There is no uncertainty. No Microsoft products are affected.

            http://microsoft-news.com/microsoft-confirms-that-heartbleed-vulnerability-in-openssl-does-not-affect-microsoft-account-and-microsoft-azure/

            "Microsoft Account and Microsoft Azure, along with most Microsoft Services, were not impacted by the OpenSSL vulnerability. Windows’ implementation of SSL/TLS was also not impacted."

        3. Gunnar Wolf
          Boffin

          A bug in a library is always worse, but...

          The "terrible" bit in this bug is because it happened in a very widely used system library, but just because you asked, you can look at:

          http://seclists.org/fulldisclosure/2014/Apr/108

          A zero-day bug uncovered today, making IIS servers vulnerable. Yes, to very different issues, but still, getting user-level read+exec privileges to your system means game over. Just as much as this (big, very big) problem, or even more.

          There have been several big information disclosure, code execution and login credentials "mismanagement" bugs in IIS. The reason Heartbleed is more important is because it is a (gross!) library-level bug — which means that potentially hundreds of programs using said library are using broken controls.

          1. Dan Crichton

            Re: A bug in a library is always worse, but...

            If you read the followups you'll see it's a zero day affecting IIS4 on Windows NT 4 and IIS5 on Windows 2000. Both of those versions have been EOL for years, in the case of Windows 2000 since July 2010. Who in their right mind is still running web sites on Windows 2000?

            1. hplasm
              Windows

              Re: A bug in a library is always worse, but...

              "Who in their right mind is still running web sites on Windows 2000?"

              Who in their right mind is still running web sites on Windows ?

              1. Dan Crichton

                Re: A bug in a library is always worse, but...

                Given the Heartbleed vulnerability I'm very happy running web sites on Windows, and have been for many years :)

                I do have sites I manage on CentOS and FreeBSD too; luckily most of them don't use SSL so weren't affected (but have been patched already just in case), and those that do use SSL were on older distros which use older, unaffected versions of OpenSSL.

              2. Anonymous Coward
                Anonymous Coward

                Re: A bug in a library is always worse, but...

                "Who in their right mind is still running web sites on Windows ?"

                Anyone that wants an order of magnitude fewer security patches to evaluate than on a LAMP stack, and a much lower risk of being hacked / defaced ?

              3. TeeCee Gold badge

                Re: A bug in a library is always worse, but...

                Who in their right mind is still running web sites on Windows ?

                No idea. But whoever they are, I'll bet they're laughing like a drain right now.

            2. Anonymous Coward
              Anonymous Coward

              Re: A bug in a library is always worse, but...

              It's not even a zero day as such - it was discovered back in 2011...

          2. This post has been deleted by its author

        4. tom dial Silver badge

          Re: OpenSSL is open source, most financial institutions don't use open source encryption.

          That you know of. Yet.

          Smugness by users of closed source products in this context is as inappropriate as similar smugness by open source users when a Microsoft vulnerability turns up. Bug probability in large or complex programs approaches unity.

          On the other hand, this one seems of the sort that good programming hygiene and code review should catch before and during implementation.

          - If the protocol or other specification leaves options (especially unstated ones) to the implementor, the action to be taken should be documented before or during coding.

          - Implementations should include verification that values are within intended limits and are related to other values (also, of course, verified to be within their intended limits). Nothing should be written off as "won't happen" unless it is physically impossible.

          In addition, standard minimal testing protocols ought to have caught this early on.

          - test program behavior where numeric variables have extremal or out of bounds values as well as a few in the normal/expected range.

          - where several variables have a defined or implicit relation, test cases where the relation fails to hold as well as those in which it does.

          These are things that I understood all code hackers should be expected to do before they attain journeyman status.

          That this appears not to have been done is on the programmer and the OpenSSL foundation. That the vulnerability appears not to have been identified and addressed for two years is a bit of a surprise, yet the evidence of use before public notice and release of corrected code seems to be somewhere between weak and nonexistent.
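          As a sketch of the table-driven, extremal-value testing described in the checklist above (the names and the relation being checked are invented for illustration; the relation is simply "the claimed length must not exceed what actually arrived"):

```c
#include <stddef.h>

/* Hypothetical length check plus the kind of extremal-value test
 * table tom dial describes: probe zero, the boundary, one past the
 * boundary, and cases where the relation between the two values
 * fails to hold. */
int claim_fits(size_t claimed, size_t actually_received) {
    return claimed <= actually_received;
}

struct len_case { size_t claimed, received; int expect; };

static const struct len_case cases[] = {
    { 0,     0,  1 },   /* extremal low                   */
    { 1,     0,  0 },   /* relation violated by one       */
    { 10,    10, 1 },   /* boundary: exactly what arrived */
    { 11,    10, 0 },   /* one past the boundary          */
    { 65535, 1,  0 },   /* the Heartbleed-shaped case     */
};

/* Returns 1 if every case behaves as expected, 0 otherwise. */
int run_len_cases(void) {
    size_t i, n = sizeof cases / sizeof cases[0];
    for (i = 0; i < n; i++)
        if (claim_fits(cases[i].claimed, cases[i].received) != cases[i].expect)
            return 0;
    return 1;
}
```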

      2. Anonymous Coward
        Anonymous Coward

        Re: OpenSSL is open source, most financial institutions don't use open source encryption.

        > Verisign is a certificate authority. All it does is provide signed certificates

        It might even use OpenSSL to generate those certificates although I have no knowledge either way.

        1. elDog

          Re: OpenSSL is open source, most financial institutions don't use open source encryption.

          Hmmmm. Maybe time for a test -- a bit of man-in-the-middle with verisign?

      3. Jim 59

        Re: OpenSSL is open source, most financial institutions don't use open source encryption.

        GaryDMN is not totally wrong. A bank that was a client of mine used SSH products from a well known software company. Those products are not affected by the current vulnerability because they rely on a combination of OpenSSL 0.9 and the company's own authored TLS libraries.

    2. Gunnar Wolf
      Big Brother

      Verisign provides certificates...

      But in order to use Verisign, you need some tool or library to take care of the communication. That's where OpenSSL kicks in. And, yes, it's used "all over the place", not just in Linux.

  7. Gene Cash Silver badge
    Happy

    Thanks for the readable explanation

    I haven't seen any other site even get the TL;DR bit right.

  8. Boris the Cockroach Silver badge
    FAIL

    Thank

    gawd I dont use intersnot banking.

    And to the programmer who created this pile of shit:

    Did you not follow tech news when Windows/Microsoft got pwned because they couldn't be arsed with fucking bounds checking?

    And then you do the same fucking thing!!!!

    Jesus. If a msg has a header declaring its length to be 10 bytes, then 10 bytes is all it gets, and 10 bytes is all it gets back.

    Where's the fail^20! icon?

    1. Tim Brown 1

      Re: Thank

      At least with open source, once the bug has been discovered:

      1) we can properly understand the problem and its implications

      2) a patch can be made in timely fashion (my servers running Debian Wheezy already have a patch).

      Meanwhile for Microshaft and Adobe bugs we have to rely on their tardy release notes and patches for information and fixes or alternatively reverse engineer stuff (breaking their EULAs in the process).

    2. DanDanDan

      Re: Thank

      While you're not exactly wrong here, the "msg" has a header declaring it's 64kb. So it was given 64kb and got 64kb back. The issue is that the message isn't 64kb.

  9. Anonymous Coward
    Anonymous Coward

    What is heartbeat used for?

    Can someone explain why it was added, and what use it is, and why it is enabled by default?

    It looks to me to be a way for the client to send something to the server and have it echoed back. Is there a reason why the server should be echoing back client supplied data, and in what way this (as opposed to sending back data that the client doesn't control) is a useful addition to the protocol?

    Beyond bad programming, I'm wondering if this is a "kitchen sink" mentality, where stuff that is of narrow interest has been included, but was enabled by default against all best practices.

    1. LordHighFixer

      Re: What is heartbeat used for?

      "Is there a reason why the server should be echoing back client supplied data"

      Sure, it is for a future internet distributed storage system. Or an awesome built in DDOS amplifier.

      "Maybe it is for making enormous swiss cheese.

      No. The beam would last for what, 15 seconds?

      What good is that? -I respect you, but I graduated.

      Let the engineers figure that out.

      Maybe it already has a use...

      ...for which it's designed. "

    2. Dan 55 Silver badge

      Re: What is heartbeat used for?

      I imagine the idea was to allow the client to put an internal ID, time, or similar in the request, which could then be picked back up by the client from the response so that a decision may be taken about that particular connection.

      Next huge Internet exploit next week: "A malicious server could craft a fake heartbeat response which crashes all clients connecting running library x..."

      It would probably have been better just to design the thing to send back a ping without any data.

    3. VeganVegan
      Paris Hilton

      Re: What is heartbeat used for?

      Upvote. I have the same question about why this was ever implemented.

      It seems to me that to keep the connection alive, you could just send a single byte, or a SYN-ACK, back and forth; no need for elaborate stuff.

      Does anyone know the logic for what is being done?

    4. pointyhairmanager

      Re: What is heartbeat used for?

      It is really ping for TLS. If you have ever run an ssh terminal session and had it drop after 5 minutes of idleness, when you have forgotten to run a "ping localhost" or "watch date" you will know why.

      The reason for variable sized packets (and such large packets) is allegedly to enable an application to use this protocol extension to find the maximum size of packets it can send along the connection without fragmentation.

    5. Anonymous Coward
      Anonymous Coward

      Re: What is heartbeat used for?

      More to the point, I wonder why they don't just set SO_KEEPALIVE on the socket and let the TCP stack do the keep-alive.
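      For reference, a minimal sketch of what that looks like (the function name is made up; on Linux the per-socket TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT options also let the application tune the timings instead of inheriting the kernel-wide defaults, though those options are not portable everywhere):

```c
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <unistd.h>

/* Sketch: turn on TCP keep-alive for an existing socket fd, and on
 * platforms that support it, shorten the probing timings per-socket
 * rather than relying on the system defaults (often two hours). */
int enable_keepalive(int fd) {
    int on = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) != 0)
        return -1;
#ifdef TCP_KEEPIDLE
    int idle = 60, intvl = 10, cnt = 5;   /* example values */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof intvl);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof cnt);
#endif
    return 0;
}
```

      Note this only keeps the TCP connection itself alive; as pointed out below, it says nothing about whether the application behind the socket is still servicing it.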

      1. Anonymous Coward
        Anonymous Coward

        Re: What is heartbeat used for?

        > don't just set SO_KEEPALIVE

        The SO_KEEPALIVE can be effectively disabled at the kernel level by setting a high interval time. The application can not do anything about this.

        The KEEPALIVE keeps the socket active but does not test that the application listening to that socket is active. I've had many a hung process where, to all intents and purposes, the socket is still connected but the process is off somewhere else and will never service that socket.

        Re: VeganVegan

        The SYN-ACK packet is only ever sent when the socket is first established.

  10. DaDoc
    Black Helicopters

    Client-side implications?

    I'm a medical doctor, so trying to get my head around SSL is a bit o.t. to me...

    What's the client-side implication of all this? Is changing passwords after the server-side certs have been renewed enough? Or are the libraries found in BYOD environments - what I'm saying is, is a leak inherently possible at either end, and equally dangerous?

    1. rm -rf *.*

      Re: Client-side implications?

      Let me take a stab at your questions, @DaDoc:

      Q: What's the client-side implication of all this? Is changing passwords after the server-side certs have been renewed enough?

      A: Nope. The server's OpenSSL implementation has to be upgraded or re-compiled to get rid of the vuln. FIRST, then server-side cert renewal SECOND. You can change your passwords after that.

      Q: Or are the libraries found in BYOD environments - what I'm saying is, is a leak inherently possible at either end, and equally dangerous?

      A: Possibly. I've no clue. If the "client" is being logged into by others, then yes, I guess.

  11. Anonymous Coward
    Anonymous Coward

    So having been caught out by this problem, will those organisations using these open-source libraries start examining the source updates to ensure they are secure?

    Will they pay other organisations to validate the source for them, if they don't have the expertise in-house?

    I only ask because surely the real problem will be if everyone sighs about the bug and updates their systems, but then learn nothing from it and just hope it doesn't happen again.....

  12. Anonymous Coward
    Anonymous Coward

    Sloppiness or malice?

    The RFC (6520, Feb 2012) is quite explicit in what must be checked.

    <quote>

    The total length of a HeartbeatMessage MUST NOT exceed 2^14 or

    max_fragment_length when negotiated as defined in [RFC6066].

    </quote>

    There seems to be no check in the OpenSSL code for this part of the *standard*. Indeed I *still* do not see this being checked.

    <quote>

    If the payload_length of a received HeartbeatMessage is too large,

    the received HeartbeatMessage MUST be discarded silently.

    </quote>

    Now although "too large" is not defined (is this > 2^14 or max_fragment_length, or is this greater than the received packet length - 19?), one would have thought there would be some evidence of consideration of this part of the RFC in the code - there generally is in all other parts of the OpenSSL code I have seen. Proper consideration of this paragraph of what is a blessedly short RFC would have made the "bug" much more difficult to overlook. As it is, this paragraph seems to have been completely ignored (until the patch).

    So this would seem not to be ordinary programming sloppiness, but an actual ignoring of part of the RFC that would have been in front of the programmer at the time.
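    For illustration, the two quoted MUSTs boil down to a guard along these lines (a sketch, not the actual OpenSSL fix; `heartbeat_acceptable` is an invented name, and the 1+2+16 overhead is the type byte, the payload_length field and the minimum padding from RFC 6520):

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the checks the RFC text above demands. Returns 1 when
 * the message may be echoed, 0 when it MUST be discarded silently. */
int heartbeat_acceptable(uint16_t payload_length, size_t record_length) {
    const size_t overhead = 1 + 2 + 16;  /* type + length + min padding */
    if (record_length > 16384)           /* 2^14 total-length cap       */
        return 0;
    if (record_length < overhead)        /* can't even hold the headers */
        return 0;
    if ((size_t)payload_length > record_length - overhead)
        return 0;                        /* payload_length too large    */
    return 1;
}
```

    The Heartbleed request is exactly the third case: a huge payload_length riding on a tiny record.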

    1. BristolBachelor Gold badge
      Trollface

      Re: Sloppiness or malice?

      So the RFC issued in 2012 tells you how to do it. And the code submitted in 2011 does it a different way? I suppose that is possible in a causal universe, where you can't tell the future; it's Einstein's fault.

      1. Anonymous Coward
        Anonymous Coward

        Re: Sloppiness or malice?

        Well, the draft from July 2011 is even more explicit (although it had changed semantically by the time the final standard was written, to (a) actually require padding, and (b) require silent dropping).

        Here is the July 2011 draft version of it

        <quote>

        If payload_length is either shorter than expected and thus indicates

        padding in a HeartbeatResponse or exceeds the actual message length

        in any message type, an illegal parameter alert MUST be sent in

        response.

        </quote>

        So I think in all cases an explicit instruction to check payload_length has been part of the draft and final RFC.

        The December 4 2011 version is essentially the same as the final 2012 RFC.

    2. eldakka

      Re: Sloppiness or malice?

      If I was the coder, i'd be pointing my finger at the NSA and saying "they made me do it."

      Everyone would believe that, and who could prove otherwise? Who'd BELIEVE any proof the NSA provided that they weren't responsible?

    3. GoingGoingGone

      Re: Sloppiness or malice?

      RFC 6520 was coauthored by the same chap that created the bug. He could hardly have been oblivious to it. Make what you wish of it - you couldn't have it better for whatever conspiracy theory tickles your pickle - but my take is that this is just a simple oversight, despite its catastrophic consequences.

  13. Sven Coenye

    This works both ways

    The article concentrates on attacks against servers, but this works both ways. (See the Use Cases at https://tools.ietf.org/html/rfc6520). Either end can send a heartbeat packet. A malicious or compromised server can extract a vulnerable client's memory. How long before all those will be patched, including anything that is statically linked?...

  14. Anonymous Coward
    Anonymous Coward

    will someone please think of the CHILDREN!!!

  15. Anonymous Coward
    Anonymous Coward

    So what happened to the coder

    Responsible for that commit? Clearly their commits now need to be checked thoroughly and scrutinized closely from here on. When you're coding an open-source security library as critical as OpenSSL, there needs to be a line of responsibility that is enforced.

    I can only hope that nobody contributing to the OpenSSL code base is working for spy agencies - but that's a big ask.

    If the OpenSSL team was in the financial sector or such, the coder would've normally been put on leave at the very least.

    The code commit (improper bounds checking) is a very, very stupid mistake for C or C++ coders to make; it's such a newbie mistake. I'm also surprised they don't do proper TDD, which would've caught these issues before they were released.

    1. btrower

      Re: So what happened to the coder

      Re: "the coder would've normally been put on leave at the very least."

      For a moment there I thought you were going to say 'put to sleep'.

      Almost the entirety of the source code universe is a total mess. A bug like this should be impossible in burned-in mission critical code like this. Unfortunately, a lot of evil habits are actually cultivated by design. The programmers don't know any better and they are immune to reasoning about it.

      The C language is the language of memory corruption. It is a 'high' low level language designed to build operating systems and compilers. Code that does not do bounds checking is faster. In some cases speed differentials are astonishing due to the hierarchy of storage from the CPU registers through L1, L2, L3 level caches and ordinary RAM. Stuff that stays within the L1 cache operates at lightning speed -- about 200 times as fast as a reference to RAM.

      It is fair game for called code to have expectations. If you pass a memory reference, you don't want every nested call to check its validity. In some cases, called code could not do bounds checking because the extent of the bound is only known back up the call chain somewhere.

      When you look at the sources, you find that these bugs are usually in code that has other tell-tale flaws as well. Older code contains errors by even good programmers. Dennis Ritchie was a god, but not all of his code examples are keepers and he was guilty of some faulty practices.

      The worst stuff is stuff that was written or maintained by talented journeymen programmers whose short-term memory was at a maximum and whose coding experience led them to believe that clever code was a good thing that showed off their talent. I doubt it is even true these days, but Duff's Device is an extreme optimization that *may* be a necessary evil at times, but when not necessary simply devolves to being evil. I know of at least one nascent small C compiler that fails to generate correct code when it encounters Duff's Device. A beginner or a journeyman will blame the compiler when it fails. Someone more experienced will blame the coder. Every time you do something fanciful in code you take a chance that a maintainer will have to debug and recode. An experienced, literate and conscientious programmer should not be doing this.
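      For readers who haven't met it, this is roughly what Duff's Device looks like (a memory-to-memory variant; Duff's original copied into a fixed output register). It is correct C, but exactly the sort of cleverness that trips up maintainers and small compilers:

```c
#include <stddef.h>
#include <string.h>

/* Duff's Device: an 8-way unrolled copy loop that jumps into the
 * middle of its own switch to handle the remainder, interleaving a
 * switch and a do-while. Legal, correct, and thoroughly confusing. */
void duff_copy(char *to, const char *from, size_t count) {
    if (count == 0)
        return;
    size_t n = (count + 7) / 8;       /* number of loop iterations */
    switch (count % 8) {
    case 0: do { *to++ = *from++;
    case 7:      *to++ = *from++;
    case 6:      *to++ = *from++;
    case 5:      *to++ = *from++;
    case 4:      *to++ = *from++;
    case 3:      *to++ = *from++;
    case 2:      *to++ = *from++;
    case 1:      *to++ = *from++;
            } while (--n > 0);
    }
}
```

      On modern compilers a plain loop or memcpy will usually match or beat this, which is btrower's point: when the trick is not necessary, it simply devolves to being evil.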

      Beyond a failure in coding, this current situation demonstrates something that I have often commented upon in these forums. Our system is insecure and it is essentially insecure by design. Given the enormous resources spent on systems using this SSL code, does it make any sense at all that it suffers from such a mundane flaw? It does if you realize that security is not that important to the people holding the purse-strings and calling the shots.

      This is about the most serious security bug I have ever seen. Cleanup is going to be a real bitch. Repairing and replacing the code is the least of the work effort. Prediction: Most of the systems that had this issue will not have passwords changed and keys replaced. If a significant number of systems were actually compromised, we will be living with the consequences of this for a long time.

      Despite the severity of this bug, it pales in comparison to the inherent systemic insecurity of the Internet. There is no question in my mind that people in the know are protecting important things with air gaps, enormous keys created with very good sources of entropy, decoy layers, etc. That is to say, nobody who understands this stuff could possibly have any faith in the current security systems as actually deployed.

      It is very hard to look at the current state of the Internet, particularly things like the failed introduction of IPv6 and not think that people with influence who understand security have encouraged this situation precisely because they wish it to remain vulnerable to them.

      1. Solmyr ibn Wali Barad
        Pint

        Re: So what happened to the coder

        "Almost the entirety of the source code universe is a total mess"

        Why, the entire universe is a total mess. Entropy is one of the most fundamental qualities of it, and an important driving force. It would be folly to hope that our puny sources would be unaffected by the Almighty Mess.

        Excellent comment, though.

    2. Sooty

      Re: So what happened to the coder

      If the OpenSSL team was in the financial sector or such, the coder would've normally been put on leave at the very least

      That is really not a great idea: if you punish people badly for making coding errors, it doesn't exactly encourage them to come forward and admit to them. That sort of reaction encourages people to sweep problems they notice under the carpet and pretend they don't exist, rather than risk being punished.

  16. Jason Ozolins
    Facepalm

    Workaround: Clients could refuse to connect to vulnerable websites

    Surely if the client SSL library was altered to try this exploit once during certificate exchange, the client could drop the connection if anything extra was returned in the heartbeat response. It's a heuristic thing - the larger the "exploit" request size, the easier it would be for the client to tell that the server was unpatched - but it is at least *something* that could be done at the client end to catch connections to insecure servers.

  17. Andy 66
    Unhappy

    Does the leak cross services?

    given a situation of a server running openssl for email authentication and openssl for apache, can apache be exploited to reveal email passwords or is the information stored in memory restricted to that service (apache)?

    1. John G Imrie

      Re: Does the leak cross services?

      From what I know about this the answer is 'It depends'. But I suspect that if you can access OpenSSL for any service on a server you can assume that everything on that server is compromised.

      I'm not sure if this can cross the boundary of virtual systems running on the same iron or not though.

  18. haydnw

    So which major sites are / were affected?

    While the technicalities of this are all a bit beyond me, my understanding is that if any services I have used at any point in the past have been compromised, my data may have been swiped. In such cases I need to wait for them to patch their systems, renew the server-side certificates, *then* change my password. That's marvellous, but how do I know which systems are / were affected? Is anyone maintaining a list anywhere? It would be easier to glance down that than to manually check every site for which I have a login. I know there's a website checker at heartbleed.com, but surely this won't pick up a system which was patched on Monday, for example, but from which my data might already have been taken several months ago? I appreciate it's a bit stable door / horse bolted in some ways, but I'd still like to know, and to change the pw anyway.

    1. Fihart

      Re: So which major sites are / were affected?

      This morning (as per last night), using Chrome, Twitter is still throwing up a warning that I might be signing in to a fake site due to SSL issues. Using Opera, no warning.

  19. Carpetsmoker

    I didn't ask for no heartbeat...

    No one mentioned this, but IMHO at least *part* of the problem is that a little-used TLS extension was not only implemented, but *enabled by default* in the first place.

    Security is one place where conservatism really pays off quite well.

    1. Jim 59

      Re: I didn't ask for no heartbeat...

      Conservatism is right. A Debian server of mine is safe from Heartbleed, purely because it is still on Debian 6. I haven't upgraded to 7 yet, even though support runs out in a month or so.

      For secure systems, a balance is needed so that the system does not become too out-of-date, but not too up-to-date either. These low level libraries contain some of the most important code in the world, it seems, and they have perhaps got the balance a bit wrong this time.

      1. ortunk

        Re: I didn't ask for no heartbeat...

        Our situation exactly...

        +2 for Debian Squeeze servers, conservatism paid off (again)

        -2 for Ubuntu 12.04 LTS well we do have a love and hate relationship with you

  20. Lockwood
    Linux

    I'm happy to see that the bulletproof OS is still bulletproof.

    1. James Hughes 1

      Quite.

      This being a userspace application, the problem is not kernel or OS related.

      Is that what you meant?

    2. Anonymous Coward
      Anonymous Coward

      Yep - Windows Server is not affected.

  21. scrubber

    Why is heartbeat a variable length?

    Why not have a fixed-length test to check the server is up and running? Surely a 4- or 8-byte fixed-length message is enough to prove that it responds with what you sent, and then there is no need for the length parameter.
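    For context, the heartbeat request carries a caller-supplied payload plus a length field, and the bug was that the length field was trusted. A simplified sketch of the pattern (hypothetical names, not the actual OpenSSL source) showing both the flaw and the 1.0.1g-style fix:

```c
#include <stddef.h>
#include <string.h>

/* Simplified sketch of the heartbeat echo. claimed_len is the length
 * field taken from the request; actual_len is how many payload bytes
 * really arrived. The vulnerable code did the memcpy without the
 * check, so claimed_len - actual_len bytes of adjacent heap leaked. */
size_t echo_heartbeat(unsigned char *resp, const unsigned char *payload,
                      size_t claimed_len, size_t actual_len)
{
    if (claimed_len > actual_len)    /* the missing bounds check */
        return 0;                    /* 1.0.1g: silently discard */
    memcpy(resp, payload, claimed_len);  /* echo only the real bytes */
    return claimed_len;
}
```

    With the check in place, a request that claims 64KB but supplies 4 bytes is simply dropped instead of echoed back with 65531 bytes of whatever the heap held.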

  22. Hans 1
    Boffin

    @PropieTards

    We will not know which versions of IIS are affected, but IIS has had a "long" history of very embarrassing flaws, one of which was the ".." vuln that allowed you to download any file on the file system - yes, you could even download the registry and brute-force passwords!

    However, this issue affects versions 1.0.1 to 1.0.1f, and yes, the most widely used versions are the 0.9.7 & 0.9.8 branches and, to a lesser extent, the 1.0.0 branch ... 1.0.1 is "pretty recent".

    from heartbleed.com:

    What versions of the OpenSSL are affected?

    Status of different versions:

    OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable

    OpenSSL 1.0.1g is NOT vulnerable

    OpenSSL 1.0.0 branch is NOT vulnerable

    OpenSSL 0.9.8 branch is NOT vulnerable

    I call this a lot of noise for some BS.

    1. Anonymous Coward
      Anonymous Coward

      Re: @PropieTards

      "We will not know which versions of IIS are affected, "

      We already know - none.

      "but IIS has had a "long" history of very embarrassing flaws"

      Not since NT4. Since then it's had far fewer than, say, Apache...

  23. /dev/null
    WTF?

    Payload???

    What's the point of being able to attach a payload of 64k of arbitrary data to a heartbeat message anyway?

    What's wrong with a simple sequence number?

    Did they think the case where the other end was sufficiently functional to interpret and respond to a protocol message, but somehow incapable of copying a block of memory correctly was worth detecting?

    Did this Request for Comments actually get any comments?
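    For reference, RFC 6520 defines the heartbeat message as a type byte, a 16-bit big-endian payload_length, the payload itself, and at least 16 bytes of random padding; the variable-size payload exists mainly so DTLS can probe the path MTU. A sketch (my own helper, not the OpenSSL parser) of reading the claimed length out of the header:

```c
#include <stddef.h>

enum { HB_REQUEST = 1, HB_RESPONSE = 2 };  /* RFC 6520 message types */

/* Read the 16-bit big-endian payload_length that follows the one-byte
 * type field. In Heartbleed this attacker-controlled value was trusted
 * without comparing it to the record length actually received. */
size_t hb_claimed_length(const unsigned char *msg)
{
    return ((size_t)msg[1] << 8) | (size_t)msg[2];
}
```

    So the answer to "why a payload at all" is PMTU discovery over datagram transports; for plain TLS over TCP a sequence number would indeed have done the job.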

  24. Norman Hartnell

    61 minutes to midnight

    So that's just...22:59, then?

  25. Chris Long

    'Secure' websites

    Maybe now all those shopping websites etc will take down those fictitious 'your data is 100% safe with us' logos that they all seem so proud of.

  26. Daz555

    I was fairly shocked to read about this. I run OpenVPN on my home server... surely OpenVPN hasn't let me down, I thought? It had. Bugger.

    Still at least it was an easy fix. Updated OpenVPN Access Server to latest and now I appear to be clear. Changed my openvpn and server passwords to be safe also.

    1. Anonymous Coward
      Anonymous Coward

      And this is why I use a one-time passcode with OpenVPN.

  27. unwarranted triumphalism

    '...the blunder is far worse than Apple's gotofail...'

    No it isn't.

  28. BongoJoe

    Puzzled

    I am curious as to why a response to a KeepAlive message, i.e. the heartbeat, needed to contain the original packet, other than to confirm that the comms weren't garbled.

    The spec should have said something like: ten bytes expected, ten bytes returned.

    Or, even better, the packet of data going in would carry a checksum. Then if a shortened packet was sent, the checksum would be wrong, because it wouldn't match the outgoing checksum of the data packet.

  29. Jay Zelos

    Article

    Just wanted to say good work for such an informative and interesting article. As a software developer (mostly) working in a different language it was enough for me to understand and not too much to bore. Well done.

    Now I just have to contact all the vendors of my enterprise grade firewalls for patches since they all use OpenSSL code without exception...

  30. Richard Pennington 1

    Years and years ago (early 1990s), I was on a project which did static analysis on a safety-critical system. By static analysis, I mean automated code verification using a tool which checked for all sorts of consistency issues (but it could not deal with anything which involved concurrency, e.g. shared memory).

    It would easily have picked up both the OpenSSL bug and the recent Apple GotoFail.

    The technology exists, and has existed for a while now (the tool was written in Algol and was old even when I was using it). But it is slow and expensive to use (the tool's users need to be experts).

    You get what you pay for.

    1. btrower

      None so blind as those who will not see

      @Richard Pennington 1:

      Those tools are torture to use, but they work. Problem is, they work. When people see the monstrous cascade of errors coming out of these things they head for the hills.

      Splint is not one of the tools above, but even splint sends out showers of warning messages on old code. I use it toward the end of development of stuff I need to be confident is clean. I don't usually bother with older code. For reasons that elude me, some programmers aggressively defend the most dreadful practices even when directly confronted with the consequences.

      Here is something that shows how extreme programmer denial can get:

      For years now, people have been opening tickets about a horrendous security flaw in FileZilla, and the tickets keep getting closed -- except for one persistent report that I re-opened four years ago and that keeps getting re-opened. Anybody with casual access to the file system can get FTP passwords with a single command line; they don't have to know anything beyond that. Reason: passwords are stored in the clear, in an easily found and inspected place.

      Do this on a windows machine with FileZilla on it and it will cheerily show you a list of hosts and their FTP passwords:

      find "Pass" "%APPDATA%\FileZilla\sitemanager.xml"

      Here is the programmer's take on it in his own words:

      http://trac.filezilla-project.org/ticket/5530#comment:6

      status changed from new to closed

      priority changed from critical to normal

      resolution set to rejected

      Whether passwords are stored encrypted or in plaintext makes no difference in security.

      ----------

      The above is a critical design flaw that routinely gives up FTP passwords to hackers. The developer is adamant that this design is beyond reproach and will never change.

      ----------

      I confess that I have been programming for more than 30 years and have been fairly involved in one aspect or another with security for about half that time. However, the programming errors and the security issues surely cannot be that hard to understand. Perhaps the solution to the above security flaw is not easy. However, the fact that the single command line above reveals dozens of server passwords just can't be that hard to understand. If you look at the other comments on the ticket above you will see that it is *only* the programmer who seems immune to understanding.

  31. Andrew Commons

    There are a lot of comments here...

    ...so this may have been covered already. Apologies if it has.

    Some poor guy is now being pilloried as being responsible for this because he committed it. The real culprit is the QA process that led up to that commit. Name them.

    That's all.

    1. btrower

      Re: There are a lot of comments here...

      @Andrew Commons

      Upvote for you. Professional testers (not weekend beta testers) are the unsung heroes of software development. Programmers have tunnel-vision (as they should) and are terrible at reviewing and validating their own code. Testing is a difficult, dreary and thankless job, but on large projects it can make the difference between success and failure.

  32. Huan

    I believe the client/server will in practice leak at most 16KB of its heap content, not 64KB as stated in the fix description. Although the payload length field is 16-bit (with a max value of 64K - 1), the copying function dtls1_write_bytes() would have aborted if the total payload were greater than 16K.

    Comment?

This topic is closed for new posts.

Other stories you might like