Cardiac Arrest
For the heartburn, I suggest a couple of Quick-Eze.
While most of the buzz surrounding OpenSSL's Heartbleed vulnerability has focussed on websites and other servers, the SANS Institute reminds us that software running on PCs, tablets and more is just as potentially vulnerable. Institute analyst Jake Williams said the data-leaking bug “is much scarier” than the gotofail in Apple …
That was my first thought, but then I read the detailed code analysis linked to in the article. The 64K sent back is copied from the attacker's payload. As the attacker's payload is only one byte, the rest comes from whatever is in process memory after the received payload.
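To make that concrete, here's a stripped-down sketch of the vulnerable pattern in C, paraphrased from the published analyses rather than quoted from the OpenSSL source (names, types and context are all simplified):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Sketch of the vulnerable heartbeat handler. `pl` points at the
     * received payload; `payload` is the length the sender CLAIMED in the
     * message; `received` is how many bytes actually arrived. */
    unsigned char *build_heartbeat_response(const unsigned char *pl,
                                            uint16_t payload, size_t received)
    {
        (void)received; /* the bug in one line: the real count is never consulted */

        /* 1 type byte + 2 length bytes + payload + 16 bytes of padding */
        unsigned char *buffer = malloc(1 + 2 + (size_t)payload + 16);
        if (buffer == NULL)
            return NULL;

        unsigned char *bp = buffer;
        *bp++ = 2;              /* heartbeat response type */
        *bp++ = payload >> 8;   /* echo back the claimed length... */
        *bp++ = payload & 0xff;

        /* ...and copy that many bytes from the request. If the attacker sent
         * one byte but claimed 65535, the other ~64K is whatever sits in
         * process memory after the received payload. */
        memcpy(bp, pl, payload);
        return buffer;
    }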
You're forgetting that it is only in comparatively recent times that CPU cycles have been plentiful on some systems. Zeroing memory was a nice-to-have but impractical outside the test lab when workstation CPUs typically ran at sub-12.5MHz...
It's because it serves no purpose to do so.
An OS will certainly zero pages before giving them to you because those pages could have come from almost any previous process and the security implications of that have been known since the 60s. However, all sane runtime libraries ask for big blocks from the OS and then implement their own sub-allocation scheme on top. Doing it in-process is a big performance win (because you don't have to cross privilege boundaries) and omitting to zero the sub-allocated memory in your own address space is not a problem because it was already visible to any thread in your address space. It's not a problem until you then squirt the dirty memory out of a socket.
Yes, it could have been avoided by using calloc() rather than malloc() everywhere, but it could also have been avoided by sanitising your inputs before responding to them. The former would pointlessly double the number of writes to memory. The latter is simply "correct". My vote goes for the latter.
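To put numbers on "sanitising your inputs": the message is only well-formed if the 1 type byte, 2 length bytes, the claimed payload and at least 16 bytes of padding all fit within what actually arrived. A sketch of that check, keyed to the hypothetical handler above rather than to the actual upstream patch:

    /* Returns non-zero if the claimed payload length is consistent with the
     * number of bytes actually received. Call before touching the payload. */
    static int heartbeat_length_ok(uint16_t payload, size_t received)
    {
        return (size_t)1 + 2 + payload + 16 <= received;
    }

If this returns false, the heartbeat is silently discarded, and the memcpy never gets the chance to run past the data that really arrived.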
Note also that debug versions of malloc nearly always do pre-fill the memory (and the matching version of free post-fills with a different pattern) but this is *because* it is pointless to do so. Or rather, because it bloody well ought to be pointless and therefore doing it is a simple way of flushing out a certain class of bug.
In my view seeing a naked memcpy call at all in supposedly secure code is like walking into a restaurant kitchen and seeing a big pile of rotting carrion on the floor. The staff may know not to handle it before dipping their fingers in the gravy, but it's a clear danger that you don't want to have around. It may cost to clear it up, but that's what you have to do.
memcpy is a big red flashing warning light that says "make damn sure you've checked and sanitised every bit of data that goes in and out of here" (not only memcpy, of course, but quite a few other C functions). In fact, I'd suspect simply looking for all the memcpy et al. calls is a pretty good way of finding vulnerabilities. The best approach is to wrap them up pretty tightly. Even that's not 100% secure, but it does make a difference and in security code it's 100% worth doing.
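For illustration, one shape such a wrapper can take (a hypothetical helper, not lifted from any particular codebase): make the caller state both the destination's capacity and how many source bytes are actually known-good, and refuse to copy rather than trust either.

    #include <stddef.h>
    #include <string.h>

    /* Tightly-wrapped copy: fails instead of over-reading or over-writing.
     * Returns 0 on success, -1 on refusal. */
    int checked_copy(void *dst, size_t dst_cap,
                     const void *src, size_t src_valid, size_t n)
    {
        if (dst == NULL || src == NULL)
            return -1;
        if (n > dst_cap || n > src_valid) /* would over-write or over-read */
            return -1;
        memcpy(dst, src, n);
        return 0;
    }

It's not bulletproof (the caller can still lie about src_valid), but it forces the question to be asked at every call site, which is the point.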
IE, obviously, isn't vulnerable.
Firefox and Chromium use NSS, so aren't vulnerable.
Opera has OpenSSL statically linked in. The copyright string says "1998-2011" and the vulnerability appeared in OpenSSL in early 2012, so again it should be safe.
Android: most versions have heartbeat disabled, except for v4.1.1 (and possibly 4.1(.0)). Earlier versions use an earlier, non-vulnerable version of OpenSSL:
http://googleonlinesecurity.blogspot.co.uk/2014/04/google-services-updated-to-address.html
There's a client tester and a list of some vulnerable clients at:
https://github.com/Lekensteyn/pacemaker
OpenVPN is vulnerable, however:
https://community.openvpn.net/openvpn/wiki/heartbleed
>IE, obviously, isn't vulnerable
Just because it doesn't use OpenSSL libraries doesn't mean it isn't vulnerable to this attack vector.
Not knocking MS, but it is worth noting that without testing we don't know whether third-party (i.e. non-OpenSSL) SSL implementations are vulnerable to this attack. Otherwise a good comment, so upvoted.
This is surely the worst security hole I have ever seen. However, I tested all my net-facing servers and they were all OK. Then I tested the copy of curl I use all the time. Complete breakdown. It coughs up the private keys.
Now it occurs to me that a black hat might very well have exploited the vulnerability and then hacked into the system and patched it so that it is no longer vulnerable. It is a time honored practice with malware to gain control and then make the target invulnerable to attack by anyone but them.
I have seen this coming for a long time so I don't have anything particularly valuable to steal and I have long been prepared for the day when I had to change passwords and keys everywhere and finally lock things down properly.
Despite the fact that I was expecting this I am surprised at its scale and pissed at the work it is going to take to clean it up.
After the cleanup? Well, not everybody will clean up so attackers still have a good chance to hop from a compromised system to one that is not (yet) compromised.
Assuming you find your way around the above, what you are left with is the same shaky structure you had before -- a colander with a single hole plugged.
We need to have a much better understanding of these things in the technical community.
We should have long since demanded an end to IPv4. IPv6 is such a crappy alternative that it is understandable people have dragged their feet, but as lousy as it is it is entirely preferable to IPv4.
Security experts should have better explained the fact that even though a 128-bit key is invulnerable in theory, it is entirely vulnerable in practice, and longer keys are better.
Public Key Cryptography is fine in principle and I would trust it if I could somehow verify the implementation was an honest one. The current case is an example of it falling down. It is not the first and will not be the last. The implementations are messy and poor. The PRNGs used to generate keys have time and again proven faulty.
The network is so fragile with respect to security that nobody in the know with something serious to protect will connect it to the network.
Is our hardware compromised? Probably.
Can a state agency like the NSA mount side-channel attacks successfully? There can hardly be any doubt.
We cannot be sure of protection against the Military Industrial Complex. They control the factories that make our equipment, the infrastructure, law enforcement and the administration of 'justice'.
We *can* be sure against most attacks otherwise, but it requires much more than we have put in place. Non-technical people would have to take a lot of courses to fully understand the issues, but software developers should be able to understand this with a little digging *and* they should know about this anyway as a matter of course.
Given the truly horrible state of security, you have to wonder. Is this really that mysterious to everybody?
Re: "Key length is irrelevant to this, if the key is in memory then it is possible to grab it."
If I understand how the vulnerability works, a 1,048,576-byte key would be very difficult to obtain with this exploit: the heartbeat length field is 16 bits, so a single response leaks at most around 64KB, and reassembling a 1MB key would need many leaks that happen to land contiguously in memory. One of the RSA inventors spoke about using objects on the order of a terabyte at one point, precisely because a large object hobbles certain types of attack:
"I want the secret of the Coca-Cola company not to be kept in a tiny file of 1KB, which can be exfiltrated easily by an APT," Shamir said. "I want that file to be 1TB, which can not be exfiltrated."
Your statement illustrates why our security is so tragically broken. For end-to-end security, *both* this type of security hole has to be closed *and* the keys have to be sound. Trivial one-byte keys present a barrier so low that this exploit is not even needed. Non-trivial multi-megabyte keys raise the key barrier high enough that it is not a profitable point of attack.
One high barrier does not make the whole thing secure. But one low barrier can make the whole thing insecure.
To be secure, all the barriers have to be strong enough to render attack unfeasible. With IPv4 in place, you can scan entire subnets by brute force looking for a vulnerable IP address; the whole IPv4 space is only 2^32 addresses. With IPv6 that is significantly more difficult (a single /64 subnet alone holds 2^64 addresses) and, if configured correctly, effectively impossible.
Security depends upon a lot of different things, nearly all of which are in a poor state of repair in our systems. Key lengths are but one of those many things.
There is no sensible reason to build our systems with key lengths constantly on the edge of vulnerability. Any older backup of some dire secret that is hiding behind DES is trivial to hack.
I might be wrong, in which case, using a longer key length has no impact on security. On the other hand, you might be wrong in which case a shorter key length needlessly renders the system insecure. Which of those bits of advice gives better assurance of security?
Nearly every design point of the current network has fundamental security issues. This was a whopper of a security hole and we may not see its like again, but it is not the last breach we will see.
Whilst you make some good points, there is a balance to be struck between security and utility. Currently we know that 128-256-bit key encryption is very secure and reasonably performant, i.e. you can use it for SSL etc. (yes, I know it wasn't that long ago that the available computing power made shorter keys secure, hence it is probably only a matter of time before longer keys are talked about). The problem, as you indicate, is the security of the keys themselves.
What this exploit reveals is that whilst care has been taken with respect to the encryption of communications, little care has been taken over the handling of the keys themselves. In some respects the OpenSSL vulnerability reminds me of a security office where normally only security personnel enter, but the door isn't locked and the keys are just left on the desk.
http://www.eviloverlord.com/lists/overlord.html - (c) 1997 - offers a similar rule for prudent evil masterminds of those days: "99. Any data file of crucial importance will be padded to 1.45Mb in size." And therefore couldn't be copied by enemies onto a 3½-inch floppy disk (remember those? 1.44MB capacity).
Having said that, do the private and public keys have to be of similar size? It could take way too long to log in then.
There are a whole bunch of client applications out there that aren't web browsers. So the browser you're using might not be vulnerable, but the mail client, IM client, game with internet connectivity etc might well be exploitable. And unless you're prepared and able to check that every one has no OpenSSL dependency (or if it has, that it's been fixed), knowing that you're vulnerable is actually quite hard.
Still, can we at least declare this the end of the nonsensical "many eyes make all bugs shallow" meme that FOSS advocates have been touting for years?
I stopped using it when I found the calloc() (clear-and-allocate) library function back in the early MS-DOS days. For those who don't know: calloc() clears the memory by writing "0" to each byte as it is being allocated.
It's just plain "Open Source" laziness - IMHO ;-) - to keep using malloc() when calloc() will ensure that no latent data is passed from the heap to the calling function.
Who cares if the buffer is too big if all that is in the buffer is a long (64k) string of "0"?
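For those who've not met it, the difference in miniature (a standalone sketch):

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* malloc: contents indeterminate (whatever the heap held before) */
        unsigned char *dirty = malloc(65536);

        /* calloc: the same 64k block, guaranteed zero-filled on return */
        unsigned char *clean = calloc(65536, 1);

        /* morally equivalent to calloc: malloc plus an explicit memset */
        unsigned char *byhand = malloc(65536);
        if (byhand != NULL)
            memset(byhand, 0, 65536);

        free(dirty);
        free(clean);
        free(byhand);
        return 0;
    }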
Free advice - worth every penny you didn't pay for it.
People still use malloc because it's faster. Especially in embedded systems (where OpenSSL is also used quite frequently) this can make a difference. Besides this: many libraries don't use malloc for every allocation, they keep a memory pool available. One would have to call memset every time to clear that data, which is unnecessary in any well-written library or application.
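To illustrate the pool point: even if the pool's backing memory were calloc'ed at start-up, a recycled block comes back holding its previous contents unless every allocation pays for a memset, which is exactly the cost a pool exists to avoid. A toy free-list sketch (hypothetical; not OpenSSL's actual freelist, though OpenSSL did keep its own):

    #include <stdlib.h>

    #define POOL_BLOCK_SIZE 256

    struct pool_block { struct pool_block *next; };
    static struct pool_block *pool_free_list = NULL;

    void *pool_alloc(void)
    {
        if (pool_free_list != NULL) {
            struct pool_block *b = pool_free_list;
            pool_free_list = b->next;
            return b;                      /* recycled: NOT cleared */
        }
        return calloc(1, POOL_BLOCK_SIZE); /* fresh: zeroed exactly once */
    }

    void pool_free(void *p)
    {
        struct pool_block *b = p;
        b->next = pool_free_list;          /* no clearing on free either */
        pool_free_list = b;
    }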
Any sane OS (basically all multiuser systems) already zeroes freshly allocated memory pages, otherwise it would be a trivial method of extracting memory information the user wouldn't normally be privileged to see.
This bug is nothing to do with malloc - it's a basic overflow - the data returned is bigger than the allocated size, thus returning other parts of the process's memory/variables.
So even using calloc throughout would have made no difference here.
Please check before posting that you are secure on that high-horse of yours! :-)
> This bug is nothing to do with malloc - it's a basic overflow - the data returned is bigger than the allocated size, thus returning other parts of the process's memory/variables. So even using calloc throughout would have made no difference here.
It's not a "basic overflow", there are no memory bounds being violated in this bug which is why the automated code checking systems, good as they are, didn't pick up this bug.
The bug is that the memory allocation code allocates one size of block, which being uninitialised contains whatever was in that memory space before (hence the problem), but overwrites this block with a different number of bytes. In this case a 64k chunk of memory is allocated, one byte of it is overwritten with the return data, and all 64k of it is returned.
But is there a point in using calloc()?
The recipient still has to return the data to the sender, and as far as the recipient is aware, what's in memory at the time is the data. So whatever happens you are going to clear a buffer somewhere and then still return the wrong data, because the recipient doesn't know, other than by the length field, how long the data to be passed actually is.
The problem is the protocol, which allows something to be defined by the sender. That kind of trust belongs to the earlier days of the friendlier internet (remember the days when we could ping a domain to find all the email addresses, before that was abused in the late eighties?).
The whole thing needs to be examined from top to bottom, re-specified and re-coded.
No, the real bug is having a software development system that allows someone with insufficient experience to add code to a system that needs to be secure - and then not having a sufficiently robust review process in place - and then installing that software in a critical situation on huge numbers of servers around the world.
This bug is the sort of mistake beginners make (I believe the culprit was still at uni). I'd be embarrassed if I put a bug like that into a one-off throw-away lash-up. But somehow it got into OpenSSL, which everyone regarded as secure.
It's a bit like the debt-laundering that took place before the financial crash. Everyone thought the debt was solid, but simply because no-one bothered to look at the fundamentals. I think this incident has shown FOSS security to be based on similar principles.
>This bug is the sort of mistake beginners make
And experienced people as well, occasionally! I have seen bugs of a similar stupidity level made by long-timers (me included), sometimes in code that has been in use for years.
There is no room for any holier-than-thou attitudes in programming. Anyone can goof up, therefore processes must be in place to catch and limit the damage.
What I want to emphasize is that this starts with making sane specs that avoid unnecessary complexity (like the redundant length field).
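As a closing illustration of that redundancy, here is the heartbeat layout from RFC 6520 sketched as C structs (illustrative only; a real parser reads the wire bytes directly). The record layer already counted the bytes as they arrived, yet the message carries its own payload length, and Heartbleed is what happened when the two were allowed to disagree.

    #include <stdint.h>

    /* Outer TLS record: its length field is authoritative, because the
     * record layer counted the bytes coming off the wire. */
    struct tls_record_header {
        uint8_t  content_type;   /* 24 = heartbeat */
        uint16_t version;
        uint16_t length;         /* bytes actually received in this record */
        /* followed by `length` bytes of fragment */
    };

    /* Inner heartbeat message: payload_length repeats information the
     * record layer already has, and is attacker-controlled. */
    struct heartbeat_message_header {
        uint8_t  type;           /* 1 = request, 2 = response */
        uint16_t payload_length; /* the redundant, trusted-but-unchecked copy */
        /* followed by payload_length bytes of payload, then >=16 padding */
    };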