If it's timing-based
Would adding a few random µs into the TLS processing be enough to throw the timing detection off?
Two scientists say they have identified a new weakness in TLS, the encryption system used to safeguard online shopping, banking and privacy. The design flaw, revealed today, could be exploited to snoop on passwords and other sensitive information sent by users to HTTPS websites. Professor Kenny Paterson from the Information …
Yeah, there are no "login cookies" in the TOR protocol itself. They could mean login cookies for websites that are being transmitted over TOR, or they could be referring to nonces in the TOR authentication protocol that is used to prevent a corrupt first (entry) node from imitating all the subsequent downstream nodes.
Yes. Sort of. RC4 has its own problems as regards security. The cure in this case may be worse than the disease. This attack is for the most part theoretical bullshit in your daily shopping/facebooking/tweeting context since it needs a man-in-the-middle, and a very specific man at that. RC4 may be attacked from anywhere, and not necessarily interactively.
Malware doesn't have to be a .exe file. It could be JavaScript delivered to a perfectly sandboxed browser via some ad network, in which case it wouldn't have permission to capture keystrokes.
That said, it would seem rather trivial to add some random jitter to such responses from the server. That would close off this attack vector.
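Something along these lines, presumably; a minimal Python sketch (the connection object and the handler name are invented for illustration), assuming the aim is just to bury the microsecond-scale difference in noise:

import os
import time

def send_alert_with_jitter(connection, alert_record):
    # Sketch of the suggestion above: delay the response to a failed
    # padding/MAC check by a random amount of up to ~1 ms, so the tiny
    # processing-time difference is drowned in noise.
    jitter_us = int.from_bytes(os.urandom(2), "big") % 1000
    time.sleep(jitter_us / 1_000_000)   # time.sleep() takes seconds
    connection.send(alert_record)

The catch is that independent random noise averages out: given enough samples per guess an attacker can still recover the underlying difference, which is why the real-world fixes for Lucky 13 went for (near) constant-time MAC checking rather than jitter.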
Years ago, your web developer may well have been right... although "never" was obviously a bit of a stretch. I doubt any of us who were there at the beginning would have predicted the chaotic clusterfuck of myriad overlapping, ill-conceived and half-baked "standards" all those competing proprietors have managed to make of it.
>>Nothing is secure, it just has "levels of security"
>You definitely deserve an upvote for that statement alone. Most fail to understand or fully recognize this.
Further, security must be implemented like an onion: in layers. You might be able to peel away one layer, but then you hit another.
In fact, Security is an Ogre.
SHREK
No! Layers! Onions have layers. Ogres have layers! Onions have layers. You get it? We both have layers.
DONKEY
Oh, you both have layyerrss. Oh. You know, not everybody likes onions. Cake! Everybody loves cakes! Cakes have layers.
Remember folks - build security in layyerrss! And have some cake.
Also, remember that you can't hide secrets from the future with cryptography.
Even "levels of security" is misleading - it suggests a single dimension, ranging from completely insecure to greater and greater security (with "secure" as an asymptotic limit).
Security is really a matter of threat classes and their associated costs to the attacker. You pick the threats you're most concerned with (based on your threat model) and implement measures that increase the work factor and other costs until the attacker is better off choosing a different class of attack, or better yet a different target. So it's a graph, not just a line, and your goal is to eliminate the cheap paths from the root to anything an attacker might want.
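To make the "graph, not a line" point concrete, here's a toy Python sketch (every node, name and cost is invented for illustration): model each attack step as an edge with a cost to the attacker, then check what the cheapest route to the asset is. Hardening means making sure no route stays cheap.

import heapq

# Toy attack graph: node -> list of (cost to attacker, next step).
attack_graph = {
    "internet":          [(1, "phish an employee"), (50, "TLS side channel")],
    "phish an employee": [(5, "customer database")],
    "TLS side channel":  [(20, "customer database")],
    "customer database": [],
}

def cheapest_attack_cost(graph, start, target):
    # Plain Dijkstra: the defender cares about the *cheapest* path in.
    best = {start: 0}
    queue = [(0, start)]
    while queue:
        cost, node = heapq.heappop(queue)
        if node == target:
            return cost
        if cost > best.get(node, float("inf")):
            continue
        for step_cost, nxt in graph[node]:
            if cost + step_cost < best.get(nxt, float("inf")):
                best[nxt] = cost + step_cost
                heapq.heappush(queue, (cost + step_cost, nxt))
    return float("inf")

print(cheapest_attack_cost(attack_graph, "internet", "customer database"))  # 6 - phishing wins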
An attack like Lucky 13 has too high a cost (have to mount a MITM attack, gather a lot of side-channel data, etc) for an attacker to just use it against random targets in the hope of landing something of relatively minor value. As with most attacks against SSL/TLS, the attacker's better option is to find vulnerable web sites with rewarding data (eg credit-card information).
If memory serves me right, I've read about an ancient exploit that cracked passwords by timing how long it took the OS to reject them (they had to be checking character by character and rejecting at the first bad one).
One would think the guys working on such high-end stuff as heavy crypto would consider a reasonably measurable right/wrong response-time difference as an attack vector...
Or maybe they thought faster message processing for BoastingRights™ was more important than adding artificial jitter to make it safer...
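The broken pattern being described looks something like this (a purely illustrative Python sketch, not any particular OS's code): the check bails out at the first wrong character, so the rejection time leaks how many leading characters were correct.

import time

SECRET = "hunter2"   # made-up stored password for the demo

def naive_check(guess):
    # Rejects at the first mismatching character -- the flaw described
    # above: more correct leading characters means more work before reject.
    for i in range(len(SECRET)):
        if i >= len(guess) or guess[i] != SECRET[i]:
            return False
    return len(guess) == len(SECRET)

def time_guess(guess, rounds=100_000):
    start = time.perf_counter()
    for _ in range(rounds):
        naive_check(guess)
    return time.perf_counter() - start

# A guess that gets the first few characters right takes measurably longer
# to reject than one that doesn't, which is all an attacker needs to
# recover the password one character at a time.
print(time_guess("aaaaaaa"), time_guess("hunaaaa"))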
There was an OS that would inform the app that it had just caused a page fault. Its password-checking mechanism was strcmp() against the unhashed recorded password.
A program could place the first character of a trial password in the last byte of a page and wait for the next page to be swapped out. Then it called checkpassword(). If no page fault occurred, the first character was incorrect, and another one was tried. If a fault was caused, the first character of the password was correct, and the second character was attempted.
Thus, cracking an 8-character password went from 96^8 to 96*8 tries.
Oops.
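In other words, the page-fault trick turned the password check into a per-character oracle, and that is what collapses the search space. A quick Python sketch of the arithmetic (the oracle is simulated here, since the real one relied on the OS's paging behaviour):

import string

ALPHABET = string.printable[:96]   # roughly the 96 characters mentioned above
SECRET = "S3cr3t!x"                # made-up 8-character password

def oracle(prefix):
    # Stand-in for "did checkpassword() touch the next page?": it only
    # reveals whether the guessed prefix is correct so far.
    return SECRET.startswith(prefix)

recovered, tries = "", 0
for _ in range(len(SECRET)):
    for ch in ALPHABET:
        tries += 1
        if oracle(recovered + ch):
            recovered += ch
            break

print(recovered, tries)   # at most 96 * 8 = 768 tries
print(96 ** 8)            # versus 7,213,895,789,838,336 blind guesses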
There's a huge amount of literature on side-channel attacks, including timing, power consumption, etc. There are a lot of side channels that need to be blinded (often by whitening, ie adding noise). It's not easy for crypto designers to catch all of them.
I think the OS you're thinking of is TENEX.
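On the blinding point: besides adding noise, the other standard fix is to make the secret-dependent operation take the same time whatever the input. For string/MAC comparison that already exists in Python as hmac.compare_digest; a hand-rolled equivalent, just to show the shape of it:

import hmac

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Touch every byte and accumulate the differences, so the running time
    # doesn't depend on the position of the first mismatch.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

# The stdlib version is what you'd actually use:
assert hmac.compare_digest(b"s3cret", b"s3cret")
assert constant_time_equal(b"s3cret", b"s3cret")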
I'd sort of figured that by now the people who develop protocols (especially security-related ones) would have some kind of, I don't know, "regression test" of attacks that have accumulated over the decades such protocols have been built.
A sort of archive of stuff that breaks features, and the conditions under which the crack works (and when it fails).
Apparently not.
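Such a regression suite for timing leaks is at least conceivable. A toy version in Python (the handler under test, sample counts and threshold are all invented for illustration; a real harness would need proper statistics and a controlled environment):

import statistics
import time

def timing_samples(handler, payload, rounds=5_000):
    samples = []
    for _ in range(rounds):
        start = time.perf_counter()
        handler(payload)
        samples.append(time.perf_counter() - start)
    return samples

def assert_no_timing_leak(handler, good_payload, bad_payload, max_delta=1e-6):
    # Crude check: the median handling times for "valid" and "invalid"
    # inputs should not differ by more than max_delta seconds.
    good = statistics.median(timing_samples(handler, good_payload))
    bad = statistics.median(timing_samples(handler, bad_payload))
    assert abs(good - bad) < max_delta, f"possible timing leak: {good} vs {bad}"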
...then random packet loss and timing errors will be introduced by BT to such an extent as to make any packet-timing analysis of encrypted traffic meaningless.
/trying to get my home hub to serve wireless home networking without needing a reboot every 30 mins
//looking at BT packet shaping and contention ratio and thinking packet speed consistency is the least of their worries
"...use <insert variable service ISP here>..."
I was wondering how good your ISP has to be for this exploit to work.
If they need to measure microsecond-level timing differences, is the information superhighway (sorry, temporary 90s Clive James flashback...) really consistent and quick enough for most people to carry this out?
From what I see, this attack requires perfect conditions:
One client accessing a single server, with no other clients connecting to that server
*If the server is busy with another client, then the packet will be delayed and the timing will change.
An unencumbered router, or at least one that is perfectly consistent in moving packets
*Any slight delay could change the packet timing; even a simple CRC check would take a different length of time on different packets.
*Special features on the router may also delay the packet randomly
No specialized network equipment
*Load balancers, firewalls or NAT/PAT devices would add random delays due to processing; of course, a pair of load-balanced servers would show different timing even if they were otherwise identical machines with consecutive serial numbers.
All links exactly the same length
*If the packet is going over a set of bonded links (like nearly all ISPs and most companies have), a difference in cable length would delay the packet enough to defeat this attack
A network that isn't time-division based
*Crossing ISPs wouldn't work, nor sometimes would even staying within the same ISP. Cellular and 3G/4G networks wouldn't work either.
While this is good work and patches should be made for the affected products, an attack isn't practical outside of a lab.
> From what I see, this attack requires perfect conditions
Sigh. That's the first critique that's always leveled against side-channel attacks. Then someone goes and demonstrates how to increase the probability those conditions will hold, until they make the attack practical.
Seriously, people - did it ever occur to you that security professionals might have thought of this stuff before Joe Average Reg Commentator?
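In particular, network noise between attacker and victim isn't automatically fatal: if the jitter is roughly independent from sample to sample, averaging shrinks it by about the square root of the number of samples. A quick simulation in Python (all the numbers are made up):

import random
import statistics

TRUE_DIFFERENCE = 1e-6    # 1 microsecond of "signal" to detect
NETWORK_JITTER = 1e-4     # 100 microseconds of random noise per sample

def measured_mean(has_signal, n_samples):
    # Each measurement is the signal (or not) buried in jitter; the mean
    # of n samples has its noise reduced by roughly sqrt(n).
    base = TRUE_DIFFERENCE if has_signal else 0.0
    return statistics.mean(base + random.gauss(0, NETWORK_JITTER)
                           for _ in range(n_samples))

n = 1_000_000
print(measured_mean(True, n) - measured_mean(False, n))
# Prints roughly 1e-6, give or take ~1.5e-7, despite jitter 100x the signal.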
> While this is good work and patches should be made for the affected products, an attack isn't practical outside of a lab.
That's what people said about the attacks on Netscape's original PRNG. They were wrong.