
No public code review --> security by obscurity.
And we know how well that has worked in the past (I'm looking at you GSM, assorted garage door openers and car remote key system suppliers).
Apple's Secure Enclave, an ARM-based coprocessor used to enhance iOS security, became a bit less secure on Thursday with the publication of a firmware decryption key. The key does not provide access to the Secure Enclave Processor (SEP). Rather, it offers the opportunity to decrypt and explore the otherwise encrypted firmware …
The Secure Enclave runs a variant of the L4 microkernel, one version of which (seL4) was proven secure using formal methods. No one knows if Apple performed the same kind of analysis on SEPOS, but they have clearly given serious thought to its design.
Code reviews don't happen much in open source land. Maybe someone gives it a glance once every 10 years if it's important enough. Otherwise it seems to be a matter of "seems to work, must be cool".
All OSes and software start off fairly rubbish when the hackers first start hammering away at finding flaws. Look at the history of Windows, Mac OS, iOS, Android, Linux. Those that have survived the fiery attentions of the hackers are now pretty robust.
Linux, which has a pretty long CVE list, suffers from an inconsistency of approach, ranging from one bunch like the kernel devs, who will fix flaws, to other bunches like the systemd developers, who refuse to consider addressing some security mistakes at all.
So when it comes to the question of "is it secure?" there's no special practical advantage for open source. Its track record is pretty poor.
So when it comes to the question of "is it secure?" there's no special practical advantage for open source.
Err, apart from the advantage that you can actually look at the source to answer the question, if it is important to you...
But (as I think closer to the point you were trying to make) that doesn't - by itself - mean the answer is any more likely to be "yes".
> Except as a 'normal user' just because i can look at the source code, doesn't mean it makes any sense to me, or that i can fix it.
Indeed. A couple of years back, a team of researchers completed their audit of TrueCrypt, an Open Source application. A team of them. And it took them some time. What hope a normal user? Given this, the principle of a user being able to audit code (or rather the whole kaboodle of software and hardware) starts to appear a tad dogmatic.
Since Apple's selling-point is partially built atop a reputation for security, it is in their interests to have been thorough - and pay an internal team or two to review their code. That is not to say they are infallible, of course.
"...the principle of a user being able to audit code ... starts to appear a tad dogmatic."
Maybe so - but just because it isn't necessarily easy to do doesn't make the effort pointless.
"pay an internal team or two to review their code"
"Security by Obscurity" - backed up by "Audit by Obscurity"
"That is not to say they are infallible, of course."
Or open to "persuasion" to come up with the "right" result...
"Except as a 'normal user' just because i can look at the source code, doesn't mean it makes any sense to me, or that i can fix it."
Except the point of the code being open ISN'T that you, a 'normal user' can look at it (well, you can if you feel like you can grok it but it likely won't help you much). The point is that someone independent, OTHER than the manufacturer can look at it and point out flaws that the manufacturer might not be inclined to look for, fix, or publish all that much. Open source is not some golden guarantee of flawlessness, but rather a guarantee that IF something is important or interesting enough to attract scrutiny, the flaws can be found. And you're indeed free to do that yourself, if you happen to be a security researcher, or able to hire one - otherwise you'll just have to rely on other interested partiers. No more, no less. But that's not the same thing as openness of the code being useless to you, as a 'normal user'.
So when it comes to the question of "is it secure?" there's no special practical advantage for open source. Its track record is pretty poor.
I would agree but I would also observe that, in contrast, its remediation record is hard to beat. When something shows up, interim fixes start showing up almost by the time you've read the report (the tricky bit is making sure you draw from a reputable source, criminals aren't stupid) with formal fixes fairly shortly afterwards.
Security by Obscurity is a useful technique as long as it's used in conjunction with other forms of security. It's just pretty useless on its own.
E.g., if you move your SSH daemon to a random port, it won't make you more secure per se, but it will cut down on the number of automated attempts to break into it.
Better to have a real encryption system. Not that that can be a perfect solution, but I'd rather have that than something that just pretends to be a good solution.
Fair enough, but where did you read it wasn't a real crypto system?
If I hang a painting in front of a safe it's still a safe, I only added a movie cliché :).
"Isn't it a fundamental principle of encryption..."
Indeed: Kerckhoffs' principle
That's not what Kerckhoffs said. The principle is that a system should still be secure even if all details (other than the key) are public. He never said that the details should be public.
Denying an adversary access to the means of encryption is a valuable tool. It raises the bar considerably. For example, Bletchley had Enigmas fairly early on, but they never had a Lorenz machine. Until the Germans made an operational mistake, Bletchley hadn't enough information about the workings of the machine to be able to attack it. The mistake allowed Bletchley to infer the crypto scheme used by the machine from just one intercept (it contained the same message twice). Once they had that, they realised the scheme was pretty good, but slightly flawed. And then Tommy Flowers built Colossus.
By extension, a very good trick to pull off is to arrange for the adversary to be unaware of the communication in the first place. If he's not looking, you've already won. It's worse than security through obscurity (how do you know they're not even looking!?!?). But if achieved, it's a real result. Steganography anyone?
Anyway, researchers can now look at Apple's machine. If they've been paying attention to Kerckhoffs they'll be OK.
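Since the thread brought it up: here's a toy illustration of the steganography idea in a few lines of Python. Everything in it (the function names, the byte-string "cover") is made up for illustration; real steganography hides data in image or audio samples and worries about statistical detectability, which this sketch doesn't.

```python
# Toy LSB steganography: hide a message in the least-significant
# bits of a "cover" byte sequence (a stand-in for raw image samples).

def hide(cover: bytes, message: bytes) -> bytes:
    """Overwrite one LSB per cover byte with the message bits."""
    bits = [(b >> i) & 1 for b in message for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # keep 7 high bits, swap in message bit
    return bytes(out)

def reveal(stego: bytes, n_bytes: int) -> bytes:
    """Reassemble n_bytes of message from the LSBs."""
    bits = [b & 1 for b in stego[:n_bytes * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes)
    )

cover = bytes(range(256))          # pretend this is image data
assert reveal(hide(cover, b"hi"), 2) == b"hi"
```

Each cover byte changes by at most 1, which is the whole point: an adversary who isn't looking sees nothing worth looking at.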
Has the UK stopped making it illegal to refuse to hand over "passwords'? Or do PINs not count as passwords?
As for the US it may not be legal to demand your PIN, but if the friendly officer merely politely asks, while swinging his baton ever closer to your head, well, nothing wrong with that, right?
Also, the "no effect on security" is not quite true. When a black-hat examines that code and discovers a vulnerability either hoarded by the TLAs, or planted by their moles, you can bet that security will be affected.
That is not my fern.
Ah yes, Peter Sellers' wonderful dog scene :).
For more than a decade I had the pleasure of living in the same house as the late godchild of Peter Sellers, who had a similarly fantastic sense of humour. It meant I grew up on a diet of Spike Milligan, Harry Secombe, Peter Sellers and everything that followed afterwards, like the Cambridge Footlights with the world's best pronunciation of the exclamation "Ooh shit" (by Stephen Fry in "The Letter", 6:37 onwards).
I love all sorts of humour, but I think it all has its roots in those days.
Essentially whatever you do, you'll always get to the point where you'll need to expand your PIN into the key used to encrypt your memory. Everything needed for that has to be stored on the device and can, in principle, be read out.
So the security hinges on the PIN, and since you cannot enter complex alphanumeric passphrases on a touchscreen, you're essentially left with a short 8 digit numeric PIN, often even shorter than that.
So essentially every moderately advanced attacker can just read out the "security enclave" and emulate it to try out all the PINs.
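As a back-of-envelope check on the "try out all the PINs" claim — no attack here, just counting candidates, assuming a uniformly random numeric PIN and an attacker who has bypassed the rate limiting:

```python
import math

# Size of the search space once PIN guesses are "free" (rate
# limiting bypassed). Illustrative arithmetic only.
for digits in (4, 6, 8):
    space = 10 ** digits
    print(f"{digits}-digit PIN: {space:,} candidates "
          f"(~{math.log2(space):.1f} bits)")
# 4-digit PIN: 10,000 candidates (~13.3 bits)
# 6-digit PIN: 1,000,000 candidates (~19.9 bits)
# 8-digit PIN: 100,000,000 candidates (~26.6 bits)
```

Even the 8-digit case is trivial to exhaust offline, which is why everything hinges on the enclave's hardware rate limiting staying unbypassed.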
"So essentially every moderately advanced attacker can just read out the "security enclave" and emulate it to try out all the PINs."
So how exactly do they do that? There's limited communication between the "security enclave" and the main CPU. It has its own processor and storage. Your hacker may be able to see the source, but in order to be able to brute force the system they need to be able to snapshot the full state of the enclave and restore it on failure. The hardware doesn't support that.
"So how exactly do they do that? ... The hardware doesn't support that."
There is a thing called Focussed Ion Beam microscope
https://en.wikipedia.org/wiki/Focused_ion_beam
It allows you to cut through the layers of a chip and add new wires to it. So essentially you can get to the connections of the internal memory of those chips, unwire them from the internal CPU and connect them via microprobing to an external device which reads it out.
Which is something the Dutch claim to be able to do:
https://youtu.be/AVGlr5fleQA?t=34m23s
"they need to be able to snapshot the full state of the enclave and restore it on failure."
Actually depending on how it's done, just glitching the power at the right time could prevent the chip from storing its new state.
"There is a thing called Focussed Ion Beam microscope"
You expect them to attach 30+ wires somewhere in the middle of a billion+ transistor chip running at hundreds of megahertz, without affecting timings or state? You also expect the metal layers in the Enclave area to make that easy? (There are typically 6+ layers in a modern chip.)
I think you overestimate the capabilities of these folks, especially given that YouTube video targeted a PIC32, which is fabbed on a 250 or 130nm process, more than an order of magnitude larger than the latest silicon processes.
There is a thing called Focussed Ion Beam microscope
It allows you to cut through the layers of a chip and add new wires to it. So essentially you can get to the connections of the internal memory of those chips, unwire them from the internal CPU and connect them via microprobing to an external device which reads it out.
Not so fast. The clip you link to shows a regular chip whose package was removed to get to the die surface. It depends on how deep Apple has gone with its protection, but I've worked with secure chips from Atmel which had a wire mesh built in precisely to prevent this sort of top shaving (and were better shielded too), and embedding an anti-tamper wire in the component casing isn't that hard either.
Apple's been at this for a while so it's not too wild to assume they may have addressed this.
" you cannot enter complex alphanumeric passphrases on a touchscreen"
Err, why not? I can enter almost all the characters on my phone that I can on my keyboard.
My most important passphrase has about 77 bits of entropy (I can be that precise because of the way I generated it). I enter it on my phone. (It actually only consists of lower-case ASCII, but length is more important than character set, and Password123! is not a secure password.)
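For what it's worth, the arithmetic holds up. Assuming each character is drawn uniformly at random from the 26 lower-case letters (the actual generation method isn't stated, so this is only an estimate):

```python
import math

# How long must a random lower-case passphrase be to clear 77 bits?
bits_per_char = math.log2(26)               # ~4.70 bits per letter
chars_needed = math.ceil(77 / bits_per_char)
print(chars_needed)                          # 17
```

17 uniformly random lower-case letters give roughly 80 bits, so a 77-bit passphrase of this kind is entirely typeable on a phone keyboard.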
I guess I'm just imagining that I've been using an alphanumeric passphrase on my iPhone since I got a 3GS? Why in the world do you think you are limited to only digits? Maybe your phone is; if so, choose better next time.
If someone is going to use a million dollar piece of equipment to access your secrets, they are so important you should consider hiring goons to protect you, and carry a Blendtec to physically destroy your phone before anyone can get their hands on it :)
If hardware on that level is a "moderately advanced attacker" I'd hate to hear what you think a "very advanced hacker" is capable of... Mind control? Antigravity? It sounds like you're trying to make the argument that the secure enclave isn't perfect security. If so, you're right. But it sure as heck protects you against ordinary cops or a private investigator getting hold of your phone. They would be completely helpless trying to access it.
So essentially every moderately advanced attacker can just read out the "security enclave" and emulate it to try out all the PINs.
Dammit. Apple just spent several man YEARS developing this stuff and you broke it already!
Or maybe not.
The PIN yields an access key to a storage container. THAT key is the full monty, 32 or 64 bits wide. The Secure Enclave gives you up to 10 shots at a password that will convince it to cough up the access key, so that's a 1 in 100 chance for a 4 digit PIN, a 1 in 10000 chance for the new 6 digit default and a 1 in <god knows> chance if an alphanumeric password is used. After that it's game over and you can entertain yourself trying out all the 32/64 bit wide keys and grow a grey beard whilst trying.
The only way you get in there faster is the XKCD $5 wrench technique, or lifting a film off the shiny case and seeing if any of the fingerprints match - there is no limit on the number of tries for that, and prints can be faked using Tsutomu Matsumoto's (et al) gummy fingers approach. That's why I do NOT use fingerprint biometrics unless I know the reader is high resolution (IMHO the iPhone one is not); it is too easy to get hold of the required prints.
The security enclave gives you up to 10 shots at a password that will convince it to cough up the access key, so that's a 1 in 100 chance for a 4 digit PIN, a 1 in 10000 chance for the new 6 digit default
Even harder than that... Your maths is out by a factor of 10! :-)
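Redoing the grandparent's sums with the factor of 10 put back (assuming a uniformly random PIN and 10 distinct guesses):

```python
# 10 guesses against a uniformly random n-digit PIN succeed with
# probability 10 / 10**n, i.e. 1 in 10**n / 10.
tries = 10
for digits in (4, 6):
    space = 10 ** digits
    print(f"{digits}-digit PIN, {tries} tries: 1 in {space // tries}")
# 4-digit PIN, 10 tries: 1 in 1000
# 6-digit PIN, 10 tries: 1 in 100000
```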
Why would the resolution of the reader matter? If you have good enough prints on the phone (glass is good like that) you could make a high resolution fingerprint from it. Biometrics are inherently insecure, you leave fingerprints wherever you go, your iris and face are exposed to the view of cameras (hidden and in plain sight) all the time. As the saying goes, they're a username not a password.
That's why Apple's "cop mode" in iOS 11 is handy. Hit the power button five times and the phone can't be unlocked via biometrics, but only via the PIN. Just make sure you do it before the cop yells "hands up" if in the US, or he'll shoot you and claim he thought you were going for a gun...
Why would the resolution of the reader matter? If you have good enough prints on the phone (glass is good like that) you could make a high resolution fingerprint from it
Nope. There is always a delta between the original and a copy unless you go through very controlled conditions of replication; a casually made copy won't work. Low res readers accept a fair degree of noise from a read, whereas a high res reader is FAR more picky. The stuff you see on TV where someone takes a print off a glass and uses it to open a door lock? Not happening with a high res reader.
A high res reader does not only see ridges but also pores, abrasions, dust, nicotine stains - the works. That's also why they are only used in high-security situations: they are at times a &^%$ pain to pass (the classic example is people's moist hands when it rains), and over the years I've seen most of them replaced with contactless vein pattern readers, the preferred ones based on Fujitsu chips which do palm reading (but, as yet, don't tell your future in the process). Hitachi makes devices that read finger vein patterns, which is also an interesting approach. Neither leaves any residue, nor is it easy to even record, let alone clone, such a pattern. Anyway, I'm digressing.
Last but not least, most of the high end readers also tend to read capacitively (absorbed radio energy), and a gummy finger won't even register on those.
There's a difference between "high resolution" and "high end". Even supposedly high end fingerprint readers have proven easy to fool by various methods, which is why they have gone to reading the inside e.g. vein patterns and the like for high security needs.
And then someone will find a way to use a good 3D printer to create a passable jelly finger, complete with heat and flowing fluids if need be. And then they'll find a way to make it cheap.
You're moving towards that XKCD cartoon again - that's just too costly and complicated. It's much cheaper to go after the person in possession of the correct biometrics by paying them, compromising them or threatening their family (or all of the above together), or to bribe the security staff.
What if you can't give it because it hasn't been given to you, and won't be until you reach the meeting place: a contract negotiation with a prominent firm who will no doubt have some things to say to Washington should the agent they're expecting to close the contract not be allowed through tout de suite?
IOW, there IS such a thing as traveling with a locked box WITHOUT possessing the key which will be transported separately.
If I was traveling to the US from overseas, I'd wipe my phone before crossing the border. Tell them yours broke yesterday and you picked up a replacement on the way to the airport. Then you can sync to the cloud or restore from backup after you're through customs.
They're never going to catch terrorists this way, they aren't going to cross the border with a phone that has text messages from "ISIS commander" saying "your holy mission is a go for tomorrow at 4pm. Allahu Akbar!"
"The key does not provide access to the Secure Enclave Processor (SEP). Rather, it offers the opportunity to decrypt and explore the otherwise encrypted firmware code that governs it, affording security researchers and other curious types a chance to learn more about how the technology works."
Am I right in understanding that the key provides the opportunity to decrypt the code that governs the SEP? What does it actually unlock? Wouldn't it give researchers an open field to do more than just learn - like finding new ways to actually gain access to the SEP?
What a f**k up that was.
Cut and pasting both the hardware (MIPS chip) and its software without any apparent review, including the no-password-needed management account "feature."
As others have noted, the PIN (code number, whatever) should be the only shared secret.
IOW even if "Master Cracker X" has the code, the system remains secure because it has no flaws (note that qualification: if they have physical access to the device it's game over) that can be exploited by sending it "hand crafted" packets of whatever, or by inducing it to send packets that can then be analysed.
Time will tell if this is indeed the case.