Impressed by the Dutch Fuzz
Congrats!
Then again, the crooks should have been using a p2p implementation.
Dutch police claim to have snooped on more than a quarter of a million encrypted messages sent between alleged miscreants using BlackBox IronPhones. The extraordinary claim was made in a press conference on Tuesday, in which officers working on a money-laundering investigation reckoned they had been able to see crims chatting …
If the cops compromise the people running the service, they could simply modify the P2P software to send copies to a central server and push the update.
How many crims are going to sniff their outgoing traffic and figure that out? And if they do, how many will still be suspicious when they call support and are told the stuff being sent to the central server is harmless diagnostic information, to enable them to improve their software?
Standard fare in Android, from the Guardian Project (no relation to the Mancunian rag).
Central server is a massive no-no, as even without breaking the encryption you have access to all the metadata.
"Custom" OTR implementation?
€3,000 per year plus the douchey name tells you all you need to know about a) the security of the product and b) the credulity of the target customers.
Without evidence of criminal activity they seized the servers.
Without evidence, you've assumed this. The article mentions a drug lab; it is likely they already knew, or at least suspected, they were involved with drugs, and if you're a drug dealer you obviously have to be involved in money laundering, so...
Depending on the circumstances under which they seized the servers, they might be able to look at all their customers, if those circumstances made it likely they were mostly criminals (e.g. the phones were sold on a dark web site that is invite-only for drug dealers), or they might only be able to target certain individuals whom they have other reasons to suspect.
@Steve 53
"Well, yes, but I'd say paying €1.5k for 6 months with a phone with "unbreakable encryption" and "a panic button if you get nabbed by the fuzz" is probably reasonably grounds to suspect it's not just a private conversation about what groceries to bring home."
Then why dont the police go out and arrest anyone driving a car that has an engine larger than a 1.6?
Honestly, anyone wanting acceleration from an engine greater than 1.6L is intending to speed, possibly while out-running the police after robbing a bank or kidnapping a child.
I saw someone driving what looked to be a Morgan recently. An expensive wooden car with a high top speed and huge acceleration! I shook my head as I drove my Hyundai Getz 1.6 (the "I'm innocent" limit), thinking of how many horrible crimes he must be involved in.
Why are fast cars on the market?
Why don't the police wiretap the phones of those who purchase them?
In a country that has a speed limit of 70/80 mph there is totally no need for anyone to even sit in one of these crim-cars unless it's on a track and the driver has a special licence, like a gun owner would.
Use your common sense, man.
Although I agree with the general sentiment, they also could have just grabbed the customer list and listened in on their conversations in the 'traditional' way using a directional microphone and a court order to monitor a person suspected of committing a crime.
This wholesale grab of all data just rubs me the wrong way.
Someone like Snowden could be using this service, and with the Dutch government usually bending over backwards to US interests, it wouldn't surprise me if this were abused.
>Not at all. Probable cause (in US speak) is not the same as being declared guilty, that is the prerogative of the court.
Not really. That is the prerogative of the jury unless the defendant waives his/her right to a jury. A judge can give a directed verdict of not guilty, but cannot declare guilt.
"I believe it was 11 September 2001 if not before."
No, that was when the US learned that what goes around comes around and that terrorism wasn't just something that happened on an island across the Atlantic and was probably harmless anyway so there was nothing wrong with contributing a few dollars here and there.
"that was when the US learned that what goes around comes around and that terrorism wasn't just something that happened on an island across the Atlantic"
No, we already knew that from all the terrorist actions that came before 9/11. What the US re-learned from 9/11 was how easy it is to amplify and leverage fear in the population so that the government can get away with performing atrocities that would have otherwise been politically impossible.
If Dutch police have cracked this supposedly-secure communication channel, announcing it will serve to kill the channel and drive its users to an alternative.
As if Bletchley Park had announced to the world that they'd cracked Enigma. Which might have materially affected the War.
Dutch police presumably realise this, so it must be intentional. Why? It's a pretty high-value resource to give up!
The article mentions that they wanted to prevent retaliation within the group.
If they had this information then I imagine that the Dutch police could be considered at fault if they did not act, particularly if innocent 3rd parties were caught up in the potential attacks.
Usually authorities admit to stuff like this for one of two reasons. One, word has leaked that this happened (i.e. the guys who were arrested figured it out, or a cop on the take ratted them out) so there's no harm in making it public. Two, they will need to present evidence in court where they will have to disclose how they obtained the information so the cat's out of the bag if they want to get convictions.
.....so.....
1. Was the hosting company in bed with plod all along?
2. Same question applies to ALL public-server-based communication.
Maybe we need much more (privately encrypted) peer-to-peer communication, and far fewer public-server-based services.
Oh....and internet cafes also help!
So, not only were the comms not encrypted end-to-end, as is often claimed, but, if I understand correctly, there was no way to securely exchange encryption keys, e.g., at a personal meeting between Alice and Bob, to prevent MITM.
I have a distinct impression that the vaunted "end-to-end encryption" of WhatsApp, Telegram, etc., suffers from the same kind of flaw.
> So, not only were the comms not encrypted end-to-end
It's quite possible they were end-to-end encrypted *before* the Dutch Police got their hands on it, but relied on the server to aid in key exchange (or perhaps to specify some other important element).
If that's the case then they may have adjusted the server so that the clients unknowingly did KEX with the server instead (so that it could MITM).
Even then, though, you'd hope that 2 clients that had seen each other before would then warn their owners that the other end's key seemed to have changed. The various "standard" OTR plugins you get for various apps all do at least that.
> if I understand correctly, there was no way to securely exchange encryption keys, e.g., at a personal meeting between Alice and Bob, to prevent MITM.
I read it that way too - or at least, if there was a way it wasn't widely used (and probably wasn't the default).
That's fairly common amongst OTR libraries, though; some won't even let you import keys from another system (so if you have multiple devices you end up with multiple 'identities'), so it's probably not too surprising.
Most, though, do provide a fingerprint for you to verify out of band (others let you use a challenge/response mechanism, again out of band), and will show the fingerprint as unverified until you've told them otherwise. Perhaps that got dropped while they were customising it?
Can't find an awful lot of information on their implementation on the net, but with the very limited information that is available it does sound like they customised OTR and made it worse.
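For anyone curious what "unverified until you've told it otherwise" looks like in practice, here's a minimal trust-on-first-use sketch in Python (the store file and function names are made up - purely illustrative, not IronChat's code):

```python
# Minimal sketch (not IronChat's actual code): trust-on-first-use pinning of
# a peer's key fingerprint, with the "key changed" warning standard OTR
# plugins give you.
import hashlib
import json
from pathlib import Path

STORE = Path("known_keys.json")  # hypothetical local pin store

def fingerprint(pub_key_bytes: bytes) -> str:
    """Short, human-checkable fingerprint of a raw public key."""
    digest = hashlib.sha256(pub_key_bytes).hexdigest()
    return " ".join(digest[i:i + 4] for i in range(0, 40, 4))

def check_peer_key(peer_id: str, pub_key_bytes: bytes) -> str:
    """Pin the key on first sight; warn loudly if it ever changes."""
    known = json.loads(STORE.read_text()) if STORE.exists() else {}
    fp = fingerprint(pub_key_bytes)
    if peer_id not in known:
        known[peer_id] = fp
        STORE.write_text(json.dumps(known))
        return f"UNVERIFIED key for {peer_id}: {fp} - verify out of band!"
    if known[peer_id] != fp:
        return f"WARNING: key for {peer_id} has CHANGED - possible MITM"
    return f"Key for {peer_id} matches the pinned fingerprint"

if __name__ == "__main__":
    print(check_peer_key("alice", b"\x01" * 32))  # first contact: pin it
    print(check_peer_key("alice", b"\x01" * 32))  # same key: fine
    print(check_peer_key("alice", b"\x02" * 32))  # new key: warn the user
```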
@Ben_Tasker
Why is there always an assumption that encryption on the internet can only mean ALL these things:
- users are using public-server-based communications (e.g. email)
- users depend on the public-server(s) for encryption
- each specific communication has an identifiable sender
- each specific communication has an identifiable recipient
In the place of these assumptions, suppose users did it differently:
- put in place a private cipher system (say a book cipher)
- the sender publishes a cipher message from an internet cafe using, say, The Register Comments as an Anonymous Coward (or using a fake identity on FB....)
- the recipient picks up the message in another internet cafe
In these alternative circumstances:
- it will be hard to identify the sender
- it will be even harder to identify the recipient
- ....and that's before the curious out there try to break the private cipher (irrespective of any end-to-end encryption provided by the services provided over the internet)
What am I missing here?
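For concreteness, here's a toy sketch of that book-cipher idea (the shared book file name is made up; a real scheme would also have to handle words missing from the book, padding, and authentication - this only shows the mechanics):

```python
# Toy book cipher: both sides hold the same text (any agreed edition); the
# posted "message" is just a list of word positions in that text.
import secrets

BOOK = open("shared_book.txt", encoding="utf-8").read().lower().split()

def encode(plaintext: str) -> list[int]:
    indices = []
    for word in plaintext.lower().split():
        positions = [i for i, w in enumerate(BOOK) if w == word]
        if not positions:
            raise ValueError(f"'{word}' not in the shared book - rephrase")
        indices.append(secrets.choice(positions))  # vary the position each time
    return indices

def decode(indices: list[int]) -> str:
    return " ".join(BOOK[i] for i in indices)

# The sender posts e.g. "112 4031 77 950" as an Anonymous Coward comment;
# the recipient decodes it locally in another cafe:
#   decode([int(n) for n in "112 4031 77 950".split()])
```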
"Even then, though, you'd hope that 2 clients that had seen each other before would then warn their owners that the other ends key seemed to have changed."
This is a case of hanged if you do and hanged if you don't. If you use the same key all the time, any messages which have been intercepted and stored in the past can be decrypted if the key is later compromised (which is harder if the server didn't store the key), but you can tell if the key has been changed. If you use a different key each time, then past messages are safe, but the key exchange is susceptible to a MITM attack if the server is compromised.
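For what it's worth, the usual way out of that bind (in OTR/Signal-style designs - no idea what IronChat actually did) is to split the roles: a long-term identity key that you verify once and that only ever signs fresh per-session keys. A rough Python sketch using the pyca cryptography package:

```python
# Rough sketch of combining key continuity with forward secrecy: pinned
# long-term identity keys sign throwaway ephemeral keys. Illustrative only.
from cryptography.hazmat.primitives.asymmetric import ed25519, x25519
from cryptography.hazmat.primitives import serialization

def raw(pub) -> bytes:
    return pub.public_bytes(serialization.Encoding.Raw,
                            serialization.PublicFormat.Raw)

# Long-term identity keys; their fingerprints get verified out of band, once.
alice_id = ed25519.Ed25519PrivateKey.generate()
bob_id = ed25519.Ed25519PrivateKey.generate()

# Fresh per-conversation keys, each signed by the owner's identity key.
alice_eph = x25519.X25519PrivateKey.generate()
bob_eph = x25519.X25519PrivateKey.generate()
alice_sig = alice_id.sign(raw(alice_eph.public_key()))
bob_sig = bob_id.sign(raw(bob_eph.public_key()))

# Each side checks the signature against the *pinned* identity key before
# trusting the ephemeral key - a server swapping keys in transit fails here.
alice_id.public_key().verify(alice_sig, raw(alice_eph.public_key()))
bob_id.public_key().verify(bob_sig, raw(bob_eph.public_key()))

# Both sides derive the same session secret; the ephemeral private keys are
# discarded afterwards, so old sessions can't be decrypted later.
assert (alice_eph.exchange(bob_eph.public_key())
        == bob_eph.exchange(alice_eph.public_key()))
```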
"I have a distinct impression that the vaunted "end-to-end encryption" of WhatsApp, Telegram, etc., suffers from the same kind of flaw."
Whatsapp uses the Signal protocol, adopted from the Signal chat app, which is fully end-to-end encrypted with MITM protection. But as it's now owned by Facebook, we might find something changes eventually.
Telegram has always been broken. They were audited and failed as they had "rolled their own" crypto, which you simply don't do. Telegram has the marketing but not the features. Its end-to-end encryption is off by default and it relies on a homegrown encryption method that is considered buggy and untested.
Use Signal, or something that implements the Signal protocol. Or Threema which is also good.
The best thing to do is listen to the EFF and Edward Snowden when they make recommendations. It's worth noting that the EFF have stated they have serious concerns over Telegram. Edward Snowden uses Signal almost exclusively.
Signal is also entirely licensed under the GNU GPL v3 and GNU AGPL v3. Unlike Telegram which has only parts licensed in any "open source" way.
Now politicians will finally have an iron-clad excuse to get backdoors into encryption. Look how it helped in the Netherlands, they'll say.
While I applaud the results, I fear for our encryption. The Dutch police didn't backdoor anything, they got a warrant, seized the server, and did their business. That's legal. Backdooring encryption for the purpose of snooping on everyone all the time is not only illegal and impossible, it's also highly immoral.
"they got a warrant, seized the server"
There's no mention of a warrant in the article. Even if they did get one, all the traffic through the service was still compromised. There seems to be a presumption that all the traffic was illegal. If you were using the service to conduct a confidential but legitimate negotiation - say a merger - you now know the Dutch police had access to it. They were snooping on everyone, at least everyone using the service. Who, apart from the Dutch police, knows what legitimate stuff has been compromised?
If you were using the service to conduct a confidential but legitimate negotiation - say a merger - you now know the Dutch police had access to it.
You're thinking of a merger of a romantic and sexual nature there. With the daughter of an over-protective but volatile Dutch cop who you suspect might ignore the Rules when it gets personal ...
The ab-so-lu-te first thing you must decide on when you create secure services is how you are going to deal with criminals using the service - if you do it right, it WILL become a problem you have to address.
This is why I laugh at idiots stating they'll protect you from the law: that's not why you offer security. You offer security to ensure the law properly follows due process, and so that other 3rd parties don't gain access. You cannot, however, change the law so that a warrant no longer has any power - that's just stupid - and you should ask yourself why you run such an operation: surely you don't do that to support criminals?
What's a "criminal"? In real life it's just about "people who further my own interests" and "people who might harm my own interests". The actual difference between a despicable terrorist/criminal and a heroic freedom fighter only depends upon where _you_ stand towards the cause he pretends to be fighting for. *
What I'm saying here is that there is no way any service provider can filter out "criminal" elements, much like no crowbar manufacturer can make sure his products can't be used to break in (or bash someone's head in). Pretending otherwise is either naive or disingenuous.
*This is *not* an apology of crime or a glorification of self-serving attitudes, just a call to be more honest. Call things by their real names, like "People who endanger society as I would like it to be", or even just "People I don't like seeing around".
:-p
> if the Dutch peelers were worried about drugs, I'd have to say the "crime" must have been a biggie,
The Dutch approach is pragmatic: Narcogangs are in it for profit, not for the drugs - and they tend to be rather ruthless about keeping that profit, which is where civilians get hurt, so they do stomp on organised crime quite hard when they find it.
That said, whilst the rest of the world has recently been waking up to the fact that the war on drugs has only succeeded in making narcogangs more profitable (and more dangerous), the Dutch have been under relentless political pressure to tighten up on drug laws and availability of coffeeshops. Ironically most of the more irritating problems they have are caused by German and French addicts nipping over the border, not by locals.
If that's all they cared about they'd simply legalize drugs and tax them. Even at triple the normal VAT they'd be far cheaper than they are now, because the risk of being arrested puts a "crime premium" on drug prices. The more they crack down on them, the higher the crime premium is and the more you pay for a joint or a line of coke.
"You offer security to ensure the law properly follows due process"
I disagree. "Security" that can be breached by law enforcement is not secure, if for no other reason than that if law enforcement can get access, so can others.
"surely you don't do that to support criminals?"
There are excellent reasons to run such a service that aren't related to supporting criminal activity.
You seem to think rule 1 is "let cops with a warrant snoop". If you set up a system where you cannot snoop because the communication is from one phone to another, you can tell the cops with the warrant "sorry, we can't help you".
Short of them trying what the FBI tried with Apple - making you push an update that changes that behavior - it is safe for criminals.
The reason most things aren't safe is that making a 100% secure communications channel is REALLY hard. Even with the unlimited resources of Apple, Google, and Microsoft they don't always get security right, so what chance does some guy selling a fly-by-night app have?
"The ab-so-lu-te first thing you must decide on when you create secure services is how you are going to deal with criminals using the service"
In principle this is no different to any other communication system. How are you going to deal with criminals using the telephone? The post? A taxi? A courier?
There's a trade-off between investigating crime and keeping innocent communications confidential. Perhaps the first consideration in deciding that is to ask what proportion of communication is innocent and what isn't. If you decide that the bulk of communication is innocent and deserves to be confidential, then you have to forego bulk interception and accept that law enforcement is going to be harder. It's a temptation that everyone, from politicians downwards, has to steel themselves to resist.
A server in the middle that reroutes messages does not have to be a weakness as the article claims.
If proper end-to-end encryption is used, it doesn't really matter who sees the encrypted (cypher text) messages or not.
What seems to be the case here is a combination of a central server with an apparently not-so-well-implemented end-to-end encryption system, or the use of weak cryptographic algorithms.
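To put the first point concretely: with end-to-end encryption done properly, the box in the middle only ever carries opaque bytes. A minimal sketch using PyNaCl (purely illustrative - not how IronChat actually worked):

```python
# With proper end-to-end encryption, seizing the relay gets you ciphertext
# and metadata, not messages. Minimal PyNaCl sketch, illustrative only.
from nacl.public import PrivateKey, Box

alice_key, bob_key = PrivateKey.generate(), PrivateKey.generate()

# Alice encrypts to Bob's public key on HER device.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at 9")

# The relay server just shuffles bytes around - this is all it ever holds.
relayed = bytes(ciphertext)

# Bob decrypts on HIS device with his private key.
print(Box(bob_key, alice_key.public_key).decrypt(relayed))  # b'meet at 9'
```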
One reason for this could be that the people selling the phones and subscriptions have enough evidence on their clients to use against them if they start getting nasty.
As most have a criminal intent, at least according to the Dutch police presser, it would be a nice backstop mechanism.
"that box could have been set up to decrypt and re-encrypt messages on the fly"
If that's possible, then IronChat wasn't really doing end-to-end encryption, at least not in a meaningful way. The server should not have had the keys to the data streams, and so it should not have been possible for the server to do something like this.
The article fails to explain on what grounds the people running the server were arrested and on what grounds the server was seized. AFAIAA it is perfectly legal in Holland to run an encrypted messaging site, just as it is legal to communicate using PGP. So what laws had the people running the server broken? I would have thought they were making sufficient money from selling such expensive hardware & services that they would not have needed to be involved in anything criminal.
How many of the users verified that what was running in the IronPhone was what was expected to be running in the IronPhone, and was correctly implemented?
Anyone with a smartphone gets a lot of "updates", so your IronPhone has an update for 'security' and what do you do? Leave the app running a low entropy key? Apply the potential plod back door?
At least AES-256 super-encipher using a separate app (if you trust it)... on a separate HSM device so the keys are not surreptitiously purloined or seized... yeah, key exchange is a pain, but better than Bubba the bunkie.
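Something along these lines, presumably - a rough sketch of the super-encipherment step using the pyca cryptography package (passphrase, parameters and blob format are illustrative only):

```python
# Derive an AES-256 key from a passphrase agreed in person, encrypt locally,
# and only ever paste the resulting base64 blob into the chat app.
import base64
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives import hashes

def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return kdf.derive(passphrase.encode())

def super_encipher(passphrase: str, message: bytes) -> str:
    salt, nonce = os.urandom(16), os.urandom(12)
    ct = AESGCM(key_from_passphrase(passphrase, salt)).encrypt(nonce, message, None)
    return base64.b64encode(salt + nonce + ct).decode()  # what the chat app sees

def super_decipher(passphrase: str, blob: str) -> bytes:
    raw = base64.b64decode(blob)
    salt, nonce, ct = raw[:16], raw[16:28], raw[28:]
    return AESGCM(key_from_passphrase(passphrase, salt)).decrypt(nonce, ct, None)

blob = super_encipher("a passphrase agreed in person", b"usual place, 9pm")
print(super_decipher("a passphrase agreed in person", blob))  # b'usual place, 9pm'
```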
Let's stipulate that 'effective encryption' can mean, for most purposes, the ability to input a memorable keyphrase (say a 30-char speakable but unguessable babble¹) into a well-written algorithm (say AES256), thereafter to encrypt a pithy secret message (the shorter the better), and better still to steganographically bury the result in the low bits of large, deliberately noisy photographs posted anywhere among the trillions publicly accessible on the net (from among the millions uploaded every single day)... Granting all that, it would be a colossal surprise, would it not, to find that law enforcement or security agencies were catching any except the most lazy and incompetent evildoers?
If you make the effort to create and remember good keys and to ensure your devices never store said keys, no one is going to be reading your mail. No amount of backdoors or other political ignoramuses' nonsense is going to affect that in the slightest.
In short, if you have good crypto and practise the disciplines of persec and opsec, your focus of worry should be shoulder-hackers, cameras and keyloggers. Presumably you'll take care of those issues sensibly.
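And for the steganography step above, a toy version using Pillow (file names are placeholders; embed only already-encrypted bytes, and note that naive sequential LSB embedding like this is detectable by anyone who goes looking - it just shows the mechanics):

```python
# Toy least-significant-bit steganography: hide a length-prefixed payload in
# the low bits of an RGB image's channel values. Mechanics only.
from PIL import Image

def embed(cover_path: str, payload: bytes, out_path: str) -> None:
    img = Image.open(cover_path).convert("RGB")
    data = len(payload).to_bytes(4, "big") + payload
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    flat = [c for px in img.getdata() for c in px]       # R,G,B,R,G,B,...
    if len(bits) > len(flat):
        raise ValueError("payload too large for this cover image")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit                   # overwrite the LSB
    img.putdata(list(zip(flat[0::3], flat[1::3], flat[2::3])))
    img.save(out_path, "PNG")                            # lossless, keeps LSBs

def extract(stego_path: str) -> bytes:
    flat = [c for px in Image.open(stego_path).convert("RGB").getdata() for c in px]
    def read_bytes(count: int, bit_offset: int) -> bytes:
        bits = [flat[bit_offset + i] & 1 for i in range(count * 8)]
        return bytes(int("".join(map(str, bits[j:j + 8])), 2)
                     for j in range(0, len(bits), 8))
    length = int.from_bytes(read_bytes(4, 0), "big")
    return read_bytes(length, 32)

# embed("holiday_snap.png", already_encrypted_bytes, "holiday_snap_out.png")
# print(extract("holiday_snap_out.png"))
```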
After that your main worry is loyalty. Will your Little Criminal Girlfriend³ keep shtum about the content of your message after three nights without sleep or food, and into the fourth hour of today's stress positions?
It's funny how, after all the security-service, law-enforcement and political drivelling about encryption and backdoors and how existentially threatening it is that the Black Hats have this tech—Terrible Ahmed, the Terrifying Terrorist Terrorising Near You—it always comes back to basic cunning, tried and tested police methods versus loyalty among criminals.
Perhaps instead of pretending there are slick, cheap lazy answers among backdoors and other rot, authorities should accept that there is and never has been any substitute for humint, and slogging, painstaking, boring, patient, thorough police work?
.
¹ '18mY8ud9er1gR_5x/4n1te=io<12yr' ²
² "I ate my budgerigar five times a fortnight, ten in one-twelfth of a year". It's meaningless rubbish, but a memorably absurd phrase which you only have to formulate as the enumerated gibberish to be able to remember accurately. With over 50,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 possible combinations, even an implausibly good QC is gonna take 50 years to crack it. NSA's best supercomputer would need the lifetime of thirteen universes, back to back.
³ Identify the movie, for a virtual beer
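For what it's worth, the keyspace claim in footnote ² is easy to sanity-check, assuming a ~95-character printable alphabet and a (very generous) trillion guesses per second:

```python
# Back-of-envelope check of footnote 2: 30 characters drawn from the ~95
# printable ASCII characters, against an assumed 1e12 guesses per second.
ALPHABET_SIZE = 95
LENGTH = 30
GUESSES_PER_SECOND = 1e12

keyspace = ALPHABET_SIZE ** LENGTH
years = keyspace / GUESSES_PER_SECOND / (60 * 60 * 24 * 365)
print(f"keyspace          ~ {keyspace:.2e} combinations")   # roughly 2e59
print(f"exhaustive search ~ {years:.2e} years at that rate")
```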
"I ate my budgerigar five times a fortnight, ten in one-twelfth of a year"
Getting a good pass-phrase right isn't easy, particularly if you throw some punctuation into it. You have a one in four chance of getting the comma and hyphen in the right combination, always assuming you can remember where they go. I have a similar approach to my WiFi pass-phrase. If I have to use it a few times in a few days I'll eventually remember the twists and turns after I've looked it up a few times. A few days after that and it's gone again. Having your unguessable babble written down defeats the object.
that using encrypted chat apps that rely on central servers “puts your fate in someone else's hands”
For mobiles on IPv4 you have to deal with CGNAT, and for IPv6 (if you are lucky enough to have it) there is a stateful firewall. In addition to that you have the problems of registration, "where am I", offline forwarding, etc.
That, however, does not change the fact that the actual messages cannot be cracked. I suspect that Snowden's opinion on the matter still stands - the e2e aspect was not cracked; it is implemented on the phones, with the server providing relay and NAT traversal.
The most likely thing to get the miscreants on the police record was the use of encrypted one-to-many channels. Same as in the Turkish coup and many other recent cases. Those are usually implemented on the server (though they do not need to be) and can be intercepted.
"IronBox" "IronChat". Sure put the criminals into "IronCuffs", though that product might not have sold as well...
Kudos to the Dutch police for running such a tech savvy operation. Great to see both the criminals arrested and the weaknesses of proprietary, centralized solutions highlighted again.
The other tech website says that app users are able to dynamically change encryption keys. It generates a warning on the receiving end, but those warnings get ignored by people. If that's the case, the public/private key encryption should be trivial to hack: all you need to do is get the public encryption keys of each end, which should be easy, and use them to encode requests to switch to a new public key belonging to the MITM. If I read it correctly, the flaw was that public-key-encrypted data was trusted as authentic without a round-trip challenge using the previous key.
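For what it's worth, the fix that last sentence describes is straightforward to sketch: only accept a key-change announcement that is vouched for by the key you already have pinned. Illustrative Python using the pyca cryptography package (names and message format made up - nothing to do with IronChat's real code):

```python
# A client should only accept "here is my new key" if the announcement is
# signed with the key it already has pinned for that contact.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives import serialization
from cryptography.exceptions import InvalidSignature

def raw(pub) -> bytes:
    return pub.public_bytes(serialization.Encoding.Raw,
                            serialization.PublicFormat.Raw)

def accept_rotation(pinned_pub, new_pub_bytes: bytes, signature: bytes) -> bool:
    """Accept the new key only if the OLD pinned key vouches for it."""
    try:
        pinned_pub.verify(signature, b"KEY-ROTATION:" + new_pub_bytes)
        return True
    except InvalidSignature:
        return False

# Alice rotates her key legitimately: the old key signs the new one.
old = ed25519.Ed25519PrivateKey.generate()
new = ed25519.Ed25519PrivateKey.generate()
good_sig = old.sign(b"KEY-ROTATION:" + raw(new.public_key()))
print(accept_rotation(old.public_key(), raw(new.public_key()), good_sig))  # True

# A MITM who only knows Alice's *public* key can't forge that signature.
mitm = ed25519.Ed25519PrivateKey.generate()
bad_sig = mitm.sign(b"KEY-ROTATION:" + raw(mitm.public_key()))
print(accept_rotation(old.public_key(), raw(mitm.public_key()), bad_sig))  # False
```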