
The best software features ...
are those you manage to gaslight your users into believing are there.
"End-to-end encryption?" "Yeah, definitely!" followed by a nearly inaudible small-print caveat.
The world’s plague-time video meeting tool of choice, Zoom, says it’s figured out how to do end-to-end encryption sufficiently well to offer users a tech preview. News of the trial comes after April 2020 awkwardness that followed the revelation that Zoom was fibbing about its service using end-to-end encryption. As we …
Except when it's not.
I'm sure it will be held up as an example of how to crack the problem by the usual organisations.
It might be worth noting that Signal has finally released its zero-knowledge, manageable group functionality. It doesn't include video chat, but it's still a milestone for an important, peer-reviewed feature. The related blog posts go into the relevant technical detail.
And Telegram has started trialling group video chats. Telegram doesn't promise E2EE for group functions, but so far it has done a pretty good job of thwarting mass surveillance of its communications, as its use in Belarus shows: that the communications are visible is less of a problem than that they can't be suppressed.
The commonly accepted definition of end-to-end encryption requires even the host of a service to be unable to access the content of a communication.
Commonly accepted perhaps, but anyone with a brain will know that if non-logged-in users can be added to a call, then there must be key distribution going on, and therefore the provider can still tap in.
Key distribution is an Achilles heel of security.
My expectation is that much depends on how the key is distributed: if the Zoom client includes it in the invite email sent from your own email client, so that it never actually passes through Zoom's servers, then it is as secure as most commonly used security mechanisms.
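To make the idea concrete, here's a minimal sketch of out-of-band key distribution via an invite link. Everything here is hypothetical (the domain, the `#k=` fragment convention, the helper names are mine, not Zoom's); the point is that a URL fragment is never transmitted to the web server by browsers, so if the invite itself travels outside the provider's infrastructure, the provider never sees the key.

```python
# Hypothetical sketch: a random meeting key embedded in the invite
# link's URL fragment. The fragment ('#...') is stripped by browsers
# before the request goes upstream, so the provider never receives it,
# PROVIDED the invite is sent out-of-band (e.g. your own email client).
import base64
import os
from urllib.parse import urlsplit


def make_invite(meeting_id: str) -> tuple[str, bytes]:
    """Generate a random 256-bit meeting key and embed it in the link."""
    key = os.urandom(32)
    token = base64.urlsafe_b64encode(key).rstrip(b"=").decode()
    return f"https://meet.example.com/j/{meeting_id}#k={token}", key


def key_from_invite(url: str) -> bytes:
    """Client side: recover the key from the fragment and re-pad it."""
    frag = urlsplit(url).fragment          # the '#k=...' part, never sent upstream
    token = frag.split("k=", 1)[1]
    return base64.urlsafe_b64decode(token + "=" * (-len(token) % 4))


url, key = make_invite("1234567890")
assert key_from_invite(url) == key         # round-trips without the server's help
```

Of course this only pushes the trust problem onto the email channel, and the client software itself could still exfiltrate the key, which is exactly why key distribution remains the Achilles heel.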
I'd be interested to know how tools such as TeamViewer manage this aspect of security (i.e. preventing the possibility of eavesdropping on remote desktop connections) in their product.
Nobody seems to have noticed that Privacy Shield has been formally invalidated, so any discussion of personal information from the EU over Zoom could now be in breach of the GDPR unless it can be conclusively demonstrated that Zoom in the US cannot access the information. Hence the importance of real end-to-end encryption that actually works.
Does the GDPR require that it be 'proven' that it can't be accessed? I thought it just specified penalties for accessing/retaining information.
Because if your interpretation is correct, then every web site in the world is in violation despite the click-through: they are obviously CAPABLE of collecting your personal information (like your IP address, which can be geolocated to where you are with varying degrees of accuracy), and there is obviously nothing that can be done to make a web site incapable of seeing your IP address.
Not to go to bat for them, but I suspect this is more a case of gently onboarding the early adopters and ensuring everything actually works and scales. As opposed to the more big-bang approach of just changing the defaults... which, more often than not, results in a Big Bang and an operational nightmare.
I agree most companies have a shitty understanding of what "Agile" really is, and use it as an excuse to cut QA and therefore costs. But for things like this, at some point you can't test at sufficient real-world scale other than by releasing an opt-in tech preview and seeing if/where it breaks.
Even that doesn't scale up to the true load, but it's likely more than can be achieved with in-house testing.
That really depends on how competent your in-house testing is.
Still, it makes sense to make something like this opt-in in version x.y.z if the intent is to make it the default in x.y+1. It's important not to break everyone, and if you implicitly admit that you might have made a mistake by making a bug easy to avoid, that's generally a good thing. What I don't see here is a statement that this is going to become the default.
"at some point you can't test at sufficient real-world scale"
Absolutely agree... I just doubt that many places go through the steps in advance of this - why bother, when your users will willingly do it for you, and if it's optional they can still work by turning it off...
Extremely curious: one of the issues we used to face in secure broadcast video streaming (1->N) with DRM was bandwidth/latency optimisation, given that the viewer could be asking for high-resolution or low-resolution video depending on the screen size or the viewing device. A mobile device could receive encrypted video from a server by just fetching from a URL (using Apple's HLS as an example) at the right bitrate. The video server would have the movie already encoded and encrypted at different bitrates, ready to go.
When it comes to N x N senders/receivers, the bandwidth requirements can be pretty high especially if participants can arbitrarily switch between full frame and small frames for any of the other participants.
The server can't transcode any of the video streams, as they are opaque (encrypted). Nor can the server relay all N full video streams to every participant; this gets to be an issue with 300 participants. So they may be downgrading the video quality, or they may be using a smart mix of signalling and uploading both a reduced-resolution and a full-resolution video ...
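The "smart mix" guessed at above is essentially what an SFU (selective forwarding unit) with client-side simulcast does: each sender uploads a couple of pre-encrypted layers, and the server just picks which opaque blob to relay per receiver. A toy sketch of that selection logic, assuming a simple two-layer ("hi"/"lo") scheme and a per-receiver "focused speaker" (all names and the layering are illustrative, not Zoom's actual design):

```python
# Hypothetical SFU forwarding plan: the server never decrypts anything,
# it only chooses WHICH ciphertext blob to relay to each receiver.
# Each sender uploads two pre-encrypted simulcast layers: "hi" and "lo".

def forward(streams: dict[str, dict[str, bytes]],
            focus: dict[str, str]) -> dict[str, dict[str, bytes]]:
    """For each receiver, relay the hi-res layer of their focused
    speaker and lo-res thumbnails of everyone else."""
    out: dict[str, dict[str, bytes]] = {}
    for receiver, focused in focus.items():
        out[receiver] = {
            sender: layers["hi" if sender == focused else "lo"]
            for sender, layers in streams.items()
            if sender != receiver          # don't echo your own video back
        }
    return out


# Three participants; the server only ever handles opaque ciphertext.
streams = {p: {"hi": f"{p}-hi-ct".encode(), "lo": f"{p}-lo-ct".encode()}
           for p in ("alice", "bob", "carol")}
plan = forward(streams, {"alice": "bob", "bob": "alice", "carol": "alice"})
assert plan["alice"] == {"bob": b"bob-hi-ct", "carol": b"carol-lo-ct"}
```

This keeps the uplink at a couple of layers per sender regardless of N, while each receiver's downlink is one hi-res stream plus N-2 thumbnails, which is why the thumbnail quality degrades rather than the whole call falling over.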
WhatsApp does CLAIM to do end-to-end video encryption. I did a video conference with 3 people and it SAID that it was end-to-end encrypted.
We will know more as the details are disclosed.