Simpler solution
Men in black suits visit sysadmin
"Persuade" sysadmin to upload new version of client software
New version copies keys/decrypted documents to men in black suits
Sysadmin has nasty accident
Mega, the New Zealand-based file-sharing biz co-founded a decade ago by Kim Dotcom, promotes its "privacy by design" and user-controlled encryption keys to claim that data stored on Mega's servers can only be accessed by customers, even if its main system is taken over by law enforcement or others. The design of the service, …
The Five Eyes already have legislation to enable them to do just that. But so far the threats of many service providers (such as WhatsApp) to pull the rug from under their services have deterred them from enacting this draconian measure.
I hope Mega threatens to do the same if the authorities knock on their door with a software patch for their client.
"The findings, detailed on a separate website, proved sufficiently severe that Kim Dotcom, no longer affiliated with the file storage company, advised potential users of the service to stay away."
Well, I still prefer MEGA over Google Drive, Dropbox or Microsoft OneDrive... the last three don't offer end-to-end encryption at all.
You can create a virtual hard drive, encrypt it with your personal favorite method (I use BitLocker), then stash the encrypted VHD in your Dropbox. (This doesn't work with Google Drive or OneDrive because they lack block-level delta sync - they'd upload the whole file every time it changed.)
Don't have Dropbox running while the drive is mounted and decrypted though, just in case.
-> Well, I still prefer MEGA over Google Drive, Dropbox or Microsoft OneDrive... the last three don't offer end-to-end encryption at all.
Knowing that means you should roll your own - gpg encryption, for example. However, as has now been shown, MEGA (or any other site offering encryption) may lull you into a false sense of security.
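A minimal sketch of that roll-your-own step in Python, assuming GnuPG is installed and on the PATH; the function name and file name below are purely illustrative, and gpg prompts for the passphrase itself:

```python
# Minimal sketch: encrypt a file locally with gpg before handing it to any
# cloud-storage client. Assumes gpg (GnuPG 2.x) is installed and on the PATH.
# File names are hypothetical; gpg will prompt for a passphrase interactively.
import subprocess
from pathlib import Path

def encrypt_for_upload(plaintext: Path) -> Path:
    ciphertext = Path(str(plaintext) + ".gpg")
    subprocess.run(
        [
            "gpg",
            "--symmetric",              # passphrase-based encryption, no key pair needed
            "--cipher-algo", "AES256",
            "--output", str(ciphertext),
            str(plaintext),
        ],
        check=True,
    )
    return ciphertext

if __name__ == "__main__":
    # Only the .gpg file ever leaves the machine; the sync client never sees plaintext.
    print(encrypt_for_upload(Path("tax-records-2022.ods")))
```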
Personally, I just hang a small BSD-based file server (only 4TB of RAID-5, YMMV) off my Great Aunt in Duluth's network connection[0]. Encryption of my choice. No fuss, no muss, and it always works when I want it to work, not when some provider with its head in the clouds wants it to work.
[0] Mirrored at a couple of other relatives' homes.
Google Play app signing: Google builds end-user-specific versions of apps and signs them as if they came directly from the company. You hand them your app signing key to let them do that.
So inherently any Android app from Google Play is untrustworthy, even if the company that makes it is trustworthy, because Google could give you a tailored surveillance version and sign it as if it came legitimately from the app provider.
It wouldn't have to be the target software that receives the special package, either; any app on the same device could receive one. The mere fact that you installed, say, Telegram or Mega, or whatever, may cause you to receive a special version of some other common software.
You see the push to legalize the undermining of encryption? All those "for the children" and "to catch drug dealers" stories? Yet the technical means to do so already exists and is trivial to do remotely and in bulk. Do you imagine it isn't already being abused? I think they're trying to post-legalize their surveillance methods.
The Kim Dotcom surveillance: NZ law forbade NZ spooks from surveilling NZ residents; they did it anyway. Their surveillance is supposed to be for security and terrorism, not copyright; they did it anyway. The law was changed to make it legal after the fact. So notice the push to undermine encryption and realize the purpose isn't to obtain the technical means to do that, but rather to legalize their existing technical means.
You see the push to legalize the undermining of encryption? All those "for the children" and "to catch drug dealers" stories? Yet the technical means to do so already exists and is trivial to do remotely and in bulk. Do you imagine it isn't already being abused? I think they're trying to post-legalize their surveillance methods. .... Anonymous Coward
The crashing reaction of present day, current running incumbent petrified fiat MMT [Magic Money Tree/Modern Monetary Theory] capitalised systems of operation for administration which has resulted in serial ineffectual universal inaction exponentially exacerbating and compounding mounting existential difficulties, and which sees failed sysadmins venturing along the Fascist Big Brother Postmodern Nazi Ideological Root in attempts to maintain and sustain and retain overall absolute command and control, have both Secret and Security Intelligence Service Agencies all too aware of the Titanic Opportunities Available to Readily Exploit the Catastrophic Dangers which Exist in Any and All Colossal Proprietary Intellectual Property Blackholes Rendering a Universal Intelligence Deficit and Expanding General Knowledge Debt.
And it would be unnatural, and therefore most unlikely to an inordinate nth degree, that such opportunities are not liable to be fully explored and diligently exercised exhaustively in a sort of undeclared war against the law and order forces and sources of future ignorance supporting a self serving destructive arrogance.
To further suggest that such Secret and Security Intelligence Service Agencies deliver to one an overwhelming advantage is a colossal misunderestimation and therefore most probably of very great interest to more than just a chosen few with an avid and rabid need to know about such deep and dark enlightening exalted matters .... and how they are best immaculately used.
I wholeheartedly agree. Wuala was the last standout, and they were probably clobbered into stopping their service because it was *too* secure and LEAs were accusing them of harboring child abusers (oh, the horror!)
These revelations will lead to improvements which will make the service eventually secure.
...avoid repeated ad hoc implementations that repeat the same errors...
It seems to be just security implementations that get the kind of detailed analysis that points out design errors, surprising and incorrect assumptions, and software bugs.
There are a thousand other software categories that don't receive the same attention. And they also suffer from "ad hoc implementations". But they don't get fixed via responsible disclosure.
Seems to be an issue with the programmers, and not the technology.
Why would you trust a service Kim Schitz (aka the YIHAT asshat) built? He's popular now because of the pirate bay generation, but the clown is a serial fraudster, and was one all the way back to when he was in Germany.
Regardless of his disavowed involvement, or recommendations to stay away, generally the best way to keep stuff in remote storage private is to encrypt it yourself. Yes, that can mean an extra round of encrypting and decrypting your files. You'll live. If it's important, you should probably use a second and separate tool to protect your data.
The potential disclosure of private keys for in-platform messaging points at issues that encrypting on top won't fix as easily, so it isn't a cure-all in this case.
+1 re your words about Kim Schmitz.
But -1 for "the BEST way to keep stuff in remote storage private is to encrypt it yourself".
This is not the best way, it's the ONLY way. I have accounts w/ Google and pCloud and absolutely nothing leaves my LAN going to their servers that hasn't been locally encrypted... check out rclone if you haven't done so already.
Plus some things (ie my KeePass databases) are additionally stored in a secure 7z archive before being uploaded.
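A minimal sketch of that archive-then-upload routine in Python, assuming 7z and rclone are installed and an rclone remote is already configured; the archive name, remote path, and file name are placeholders:

```python
# Sketch of an encrypt-then-upload flow: pack files into a password-protected
# 7z archive with encrypted headers (-mhe=on also hides file names), then push
# only the archive to a cloud remote with rclone. Assumes 7z and rclone are on
# the PATH and an rclone remote named "remote" already exists; every name here
# is a placeholder.
import getpass
import subprocess

def archive_and_upload(paths, archive="backup.7z",
                       destination="remote:encrypted-backups"):
    password = getpass.getpass("Archive password: ")
    # Note: the password shows up in the local process list this way; for real
    # use, pass "-p" with no value and let 7z prompt for it instead.
    subprocess.run(["7z", "a", f"-p{password}", "-mhe=on", archive, *paths],
                   check=True)
    subprocess.run(["rclone", "copy", archive, destination], check=True)

if __name__ == "__main__":
    archive_and_upload(["keepass.kdbx"])   # hypothetical file name
```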
Quote: "... allow full compromise of all user keys encrypted with the master key..."
Ah...."master key"....... and then there's the public key stuff (PGP, etc) with public keys and private keys......
In 1976 Diffie and Hellman published a mechanism which allowed CALCULATION of keys only twice: at encryption time and again at decryption time. These calculated keys are:
(1) Random FOR EVERY TRANSACTION
(2) Destroyed after use
Oh.....and nothing at all related to the keys is ever available in public. Snoops can only see the encrypted transactions in transit.....nothing else!
So when PC Plod comes along "demanding keys" users will say "Sorry....the keys are calculated by the software....I have absolutely no knowledge of keys....take my computers and do your worst!!"
Why am I still hearing about security problems created by "master keys" and other permanently saved keys? Someone else here can explain!!
P.S. Tooling needed to implement D/H encryption might, as a minimum, be: gcc, gdb, gmp ..... all open source and widely available.
P.P.S. FYI -- gmp helps with 8192-bit processing!!
Diffie-Hellman is a key exchange protocol, not an encryption protocol. It's used in TLS to generate session keys for both sending and receiving parties. The keys the article is talking about are symmetric encryption keys used to encrypt the files. If you use PasswordSafe or similar then the "Master key" is the password you use to unlock the safe, and the "user keys" are the passwords stored there.
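For readers following the back-and-forth, a toy illustration of what a DH exchange actually produces; the parameters (p=23, g=5) are the textbook example and far too small for real use:

```python
# Toy Diffie-Hellman exchange: both parties derive the same shared secret while
# only ever sending public values. p=23, g=5 are textbook toy parameters; real
# deployments use standardized groups of 2048 bits or more.
import secrets

p, g = 23, 5

a = secrets.randbelow(p - 2) + 1   # Alice's private value (kept secret)
b = secrets.randbelow(p - 2) + 1   # Bob's private value (kept secret)
A = pow(g, a, p)                   # Alice's public value (sent in the clear)
B = pow(g, b, p)                   # Bob's public value (sent in the clear)

alice_secret = pow(B, a, p)        # Alice combines her private with Bob's public
bob_secret = pow(A, b, p)          # Bob combines his private with Alice's public
assert alice_secret == bob_secret  # same secret, never transmitted

# In practice the shared secret is fed to a KDF to derive the session key; if
# both sides then discard their private values, the key cannot be recreated.
print(alice_secret)
```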
Quote: "Diffie-Hellman is a key exchange protocol..."
Did you read the "Old Fool's" post? The point seems to be that D/H allows for COMPLETELY TRANSIENT RANDOM keys for ANY TRANSACTION.
.....Yup.....no "master key" needed!!
Did I miss something?
DH allows two peers to construct a shared key, used by the sender to encrypt and by the recipient to decrypt, while talking in public.
The recipient still needs to store that key if she opts not to decrypt right now but only later.
For a scenario where a person wants to decrypt their own data later on, the procedure is pointless.
Did I miss something?
An opportunity to learn how Diffie-Hellman (which should really be "Diffie-Hellman-Merkle", as Diffie pointed out some years ago, and arguably should be called "Diffie-Hellman-Merkle-but-only-because-Ellis-Cocks-Williamson-weren't-allowed-to-publish") key exchange works without UNNECESSARY SHOUTING.
OP's post was wrong. That's the other thing you missed.
Encrypted data requires a key to decrypt it; that's what "encryption" means. Can you replace the actual key data with a procedure to generate it? Sure. Does that gain you anything, from a technical or legal perspective? It does not. By Kerckhoffs' Principle, whatever is secret is the key; making the key fancy doesn't mean it stops being a key. And laws are rarely stymied by someone being a clever dick. A judge isn't going to say "oh well done, you've got us there!". He'll just throw you in the pen for contempt.
It's quite likely various groups have examined it and found these or other vulnerabilities, and kept them to themselves. You know there's a whole industry around selling vulnerabilities and exploits that haven't been published, right? There's a good free RAND study on that business.
And researchers research the things that attract their attention. It's not like the IT-security community is rigorously organized. Someone says, hmm, today I'll poke at this thing. Read researcher blogs if you want to see how it generally goes. Some people (e.g. SANS researchers) typically chase attacks that come into their honeypots; some track down malware and analyze it (Marcus Hutchins does a bunch of this, as does someone who posts to Full Disclosure as "malvuln"); some go after particular commercial targets (e.g. Stefan Kanthak and Microsoft, particularly software installers); some follow what's hot in the news (Graham Cluley, say, or Paul Ducklin); some are more interested in people (Brian Krebs) or policy (Bruce Schneier).
So this may just be the first time a prominent research organization has had some of its people publish research on Mega. Or maybe it's happened before and you and I just didn't notice. It's a big industry.
You must trust someone. The problem is that few people even have the education to understand WHY they should not trust themselves. (Most can be browbeaten into submission, but that's a separate matter.)
I was just talking about this to a friend. There are probably about 3000 people in the world today who I would trust to write a crypto library. I'm arrogant enough to include myself in that list. But because I _am_ properly trained, I also know that it would take me FAR longer to convince myself that I had not messed something up than would be worth it.
Implementing an encryption algorithm is straightforward enough, assuming the algorithm is sound. The problem is always in the exchange and storage of keys. So I trust myself to write the encryption -- there are usually test cases to verify the code conforms to the standard -- but key exchange is a completely different matter. Just implementing a cryptographically secure random number generator is a work of art.
I'd also be very wary of any persistent code in the system that performs encryption services. That's probably OK for day-to-day work, but for anything realistic you need something with zero persistence -- it's used, it goes away, and it erases all traces. Running anything on a general-purpose machine is asking for trouble -- you think you know and control what's running on that system; now prove it.
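On the random-number point above: in practice you don't write your own CSPRNG, you draw from the operating system's. A minimal Python sketch; the key and nonce sizes are just common choices, not anything prescribed in this thread:

```python
# Don't roll your own CSPRNG: take key material from the OS-provided one, which
# in Python is the "secrets" module (backed by os.urandom).
import secrets

aes_key = secrets.token_bytes(32)      # 256-bit symmetric key
nonce = secrets.token_bytes(12)        # 96-bit nonce, a common AES-GCM choice
api_token = secrets.token_urlsafe(32)  # URL-safe random token

# random.random() and friends are not suitable for keys: the Mersenne Twister
# is predictable once enough of its output has been observed.
print(len(aes_key), len(nonce), api_token[:8] + "...")
```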
@martinusher
Quote; "... key exchanges are a completely different matter..."
Have I mentioned Diffie/Hellman and CALCULATED KEYS?
(1) The D/H handshake DOES NOT EXCHANGE KEYS
(2) The keys are calculated LOCALLY and then destroyed
(3) The keys are RANDOM for every transaction
There are NO PERSISTENT KEYS.
"Key exchanges" cease to be a "completely different matter".......because there are no key exchanges!!!
Did I miss something along the way?
"Did I miss something along the way?"
An understanding of what Diffie-Hellman is for, and what it does, perhaps?
DH allows two communicating parties to generate a shared key they can use for a secure communication channel, without publicly communicating anything that can be reversed to discover what the key is.
And yes, once communication is complete, both sides throw the key away.
The problem is, what do you do when you need the key again? Say, to decrypt your own encrypted data later? You can't, because you threw the key away. That's the point of using DH for perfect forward secrecy.
You "somehow" use DH to generate a key to encrypt a file (leaving aside the fact that DH is a key exchange protocol, meaning you need to communicate with another party to generate DH keys), encrypt the file, and throw the key away.
Congrats, you now have a pile of useless bits.
@AcornAnomaly
Quote: "....decrypt your own encrypted data later? You can't...."
Congrats.......you failed to understand D/H!!
Didn't you notice that the throw away key is CALCULATED locally when needed? Yes......the key was thrown away....but the two D/H tokens are still available....so that the key can be CALCULATED when required.
The whole point of D/H is that the shared tokens tell a third party NOTHING about the key!!
Congrats......but perhaps a bit more research might be useful.
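Both sides of this exchange have a point, and continuing the toy example above shows why: keeping your private value and the peer's public value does let you recalculate the secret on demand, but whatever you keep for that purpose is, functionally, a stored key, which is the Kerckhoffs point made earlier.

```python
# Continuation of the toy DH example above (p=23, g=5). If the retained
# "tokens" are your private value plus the peer's public value, the shared
# secret can indeed be recomputed whenever needed -- but those retained values
# then serve exactly the role of a stored key.
p, g = 23, 5
my_private = 6        # retained token; normally random and kept secret
peer_public = 19      # retained token; was sent in the clear

recomputed = pow(peer_public, my_private, p)
print(recomputed)     # the same value every time it is recalculated
```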
It's not always in key management. People make plenty of other mistakes, too. I note that of the attacks listed in the article, none appear to be key-management vulnerabilities as such. They have a couple of oracles that leak key material, which is a protocol implementation error; they have integrity violations, which are separate from keying; and they appear to have a vulnerability which escalates the key-recovery attacks and may be due to the use of ECB mode. (I haven't read the paper and it isn't clear from the article.)
There are a great many mistakes people make. Many of them have to do with key generation, key hygiene, and other keying issues; but many do not.
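On the ECB speculation: this is not an analysis of Mega's actual format, just the textbook failure mode, sketched with the third-party cryptography package; the sample plaintext is made up:

```python
# Why ECB mode is a liability: identical plaintext blocks encrypt to identical
# ciphertext blocks, so structure leaks even though the key stays secret.
# Requires the third-party "cryptography" package.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
block = b"ATTACK AT DAWN!!"   # exactly one 16-byte AES block
plaintext = block * 2         # two identical blocks

encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

# The two ciphertext halves are byte-for-byte identical.
print(ciphertext[:16] == ciphertext[16:])   # True
```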
Really?
The Bruce Schneier book Applied Cryptography would be a great place to start learning about "roll your own".
And anyway, most encrypted messages don't need to be secret for ever (which is quite a long time)....two or three months is usually sufficient!
*
CfyNOPwZKPw74xQlgPs1IlaJK5YDQZGJcH0JAXoRkXWhyTstUZsHmvW5AXoZEtqP2Hmh4HoF2H8J
iVUzutyHQB6L458fCJ0dcrU52noXQ9KDqRUVOh4JKdGlGvAtMTEtud0fO9anud0fGV4hgFg76xur
4rqjcH4VqbQjwvqVe36pKHkDyVKBcBGV6LolEFI90Laz6zITSZKzqH6HCNEX8hoDK3ClMHCJoXe5
gLebAZQNMpYFevGHCNaB8hQtcZ8bwheN4n0XcFIdQx6LWlMjuve7wTg3EhOzcZmD2tkP0lGbuTgR
2t2VglOpcLsrshipmdktiluDQbSDG5cBY52Lij4HWFqH6haLiLqNEjudk3gxgNotirEx4h4FqfAJ
AzWPc1azSzQRsTAfCZy3oVaNGtMFIra1wZSrwhcHSDmrQ9WTeF4vW94Ji98PsxyLiJefkD2fghsH
0ZCZKxGFaZEnarcx27s9WBId6LQTKvwROZgzKniNOBQRkDId0LevGz6DSDe10hgTyfkhEZibmH2F
43WLUFyzoNm1qru70xy7KxUnunoPOpsZYvE74pmNaFsdc5EbepuzGVoP0lOx0z2LA1eLKLonodcb
wFOfILM1UJoPibIbWX2zcB2zEzIvirOjGX8R0P8TUpODUxQ9gliHkvohibQ1ANojUhWNu5A1IbCp
8Pah0fOJelQj0XsDq5C7G9qdUPgxOxIvmVSROlqjI9CR
*