The Gemalto hack is the product of poor use of cryptography that requires the private key to exist somewhere other than on the SIM. The NSA/GCHQ took advantage of it, but they are only one of many actors capable of doing so. Perhaps the result will be that GSM is redesigned so that it's no longer vulnerable to such an attack. If that happens, the result will be GSM being banned in China. That is, China aren't just taking advantage of others' incompetence; they are mandating incompetence. On that basis, I think the US does have the higher moral ground, if not exactly high moral ground. Of course our PM says he wants to take the same position as China, so perhaps we really should have an ISO standard back door.
Just use profiles
Android is better than iOS in this regard, because you can just create a profile for your little angel and not include access to the store in the profile. Add the app in your account, grant access to it in the profile, and log out. Now they can use the app, but in-app purchases won't work.
I complained about a similar problem with the Ocado app. It wanted access to the phone's camera so it could read bar codes.
The argument that all the permissions must be requested up front isn't valid: multiple applications can co-operate, so you can install additional apps that provide restricted access to resources, and only those apps have the required permissions. You can ask a user to install an extra app from within an app; this is how, for example, various apps get you to buy a licence for the premium version in the app store, and the experience is reasonable. You might ask 'what's the difference?' The gatekeeper apps can be very simple and change rarely, so they should be much harder to attack than a large, complex app.
In this case, an app with no UI waits for text messages matching a particular pattern, and forwards a message to the Facebook app when it matches the pattern. Otherwise it does nothing. It accepts no incoming messages from other apps, and has no state.
Learn what an algorithm is
The combination of several algorithms is, I think you will find, one algorithm. Hyper-fluxing is fluxing. Both generate a pseudo-random stream of domain names. Either you can run the algorithm and find out the domain names in advance (once you have isolated the bot), or you can't. Secure pseudo-random streams are easy to design; they don't need to be complicated. This article is the worst kind of security bollocks: it does nothing but introduce a bunch of meaningless terminology.
But yes - I agree, we seem to be useless at taking down botnets, because we don't seem to bother decompiling the bots to predict their behaviour. The code is inherently not a secret.
Why the hell doesn't the pseudo-random domain name generation make it easy for law enforcement? Once you have the virus, you know all the command and control domains it will ever use. You can contact the registrars with the list and tell them to forward any requests for those domains to law enforcement, who can then attempt a sting.
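To make the point concrete, here's a minimal sketch of my own (made-up seed and domain suffix) of how a domain-generation algorithm works, and why having the bot's code and seed lets anyone enumerate every rendezvous domain it will ever use:

```python
import hashlib

def dga_domains(seed: bytes, count: int) -> list:
    """Deterministically generate `count` pseudo-random domain names.

    Anyone holding the seed (e.g. recovered by decompiling the bot)
    produces exactly the same list the bot will use.
    """
    domains = []
    state = seed
    for _ in range(count):
        state = hashlib.sha256(state).digest()       # advance the stream
        domains.append(state.hex()[:12] + ".example")  # map hash to a label
    return domains

# The defender's run matches the bot's run, domain for domain:
assert dga_domains(b"botnet-seed", 3) == dga_domains(b"botnet-seed", 3)
```

That list is exactly what you'd hand to the registrars, or publish as a blacklist.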
You could also offer a public blacklist for firewalls and DNS servers to use.
"Such underground platforms are implementing stronger mechanisms to ensure that participants are who they purport to be (or at the very least are not law enforcement officials). Ironically, while the platforms that facilitate the services marketplace for illegal activities are going deeper underground, the trade in zero-day vulnerabilities is more transparent than ever before," Samani and Paget report.
I would have thought criminals would prefer that other criminals didn't know who they were, so the above seems implausible, if I'm being generous. The markets are designed so that it doesn't matter if the police participate, and so that you don't know who you are dealing with.
Do you even know what FIPS 140-2 is?
'know which stored procedure to execute' - so security through obscurity then?
You need to assume the attacker has a copy of the database, which they can load into their own database software to discover and run any stored procedure they feel like, and that they can use your HCM device to decrypt stuff, because they got root on your database server.
In your scheme, they can do all this and you wouldn't even know about it afterwards, because you haven't described how you audit the HCM.
That wasn't the point of my comment: good key management is possible, and mandating that is far more important than mandating which cipher is used. For example, if you said the exemption only applies to systems where key management is designed according to the principle of least privilege, that would provide consumers with good protection. If you just said 'use AES-256', and some idiot encrypted all the data with one symmetric key stored in lots of places, that would be next to no protection.
Forget about the 'type' of encryption required. The system is secure if only authorised persons have the key; if unauthorised persons can get the key, the cipher is irrelevant. The complicated part is the key management, so this is the most likely problem. If there's a single key that all the systems processing the data hold, then you only have to compromise one system to get both the key and access to the data.
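As a sketch of what least-privilege key management looks like (my own illustration, with hypothetical system names, not anything from the article): derive a distinct key per system from a master key held only by the key-management service, so compromising one system exposes only that system's key.

```python
import hashlib
import hmac

# Held only by the key-management service, never by the processing systems.
MASTER_KEY = b"\x00" * 32  # placeholder; a real master key would be random

def system_key(system_name: str) -> bytes:
    """Derive a per-system key with HMAC; one compromised system
    doesn't expose the master key or any sibling's key."""
    return hmac.new(MASTER_KEY, system_name.encode(), hashlib.sha256).digest()

web_key = system_key("web-frontend")   # hypothetical system names
billing_key = system_key("billing-db")
assert web_key != billing_key          # each system holds a different key
```

Compare that to handing every system the same symmetric key: with derivation, the attacker who roots the web frontend still can't read the billing data.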
Category error: SHA-1 and MD5 are Digests, AES256 is a Cipher
SHA-1 and MD5 are used to digest passwords. Digests are one-way functions: you don't ever need the password back.
There is a reason for the confusion, BTW: there are sound ways to use a digest as a cipher, and vice versa, but the result is always less good (usually the computational advantage of the defender over the attacker is smaller) than a best-of-breed function designed for its purpose, which shouldn't come as a surprise.
The Ars Technica article you link to might leave people thinking that the low cost of calculating a digest is a problem that should be fixed by making the category error of using a cipher instead, but that's not the case: digests are designed to be collision resistant. You can prove that if a digest is collision resistant, then repeating the digest N times (i.e. digest, then digest the digest, ...) is the cheapest way to arrive at that answer, so you can make an arbitrarily slow digest, given a collision-resistant digest.
The problem is the way the digest is used. You can equally make the mistake of not salting the digest.
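A sketch of that idea, salted and iterated (illustrative only; in practice you'd use a vetted construction such as `hashlib.pbkdf2_hmac`, which packages essentially this):

```python
import hashlib
import os

def slow_digest(password: bytes, salt: bytes, rounds: int = 100_000) -> bytes:
    """Salted digest repeated `rounds` times to make guessing arbitrarily slow."""
    d = salt + password
    for _ in range(rounds):
        d = hashlib.sha256(d).digest()  # digest, then digest the digest, ...
    return d

salt = os.urandom(16)           # per-password salt, stored alongside the digest
stored = slow_digest(b"hunter2", salt)
assert slow_digest(b"hunter2", salt) == stored   # right password matches
assert slow_digest(b"letmein", salt) != stored   # wrong password doesn't
```

The salt defeats precomputed tables; the iteration count is the knob that makes each guess as expensive as you like.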
MD5 is not all that collision resistant; that's its problem. SHA-1 is not as collision resistant as its designers thought, but no one has actually found a collision yet. By all means use SHA-2, or SHA-3.
More complicated schemes are harder to prove things about: an implementation may be slow, but without a proof that it's the cheapest way to get the answer, the scheme may later prove to be weak.
People bang on a lot about how GPUs are being used to crack passwords, but attackers and defenders alike have access to GPUs to calculate digests, and because attackers benefit from economies of scale, they will always use commodity hardware.
The reason neither party can tackle this is that both understand the value of communities, and having one requires people to live in the same place for long periods of time. They are so valuable, in fact, that we had better figure out how to bring the work to where people are, not the other way round:
By not having a job for a period of time, people are demonstrating that they will accept a lower salary for a job that comes to them. The market is failing to use this fact, so the market is failing. Wow, a failing market. Who would have thought it?
This fail is for the author.
The 3 digit CVC is such strong protection
OK, so now there's a 3-digit number between you and the attacker. They'll never guess that...
Just to spell this out: pick a CVC. Capture 1,000 card details. Try the same CVC with each of them. You aren't scanning through the CVCs on the same card, so fraud detection, which is card oriented, won't notice you trying. Your odds of hitting a card are pretty good. Presumably you can actually try several CVCs per card without the issuer noticing, so you can improve your yield.
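A quick back-of-envelope check of those odds, assuming each card's real CVC is uniform over 000-999:

```python
# Fix one CVC guess and try it against 1,000 different stolen cards.
p_miss = 999 / 1000                    # chance one card doesn't match the guess
p_at_least_one = 1 - p_miss ** 1000    # chance at least one card matches
print(round(p_at_least_one, 3))        # about 0.632, i.e. better than even
```

So a single guessed CVC against 1,000 cards succeeds somewhere nearly two times in three, without ever hammering one card.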
There must be loads of sites that handle tens of thousands of cards a day. It's those sites that NFC is aimed at.
'According to Government Communications Headquarters, four in five (80 per cent or more) of currently successful attacks can be prevented by simple best practice, such as ensuring staff do not open suspicious-looking emails or ensuring sensitive data is encrypted.'
I assume this link will tell me how:
Contains the instructions. When I click it, Chrome says 'This download could harm your computer'. Oh well, that's not suspicious; I'll just agree.
This document actually isn't bad, but it doesn't say how to identify an email as suspicious. Perhaps the spokesperson should read it.
Settlement + costs
There's something not right here: an offer of compromise is meant to cover the claimant's costs, or it's irrelevant. Either the costs up to the point of the offer, as reported in the article, are a red herring, or the judge made a serious error in taking the offer of compromise into account.
This is the most disappointing bit of the article: why do kids need to bring their own Pi when they can just bring their SD card to the lesson, plug it into the Pi that's screwed down to the desk, then take it home and plug it into their own Pi to do their homework, or just experiment? If a kid forgets their SD card, or it's rendered useless in some other way, just give them a new one with the class's default image.
Re: Automatic memory management
When a buffer overflow happens in Java, it's because of a bug in the JVM, not in the Java code. That bug is usually in the JIT. The JIT is complicated, probably more complicated than a static compiler, and most exploits run in browsers as applets, where attackers get to attack any bug in the JIT they choose.
Of course, things would be even better if we used a statically compiled language with automatic memory management. Popular languages in this category don't exist, but you can statically compile Java to C and disable class loading.
SSL != browser
"virtually all of which count on SSL to secure their internal networks": companies use SSL to secure internal communications that do not involve browsers, but they can easily manage their own keys and reject authentication based on public roots, avoiding the issues cited in the article. Perhaps they don't, but unlike the system used in browsers, there is no political reason to avoid change, only ignorance. This article doesn't help with the ignorance by confusing SSL with browser behaviour. The concession to accessibility should stop after the byline.