... 27 million personal data records, including a million people's fingerprints ...
And this is only what they've found, which I think we can quite safely assume is just the tip of the iceberg.
Or maybe not?
Two infosec researchers found 27 million personal data records, including a million people's fingerprints, exposed to the public along with plaintext admin credentials for the Suprema Biostar 2 system they were associated with. The database powering South Korean company Suprema Inc's Biostar 2 biometric access control system …
Efookinxactly! Nail on the head. If this was oil leaking into the ocean, or toxic gas into the air, there would be monumental fines, cleanup charges and possibly criminal charges (think Deepwater Horizon or Exxon Valdez). It wouldn't prevent all incidents, but it would change corporate culture enough to reduce the worst excesses. But as it's "just" our identities at stake, nobody gives a toss. It'll be interesting to see whether a South Korean company can be prosecuted under GDPR, given they have a large European client base. That would at least show that the law is moving in the right direction.
Well, it's hard to tell, because we don't know how many data stores quietly get their security improved after a breach elsewhere is published, and consequently are never breached. While we continue to see examples of massive breaches due to gross incompetence, we don't know how large the pool of insecure but not-publicly-reported sources is, and whether a significant number of sources have been taken out of the pool.
It's entirely possible that for every Suprema or First American there are a dozen firms which see the public disclosure, realize they're in a similar boat, and silently fix the problem before it becomes public. There's no way for us to know. Have they already been quietly breached by non-publishing attackers (criminals, intelligence organizations1, etc) before remediating the problem? No way to know that either.
The rate of massive breaches isn't getting better. The population of still-vulnerable targets might be getting smaller. Or it might be growing, as more firms move data to public clouds or otherwise increase their attack surfaces without due diligence. But I can't think of any way to measure that, directly or indirectly. The breach rate isn't a well-correlated proxy because there are too many variables.
1 Arguably a redundant formulation.
Seems like a rookie mistake. Plaintext.
Sounds like not enough time to generate the crypto; business expansion, cash on hand, and sold future promises or sweetheart contracts. Failing that, lousy network practice. Poor recruitment practice.
Damage control might be academic. Don't think PR will be thinking they can Lazarus that shit in a hurry. Cambridge Analytica time. Think that'll be toxic in their line of sales.
Bob would know.
I shudder to consider the answer to your question sir.
That should read "By an illusion-of-security company."
Which seems to be what most of these outfits actually specialize in, as does any IoT supplier.
Actual security is quite hard (and therefore expensive)
But the illusion is quite cheap.
A question to those in the security industry - is this normal?
It's a perverse question. There isn't any "normal" in this area for IT security firms (or security BUs within IT firms, etc). There's too much variation for there to be a meaningful single cluster of "normal" practice.
In my experience, IT security vendors mostly fall into one of two categories. There are those which make an effort to have adequate broad IT-security knowledge, so they devote some resources to hiring and training employees in IT security in general. These are the ones you see sending developers and other technical staff - not just sales and marketing - to security conferences. You see their employees writing and presenting for IT security organizations like ISSA. There's some external sign that they have people doing security research.
The other category are vendors that are only concerned with selling products in some narrow category. They may have some genuine expertise in that area (or they may be selling snake oil), but they often show a bewildering lack of basic security knowledge elsewhere. That generally means their products are rubbish, of course, because they either don't have people who understand security engineering, or they don't listen to those people if they do. And it generally means that they have poor security practices elsewhere, as in this case.
Why don’t IT people blow the whistle when they see this at the organisations they work for? Is it just fear of losing a job (real enough/fair enough I guess...)?
Or is it that IT skills have dropped so deplorably low that really no-one in these companies is aware that unencrypted data and plaintext passwords are really, seriously bad? In an organisation this size, with a database this size, my guess is 10 to 100 people would have known the database schema.
Finding these things by trial and error is too painful. There must be a better way. Any ideas?
And do I really want to hire one of those 10-100 folk who thought this was not worth blowing the whistle on?
Am I completely out of touch or what? I want to know, seriously, because this just looks crazy to me.
Lol! I doubt that even ten people knew how the data was stored - it was quite possibly a couple of people, and neither one was actually a decently trained and experienced DB designer AND aware enough of what cloud best practices are. It probably ended up online because the business started screaming CLOUD a lot.
Yes, all of this.
Many times, some security bloke would tell mgmt: OK, quick and dirty like this, but only if it's for internal use.
Then the same incompetent mgmt would go to network to link this to da net. Network, of course, had no idea of the content.
Et voila, nice scandal. But yes, for a security company, this is extremely worrying !
Have you tried blowing the whistle on something like this?
I attempted to report my employer to the ICO for ignoring GDPR, and a DPO who thinks we can keep data indefinitely if she says so, but they are not interested. They have links to report organisations that are misusing your personal data, that Google refused to remove links relating to you, cookie use issues, issues with data transferred to the USA, or for reporting an actual breach - but nothing for reporting systematic and wilful ignorance of the law.
The ignorance, though deplorable, is not necessarily wilful. Some businesses employ lawyers as DPO, but the training of many of the others currently consists of little more than a one-week crash course and a multiple choice quiz. The UK DPO pay rate for the non-lawyers is also typically in the £25k bracket, which doesn't exactly attract the brightest and best informed. The real DPO role is a C-suite position with a wide range of required expertise and responsibilities and a duty of independence, but this has not yet been widely understood. The very fact that the crash courses can publicly masquerade without challenge as adequate training for a role that could render a business liable to crippling penalties indicates the current inadequacy of public understanding, official guidance and regulation.
Or is it that IT skills have dropped so deplorably low...
I wish I knew... The systems in my care; people keep screwing 'em up. I've e-mailed politely, I've begged, I've screamed, I've provided easy-to-follow, step-by-fucking-step instructions... And they still get the most basic things wrong. And these are people who actually have a business need to be on the machines and who in theory have half a brain in their heads.
Because blowing the whistle can be difficult. You don't want the risk of being sued. You're well aware that if you are exposed as the blower, you'll potentially, wrongly, be blacklisted from working again - especially if you are exposed in the press. You worry that if you provide email evidence by abusing your position to view mailboxes, would that be crossing the whistleblower line?
A list of the processes for blowing the whistle would be good for people. Like, what email systems are good to use to email the press anonymously? Do files like Word docs etc have any hidden fingerprint info in them that would say what the last PC they were on was called? Etc.
Clearly a GDPR issue, but every organisation that uses this bunch of incompetents needs to complete an urgent risk assessment and, if necessary, self-report to the ICO for fines all round for not conducting the necessary due diligence before buying their services.
No encryption and no hashing? It’s RockYou all over again - but with a new and exciting law and order twist.
Meanwhile please form an orderly queue to have your fingerprints and other biometrics replaced.
Is there any list of all their clients, so we can know whether our data might be affected? Oh, and so that we too can complain to the relevant regulator.
Putting aside the issues that made this data accessible to the two researchers, to me, the most unforgivable thing is storing people's passwords and biometric data as non-hashed.
So often now, products come to market as experimental internal proof of concepts that are then productionised and rushed to market. If you are transporting and storing such sensitive data you should start your design with the question: what if the data leaks - how can I minimise the risk? The evidence appears to be growing that this is rarely done.
Further, before go live any such system should be fully audited for security.
to me, the most unforgivable thing is storing people's passwords and biometric data as non-hashed.
I'm just curious, how easy is it to work with data where there won't be an exact match stored as a hash?
Passwords are easy: you're not trying to match "well, that's nearly your password, I'll say that's good enough". You take the password entered, apply the hashing algorithm, and match the hashes.
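The exact-match flow described above can be sketched in a few lines of Python. This is a minimal illustration, not anything from the Biostar system: the function names and the choice of PBKDF2 with 100,000 iterations are my own assumptions about a sane baseline.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # Derive a slow, salted hash of the password -- the plaintext is never stored.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    # Re-derive with the stored salt and compare digests in constant time.
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("hunter2")
print(verify_password("hunter2", salt, stored))  # True: exact match
print(verify_password("hunter3", salt, stored))  # False: any difference fails
```

Note that the server-side record (salt plus digest) is useless to an attacker without a slow brute-force, which is exactly what Suprema's plaintext storage threw away.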
I'm probably being thick here, but I assume that fingerprint matches aren't likely to be precise. Even without things like scratching one's fingertips, the scans are noisy. So the entered data won't exactly match the stored data, and the hashing algorithm is going to result in any difference in the data producing completely different hashes.
Not thick at all - the short answer is "complex maths", a longer answer is here in an academic paper:
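To see concretely the noise problem that the "complex maths" has to solve, here's a toy demonstration (the 16-byte "template" is made up for illustration; real templates are vendor-specific):

```python
import hashlib

# Toy "fingerprint template": 16 bytes of made-up sensor data.
template_a = bytes([0b10110010] * 16)

# Flip a single bit to simulate sensor noise on a second scan.
noisy = bytearray(template_a)
noisy[0] ^= 0b00000001
template_b = bytes(noisy)

hash_a = hashlib.sha256(template_a).hexdigest()
hash_b = hashlib.sha256(template_b).hexdigest()

# One noisy bit yields a completely different digest (the avalanche
# effect), so naive hash-and-compare would reject every genuine scan.
print(hash_a == hash_b)  # False
```

This is why biometric systems need error-tolerant schemes (fuzzy extractors and the like) rather than the plain hashing that works for passwords.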
It feels as if every week there is some horror story about unencrypted data being exposed all over the intertubes or a cache of such data has been found on the dark web etc.
We here on the Reg forums and reg hacks fume about poor decision making and design catastrophes but it continues nonetheless.
Perhaps GDPR will help but ...
As ever, I have no answers of course being just the lowly sysadmin who still puts his faith in the shadow file
"Perhaps GDPR will help but ..."
While GDPR is an improvement, it's unfortunately rather too late when it comes to bolting the stable door. Even if it all works amazingly and a few years from now everyone has perfect security, that doesn't help if your name, address, DoB, email, favourite passwords, face and fingerprints are already all out in the wild. It certainly can't hurt to try to stop things getting any worse, but so far we're not really doing anything to figure out how to cope with the fact that everything has already been hacked and leaked.
It's not like you can replace fingerprints. This is the big problem with biometrics, as I've been saying for years.
Get a copy of the hash and it's possible to rapidly do a go/no-go test and iterate towards a print that will work on *other* readers.
Sounds like someone will be getting the mother of all GDPR ass kickings.
Whoever thought of storing biometrics in the clear was smoking something. You have to process them and store only the processed information, not the whole image. Theft of fingerprints means that anyone can masquerade as you; they can murder someone and leave fingerprints that look like yours.
they can murder someone and leave fingerprints that look like yours
That is most unlikely. What is stored is a data representation of a fingerprint obtained from a biometric scanner. It is not reproducible as an actual fingerprint, as far as I know.
However, that said, if stored without encryption, it means that the data, if stolen, can be used to fool a biometric security system by simply bypassing the scanner and sending the data to the computer doing the comparison with the dataset.
My kids' schools offer fingerprint payment systems to pay for school dinners etc. They keep re-assuring me (as the parent) that the system only stores the hash. But they (teachers/admin staff) don't know that definitively as they didn't engineer the system. Even the sales/support staff are only spouting what the brochure/manual says.
This biometric security company has failed that security mantra in a big way. So what chance is there that a state-funded school's payment system of choice has done things the way it should?
They are also supposed to offer a non-biometric method (a swipe card). One school ignored my requests for that, and the other did supply the card as requested.
As numerous other posters have already said, once it's out, how do you change your fingerprint?
Oh and a hospital offered to scan my iris to secure my hospital records... obviously I refused. Don't fancy a Minority Report style eye change...
They must love you. I noticed as a child (not a parent) questioning anything they did was a problem. Mainly because, even if they agreed, they had possibly hundreds of people above them, who insisted on "this is the way to do it" mentality. If you're not following along, you are labeled as a problem. :(
El Reg - I hope that using the "anonymously" posting designation on your website is truly what that means. I used to think I was being paranoid, but now I just think I won't even bother with fear. I will just wait for your offer of a year's free credit monitoring...seems almost inevitable.
Yep. ElReg anon is only a user-facing mask. The backend (and anyone with appropriate access to it - e.g. ElReg editorial staff and mods, at least) can see who you are (in as far as your profile information and IP access records allow them to, at least). As Dazed says, it needs to work this way for tracking dis/likes, the 10-minute edit window, 'My Posts' and forum moderation... Well, there might be some super-clever way to anonymise some of that, but probably not worth it for this type of forum.
Treat it as a light-privacy option, not a security one.
Note: they have never claimed otherwise.
(Also note: I am not with ElReg. The above information is extrapolated from observed system behavior).
Secrecy only serves capital. It is anti-human... AC, because those are the rules you all want just now.
We are used to secrecy, but it makes zero difference when it doesn't affect the bottom line. Which is the only reason that it's there in the first place.
And what's a human to do if they have to use a system and the system gets breached? Are they supposed to just change their fingerprints? The point is, it's all just a race for no reason at all. What's "secure" today is wide open tomorrow.
So, suck it up...I don't care...my bank covers my losses ( it's money they created out of thin air anyway ) and no harm comes of anything.
So, ask yourself, why do you care?
"So, suck it up...I don't care...my bank covers my losses ( it's money they created out of thin air anyway ) and no harm comes of anything.
So, ask yourself, why do you care?"
That is possibly the most depressing post I've read here - worse than code junky and his Brexit crap.