Sadly very true
Get hacked and virtually no impact to reputation or share price.
Compare it to the cost of doing secure IT and yet again the fucking bean-counters chose the low cost option and fuck those impacted by the breach.
Whenever mega-hacks like the Yahoo! fiasco hit the news, inevitably the question gets asked as to why the IT security systems weren't good enough. The answer could be that it's not in a company's financial interest to be secure. A study by the RAND Corporation, published in the Journal of Cybersecurity, looked at the frequency …
Company officers actually have a legal duty to operate the company in a financially responsible manner.
Oh really? And this is written where exactly?
The only one to complain will be the shareholder in any case...
Besides, the "financially responsible manner" is open to interpretation and risk management.
Perhaps others may have missed the sarcasm in your post thereby resulting in you getting more down votes than up votes.
I am sure that what you meant to say rather than imply is something along the lines of...
<sarc>Company officers actually have a legal duty to operate the company in a financially responsible manner.</sarc>
However the regulatory authorities involved, in this case financial, are either clueless shitwanks, limp thickdicks, in someone else's pocket, taking bribes or expect to parachute out of their 'public service' job into an executive role for the offending company on more money than they get at the moment along with a Golden Goodbye and Pension Pot from their previous 'public service role' so fuck all happens.
The previous applies to any and all regulatory authorities, unless they are fucking over a 'public service entity' whereby any fine and/or costs are paid for by your taxes, and, as a result, nothing short of exploding dildo LiION batteries will wake them from their slumber, probably not, having crawled off the wife in order to drool on the dog whilst snoring after another exceptional one minute performance with 10 inches of rock hard meat, 'Mr Limp' failed to protrude beyond the belly, leaving the wife to finish the job off and then the bean counters will still do a cost-benefit analysis.
Err... You might prefer to use different words.
HTH
"However the regulatory authorities involved, in this case financial, are either clueless shitwanks, limp thickdicks, in someone else's pocket, taking bribes or expect to parachute out of their 'public service' job into an executive role for the offending company on more money than they get at the moment along with a Golden Goodbye and Pension Pot from their previous 'public service role' so fuck all happens.
The previous applies to any and all regulatory authorities, unless they are fucking over a 'public service entity' whereby any fine and/or costs are paid for by your taxes, and, as a result, nothing short of exploding dildo LiION batteries will wake them from their slumber, probably not, having crawled off the wife in order to drool on the dog whilst snoring after another exceptional one minute performance with 10 inches of rock hard meat, 'Mr Limp' failed to protrude beyond the belly, leaving the wife to finish the job off and then the bean counters will still do a cost-benefit analysis.
Err... You might prefer to use different words."
Wow. Just wow. And I had to quote it just to flavour the full awesomeness again. Best rant I've read in years. If I could up-vote multiple times, I would.
"Company officers actually have a legal duty to operate the company in a financially responsible manner."
Indeed they do.
But this is like a bank deciding: "The clients money is insured anyway, so why build an expensive vault? Just store the cash in cardboard boxes in the back room."
Bank saves millions, but would you call it "financially responsible"?
That is, unless and until the insurance company "recovers" the loss from the bank for failing to meet the terms of insurance.
Or the regulatory authority and/or court awards punitive fines and/or damages.
Clearly, insurance companies will be the main driver for good security for the foreseeable future.
It was a cost benefit analysis.
The cold equations said it'd cost them a shed load of money to do the changes and save about 180 lives and 180 burns cases so the Board said f**k em.
IIRC one of those burns cases was an 11YO boy. :(
Please note insurers regularly put a value on human life and some industries or products specify the model. IIRC a weighted average life time salary is often used, about $1-2m.
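The "cold equations" mentioned above can be reproduced from the figures usually quoted from Ford's 1973 cost-benefit memo. Treat the numbers as illustrative, they are the widely reported ones, not an endorsement of the method:

```python
# Widely reported figures from Ford's 1973 cost-benefit memo (illustrative).
deaths, burns, vehicles = 180, 180, 2100
cost_per_death, cost_per_burn, cost_per_vehicle = 200_000, 67_000, 700

# "Benefit to society" of fixing the fuel system, as the memo priced it:
benefit = (deaths * cost_per_death
           + burns * cost_per_burn
           + vehicles * cost_per_vehicle)

units = 12_500_000        # cars and light trucks affected
fix_cost_per_unit = 11    # dollars per vehicle for the fix
cost = units * fix_cost_per_unit

print(f"Benefit of the fix: ${benefit:,}")   # $49,530,000
print(f"Cost of the fix:    ${cost:,}")      # $137,500,000
```

Roughly $49.5m of benefit against $137.5m of cost, which is how the Board got to "f**k em".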
Putting a value on human life is a very complex issue. I'm pleased to say that in my 20+ years of infosec I've only had to weigh the cost of the loss of data and reputation. Very often the cost of securing data is more than it's worth. The classic example is to question the value of storing a £200 lawn mower in a shed with a £5000 lock and vault door. Whilst you may feel that your personal data held by a bank or business is priceless, sadly you are on your own with your estimation of its value. It's not always the bean counter making the call: it's much more likely the infosec manager deciding how much to spend securing your data.
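The lawn-mower analogy is just annualised loss expectancy versus control cost. A minimal sketch, with a made-up theft probability:

```python
# Hypothetical figures for the lawn-mower analogy: if the control costs
# more than the expected loss it prevents, it's bad risk management.
asset_value = 200          # the £200 lawn mower
annual_theft_prob = 0.05   # assumed 1-in-20 chance of theft per year

ale = asset_value * annual_theft_prob   # annualised loss expectancy
control_cost = 5000                     # the £5000 lock and vault door

print(f"Expected annual loss: £{ale:.2f}")
print("Worth it" if control_cost < ale else "Not worth it")
```

£10 a year of expected loss against a £5000 control: the vault door loses every time.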
The effect on your reputation varies based on your industry. A bank getting hacked and losing customer information is going to be quite expensive, not just in mitigating the effect on existing customers, but the loss of selfsame customers and a dearth of new ones for a significant period of time, not to mention the fines imposed by the regulators.
Pay the money, get the good stuff, and be the best in the business. That's where you want to be.
Decades later everyone still quotes it (that and the "Ratner Effect"). A few contributing factors:
(1) it was a stark pricing of life - in public spaces we're too squeamish to do this (see the "can't put a price on human life" items posted on every public health decision)
(2) it used that pricing to directly drive company behaviour, rather than pretending to be "nice" (we all know that companies run on cash not cuddles, but we'd prefer to believe it otherwise and they spend a lot of advertising bucks on the pretence)
(3) it was a company still thought of as a cornerstone of the American Dream (early 70s, US car firms still coasting on fumes of previous glories)
(4) it was novel (the public revelation, if not the behaviour)
(5) for whatever reasons brand identities matter in that sector: automobile manufacturers like to tout their legacies. In IT who does, other than IBM (and that's pretty hollow nowadays)? Some firms "re-invent" themselves, like HP/Compaq/whatever-the-fuck-is-left-of-them-now, some supposedly valuable identities get transplanted like McAfee (private/Intel/make-us-an-offer)
Since this doesn't apply when a dot-com loses our personal data (most of us lack a similarly visceral reaction of its value and with so many similar cases in recent years there's no sustained shock) we can't hope that the risk to the reputation will be a useful deterrent. So if we don't want the damage externalised (i.e. shat all over the customers, as it is today) are we looking at somehow devising standards and imposing serious penalties? (so much devilry in the details of how standards should be developed and compliance tested, etc)
The "Ratner effect" is an interesting study in reputational losses to a company. People think of it as a comment that destroyed a company. But it didn't - Ratners is still the largest jewellery chain in both the UK and North America. Of course, the name is different now. Ratner's comment didn't sink the company, just forced it to ditch one brand and continue business as usual through its others.
".....(1) it was a stark pricing of life.....' No, it wasn't. The original Ford paper was nothing to do with "corporate culture" or "greed", it was simply a cost-benefit paper produced by Ford in 1973 for the NHTSA when the NHTSA was suggesting new rear-crash testing regulations. The original paper was a comparison of the costs to Ford of changing the Pinto fuel system and the cost to society of crash injuries and victims relating to burns from the existing design, not the cost of Ford being sued. This was subsequently taken waaaaaaaaaaay out of context by "progressive" Mother Jones journo Mark Dowie in a 1977 article, in which he even lied about the figures (he changed the analysis from 180 deaths to 500-900) to suit his Big Bad American Corporation theme. Ford was completely in line with the 1967 regulations when it originally designed the Pinto and the Pinto was later shown to be no more at risk from rear collisions than any of its competitors.
It's yet another example of biased risk-awareness.
You can take a service - let's say...a calendar. You've got a choice of going with provider A, who will give you a product that's free but with a few adverts and some behind-the-scenes data-slurping and the possibility that any details you give them may end up being sold in bulk by whichever unscrupulous group has compromised the security of that organisation.
Or you go with company B, who don't give you adverts, don't mine your data and invest heavily in their cyber-security platforms - but it'll cost you £10 a month for a product of a comparable standard.
Probably 95% of people would go for the former, and accept the risk that there's a very slight chance that some of their credentials will be compromised. If company A doesn't need your address and bank details, then the compromise is an inconvenience to the average user. If company B is compromised - and let's remember that no connected system can ever be 100% secure - then potentially you'll be exposed to a significantly larger loss - not just getting spammed for viagra and russian brides, but you may lose real beans-and-beers money from your bank account or credit card.
So yeah. I don't like the message but I kind of understand it. It feels like people increasingly see "being hacked" in the same vein as getting a speeding ticket - you do what you can to avoid it, and if it happens you'll be annoyed, but it's not the end of the world.
Interesting.
"If company A doesn't need your address and bank details, then the compromise is an inconvenience to the average user."
Really? A calendar as per your example? What's on the calendar?
Uncle Fred's 60th birthday next week? Ooh look, we've got Uncle Fred's DoB. Is Uncle Fred identified in any further way? Does it have his email address? A little bit of information for ID theft and material for a more convincing phishing attempt - click on this e-birthday card.
Leave on holiday in 2 weeks' time, return 10 days later? House unoccupied - nice.
Lots and lots of scope from a busy calendar.
"Lots and lots of scope from a busy calendar."
You can replace 'calendar' with whatever you like - the OP was clearly just trying to come up with a simple example.
Company A provides a Facepalmascope. It's free [but slurping]. Company B provides one for a small fee, with no slurping - but payment details. Etc.
Sometimes there's a time for pedantry, and sometimes there's a time to look beyond it to see the abstraction.
"You can replace 'calendar' with whatever you like - the OP was clearly just trying to come up with a simple example."
Of course. I was just taking his example in the same way. Whatever you use a service to store there's likely to be lots of criminal value in it beyond the login and financial stuff. What other examples do you want? Email server? Password store?
Curious to know to what degree affected companies have been able to externalise the costs of data breaches. What price can we put on the hassle / worry / embarrassment / waste of time / personal financial loss accruing to the people whose data is lost?
Companies won't invest in good security until they are liable for the true and full costs of their greed, laziness and incompetence. We need to push our politicians to formulate laws and enable compensation schemes that make it too expensive to be cavalier about security.
Well, once Article 50 is signed, it's supposed to be a two year process - and if Supreme Commander May signs it in early 2017 (as has been suggested), that means we'll still be in the EU until early 2019 - so it'll be interesting to see what happens with such things.
I'd rather it not be interesting, though, and just know.
Well, once Article 50 is signed, it's supposed to be a two year process
ITYF it's not more than two years; it can be shorter.
Now given that we're negotiating from a position of weakness - and the treaty seems to be set up to do that deliberately - I can't imagine it taking less than the full two years. But Boris was banging on about doing it more quickly the other day...
Vic.
Ultimately the decision is to be taken en-masse by the buying public, and time and time again the public proves it couldn't give a rats if a) They've not yet suffered personally and b) Insecure company A is cheaper. For example, look up phone/broadband in the UK, you'll almost certainly find TalkTalk is cheapest and they're getting new customers even as we speak yet everyone remembers their data breach, which proves TalkTalk were right to do a piss poor job as long as people can save about £20-£30 a year.
"For example, look up phone/broadband in the UK, you'll almost certainly find TalkTalk is cheapest and they're getting new customers even as we speak yet everyone remembers their data breach,"
Yup. I know a few - including two clients - who have signed up with them since that. When asked - with the breach specifically mentioned - they've all said the same thing: They were the cheapest.
People are stupid.
Take Yahoo (you can keep it). They want your phone number to do 2FA. They get hacked and the phone numbers they've got get stolen so if you gave them your phone number you're less secure, not more.
Why have they not used FreeOTP (or made a brain-dead tap-and-drool Yahoo-branded version based on the open source original) to do 2FA? If they had done this then I would have used 2FA, but they don't, they insist on a phone number which I'm never going to give them precisely because of hacks like this... or because they've probably sold them on to their "trusted partners" anyway.
Trust goes both ways, very few companies manage to show they're worthy of it.
Or another step: require insurance for handling of users' and customers' data and let the insurance premiums factor in the possibility of a breach in your particular setup. You cannot have "100% secure" and you have to pay one way or another - either in your security setup or in insurance premiums.
However, step one to this would be for companies to actually need insurance money, in significant quantity enough to bother with insurance in the first place. That's when fines (or perhaps civil action lawsuits) come in.
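As a rough sketch of how such breach-probability-weighted premiums might be set, an expected-value calculation (all numbers invented) looks like:

```python
# Sketch of expected-value premium pricing for a cyber policy.
# All figures are invented for illustration.
def premium(breach_prob, expected_payout, loading=0.3):
    """Premium = expected payout plus a loading factor for costs/profit."""
    return breach_prob * expected_payout * (1 + loading)

# A sloppy security setup pays for its risk via the premium:
sloppy = premium(breach_prob=0.20, expected_payout=2_000_000)
secure = premium(breach_prob=0.02, expected_payout=2_000_000)
print(f"Sloppy setup: £{sloppy:,.0f}/yr, secure setup: £{secure:,.0f}/yr")
```

Either you pay for the security setup, or you pay the difference in premiums: the cost lands somewhere.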
" fine the companies that are hacked."
You want immediate action? Howls of agony and outrage? Actual results?
As well as fining the company, freeze 10% of each shareholder's stock for two years, or until the problem is fixed.
Most companies would be secure before the next quarter.
(And a lot of politicians would lose campaign funds next election, so win/win!)
Per comments by other posters, the cost doesn't really hit home... until it gets personal. Taking this kind of risk is a gamble to be sure, and one component the article doesn't look at is the future risk of IoT. We're on a digitization fast track to ramping up connectivity of everything from cars and homes, to things we haven't even conceived of yet. The study cited looked at 2004-2015; even if we agreed with those round average numbers, that's not the future we're looking at.
The author of the report clearly knows nothing about cyber insurance.
How can a cyber insurance company know anything about the actual state of a company's cyber security when its primary data intake is a 9-page, user-filled-out form?
What cyber insurance companies do is assess 2 buckets:
1) Value at risk: how much would they possibly have to pay. This part, they can judge reasonably well.
2) Is the insuree trying at all? Trying but incapable? Not trying?
That's where the industry is at right now, and will continue to be until some way of understanding "good" security vs. "bad" security can be automatically and easily computed.
"That's where the industry is at right now, and will continue to be until some way of understanding "good" security vs. "bad" security can be automatically and easily computed."
But the human factor always gets involved which is why computers can't do it and why you need human actuaries; it takes one to know one, basically.
This is the kind of insanity that inevitably happens as a consequence of the mindless pursuit of wealth/capital for the sake of accruing more wealth/capital. Eventually no one wins; we're not quite there yet, but soon enough we will be.
PS Marx had a thing or two to say about this kind of degeneracy.
PPS AC because of all the neo-con cuntz that lurk here
I will just state that you don't know what a neo-con is.
Protip: It has nothing to do with capitalism. It has, however, a lot to do with Trotskyism and Zionism.
The real problem, if you ask me, is that the operators of these sites are never held accountable for their own shitty security. Everybody blames the hacker. The real black-hats are rarely caught, but sometimes a white-hat will politely point out a vulnerability and expect to be rewarded - instead he is ignored (perhaps to save face) and the vulnerability often remains unpatched. So a grey-hat comes along and rudely makes the vulnerability obvious to all. In most cases he is attacked by the organisation (never mind rewarded) and frequently prosecuted by the state (who want to make an example of him in the hopes that this will scare the black-hats).
IMO the real reason that companies never bother to secure their networks, is because they can always label it as a "cyberattack", as if NOBODY could have stopped this ACT OF TERROR on their systems.
When a system of this scale gets compromised, it should be the sysadmin, not just the "hacker" who gets held accountable by the state. It would be nice if there was a neutral authority that white-hats could report vulnerabilities to, which will confirm them and then force (by law) the companies involved to close them.
Then again, the cynic in me says these sites are deliberately left open so that the state spies can get in, whilst having yet another excuse to pursue and destroy anyone else who wields the same power.
"When a system of this scale gets compromised, it should be the sysadmin, not just the "hacker" who gets held accountable by the state. It would be nice if there was a neutral authority that white-hats could report vulnerabilities to, which will confirm them and then force (by law) the companies involved to close them."
No such thing as a neutral authority. Any that rises gets corrupted the moment it appears. That's how cutthroat the industry is around here. As for pointing blame, everyone will just point fingers (the executives/sysadmins refused to listen) and then disappear beyond extradition. How can you properly punish transnationals who can hide behind the sovereignty of another power?
Read much?
From the link:
"More startling, Schwartz shows that everyone's received ideas about the fabled "smoking gun" memo are false (the one supposedly dealing with how it was cheaper to save money on a small part and pay off later lawsuits... and immortalized in the movie "Fight Club"). The actual memo did not pertain to Pintos, or even Ford products, but to American cars in general; it dealt with rollovers, not rear-end collisions; it did not contemplate the matter of tort liability at all, let alone accept it as cheaper than a design change; it assigned a value to human life because federal regulators, for whose eyes it was meant, themselves employed that concept in their deliberations; and the value it used was one that they, the regulators, had set forth in documents."
"When a system of this scale gets compromised, it should be the sysadmin, not just the "hacker" who gets held accountable by the state."
You mean the guy (or team) that's repeatedly brought up the issue with superiors, only to be brushed aside with "No, that's not in the budget.", or, "No, we see no problem here."? That guy?
Better: The authorities should lock down the system and interview the sysadmin as a witness, as soon as a breach is reported.
If it's the sysadmin's fault, then hold him/her/them accountable.
If not, follow the trail of "No!" upstream until the source is found, then hold that party (or parties) as accountable as the hacker.
Amen to that.
I have raised security issues until I am blue in the face and got nowhere. In fact I STILL raise such issues and get nowhere. I've even had to sneak some of the more basic security measures in when deploying new systems (You did it how? No, now you have to do it like this...).
Crazy way to work. Even if the PHBs don't give a shit about the security of data (yours or otherwise), some sysadmins, like myself, sure as hell do!
Anyway, why pick on the sysadmin? What about application security? What about database security? What about basic file security? Where does the witchhunt end? Not all data breaches are external to the company.....
You mean the guy (or team) that's repeatedly brought up the issue with superiors, only to be brushed aside with "No, that's not in the budget.", or, "No, we see no problem here."? That guy?
Of course! After all, most employees' only real worth is to be the scapegoat, isn't it? Just ask the newly former employees at Wells Fargo and the CxO who got a $118 million payday for it.
I can see the insurance industry and some in-house risk management departments getting a look at this and saying "So what if we lose a few million accounts?" It's all about the bottom line and, despite some politicians making capital on some mild outrage, we might not expect much better security looking forward.
We will most likely see just an acceptable risk formula applied to our very dear personal identities.
Obviously, the Ford Pinto debacle would not have happened if, instead of $49 million, Ford had been expected to pay the actual medical care costs of saving the victims of its decision not to engage in a recall, holding them harmless from its consequences. The cost of raising them from the dead would, of course, be an untold number of billions or trillions of dollars, since it would include the cost of the necessary medical research program to find out how to do that.
So the message would be clear: do not do anything that puts lives at risk. Ever. Unless you're God.
Criminal negligence causing death, known as involuntary manslaughter in the United States, could also carry the death penalty; since it's a culpable act that results in death to its victims, that's hardly excessive.
In computing, security is just about avoiding doing risky things. The problem is that people doing IT seem to have learned their trade from TV shows like "Jackass".
Instead of building simple interfaces to systems which everyone can use, they design hugely complex web interfaces everyone hates, as they include GUI frameworks that keep you from copying and/or pasting the information you want to transfer. Just think about how much simpler it would be if you just had your users ssh into a computer where they can access shell scripts for the things they want to do. For your average user that would still just be "magic" that's invoked by copying and pasting "magic words" from their text file to their shell, but behind the scenes you'd have a _lot_ less code and a lot fewer things that can go wrong. Plus you can do things like authentication the sane way (public key) instead of passwords.
Also, I think the original work makes a statistical mistake. The average breach may cost X, but what is the distribution of X, and what is X as a percentage of company revenue, and its distribution? Also, most shoplifting and billing fraud does not affect the stock price either, and people know it is a real problem. The other issue with computer security is that some information is legally protected, so if it gets out in the wild you are on the hook for really nasty lawsuits from the real victims.
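The statistical point is worth spelling out: breach costs look heavy-tailed, so the average is dominated by a few mega-breaches and says little about the typical case. A quick sketch with an arbitrary lognormal model (the parameters are invented, not fitted to real breach data):

```python
import random
import statistics

# Simulate heavy-tailed "breach costs" with an arbitrary lognormal.
# Parameters are illustrative only, not fitted to any real dataset.
random.seed(42)
losses = [random.lognormvariate(mu=13, sigma=2) for _ in range(10_000)]

mean = statistics.mean(losses)
median = statistics.median(losses)
print(f"mean £{mean:,.0f} vs median £{median:,.0f}")
# With a heavy tail, the mean sits well above the median: quoting
# "the average breach costs X" overstates the typical breach.
```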
Those website interfaces are not done by IT, but by marketing and their cadre of designers who really don't know fuck all about computers, networks, security or user interfaces, but know everything about buzzwords, "metrics" and the scripting language du-jour.
Quick story: I'm doing support for this billion dollar company and I get a deskside call from a director. Legal, no less. Go there and they tell me there is an Outlook issue. An HTML email is not displaying correctly. Right away I know what the problem is but have to do the whole Outlook diagnosis song and dance to cover my ass when I finally tell them.... the HTML code is fucked. (It was. I found the problem in the first five minutes then spent the next hour explaining how this wasn't an Outlook issue and doing the diagnostic dog and pony show.)
This only got me into trouble with my boss and the director of marketing, as marketing was sending the email to legal for final approval. Turns out the two directors didn't like each other, but I sure as shit did not care and was sure as hell not going to lie to the legal department. My boss had the fucking gall to tell me I shouldn't have said anything, for which I would still have been punished.
And that is just ONE of the reasons I have for my very deserved hate toward marketing.
Within the last year, el Reg has reported Infosec experts charging $10k a day for their services. At those kind of prices, it's really not surprising that businesses are opting for Pinto solutions. The reason all just boils down to the continuing shortage of infosec experts, leading to a tiny number of overworked professionals charging inflated prices and many businesses just deciding to take their chances instead.
There's no shortage of infosec experts. I know several who have decades of experience and are out of work and have been for some time.
As for the 10k consultant, that's an outlier.
No, the problem comes back to, and will always come back to, manglement. The infosec folks I know who ARE working tell me horror stories all the time that the C level is textbook clueless.
"There's no shortage of infosec experts."
That's odd, because more or less everyone in Infosec and everyone in infosec recruitment disagrees with you aside from 'some guys you know'. The skills shortage in infosec is not some crazy controversial thing. It's been the standard situation for the last 5 years, with hundreds of thousands of unfilled positions and a tiny set of people who have been thoroughly trained. If your mates are out of work, then the simple answer is they aren't infosec experts, despite what they might believe.
Where I live, in Manchester, mid-level Infosec positions are offering 80-100k salaries with benefits and are struggling to fill them. Similar level positions in other areas (systems engineers, say) are pulling in 40-50k and filling quickly. There's a clear supply-demand issue in infosec.
Management doesn't help here (for example, they might want to try, I dunno, paying for some of their staff to take training courses to fill that gap instead of all chasing a tiny talent pool), but they're basically doing their job properly - they look at the risk, the fallout from it going wrong, and the cost to mitigate that risk, and they're taking the cheapest option. That's what management is for. The problem is that the cost to mitigate the risk has been enormously distorted due to the skills shortage, leading the worst decision to become the most cost-effective option. Exactly as in the Pinto case.
I disagree with this article for various reasons, mostly from the actual experience of dealing with breaches and situations where a company has had to clean up after an incident. It can be very costly, and the fact this article only focuses on what is going on in the IT Department is very narrow thinking when Information Security is a company-wide initiative and program. The PR ramifications can be wide and long-lasting, as the article touches on, but that's not the full picture. To me, this article is just a small sprinkle of the overall issue itself. I agree that costs of risk mitigation must be weighed against the risk itself, especially likelihood with additional compensating controls. Yahoo is about to learn a very valuable lesson on how skipping particular security practices and controls will potentially cost them millions of dollars and jeopardize their sale to Verizon.
You know that's not gonna happen. First, corporate hierarchies are designed to ensure scapegoats (deflecting liability is one reason corporations exist to begin with--otherwise, investors won't invest). Second, why do you think corporate bigwigs donate to Congressional campaigns and the like?
"Insurance companies would also be in an ideal position to judge what IT security systems work best, he pointed out. After all, their job is to price risk and they would have the data on incidents and how they occurred. "
This is only sorta true, and I don't think it will work.
Insurance companies are in the business of making money. They do this not by assessing the risks and charging appropriately for an insurance policy. They do it by deliberately seeking to avoid paying out any policies. You get flood insurance? A flood comes along and the company will do its level best to avoid paying out any of the policies - "act of god, not covered", "sorry, the coverage only relates to rising water, but before the rising water flooded your house, the storm winds and falling water destroyed your house first, therefore when the flash-flood came through 2 hours later, there was nothing left for it to damage", "sorry, you didn't add your child who was born 23 minutes ago to the policy, and the policy required all dependants to be listed as part of the flood insurance policy" and so on and so forth.
Insurance means jack for ACTUAL risk, as they are institutionally predicated on never paying out a policy; even a valid one, they will try to find ways to avoid paying.
They'd rather make $3bill by never paying out any policies than making $2bill and paying out policies.