"“Better products will do better because buyers will quickly be able to determine that they’re better,” he said."
Kinda weird seeing such a naive comment emerge from Bruce. He must have been asleep in the 80s, 90s and 00s.
Hacking attacks are more or less inevitable, so organisations need to move on from the protection and detection of attacks towards managing their response to breaches so as to minimise harm, according to security guru Bruce Schneier. Prevention and detection are necessary, but not sufficient, he said. Improving response means …
The ironic part is, he said this while introducing a new product.
One that an FNG (F**ing New Guy) can fardle into the vastness of imagination.
Meanwhile, I've said the very same thing, for the past 20+ years.
An Advanced Persistent Threat (APT) can and *will* enter any enterprise that it desires to enter.
Which is why governments have segregated networks for various security levels.
They're well funded, experienced, and know the various hardware and software platforms. It's their job to do so.
One can only hope to delay the intrusion before information is exfiltrated.
The real problem is, it's expensive to do so.
Expensive enough in terms of inconvenience to operations, as well as financial considerations.
Find a solution to that problem and you'll be rich beyond royalty. Good luck finding that magical balance. I haven't, Bruce hasn't, and even smarter people haven't.
So, it's all down to delaying the SOB and noticing entry, then cutting the link and remediating.
I'm not rich (I'm an engineer..) but I believe we may have the "magical balance".
For the last two years, our website has been under attack by hackers from 71 countries. So far, they've lobbed every hack you can imagine at us (lately, bash exploits), with a total of over a quarter of a million hack attempts in that time.
Result? Bad Guys zero.
I don't buy the premise that the hackers will always get in. With good engineering practices, it's possible to make a Unix system bulletproof, but you need to decide where your priorities lie. We're in the security business, so security is Number One, and everything else is of secondary importance.
That's where we differ from people like JP Morgan, Target and all the other victims of cybercrime.
We don't run anything with PHP in it, even if it does make the graphics fancier or save development time; we don't use DHCP, so people are forced to use the company's hardware; and we don't use scripting languages on anything that faces the web. Try doing this for a start, and a lot of your problems will go away.
In addition to the above, we run an IDS/IPS which actually does what it's supposed to do.
I was so disillusioned with the rubbish that was on the market, that I wrote my own, which isn't a rule-based shell script, but is a content-based executable. It will identify an incoming query as a hack, drop the connection and add a new firewall rule to block the address in under a second. It will also report the IP address to its ISP, so the parasite gets taken off the internet.
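The tool described above isn't shown here, so this is purely an illustrative sketch of that detect-and-block loop, assuming Linux iptables and two hypothetical content signatures (a real detector would carry far more, and would parse the full request rather than a single line):

```python
import re
import subprocess

# Hypothetical content signatures: a Shellshock-style function definition
# and a classic SQL-injection probe. Illustrative only.
HOSTILE_PATTERNS = [
    re.compile(r"\(\)\s*\{\s*:;\s*\}\s*;"),   # Shellshock-style payload
    re.compile(r"(?i)union\s+select"),        # SQL-injection probe
]

def is_hostile(request_line: str) -> bool:
    """Content-based check: does the request match a known attack shape?"""
    return any(p.search(request_line) for p in HOSTILE_PATTERNS)

def block(ip: str, dry_run: bool = True) -> list[str]:
    """Build (and optionally run) an iptables rule dropping all traffic from ip."""
    cmd = ["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"]
    if not dry_run:
        subprocess.run(cmd, check=True)  # requires root
    return cmd

if is_hostile("() { :; }; /bin/cat /etc/passwd"):
    print(block("203.0.113.9"))  # dry run: just prints the rule it would add
```

The reporting-to-the-ISP step (typically a WHOIS lookup on the abuse contact plus an email) is omitted here for brevity.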
This isn't a sales pitch, I'm happy to give the source code away for free to anyone who drops me a line, either direct, or via Linkedin.
Show me the money, Sitkowski.
Seriously, unless you are independently wealthy, and given that you have disavowed the second clause of the Title, something smells rotten in Denmark. My first suspicious guess would be back doors in anything that's given away. Even in source form, it could be obfuscated in some harmless-seeming utility function.
To me, the root of these problems is obviously economic; it just depends on how you understand economics. For example, start by imagining that Microsoft were not allowed to use a EULA that absolutely absolves them from any legal liability for their most egregious and damaging mistakes. Yeah, hard to imagine, but you can be sure they would be MUCH more cautious about what features they put in the OS.
Actually, I'd go even deeper and say that the real problem is that economics itself is in desperate need of a new paradigm. My candidate is TIME, not money. The problem is that time is harder to measure and quantify, so the economists picked money instead. But how would you budget your time if you actually knew how much you had left?
"With good engineering practices, it's possible to make a Unix system bulletproof"
Sorry, but there is no such thing as a bulletproof Unix system. That particular punchbowl has been drained dry and the hangover has started to kick in.
Security is an OS-agnostic issue, and ALL operating systems have weaknesses awaiting exploit. You too will be hacked just as soon as someone figures out a new route and wants into your company badly enough to hit it first.
A little more sobriety needs to come forth from the Unix community. After decades of it being Microsoft, things have changed: you are now the weakest link.
All your experience means is that you haven't been attacked by a motivated, funded and skilled hacker yet. Having script kiddies launching automated attacks at you is one thing. Having someone decide that they're going to go after you specifically, and have the resources to spend a lot of money getting top talent to do so would provide a different result.
Unless you have NEVER had a security hole -- i.e., you've NEVER applied a single patch that fixed a security issue you were previously vulnerable to but didn't know about -- you can't claim to be as invulnerable as you think. No one can claim that anyway. Heck, Shellshock exposed something that had been available to exploit for 23 years! You may have configured your systems so you weren't vulnerable to it, but that doesn't mean there aren't other issues, around for many years, that you are vulnerable to.
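The classic Shellshock probe is easy to reproduce; here's a hedged sketch (assumes `bash` is installed under `/bin` or `/usr/bin`; a bash patched since September 2014 prints only the harmless echo payload, while an unpatched one also executes the injected command):

```python
import subprocess

def shellshock_vulnerable() -> bool:
    """Export a crafted function-style variable and see whether bash
    executes the trailing command at startup (CVE-2014-6271)."""
    env = {"x": "() { :;}; echo vulnerable", "PATH": "/usr/bin:/bin"}
    out = subprocess.run(
        ["bash", "-c", "echo probe"],
        env=env, capture_output=True, text=True,
    ).stdout
    return "vulnerable" in out

print(shellshock_vulnerable())  # False on any bash patched since 2014
```

The point of the comment above stands: this hole sat in plain sight for decades before anyone wrote a signature for it.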
Someone, somewhere, knows about most security holes before they are found and patched, whether they're found by security researchers, found by the software owner/author, or accidentally fixed via a code change. Such attacks would be used against you if they were going after you. By the time an exploit is used for automated attacks, the best hackers have left it behind and moved on to others that the script kiddies don't know about. They've probably got a nice library of ways to attack you with things you are vulnerable to today but don't know about. They are careful about using their best stuff, as the less it is used, the greater the chance it will still be available to them the next time they need it.
> Kinda weird seeing such a naive comment emerge from Bruce. He must have been asleep in the 80s, 90s and 00s.
Quite. I wonder what audience he had in mind, as he's usually more thoughtful than this. Fifteen years ago we already regarded recovery as more important than discovery, yet the militaristic, adversarial "keep 'em out" approach is still prevalent, as indicated in comments below. But that approach means any intrusion is a strategic failure, and almost certain subsequent paralysis.
The big problem with Bruce's statement, though, is deciding on the severity of an attack. Rather like disaster recovery being invoked in instances short of a full-scale disaster: at what point do you kick in the recovery processes?
Security is complicated. I don't know whether you can ever stop a determined attacker, but it is certainly difficult in the extreme. It is effectively impossible for non-experts.
Our entire infrastructure militates against security. Technical people need to become more literate and the public needs to demand reasonable security. What we have now is not reasonable by any measure. We can't protect against the neighbors kids, let alone state sponsored crackers.
It is hard not to be paranoid when any person with even passing familiarity with our security situation knows it is beyond broken. There is simply no will to fix it.
Until everyone starts getting considerably more serious about security we will continue with ever more dangerous breaches. Are the techies on here not aware of how bad it is or are they just apathetic?
And with what resources do you propose we "take security seriously?"
Good security is expensive. In someone's time or in money to buy good products. Usually both. That's a business decision; techies don't get to make those.
Throw in the "security versus usability" arguments, add a dash of the current data-sovereignty mess, and you have a huge question around "whose stuff do you buy, and can you really trust them?"
Even for those of us who know what's what, it always comes back to money. Nobody's willing to spend on it. Not even IT companies. All they want is for you to buy cloud solutions. American cloud solutions.
So the choices we face seem to be "caught in the bottom of a dark pit surrounded by strange noises without the money to buy a ladder" or "in a wide open field covered in gasoline with yankee politicians playing with matches at the edge of the field."
>So the choices we face seem to be "caught in the bottom of a dark pit surrounded by strange noises without the money to buy a ladder" or "in a wide open field covered in gasoline with yankee politicians playing with matches at the edge of the field."
So which one is the American cloud solution?
In the common business model, where we rely on technology for protection, maybe. Probably, even. But we can do better. We HAVE to do better.
Our typical business security model is roughly equivalent to putting your front door on the side of the building and painting it purple, because no one will ever expect it there or to look like that. And stunningly enough, the average cyber thief is completely stumped by this, as they aren't overly clever (and are REALLY proud of themselves when they recognize the door on the side of the house, even though it is purple). Problem is...stopping 99.99% of the cyber-thief-wannabes is not enough when millions of attempts are being made...or one person wants your data really badly.
My other analogy is:
You run a business with a fleet of vehicles driven by your employees. A few of your employees are responsible for an unusual number of "events" with those vehicles. Do you:
1) Fire the employees?
2) Reassign them to non-driving jobs?
3) Train them to drive better?
4) Put bigger bumpers on the vehicles?
In the IT world, we just put bigger bumpers on the vehicles, the one thing that most people would consider the only WRONG answer.
I hate the statement "You can't achieve perfect security" -- while it may be true, it almost always is used as an excuse to not even try. Just because you may SOMEDAY make a mistake behind the wheel of a vehicle isn't an excuse to not try your best to drive safely, nor is it a vindication for those who perpetually put themselves and others at risk.
Technology cannot counter stupid people and bad designs. You cannot take horribly insecure applications and rely on technology to make them "safe". You cannot antivirus/firewall/technology your way to security.
And yet, that's what we do. We implement bad designs, let untrained people have access to things they shouldn't, and managers offer to terminate and replace any IT person who has the guts to say, "that's a bad idea from a security standpoint".
Realistically, security is almost never the first priority. In fact, it is usually close to dead-last, behind convenience, cost, something to stuff on my resume, and coolness.
I used to work for a large company which had a rigorous set of criteria for company-network connected smart phones. At the time I started, only the Blackberry came close to meeting the requirements (central manageability, remote wipe, full encryption, among others). We heard word that the CIO personally owned five iProducts. Those of us at the grunt level knew what was coming, and sure enough, it did: iProducts were to be permitted onto the company network, even though they didn't (yet) meet most of the security /requirements/ for attachment, and our job was to figure out how to make the new iProducts as unbad as we could make them, not say "we got bigger problems we need to solve first before you give us new problems".
We can do much better than we have. Step one will probably be liability for the people who allow data out. Not "We followed all these compliance steps so it isn't our fault" -- doesn't matter, YOU collected the data, you retained the data, you lost the data, IT IS YOUR RESPONSIBILITY. Simple.
Yes, I'm saying Schneier is wrong on this, and that puts me on the wrong side of a lot of people. But I feel he is. Can we make something 100% "secure"? Probably not. But we always need to try. And we can't take the totally full-a**ed attempts we've been making at something pathetically called "security" and say, "See? It doesn't work!".
We can't keep using the same insecure apps, no matter how "common". We can't keep using bad designs. We can't keep letting untrained, ignorant people play with dangerous tools like computers, and we can't keep taking a "Security Last" approach to design.
"1) Fire the employees?
2) Reassign them to non-driving jobs?
3) Train them to drive better?
4) Put bigger bumpers on the vehicles?"
You can't do (1) because they're probably in positions of trust. Fire them and you run the very real risk of retaliatory sabotage, and their position of trust means they can leave secret backdoors in their wake. (2)'s out because they're not stupid. ANY kind of relegation may as well equate to a firing. And they may not be willing to undergo (3). So what happens when you're caught between Scylla and Charybdis: caught with an employee already in a position of trust but now found to not be trustworthy?
"Yes, I'm saying Schneier is wrong on this, and that puts me on the wrong side of a lot of people. But I feel he is. Can we make something 100% "secure"? Probably not. But we always need to try. And we can't take the totally full-a**ed attempts we've been making at something pathetically called "security" and say, "See? It doesn't work!"."
But what happens when the openings come from UP TOP? Plus how do we convince people to care when they'd rather put their effort into deflecting the damage, a la a professional slacker?
Actually, I put my vehicle analogy to a long-time HR person (unionized rust-belt US -- I'm sure the answer varies depending on location), and he immediately told me that 1) was the only answer: fire 'em. Retraining or reassigning looks like preferential treatment for those who screw up, and that goes over very poorly.
And yes, bad policy usually does come from above, it seems. Worked for a company with rigid "no wifi" rules...until one day the owner said, "I don't like all these wires on my desk" and demanded a wireless laptop. Now, this guy was such a non-expert at computers that he had his sixty-plus-year-old secretary start up and shut down his laptop every day...but he wanted wireless, and anyone who said "no" would promptly be looking for a new job. And he owned the company outright, so one could certainly argue he had the right to do that. Except... it was an insurance company, which meant lots of personal and (hopefully) private information was stored on our computers. The choice was to do what we knew was wrong, or go try to do the right thing -- somewhere else? (And this guy wasn't going to spend the money on lots of protective technology, either.)
That's why I say accountability is something that will help -- and this attitude of "you can't stop them" is only going to stop people from trying to stop them.
"In the common business model, where we rely on technology for protection, maybe. Probably, even. But we can do better. We HAVE to do better."
As IT professionals and business people who care about the reputation of your companies, you should.
But why bother when you can just drop the costs on the customer or pay a bit more insurance?
Until Board-level staff start doing jail time for (effectively) reckless endangerment of users' data, shareholders start cancelling bonuses for f**kwitted security breaches, or companies start going out of business directly as a result of data loss (kicking in the Board-level survival instinct), this will not be a sufficient priority.
Yes, you can do better if:
a) There is Board-level commitment.
b) The user group is sufficiently small and security-conscious.
c) Security is a factor in all hardware and software decisions. Not just purchasing; all configuration decisions.
No one thought twice about adding LZW libraries, and yet that rendering bug existed in them for 20 years; by extension, every app that used the library inherited that bug as well.
So despite your site or your core apps not using that functionality, all it would take is a properly crafted file sent to them to get the ball rolling...
If the target's worthwhile enough, people will commit time and resources to it. Most may well be amateurish skiddies who can be swatted like flies, but some will be serious players, possibly working as a team, each contributing different elements of the penetration.
Then it's all about damage limitation and repair.
You're not accessing TOR services by IP. You're accessing them through TOR. Think of TOR as something like a VPN, but built on a different protocol. Your IP connects you to the TOR network, but things occurring inside TOR don't involve your ISP-provided external IP.
TOR is essentially a distributed message bus. You don't really do peer-to-peer anything, and even if you did, you wouldn't be talking "your external IP to their external IP". It would be more like "your temporary TOR GUID to their semi-permanent TOR GUID", where the GUID of the service is periodically changed and re-announced to TOR's service directory (like a DNS server).
Remember, your access to TOR is always gated by your first-hop node into the service.
So you could track a TOR GUID, but converting a TOR GUID into a globally reachable IPv4 or IPv6 address? You have to wait for them to fuck up and reveal it. (Or you control all access points to TOR and do traffic-pattern matching, but that's another story....)
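The "gated by your first hop" idea can be sketched with a toy model (pure Python, no real cryptography; actual Tor encrypts each layer to that relay's key, so a relay can't even read the inner envelopes): each relay peels exactly one layer and learns only the next hop, never the endpoints.

```python
import json

def wrap(payload: str, route: list[str]) -> str:
    """Nest the payload in one addressed envelope per relay, innermost first.
    Toy model: real onion routing encrypts each layer, it doesn't just nest."""
    msg = payload
    for hop in reversed(route):
        msg = json.dumps({"next": hop, "body": msg})
    return msg

def peel(envelope: str) -> tuple[str, str]:
    """What a single relay does: open its own layer, learn only the next hop."""
    layer = json.loads(envelope)
    return layer["next"], layer["body"]

onion = wrap("GET /hidden-service", ["guard", "middle", "exit"])
hop, rest = peel(onion)
print(hop)  # 'guard' -- the only thing visible at the outermost layer
```

In the real protocol the client builds the circuit hop by hop and each layer is encrypted, but the visibility property is the same: no single relay sees both your address and the service's.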
The main problem in the MSS (Managed Security Service) field is that there are just too many logs to analyze, because there are too many false positives from security solutions such as IDS, IPS, WAF (Web Application Firewall), etc. It's still a very difficult problem for both the seller and the buyer.
Schneier said, "Economists have shown that because there's no good way to test for quality." But there is a way to test for quality. The purpose of every security solution is to find attacks; you only have to figure out how accurate it is, and what causes the inaccuracy.
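One way to make "how accurate" concrete: replay a labelled traffic sample through the detector and compute precision, recall, and the false-positive rate. The sample below is invented for illustration; real MSS tuning would use far larger corpora.

```python
def detector_quality(results):
    """results: list of (alert_fired, was_real_attack) pairs."""
    tp = sum(1 for fired, real in results if fired and real)
    fp = sum(1 for fired, real in results if fired and not real)
    fn = sum(1 for fired, real in results if not fired and real)
    tn = sum(1 for fired, real in results if not fired and not real)
    precision = tp / (tp + fp) if tp + fp else 0.0  # alerts that were real
    recall = tp / (tp + fn) if tp + fn else 0.0     # real attacks caught
    fp_rate = fp / (fp + tn) if fp + tn else 0.0    # clean traffic flagged
    return precision, recall, fp_rate

# Toy sample: 3 real attacks caught, 1 missed, 6 false alarms, 90 clean events.
sample = ([(True, True)] * 3 + [(False, True)]
          + [(True, False)] * 6 + [(False, False)] * 90)
precision, recall, fp_rate = detector_quality(sample)
print(precision, recall, fp_rate)
```

Even a low false-positive *rate* produces an unmanageable alert queue at scale, which is exactly the log-volume problem described above: here two-thirds of the alerts raised are noise.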