Without downplaying the possibilities...
The names, addresses and phone numbers of many millions of UK residents were published by a hack of BT - it's called the Phone Book(TM)
Hackers broke into JPMorgan's network through a giant security hole left open by a failure to switch on two-factor authentication on an overlooked server. The New York Times reports that technicians at JPM had failed to upgrade one of its network servers, meaning that access was possible without knowing a combination of a …
Doesn't The Phone Book list the forename(s) by initial? So the entry for Joseph Owen Edward Bloggs gets listed as:
Bloggs, J.O.E.
1 Bloggs Towers
If so, is knowing the initials sufficient to commit some sort of ID fraud (registering for loans, store cards etc...) or would you be required to know the full names? I assume for a credit check you'd need to know the bank details as well which I hope were not included in the JPM hack.
"Huge data breaches seem to be becoming the norm."
Precisely. And why is that? Because everyone's network, internal networks included, is built on TCP/IP, an inherently insecure protocol.
Nobody wants to talk about the elephant in the room. TCP/IP is a nightmare, and building everything from VoIP to intranets on top of it has been the de facto standard for over a decade - but it was the wrong decision, IMHO. I'm going to get flamed, but until someone creates a protocol where encryption is standard (or makes it standard and default on TCP/IP across the entire network system, including intranets), or conversely goes back to a non-routable intranet protocol with configurable bridges or isolated public front ends, the breaches are only going to get worse as hackers gain more and more experience and more and more state backing.
But this is the dirty back-room secret of the network world. Everyone is so infatuated with the idea of rapid access with minimum hassle - everything IP can always talk to everything else IP - that the topic is taboo. Open protocols are such a wonderful addiction that nobody wants to deal, at the fundamental level, with the downside of being fundamentally insecure. The DNS system gets hacked because it gives TCP/IP its human-readable addresses... yet DNS's own infrastructure is itself a network based on TCP/IP. Sony gets hacked because the front end of the network, based on TCP/IP, is also used on the back end of the network. The only thing protecting the entire world against intrusion is code, software - firewalls - and code is never perfect, never foolproof. NEVER. The recent discoveries of Linux bugs even prove this.
So we create, fundamentally, one continuous network, since everything is based upon a single access protocol, and then try to segment it into "private" and "public" spaces with a hodgepodge of boxes and apps running code - our firewalls. We try to create "VPNs" with encrypted data streams... but the receivers on each end still run TCP/IP, with their ports and firewalls and blocks, fundamentally open to all, with only a software "cop" saying what can and can't happen - and an OS sitting behind it all, hoping the cops do their job.
JPMorgan's failure was authentication. Where was the encrypted VPN behind the authentication to prevent further data breaches in case the first line of defense failed? It didn't exist. Why? Because everyone was so concerned about easy connectivity that nobody wanted to implement a VPN across the entire JPM data network, trusting in the "security" of the authentication. Once a user was in, the entire JPM world was available ('based upon user access rights', some would say, but once inside a network, "access rights", to a powerful hacker, are just another shiny object to grab).
Segmentation of a network has (simplified) become "add a user account, restrict access to those allowed and keep your firewall rules updated" rather than "You couldn't get in here even if you tried as we don't even communicate in the same language, and that's very intentional. But here's a nice sandbox to play in where we've given you the toys you usually use, and nothing more".
Like I said, I'm gonna get the hate but this is my belief and I'm sticking to it.
"The only thing protecting the entire world against intrusion is code, software - firewalls - and code is never perfect, never foolproof. NEVER. The recent discoveries of Linux bugs even prove this."
OK, this is patently false. From a very basic perspective, there are both physical (e.g. air-gapping and other physical access controls) and social (have you ever heard of phishing or other social-engineering attacks?) aspects that apply here. As for the rest, well, if you throw a lot out there, some of it might stick.
Why would the difference between internal and external protocols matter once you've got a foothold on a public facing server?
Presumably that would also be able to talk to internal servers for management. If it can't then it's firewalled off or simply not connected, just like it would be with TCP/IP.
If you're saying a different protocol on the same network would stop attackers, I doubt that installing a NetBEUI driver is beyond the wit of someone determined to get in.
"Why would the difference between internal and external protocols matter once you've got a foothold on a public facing server?
Because the front-facing server is only a socket and the datagram translation can even be done with a mask-programmed ASIC. Hard to reprogram that.
In other words, you don't need a server to do the translation, a dedicated hardware box can do it, which leads to the next answer:
Presumably that would also be able to talk to internal servers for management. If it can't then it's firewalled off or simply not connected, just like it would be with TCP/IP."
Absolutely NOT. Just because the TCP/IP socket is visible to the outside world does not mean that the management interface is, as the interface will only be visible to the internal sockets.
In other words, don't you understand the theory of a sandbox? The TCP/IP stack will be sandboxed against the internal protocol. TCP/IP will communicate to the internal network exclusively by preprogrammed calls and sockets, which have no relation to internal network operation. None. Think of it as a HAL for networking.
The only way to guarantee security is to fundamentally change the datagram. This is the principle of VPN, take an original datagram and transform it into another (in this case, encrypted), then wrap it back up into a datagram of the original type for compatibility with the transport layer. We must either VPN - encrypt - the entire internal network or change the datagram to be fundamentally different than anything which can be accessed externally, using translation between inside and outside only when we want it. In this way the management programming for the interface is not available to the outside world as it is only accessible via the native internal protocol, the translation matrix is either sandboxed software or custom hardware. Break into the translation matrix and what do you get? A sandbox or a brick wall of hardware, nothing more.
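The "transform the datagram" principle described above can be sketched in a few lines of Python. This is a toy illustration of the encapsulation idea only - the SHA-256 counter keystream here is NOT a real cipher, and a production VPN would use an authenticated cipher like AES-GCM:

```python
import hashlib
import hmac
import os
import struct

KEY = os.urandom(32)  # shared secret between the two endpoints (toy setup)

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream from SHA-256 in counter mode -- illustrative only, not secure."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + struct.pack(">Q", counter)).digest()
        counter += 1
    return out[:length]

def wrap(key: bytes, inner_datagram: bytes) -> bytes:
    """Encrypt an inner datagram and wrap it into a new outer datagram."""
    nonce = os.urandom(16)
    ks = _keystream(key, nonce, len(inner_datagram))
    ct = bytes(a ^ b for a, b in zip(inner_datagram, ks))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + tag + ct  # outer datagram: same transport, opaque contents

def unwrap(key: bytes, outer: bytes) -> bytes:
    """Verify and decrypt; reject anything tampered with in transit."""
    nonce, tag, ct = outer[:16], outer[16:48], outer[48:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    ks = _keystream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))
```

The point matches the post: what travels over the shared transport is a fundamentally different datagram, and the mapping back to the original only exists at the two endpoints.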
"The only way to guarantee security is to fundamentally change the datagram."
AC, security is not an all-or-nothing proposition. What you are claiming is that if we implemented a non-routable protocol for internal traffic, we would achieve perfect security. There would be no more breaches, no more malware, no more things that go bump in the night. While what you suggest has a reasonable place on a network, given that best practice would have you implement a layered approach, it should only be regarded as a part - and a relatively small part - of an overall security plan.
There are plenty of other attack vectors for bad actors to make use of, as a cursory read through El Reg's security articles will reveal. From personal experience, I had to deal with plenty of malware on a campus network that had no TCP/IP installed (IPX/SPX and AppleTalk). Your enthusiasm is to be admired, but you are really missing a few details. You might try getting yourself a Security+ or CISSP study guide for Christmas (or the gift-giving excuse of your choice) and do a bit of reading. You should find it enlightening.
"AC, security is not an all or nothing proposition. What you are claiming is that if we implemented a non-routable protocol for internal traffic, we will have achieved perfect security."
No, I'm saying that implementing an inherently secure protocol will reduce the security workload, not eliminate it. If I sounded as if it would solve all problems, I apologize. The machine OS and other areas will always be vulnerable, via phishing or malware or what have you. What it will do is allow security services to focus on the important areas, the holes in the dyke, rather than have to inspect every square centimeter of the dyke at every runthrough.
Of course you had to deal with malware on your campus - the malware targets the MACHINES!! Don't you get it? Right now, malware can target not only the machine OS but the tools that are used to communicate between the machines - the firewalls, the routers, the protocol stacks - EVERYTHING is suspect. Everything. Sandboxing the outside protocol stack will allow you to reduce, or even remove, that security vulnerability from the equation - once hardened, the protocol stack sits on its own, in its sandbox, and can't see anything else but what it was created to see. You can then work on keeping tabs on the rest of the system, which should be a reduced workload.
Why do people have such a difficult time understanding the concept of "sandbox" here - is it because everyone is so infatuated with TCP/IP that they can't think along different lines?
AC - I have two servers that need to talk to one another (let's say a classic client-server app where a front-end website talks to a back-end database).
How would choosing a protocol other than TCP/IP, or a protocol supporting encryption, provide me with more protection than a well-firewalled (i.e. only necessary ports opened between zones, zone separation ensured for non-related functions), TLS-encrypted transport stream running over TCP/IP? As for the "one continuous network" issue, firewalls provide an easy way of addressing it, assuming they aren't configured with allow-any rules.
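For what it's worth, the TLS-over-TCP/IP side of this argument is cheap to get right in most stacks. A minimal Python sketch of a properly verifying client context (the internal host name below is hypothetical):

```python
import ssl

def make_client_context() -> ssl.SSLContext:
    """Client-side TLS context with certificate verification left switched on."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
    return ctx

# Usage sketch (hypothetical internal database host):
# import socket
# with socket.create_connection(("db.internal.example", 5432)) as raw:
#     with make_client_context().wrap_socket(raw, server_hostname="db.internal.example") as tls:
#         tls.sendall(b"...")
```

The design point is that `create_default_context` already enables certificate checking and hostname verification; the common failure mode is code that disables them, not the protocol itself.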
The problem isn't TCP/IP, the problem is poor implementation - alternative network protocols will suffer the same issues.
"How would choosing a protocol other than TCP/IP, or a protocol supporting encryption, provide me with more protection than a well-firewalled (i.e. only necessary ports opened between zones, zone separation ensured for non-related functions), TLS-encrypted transport stream running over TCP/IP? As for the 'one continuous network' issue, firewalls provide an easy way of addressing it, assuming they aren't configured with allow-any rules."
See, there's that word: EASY. And you missed the entire point.
A firewall, even a hardware one or a server acting as one, is still a TCP/IP device in itself. All those ports, all the rules, etc., must be applied not only to the data flow for the internal devices but to the firewall itself. Did you catch the story on the front page right now, that a trojan is out that actually reprograms your firewall? As the recent notifications of firewall vulnerabilities have proven, the firewall itself can be a point of attack and, once down, the entire network behind it is vulnerable, because every node can instantly talk to every other.
A non-routable private network protocol is exactly that - UNROUTABLE. Always. The only way to get the information out would be to use a translating bridge, either software or, even better, dedicated hardware like an ASIC. The basic data flow to a bridge is more easily controlled than a security gateway and much easier to guarantee security on. Unless the entire gateway is compromised and reprogrammed, and the data feed from the non-routable source along with it, a security breach is just a slight inconvenience. How are they supposed to reprogram a server if they can't fully communicate with it, as it fundamentally talks a different "language"?
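The translating-bridge idea can be sketched like this: only a fixed, preprogrammed set of internal message types ever crosses the boundary, and everything else hits the brick wall. The opcodes and names here are invented purely for illustration:

```python
from typing import Optional

# Hypothetical internal-protocol opcodes the bridge is preprogrammed to translate.
TRANSLATION_TABLE = {
    0x01: "READ",
    0x02: "WRITE",
}

def bridge_to_external(internal_frame: bytes) -> Optional[bytes]:
    """Translate a whitelisted internal frame into an external message.

    Anything not in the table is simply dropped -- there is no generic
    'forward this' path for an attacker to repurpose.
    """
    if not internal_frame:
        return None
    opcode, payload = internal_frame[0], internal_frame[1:]
    verb = TRANSLATION_TABLE.get(opcode)
    if verb is None:
        return None  # the brick wall
    return verb.encode() + b" " + payload.hex().encode()
```

A real bridge would of course also validate payloads and translate in the other direction; the sketch only shows the "preprogrammed calls only" property the post argues for.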
A firewall is nothing more than a rules set applied against a huge listing of ports. Having the ports to protect in the first place is the problem - can you, with 100% certainty, guarantee that all those ports will always stay protected no matter the attack or situation? Can you guarantee that something behind the firewall won't initiate an opening of a port, or simply communicate via an already open one?
No, you can't. A trojan on a TCP/IP internal network has full communication rights with anything, everywhere, unless you have something, hardware or software, preventing access to non-authorized outside locations. That's a heck of a lot of rules to keep up on.
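Mechanically, the "rules set applied against a huge listing of ports" is just a first-match lookup with a default, which is also why a stale or over-broad rule is so easy to miss. A minimal sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    action: str   # "allow" or "deny"
    proto: str    # "tcp" or "udp"
    port: int

def evaluate(rules: list[Rule], proto: str, port: int) -> str:
    """First matching rule wins; anything unmatched is denied by default."""
    for rule in rules:
        if rule.proto == proto and rule.port == port:
            return rule.action
    return "deny"
```

Default-deny is the one property doing the heavy lifting here; a ruleset that ends in an allow-any rule inverts the whole model.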
By having the internal network on a fundamentally different protocol, you gain an automatic level of security - an attacker has a whole transport layer to overcome, never mind the security layers plus the translation layer.
It's not just you, the reply before you did it also - I KNEW someone would miss the point.
Moved to the US this year... January... A few short months after receiving a BoA debit card, a replacement had to be issued. No fewer than 3 major retail chains (visited by me) had their PoS terminals compromised. Now a major US bank is compromised too, through sheer stupidity/sloppiness.
The IT security muppets of the US need to get their act together.
I wonder when someone is going to wake up and realize all these "big scale hacks" are basically the money men leaving your private details in a box on the side of the street and claiming "someone stole our shit".
They blame technicians now, but who wants to bet this "server" was some director's old VPN entry point, and the guy was too stupid/pig-headed to change how he connected to the network from home on his Windows XP laptop (they forced him to give up his Windows 95 machine when the HDD died, but he screamed then too)? He'd probably convinced some "security manager" it was perfectly safe by bugging him until he gave up in frustration.
As for the "it's all TCP/IP's fault" AC, you're missing the point entirely here. Sure, we could have a better protocol than TCP/IP. It sucks. But even that would not do squat against the kind of vector this attack used, i.e. old, uncared-for boxes that should have been retired 10 years ago but stay on the network because someone in power "can't live without them". You will also always have machines that need to access other stuff (users talking to servers), and those will have to speak all those different protocols, increasing the attack surface IT staff have to maintain across the entire estate. So more of a lose-lose situation. This wasn't a protocol attack. This was a "we keep the door locked with just a rope and a do-not-enter sign" kind of attack.
The real criminals are the people who didn't take basic precautions. Those are the ones who should face a day in court.
This is nothing but one link in a long chain of failures:
- someone who had higher-than-necessary privileges on his desktop fell prey to a phishing attack and his machine was commandeered from outside
- some user on that machine could access the server without strong authentication
- that server contained a copy of sensitive data or a pass-through to a server that had it
- there was no isolation between a non-compliant server (the one missing strong authentication) and the production environment where sensitive data was stored.
Besides this, it seems strict AAA rules were not being applied:
First A: Information considered sensitive should be stored on servers located inside a secured network area, with access allowed exclusively through a jump point using strong authentication.
Second A: Privileges to access that information should be granted on an as-needed and temporary basis. Admins and DBAs need to elevate their privileges in order to get access.
Third A: Access requests should be documented, all access should be logged, and all logs should be audited.
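The second and third A's can be sketched as a time-boxed privilege grant plus an append-only audit trail. All the names here are hypothetical, chosen only to illustrate the pattern:

```python
import time

AUDIT_LOG: list[tuple[float, str]] = []  # append-only; reviewed by a separate auditor

def _audit(event: str) -> None:
    """Record every grant and every access check."""
    AUDIT_LOG.append((time.time(), event))

class PrivilegeGrant:
    """A privilege elevation that expires on its own instead of lingering."""

    def __init__(self, user: str, role: str, ttl_seconds: float):
        self.user, self.role = user, role
        self.expires = time.monotonic() + ttl_seconds
        _audit(f"GRANT {role} to {user} for {ttl_seconds}s")

    def is_valid(self) -> bool:
        valid = time.monotonic() < self.expires
        _audit(f"CHECK {self.role}/{self.user} -> {'valid' if valid else 'expired'}")
        return valid
```

The key property is that doing nothing revokes the privilege: an admin who elevates and walks away leaves no standing access behind, and every check is itself evidence for the audit.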
If JPMC didn't have all these AAA principles in place, their security architects did not do enough to protect the critical information. If all this was in place and information still managed to escape into the wild, the policies were not being followed.
If that vulnerable server was inside the security zone, attackers would have had to find a way past the strong authentication on the jump point. If the server was outside, it should not have had access to sensitive data.
Without knowing exactly what happened, I would vote for a case of developers carelessly manipulating a copy of sensitive production data outside the production environment, an insider job, or a combination of the two.