It's not about the air gap
My view on this is that it's not about the air gap; it's about imposing dark-satanic-mills-style control upon the proletariat of employees.
It seems intuitively obvious. Disconnect your PC from the internet, and it's safe from attack. Google thinks enough of the idea to try cutting off a couple of thousand workstations from the pestilential swamp. The air gap is an experiment in increasing the cost of mounting an attack, says the company. A cut-up ethernet cable …
Does it?
Personally, I thought it proved that people who don't know what they are doing in a complex environment, and who deal with that by over-simplifying to "don't connect it to the internet and it's perfectly secure", still don't know what they are doing after the over-simplification, because they can't comprehend the complexity.
Stuxnet was quite simply a trojan. Even running Windows out of the box with no additional software, you could harden the box by disabling AutoRun and putting in a software restriction policy/AppLocker policy that prohibits running any form of executable code (e.g. .exe, .bat, .vbs, etc.) unless specifically whitelisted, and it wouldn't have been able to run when the removable media was plugged in.
Even taking the laziest way of doing it, which is just a default level of "deny" and then only approving %ProgramFiles% so that only existing installed programs would work, would have prevented Stuxnet from running, since it had to execute from removable media, which would have been disallowed.
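For illustration, a minimal AppLocker rule collection along those lines (enforcement on, so anything not explicitly allowed is denied, with only the Program Files and Windows folders whitelisted) might look something like the sketch below. The rule names, descriptions, and GUIDs here are made up for the example:

```xml
<AppLockerPolicy Version="1">
  <!-- With EnforcementMode="Enabled" and no allow rule matching removable
       media, an .exe launched from a USB stick is blocked by default. -->
  <RuleCollection Type="Exe" EnforcementMode="Enabled">
    <FilePathRule Id="921cc481-6e17-4653-8f75-050b80acca20"
                  Name="Allow programs in Program Files"
                  Description="Existing installed applications keep working"
                  UserOrGroupSid="S-1-1-0" Action="Allow">
      <Conditions>
        <FilePathCondition Path="%PROGRAMFILES%\*" />
      </Conditions>
    </FilePathRule>
    <FilePathRule Id="a61c8b2c-a319-4cd0-9690-d2177cad7b51"
                  Name="Allow Windows folder"
                  Description="Needed so the OS itself can run"
                  UserOrGroupSid="S-1-1-0" Action="Allow">
      <Conditions>
        <FilePathCondition Path="%WINDIR%\*" />
      </Conditions>
    </FilePathRule>
  </RuleCollection>
</AppLockerPolicy>
```

A policy like this can be imported with the `Set-AppLockerPolicy` cmdlet. Caveat, per the post above: users with write access inside the whitelisted folders, or clever use of trusted interpreters, can still get around path-based rules, so this is a raised bar, not a wall.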
"Stuxnet was quite simply a trojan"
I think you are being over-simplistic here. A "simple trojan" that's able to mess with uranium-enrichment centrifuges is what I would call a cyber weapon.
"prohibits running any form of executable code (eg, .exe, .bat, .vbs, .etc) unless specifically whitelisted"
There are ways to bypass this. I saw a real-world pentest of this whitelisting technology, and there are flaws. It was an ATM.
Stuxnet was just a trojan, which happened to have two important additions that made it into a cyber weapon: the first was the highly specific targeting, the second was the payload/bomb to drop on reaching the target.
So implementing the normal trojan defences would block some trojans, such as Stuxnet. Okay, these don't protect against zero-days, etc.
>” There are ways to bypass this (whitelisting)”
The bypasses in the main rely on the (misplaced) trust the OS places in the object labels, which is why OS security features typically need to be augmented by a third-party security suite (although, once again, that's not a 100% solution).
The air gap didn't fail; the people using the system did.
Just like it would be unfair to say an encryption protocol was broken because someone using it re-used a compromised private key.
And the fact that the combined military and intelligence assets of the Five Eyes, plus some bonus players, had to bypass it instead of breaking it means it still raised the level of effort to somewhere between "ridiculous" and "nearly impossible".
Properly implemented, and observing proper discipline, that would have stayed at "absurdly difficult" and probably never been broken.
But one of the Achilles' heels of any secure system is failing to do your requirements analysis. The security team dictated a pretty secure configuration, then left out the design and construction of all the parts needed to keep it running. And since the staff on site were required to do exactly that, they created their own ad hoc methods to get the job done. Risk crept in, and visibility wasn't a sufficient priority.
Running a completely air gapped system is a PITA, but it isn't magic, nor is it impossible. Though the question of when it is truly necessary is also valid.
In fairness, Stuxnet had the advantage that it was installed by spies, from whatever country. As one of the interviewees in Alex Gibney's excellent documentary Zero Days says, getting the virus into the plant wouldn't have been a problem. The CIA (and other spy agencies) have plenty of people who are quite used to getting things into, and out of, even the most secure places. But the CIA, MI5/6, Mossad, etc. won't be interested in most companies.
Air gapping isn't perfect. No security method is, short of locking the computer in a lead box in a heavily defended bunker somewhere that has no connection to the outside world (not even electricity, which would have to be generated in the bunker) and ensuring no one ever touches the computer. But such a computer would be pointless. We buy them so we can use them, not leave them doing nothing.
Before the internet, all the company records were written on pieces of paper and stored in filing cabinets. All the reference materials were printed on pieces of paper and stored on book shelves.
Customer and supplier orders were written on pieces of paper and carried to their location by a postman (and yes it was a man, because we are talking about historical times here).
Yes, we managed it, but do we really want to go back to those days?
Actually not.
General-purpose computing, with terminals and latterly stand-alone and LAN-connected PCs, was used in businesses before the Internet really existed (or at least before companies' data-processing systems were attached to the Internet; remember that the NSFnet links [part of the original backbone of the Internet] were only opened up to commercial organisations in 1991).
And before that, company records were stored for decades using paper, but the type that had holes punched into it that could be processed by card readers, discriminators and sorters.
And you're forgetting how much business was done via fax.
I think Katrinab is using "the internet" to refer to the point in time when companies made a wholesale move to interconnect permissively, using the internet to transact from * into their internal databases, on demand from the external source. Not to mention online communities.
Fax is closer to a voice call in terms of attack surface. Not to mention BBSes
Doesn't matter if it's a serial cable or a fax, if it's wired into the rest of the world it's not an air-gap.
I'm happy to cede some ground to usefully isolated systems that don't require hand-cranked power supplies, but if it's wired to any communications it is at most a secured network, not any form of air-gapped system.
Also, any salesman who sidles up and tries to sell you a box called an "air-gap" is full of it and not to be trusted. If they call it a secure bridge or gateway, that's one thing. But a box that intermittently connects your otherwise isolated network isn't an "air-gap", and neither is it an isolated network at that point.
An air-gapped system or network is only an air-gapped system for the duration it is not in fact connected to an outside system. The whole plan is that you set it up from reliably clean hardware, only touch it with verified media, and never let anything inside the protected system ever talk to anything on the outside, then keep it running until it's no longer needed and wipe everything with extreme prejudice.
A usefully air-gapped system requires attacks that involve heist movie plots and science fiction devices. Possible but highly implausible, and they will probably just attack the human side of it because it's easier. And it's a solution that's often only expensive in terms of discipline when you are talking about systems that can run efficiently in isolation.
"Doesn't matter if it's a serial cable or a fax, if it's wired into the rest of the world it's not an air-gap."
Not to take away from the excellent post, but an air gap is about unauthorized or privileged read and write access. Fax doesn't give you that. Nor does the AC.
Also, the original question was "is it worth going back to those (isolated) systems".
> Exactly how the air gap will be implemented isn't clear ... It doesn't matter. It won't work
There is an old (older than the internet) saying along the lines of
Those who say something is impossible should stay out of the way of those making it happen
Which could be a corollary of Arthur Clarke's observation about elderly scientists.
There is, of course, Isaac Asimov's Corollary to Clarke's law:
"When, however, the lay public rallies round an idea that is denounced by distinguished but elderly scientists and supports that idea with great fervour and emotion – the distinguished but elderly scientists are then, after all, probably right."
We'll find out when there's a stupendous malware breach that takes control of the non-air-gapped PCs and spreads to the so-called air-gapped ones still connected to the same network, and the mayhem and forensic report that follow demonstrate that the whole thing was worse than useless because of the false sense of security.
Any day now, just waiting for it . . .
If there is any connection from systems that are not air-gapped to the protected system, it is NOT AN AIR-GAP. If the attack requires loading malware inside the air-gap off a thumb drive, and doing some Rube Goldberg crap to exfiltrate a signal, the staff still failed to implement and follow their isolation protocols properly.
That's no more a failure of "air-gaps" than an idiot leaving all their company's private keys in an unsecured Amazon bucket.
Presumably "the normal tools" will include a browser, as most internal Google apps will have a web interface, but it'll be restricted to some sort of intranet? So, if someone needs to find something out and the answer's not on the intranet, they won't be able to "google it".
Would they have to walk down a corridor to find a connected PC, and write the answer on some paper? Or maybe they'd be tempted to take screenshots of info on their Android phones for convenience (which some malware on the phone could subsequently filch).
Google Search is an internal service if you work for Google, and you can look at the cached version of the page. Some of the websites will be hosted on Google Cloud anyway, and therefore will also be available.
If the air gap is implemented with "You can contact Google Cloud, and Google Cloud can contact the internet", then they're doing it wrong. I can put a machine in Google Cloud, so their air gap needs to isolate their machine from that instance I've created in both directions. A network on which everything is disconnected from the internet qualifies. A network in which some things are and some things are not is just a more inconvenient part of the internet.
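As a sketch of what "isolated in both directions" means at packet level, hypothetical nftables rules on a gapped workstation's uplink might look like the fragment below (the 10.20.0.0/16 range standing in for the isolated network is made up for the example):

```
table inet gap {
  chain input {
    type filter hook input priority 0; policy drop;
    iifname "lo" accept              # loopback traffic only
    ip saddr 10.20.0.0/16 accept     # peers on the isolated network itself
    # everything else, including anyone's cloud-hosted instances, is dropped
  }
  chain output {
    type filter hook output priority 0; policy drop;
    oifname "lo" accept
    ip daddr 10.20.0.0/16 accept
    # no route out either: the isolation has to hold in both directions
  }
}
```

Of course, the moment rules like this are needed, you have a firewalled network rather than a true air gap, which is rather the point being made above: an unplugged cable needs no ruleset.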
Clearly from the article, another team that does not know what they are talking about is also about to do an air-gap wrong. But there are people who work there who do know the difference. It will be interesting to see whether they tap the other team on the shoulder, or wait to toast marshmallows over the fire when the new secure network folds because the team that built it doesn't know what words mean.
I get why Google may want to build different tiers of security and isolation into their internal networks, but if it has access to a bunch of web based resources that are in turn being accessed by other employees who are accessing the outside network, the new system can only be so robust, and the whole will potentially fall to a pair of compromised hosts.
What the article says is reasonable - if IT worked with their clients to actually solve problems, then we'd have both satisfied AND secure customers in our corporate divisions.
The problem is that this is entirely the opposite of what those same companies do when they outsource or otherwise budget for their IT (support) services. The typical outsourced support contract is pared down and minimised so much that the support people are specifically paid NOT to engage their brains, because that requires a longer support callout (and is thus more expensive).
The outsourcing firms don't like that either, because if a customer's problem is actually solved, they don't need to call support again (i.e. fewer callout charges). Instead they focus their team on solving *symptoms* only, and *never* looking into the actual problem, thus keeping the support calls (and charges) coming as long as possible...
(oh, and the above business model, on both sides, is from personal experience. At least one person was sacked for providing *too much support* because that gave the wrong picture to the firm who was about to buy the helpdesk operation)
Security/support personnel and customers working together also assumes that security knows a LOT about your business, so they can recommend something that's to the point. But of course for that to happen, those same staff need to be long-term employees of YOUR company, so they have all that essential in-house knowledge. Kinda hard for an outsourced person at a call centre in another city/state/country to have that knowledge, or the handful of service people who make the in-person appearances (who, as I said earlier, aren't allowed to spend actual time with you because they aren't paid for it).
Wouldn't it be interesting if companies that fall victim to the large-scale hacks could sue the executives who (10-20+ years earlier) had begun the cycle of outsourcing and removed any hope of the company withstanding the attack, and as such are directly responsible for the company and its staff being so vulnerable! Gee, executives making a decision that was focused on something other than the next quarterly return, and taking responsibility for it...?
(anon, because...)
We used to have much more freedom with our kit, back when we used to do proper R&D and make things, rather than just resell Microsoft junk. Heck, I remember when we used to have soldering irons and bare electronic contrivances on our desks - it wasn't an H&S violation, it was part of the job.
Slowly the restrictions came, half-hearted at first, then more strictly enforced. No more building your own PC and just plugging it into the network; it had to be a corporate build, with more and more nanny software to restrict what you can do. Then they took away admin access, and we've now reached the ultimate lockdown with everything proxied via the corporate spyware, complete with man-in-the-middle SSL interception and no "split tunnel", so even when working from home your corporate PC is totally isolated from everything. Basically it's now an MS Office client and restricted web browser, with access to any real kit only via heavily managed terminal servers.
I do understand why this is the case, but it is a sad reflection on a company's progress from innovative development to boring reseller (or ad broker).
"I do understand why this is the case, but it is a sad reflection on a company's progress from innovative development to boring reseller (or ad broker)."
However, somewhere there will still be a competitor who knows how to be innovative and will eventually eat their lunch.
When I worked for a major IT company's support centre, one of my colleague's 'workaround' for a particularly severe security issue that was entered into the problem tracking system was exactly that.
"Turn it off, unplug it, put it in a locked cupboard and throw away the key".
Officially it did not go down well with the team it was escalated to, but unofficially, they were amused.
Do you know why rich people are rich? They don't do the same stupid crap with their money as we plebs do; they keep away from things that waste money, favouring things that work for them and earn money.
Google is basically telling their staff that the internet they helped to foster, with its mass of data harvesting, is sick and insidious, and that to protect itself it's cutting itself off from the cesspit it helped create. They think the internet is a waste of time; they made billions and now even they're sick of it. A lesson for us all.
Originally we used to sit in the "pestilential swamp", as they refer to it, using our computers. But these days the swamp is full of alligators and crocodiles, so sitting in the swamp environment means the risks are much higher, because each one of a gator's teeth is a lot larger than a pest.
Normally I appreciate Rupert's articles, but I think he's missing the point of this experiment (also, he's missing the point of experimentation). I see two possible objectives: 1) improve productivity by limiting access to the endless pointless distractions of the Internet and 2) improve security by reducing the attack surface of the affected employees. Note the word "improve" rather than "perfect." Many computer geeks see the world, unsurprisingly, in a binary fashion: either something works or it doesn't. In this case, however, I think Google is trying to observe whether there's an upswing in productivity and a downswing in security incidents.
Time will tell, and I'm sure the results will be interesting.
What Google is doing here is what Microsoft already does (and I assume Google and Amazon) for privileged access—a locked-down computer with only specified applications/websites allowed. If you need something that's not available in that list, you can open a remote desktop session to a less restricted system or use your phone or other laptop.
Back in the olden times, borders were clearly defined, but with today's cloudy-think those clearly defined borders are gone. Even the web browser has become all-powerful, and zero-day, no-user-interaction exploits in Outlook have become the norm; times have changed dramatically.
I think I sometimes come across as quite anti-cloud. I am actually quite pro-cloud, as long as it's the best tool for a given job. For example, we have a cloud-based management system (that I specified) for our Macs. To minimise traffic on the company internet link, we have a local cache for the application installers and scripts. I'm typing this in one window, and in the window next to it I'm running a small utility that syncs the data between the cloud and the local server.
I could have gone for an on-prem server, but in this case, cloud is best, as we also need to manage Macs used off site, and the firewall we have protecting our network is quite restrictive, and the process we need to follow to get it changed is deliberately bureaucratic (the process itself is designed to discourage change).
That said, I think the cloud overcomplicates things sometimes. I updated my work Mac this morning. Because I needed the network connection it was using for something else, I downloaded the installer for the new version, ran it, and while it was installing the update, unplugged the network cable and started using the connection on another PC.
The installer quit because, despite being a 15-gig download, being freshly downloaded from Apple's own servers, and being verified on first launch (Apple installers are protected by a digital certificate), it apparently needs an active internet connection. Previous macOS installers, while not being the fastest installers in the world, have at least worked offline.