
If they cut off Stack Overflow for their devs, Google is doomed.
In a bid to shrink the attack surface of its army of employees, and thus boost security, Google is taking an experimental approach: cutting some of their workstations off from the internet. The Chocolate Factory has seen fit to sever staffers' links to the outside electronic world, admittedly on a small scale, according to …
@David 132: “Also affected by this policy would be that one guy at Google whose job it is to read incoming search queries and then really quickly type them into Bing and copy-paste the results back to the requester.”
You are partially correct except it was Microsoft scraping Google:
2011: Google Catches Bing Copying; Microsoft Says 'So What?'
That job was outsourced years ago: https://archive.google/pigeonrank/
《 "If they cut off Stackoverflow for their devs, Google is doomed."
Personally I'd expect the code quality to improve.》
I think the devs are now asking ChatGPT and seeing some of the answers - "wuh, dooomed...."
I never did understand why "stack overflow" - in the really bad old days this normally meant your run-time stack was scribbling over your heap, usually fairly quickly ending in tears. Not sure whether stacks auto-grow now or you get a SIGSEGV.
On the Intel side, from 32-bit times the system could just add another (4K) page in when the stack grew (down). To enable this, there was a guard page below the current bottom of the stack, and when that was hit the OS would pick up the exception, map in a new page and set a new guard page. This meant though that you could screw up some old OSes by declaring 8K of stuff on the stack. Which you would have to be a complete psychopath to do in practice, but hey.
In 16-bit, then yeah, the stack could grow down into the heap, but more likely it would run out of space in a 64K segment.
Assuming your system has finite storage, if the stack keeps growing – due to unbounded recursion, say – then at some point you'll hit the limit. Where and how an OS imposes that limit varies; UNIX users should be familiar with ulimit and get/setrlimit, for example (and their quirks, such as hard versus soft limits and the "unlimited" setting, which means "not limited by this mechanism, so you'll hit some other limit").
On a protected-memory system growing the stack shouldn't actually "overflow" as such; the process's request to add another page will simply be denied. But "stack overflow" is still commonly used for that condition.
"Stack overflow" is also used to refer to overflowing a specific stack-allocated area into an adjacent area, which doesn't involve (attempting to) resize the stack at all.
"This meant though that you could screw up some old OSes by declaring 8K of stuff on the stack. Which you would have to be a complete psychopath to do in practice, but hey."
When my now-wife was the company's latest programming hire, fresh out of Cambridge University(*), I had to explain to her gently why using alloca to grab a 64 kbyte buffer on the stack of a Motorola 68020 Unix workstation was not a good idea.
(*) She had been invited to do a PhD with Andy Hopper, but had decided that academic computer scientists were all mad. So she moved into the software industry. [Imagine a facepalm emoji here.]
Admittedly, an internet air gap will have a similar effect on on-prem systems as it would on cloud systems. Either they mean a complete air gap, where the machine is not allowed to talk to any machine that has internet access, in which case both are out, or they're allowing use on private networks but not on public ones, in which case it is possible to create private networks with cloud instances on them. The only difference is that you could have an air-gapped private network with some other air-gapped on-prem boxes, as is done in particularly secure facilities, but I doubt they're intending to do that. They haven't been clear about what kind of employee would be using this system, which makes it hard to understand what kind of facilities will be needed.
Some jobs could adopt this system easily enough; someone working on code which is all internal, or which uses static dependencies that have already been cloned, could download all the needed docs and proceed without a connection. Other jobs would find it nearly impossible to implement successfully. I hope they're considering that before enrolling people in this. Having recently switched from a job where the internet going down was a minor inconvenience, because I had all my tools and VMs on my laptop, to one where even my temporary code has to live on a remote server, I've found that needing the internet at all times is more annoying than I'd have predicted, given that my connection virtually never dies.
>Other jobs would find it nearly impossible to successfully implement
What are the odds that they have carefully air-gapped all the developers, who only work on internal tools that would be no use to anyone else, in case they click on a dodgy link?
But all the finance / marketing / HR / reception / sales / office staff have internet because they need it for their jobs, and they would never click on spam, and all they have access to is HR, finance and sales data, so nothing worth stealing.
>where even my temporary code needs to reside on a remote server,
Except if your remote server is Google Cloud and the Google Cloud is in the same room - then you are OK.
Type! Luxury!
We had to scribble coding sheets with incantations and send them off to the high priestesses of the Hollerith. After the next full moon, if you had been a good boy, in the great tray of doom a small banded package of cards, bearing your name and containing their great wisdom encoded via holes and not-holes, would await you. Full of hope and trepidation, you would transfer the holy package to the tray of feeding, awaiting the uncaring attention of the operators of the great engine. If you were lucky, your feeble set of cards of wisdom would be scooped up and join the many others awaiting the attention of the primary carcass in the hall of wind and noise. Once fully digested and considered for less than a blink of its invisible eye, a great tearing noise would erupt from the chained scribes. Devouring banded sheets of bi-folded parchment at an infeasible rate, they would transcribe the great engine's calculations, dumping the results in the bin of outputting. Some time later, if the correct muse entered them, a kindly operator might dip their holy hands into the bin and harvest the great engine's uncaring doodles. With aplomb, they would separate each judgment, write the name of the troublesome acolyte in righteous script atop the engine's conclusion and place them, without a care, into the pigeonholes of lost hope. Full of trepidation, you would retrieve your response. Unfolding the worryingly short deliberation, you would scan for the engine's wisdom, and there it was:
ERR 101, MISSING SEMICOLON, RUN STOPPED.
"If I understand correctly, internet is still accessible from the laptops, so engineers can use them to access documentation, and code on the workstation."
I cannot see real engineers not connecting their laptops to their workstations, possibly in ways that your philosophy hasn't yet dreamt of :)
Tunnelling network protocols over HID interfaces quickly comes to mind...
Ssssh. Don't give it away.
This is meant as a test, to find the Real Engineers so that they can be Inducted into the Next Ring, Google 5.0[1]
[1] don't ask; all we know is that Google 4.0 disappeared, leaving behind only one last, unfinished search "What is that orange swirly thing over by the"
I would assume that a company the size of Google definitely has its own package indexes, not just for Python but for everything. Google employees do not use packages from PyPI directly. The article states that "employees still have access to Google services", which would include their own internal development services (DevOps, package indexes, etc.).
OP was referring to documentation, not the package implementations.
That said, there is certainly no reason why Google couldn't have their own copies of the documentation, and indeed they should. If you're hosting your own package repositories (which ought to be standard practice everywhere; public code repositories are toxic and have been the source of many, many vulnerabilities over the years), you should also be hosting documentation to match the package versions you're hosting.
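For what it's worth, pointing the tooling at such a mirror is a one-liner. A rough sketch in Python (the mirror hostname is made up - substitute whatever your internal index is actually called):

import subprocess
import sys

# Install a package from an internal mirror instead of the public PyPI.
# --index-url replaces the default index entirely, so nothing is
# fetched from pypi.org; "pypi.internal.example.com" is a placeholder.
subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "--index-url", "https://pypi.internal.example.com/simple/",
    "requests",
])

The same URL can live in pip.conf so nobody has to remember the flag.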
Developer systems are a huge source of vulnerabilities in most organizations. Developers build and execute code, often of uncertain provenance. They tend to download and install a lot of tools and toys. They often run with excess privilege – I don't know how many devs I've seen running browsers as root / Administrator, out of sheer laziness. Studies have shown that developers are more likely than average users to commit certain security failures or fall for certain types of phishing and other attacks, possibly due to a combination of comfort level (so lower suspicion) and overconfidence. Developers and development systems are a tempting target for attackers, since they provide a route to supply-chain and infrastructure attacks.
Well, I think it was pretty effective. Let's see what the experiment has brought in so far:
1) Ordering a vast swath of peons (2000+ is not a small number when it concerns people) to have their internet cut off without warning leads to a vast, immediate and (possibly) angry response from said peons
2) Understanding that the default choice should always be opt-in, and making the announcement general, soothes the flock and keeps the peons happy
That in itself was quite effective, at least to bring Google management out of the Soviet era of management. One can only hope that the evolution will be permanent, but I'm not holding my breath. The speed with which manglement can forget past lessons is truly awe-inspiring (in the bad sense).
Any jobs that require consulting documentation require Internet access these days, so everyone who actually does anything to the Google stack is going to need Internet access.
Upper management needs to find their latest set of buzzwords.
All the staff who do direct physical work, like lugging boxes around, cleaning etc, won't have a desktop; they'll have a phone, because they need to be on the move most of the time.
So that leaves who?
I am sure there are a lot of people who do clerical and administrative jobs. Some of whom probably don't need internet access from the screen they use which is connected to corporate systems (HR, accounts payable, etc) - particularly if they are provided with separate phones or laptops to use when they do need to access the Internet.
Just be very careful not to upset the secretaries - even Google could not withstand such wrath!
They need Internet access because they must interact with external suppliers - paying bills, confirming payroll, checking whether said supplier actually exists in the industry suggested etc.
There may be ways to avoid some of that by spreading responsibilities, but it seems unlikely.
And of course, secretaries do a huge amount of interaction with disparate external suppliers, booking and arranging all kinds of things.
"Any jobs that require consulting documentation require Internet access these days, so everyone who actually does anything to the Google stack is going to need Internet access."
This is simply false. It is entirely possible to mirror or proxy all the documentation that a set of developers legitimately require.
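As a proof of concept, serving a mirrored documentation tree to an air-gapped network needs nothing more exotic than the Python standard library. A minimal sketch, assuming your mirror job drops the manuals under /srv/docs-mirror (a made-up path):

from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the locally mirrored documentation tree over plain HTTP on the
# internal network; the handler is pointed at the mirror directory.
handler = partial(SimpleHTTPRequestHandler, directory="/srv/docs-mirror")
HTTPServer(("0.0.0.0", 8080), handler).serve_forever()

Keeping the mirror current as upstream publishes updates is a separate scheduled job, of course.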
I refer the honourable gentleman to the concept of "updates", with their accompanying "release notes", "new version of the manual" and ongoing guidance.
I suppose Google do have the capacity to mirror a significant percentage of the Internet within a few hours of the changes.
Most companies don't.
Doesn't Google have most of the Internet stored and indexed on its own computers? Cutting off "the Internet" while keeping access to internal networks might just work for them then.
The one with the dog-eared printout of the page-ranking algorithm specification, please ----->
Another great way to increase IT security is to throw all your computers in a pit and fill it with concrete. No way to hack those then.
What a bizarre experiment.
If you have sensitive stuff being developed, you would run a split environment. One set of devices for the low needs day to day use, and then an isolated, high security environment for those high risk things.
It's especially weird to consider when your primary business is literally providing services on the internet.
I've come to the conclusion (and this is NOT a joke) that to solve the security problem we need to revert to mainframes (or a modern variant of these). Fortune 500 companies literally have hundreds if not thousands of individual programs running all over the place, which is nigh impossible to secure.
One big centralized machine is easier to secure, and its architecture cannot be easily simulated on other computers. Compare with x86, which runs on virtually every server and desktop in the world. Any bloke with an internet connection could conceivably break into any server, since acquiring the knowledge to do so is trivial these days.
Running everything you have on one server would require your software to be written to a higher level of stability than anyone would be willing to pay to achieve.
Back in the 70s/80s this would have been possible, but these days there is so much shitty code around that is somehow still essential that there's no way a monolithic server would be able to cope.
I don't think that will do what you think it will. There are several problems that mainframes would bring. Here's one: process isolation. If you're running lots of software on one computer, that software has a lot of chances to mess with other software. Well-written software won't, of course, but if somebody manages to hack one of the pieces, they have extra chances to attack other programs also running on that machine. When the only connections between programs are network links, effectively serial lines, the attacker needs to find new attacks for each system and firewalls can go in between to block many attempts or set off alarms if it happens. Two processes on the same system are much closer together, given that they are sharing a lot of resources which are maintained by a single management system, and there have been many vulnerabilities which are much worse for two processes under the same OS than two computers on the same network.
There are also some problems with your idea. For example, you refer to x86 being insecure because it runs on everything. This is really not a major factor. x86 has had a couple of vulnerabilities in itself, but so has ARM, and probably so will any sufficiently complicated processor architecture. Most vulnerabilities, though, are in software rather than hardware. I have no less of a problem breaking into an insecure Linux box that has a RISC-V CPU at its heart than I do with a similarly-configured box with an x86 chip, since in almost all cases my attack pattern and payload will be exactly the same. If people have access to the software that the mainframes are running, and they will, then they will be able to attack it no matter what the hardware looks like. People will have that software because people who are going to build for mainframes will at some point want to test their code somewhere. Somebody will compile it for the common architecture.
There's one more class of problems, and that's the feasibility of switching to mainframes anyway. Regardless of whether or not it would help, and I've already explained my view on that, there are a lot of places that can't just swap in a mainframe for the many servers they use today. Things that operate at scale may have so many records that a single mainframe, no matter how expensive it is, isn't sufficient to process all the stuff they have. I'm not sure if you're allowing clusters to qualify as mainframes, since they're not a monolithic system. There is also the issue of reliability, because most large systems are geographically distributed and a mainframe generally isn't. There are some classes of job where a single mainframe is perfectly capable of doing the job, some of which are already using existing mainframe systems, but since it won't be all problems, and since general purpose hardware can be used for nearly all problems, it's more likely that people will continue using those than adopting a more limited and no doubt expensive alternative.
Personally, having worked IT in a couple of different three-letter classified environments, I think corporations that produce security-sensitive software dropped the ball on air-gapping their dev systems a long time ago. E.g. I found RSA's breach astonishing.
100% agree. Any place I have ever worked which had sensitive information not only mandated air gaps; our own gear was not even permitted to have its WiFi interface enabled, and mobiles were not allowed.
New team members who did not follow our explicit instructions to have the interface disabled before they even left the hotel would rather quickly work out that we (and they) were not kidding - it took only minutes for security people to show up after those laptops were booted up, and those were not friendly conversations - IMHO completely justified.
Only two separate networks? I knew of at least nine and I was not privy to how many there were, even though I was part of the security department. On top of that, I am speaking of separate hardware networks. Many of these other networks were operating in separate TEMPEST approved spaces. That's when air gap meant absolutely no connection. Even back then there was talk of allowing other networks at the desks on separate machines. The beginning of the long slippery slope.
I remember in the late '90s I worked at a company that wanted to cut off almost every employee from the internet in order to "improve productivity." But even back then the internet was already a pillar of office workers' lives, and they quickly reversed their decision.
I can't imagine any company contemplating this today. Those 2% of Google staff will feel miserable and will likely leave the company. Or maybe that was the goal of the experiment.
"...staff will feel miserable and will likely leave the company. Or maybe that was the goal of the experiment"
And I said, I don't care if they lay me off either, because I told, I told Bill that if they move my desk one more time, then, then I'm, I'm quitting, I'm going to quit. And, and I told Don too, because they've moved my desk four times already this year, and I used to be over by the window, and I could see the squirrels, and they were merry, but then, they switched from Firefox to the Edge browser but I kept my Firefox because it didn't crash as much, and I kept the plugins for the browser and it's not okay because if they take my internet then I'll set the building on fire...
My first reaction would be that Google is cutting off internet access for some select employees - the article says it's not the entire internet.
I bet there's some conspiracy theory going around among Silicon Valley execs that people are time-stealing by watching Netflix or YouTube when they should be toiling at the coal face of code. They may find productivity goes up or down.
This is giving me an idea.
Us dev peeps need to look stuff up on the internet, so how about a "dual-head" machine (in one box of course) where the "surfing" internet connection is serviced by a "locked down" CPU with security, virus-checking etc, and the dev CPU(s) are air-gapped/highly firewalled, but both display on the same display(s).
You mouse over between windows and can type into either.
The only way of copying data from one domain to the other, locally, could be the clipboard/cut-n-paste.
Dev machines generally need to be networked for code-control, teamwork, tools licences etc, but that could/should be on a separate physical firewalled network.
You could implement that now with a couple of VMs (or a VM and a host OS). One VM has Internet access but restricted access to the corporate network, and the guest OS only allows signing on as a limited-privilege account (with maybe some provision for installing approved software by the end user). The other VM has access to the corporate network, source repositories, etc, but not to the Internet.
If you run a grownup windowing system, such as X11, you could even have your side-by-side application windows.
You can move large amounts of data via clipboard cut and paste. Drag a file into a Word doc to embed it on a remote desktop connection, cut it, paste it to a local Word file, voila. So remote copy + paste usually has to be turned off for this kind of dual-level setup, sadly.
It's not optimal, convenience-wise, but I've lived through an actual attack, at a former employer who did a fine job of ultimately limiting the damage. They had to shut down everything. Development actually stopped for a couple of days as access to dev systems was shut down. Serious "take off and nuke the site from orbit" stuff, but it worked. Want to know something funny? The attack vector was a cheaper 2FA system the company had bought in because they were trying to cut costs!
Just in case you were wondering what the anonymous cowardice was all about.
Anyway, separate devices for internal and external work are OK-ish, if things like Teams and OneDrive are open enough, but still not the most convenient way of working. Google has the advantage that it is, well, Google.
It's the only way.
Back before "SSL everywhere" it worked fine. Blindingly fast with a squid cache.
I can only assume that Google bought a man-in-the-middle firewall and installed a private CA on all those locked-down machines to accept all the fake certificates.
That's fairly standard practice these days - quite a lot of large organisations either don't advertise a default route to the Internet or have only recently started doing so. The private CA sanctioned man in the middle - or SSL inspection as it's euphemistically called - is also pretty common. You can't really trust your browser to only have legit CAs in it, especially if it's managed by someone else.
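From the client side, the giveaway is that you have to trust the corporate root explicitly. A minimal sketch in Python - the bundle path is an assumption, wherever your IT department installs the corporate root:

import ssl
import urllib.request

# On a network doing TLS inspection, the proxy terminates the TLS
# session and re-signs each site's certificate with the private CA,
# so clients must trust that CA or every HTTPS request fails.
CORP_CA_BUNDLE = "/etc/ssl/certs/corp-root-ca.pem"  # hypothetical path

ctx = ssl.create_default_context(cafile=CORP_CA_BUNDLE)

# The certificate presented here for example.com is really signed by
# the inspection box, not a public CA; verification succeeds only
# because we loaded the corporate root above.
with urllib.request.urlopen("https://example.com/", context=ctx) as resp:
    print(resp.status)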
My employer is locking down our laptops - no admin access, can't use unencrypted thumb drives, and considering locking down all production-system access to require going through Citrix.
I'm an automation engineer. Corporate IT doesn't seem to get the concept that maybe sometimes I need a piece of software that's not on the "officially approved for our laptops" list (like, say, something to communicate with PLCs), or the ability to use a thumb drive to transfer files to a system that can't handle drive encryption, or that Citrix is both unreliable and won't let us transfer files - preventing us from making or restoring backups. You know, those minor details that keep the plant running.
Sometimes the workers really do need the ability to download and run arbitrary software, sometimes even with admin rights!
(Anon for extremely obvious reasons.)
I have worked in various organisations - from payment providers which are extremely secure, to those elsewhere which are achingly insecure. Internet access is given as standard nowadays in most organisations on a default-allow basis. This is a massive change from when I started out, when most organisations were on a default-disallow basis - you had specific sites allowed if they were relevant to your role. Much of this change comes from the massively lower cost of internet transit - no need to be precious about people watching YouTube or Netflix if it doesn't cost much and you're not going to max out your ISDN dial-up capacity. The genie is out of the bottle for user systems. For non-user systems, servers etc, the default should always be specific allowed flows only.