'We're finding bugs way faster than we can fix them': Google sponsors 2 full-time devs to improve Linux security

Worried about the security of Linux and open-source code, Google is sponsoring a pair of full-time developers to work on the kernel's security. The internet giant builds code from its own repositories rather than downloading outside binaries, though given the pace at which code is being added to Linux, this task is non-trivial …

  1. Charlie Clark Silver badge

    Convenience

    downloading a binary image for a Linux distribution…

    If they're building their own distros, they'll be building their own binaries, putting them on their own repositories, and configuring their clients to search there first. In exceptional circumstances the latest, greatest version might not be available, but this is the point. This has been the way BSD admins have done things for, er, decades.

    1. Anonymous Coward
      Anonymous Coward

      Re: Convenience

      Their ChromeOS build process is Gentoo based, which in turn was inspired by the BSD approach.

    2. teknopaul

      Re: Convenience

      Their internal OS distro is Debian, I heard.

      Given they invented k8s and Go, I would imagine they run many kernels pretty bare in terms of what's around the kernel.

  2. Anonymous Coward
    Anonymous Coward

    You can't engineer away buffer overflows without a cost

    The cost is either you have runtime bounds checking, as per the C++ STL, or you simply prevent direct memory indexing altogether. There's no magic bullet, and pretending language-of-the-month will solve the problems is disingenuous.

    1. sreynolds

      Re: You can't engineer away buffer overflows without a cost

      I laugh at Rust because to do anything useful you need to be unsafe. Unsafe is used in such a blasé manner that there are more unknowns at the moment as to how secure it really is. Mind you, the STL took 20 years to become useful, and that was only thanks to Boost. History never repeats, or so I tell myself before I go to sleep.

      1. teknopaul

        Re: You can't engineer away buffer overflows without a cost

        You can write a lot of useful code with no unsafe{} in what _you_ write; this means you can do the "fearless" thing.

        I have written a lot of Rust and not had to write many (or any) unsafe{} blocks. I have had to write Arc<> and RwLock<> a lot to achieve that, which feels weird and looks ugly.

        Rarely do you _have_ to write unsafe{}.

        If your code is not unsafe, there is a whole class of errors you don't need to test for.

        Whether _other_ people use unsafe or not is irrelevant.

        The underlying lib may be written in C or unsafe Rust, the JVM is written in C (and assembler), the kernel is in C, the Python interpreter is written in C, and microcode and hardware might have bugs. Probably best to worry more about your own unsafe{} code tho.

      2. DrXym

        Re: You can't engineer away buffer overflows without a cost

        Sorry but that's bollocks. I wrote a complex piece of software in Rust consisting of tens of thousands of lines of code. The word "unsafe" appears in 4 blocks representing about 60 lines of code. Three of those unsafe blocks are wrappers around calls out to OpenSSL. The other is a wrapper around a low-level async buffer.

        What that means in practice is that if my code crashes, I KNOW it is in one of those blocks. I've seen exactly ONE crash in four years of development, and it was in a call to OpenSSL where I got a buffer length wrong. On top of that, the compiler has kicked my arse so many times I've lost count.

        In fact there is a general ethos of scrutinizing every single use of unsafe in code, and generally speaking the only place you'll see it is in calls to C, calls from C, or calls to system APIs, i.e. boundaries.
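        The "unsafe only at the boundaries" pattern can be sketched like this, using libc's strlen as a stand-in for the OpenSSL calls (an illustration, not DrXym's code): the raw pointer and the unsafe block live inside one small audited wrapper, and callers only ever see a safe signature.

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// The C boundary: declared here, called in exactly one unsafe block.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

// Safe wrapper: callers never touch the raw pointer. If the program
// crashes, this function is the first suspect.
fn c_strlen(s: &str) -> usize {
    let c = CString::new(s).expect("string contains an interior NUL");
    // SAFETY: `c` is a valid NUL-terminated C string that outlives
    // the call, so strlen cannot read out of bounds.
    unsafe { strlen(c.as_ptr()) }
}

fn main() {
    println!("{}", c_strlen("hello")); // prints 5
}
```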

        1. martinusher Silver badge

          Re: You can't engineer away buffer overflows without a cost

          I've written thousands of lines of code in various languages, mostly 'C'. Most of my code involves communications of some sort or another -- networking -- where it's not possible to control what the external environment sends you. It's one of those "this should never happen" things that experience will tell you is the one assumption you can never make reliably.

          Coding in the language du jour may help a careless or naive programmer, especially someone used to an applications environment where the result of an error will usually be the program being terminated. If you write systems or embedded code you don't have that luxury; you have to control everything, and you have to design the code from the system down to exercise that control. It's not particularly difficult but it can involve a fair amount of extra work.

          1. DrXym

            Re: You can't engineer away buffer overflows without a cost

            I've also written and continue to write thousands of lines of C and C++. I am not a careless or naive programmer, in fact I'm rather cautious and fastidious. But as a mere mortal I make mistakes and mistakes take time to fix, impact on productivity, degrade the quality of the code, and make customers sad. Not to mention that I feel after doing this for decades that if *I* make mistakes then what does it mean for other people writing code who have less experience?

            So I will take any help I can get to eradicate or isolate the causes of error in my code. Including newfangled, highfalutin languages with their pesky compilers that stop me from compiling code which is broken and helpfully describe the problem. It seems I'm not alone, given the interest in Rust from industry minnows like Microsoft, Amazon et al. But hey, what do they know?

            I'd also point out that Rust is being used for kernel development and embedding, negating the idea that somehow it's too high level or doesn't offer sufficient low-level control.

    2. teknopaul

      Re: You can't engineer away buffer overflows without a cost

      Take a look at Rust: in many cases you can, with array sizes defined at compile time. The magic is in the compiler.

      It's foolproof but it's not perfect. The current version can't handle OOM (out-of-memory) conditions, which I imagine is a showstopper for many kernel use cases.

      It's a bitch to write, even harder to refactor, and you end up doing a lot of copying to avoid sharing, which is not ideal. It's clever tho.

    3. DrXym

      Re: You can't engineer away buffer overflows without a cost

      As you say bounds checking has a cost and quite honestly it is so small it is barely worth caring about. So what if I have to compare two numbers before a memcpy or a loop? Even in C, the recommendations are to use safe functions and not the unsafe ones, i.e. sprintf_s over sprintf because the consequences of not doing so far outweigh the runtime cost.

      And indexing is not something you need to do so often in some languages. e.g. in Rust, you can access elements by index but more often than not you'll use iterators on a buffer or slices of it where the bounds check is up front and not per iteration of the loop. Same for C++ iterators and Java streams. And Rust will also check slices at compile time if it can, e.g. this would fail:

      fn main() {
          let data = [0, 1, 2, 3];
          println!("Element 5 is {}", data[4]);
      }

      All that is to say there is NO EXCUSE for inadequate checking in code. About the only time I can think it would be acceptable is if you have a buffer that you declare yourself, and therefore know the size of, and then only do stuff with it using compile-time constants so it cannot do anything else. But for general purpose buffer stuff - check the bounds.

      1. Anonymous Coward
        Anonymous Coward

        Re: You can't engineer away buffer overflows without a cost

        "All that is to say there is NO EXCUSE for inadequate checking in code."

        This sums up the problem exactly - there should be adequate checking, but unfortunately that is poor wording for a requirement (the same goes for 'should be safe' or 'should be efficient').

        What do you test for? It's not just buffer overflows. I&T (integration and test) will come up with all sorts of things that need checking. Also, from a data point of view, you have syntax, semantics and pragmatics as you go up the chain.

        Checks are akin to insurance. You'll need more in my code than in, say, Alan Cox's, but more in a novice's code than in mine. Also, the cost of checks and maintaining them can mount to the point of being more expensive than letting it break, depending on what you're doing.

        So you are 100% correct but I don't have an answer for what adequate is either!

        1. DrXym

          Re: You can't engineer away buffer overflows without a cost

          It would have to be an extremely performance-critical path, or concrete inputs, to forego the bounds check. But Rust would let you do it with an unsafe block and a call to std::ptr::copy or std::ptr::copy_nonoverlapping, which is more efficient. You could even just call C's memcpy directly from Rust using the libc bindings.

          I've never had any reason to, but if I did, the fact these blocks are marked unsafe would still give me a pretty good clue as to the cause if the code were to crash.

          1. Anonymous Coward
            Anonymous Coward

            Re: You can't engineer away buffer overflows without a cost

            Wow; compiling code with automatic bounds checking that can be selectively disabled by an "unsafe" option - suddenly it's the 1970s all over again!

      2. Anonymous Coward
        Anonymous Coward

        Re: You can't engineer away buffer overflows without a cost

        "As you say bounds checking has a cost and quite honestly it is so small it is barely worth caring about."

        Well, that depends, doesn't it. If it's the core of some gigabit Ethernet driver, or some realtime code on an embedded low-power device, then a few extra cycles really do matter.

    4. Charlie Clark Silver badge

      Re: You can't engineer away buffer overflows without a cost

      I think the point is that, over the years, we've learned what many of the common mistakes are and worked on ways to help users avoid them more often. What's not to like?

  3. Claptrap314 Silver badge

    Mentions security. Builds from source. Doesn't connect dots.

    Downloading binaries is always a greater security risk than building from source. I find it curious that this was not mentioned.

    Again, as I mentioned in other posts where I advocated for this exact thing, it is not cheap. But the payoff...

    1. sev.monster Silver badge

      Re: Mentions security. Builds from source. Doesn't connect dots.

      I think it's obvious enough not to be worth mentioning, and also tangential to the conversation.

      The talk of downloads in the article was about convenience, not security—and it was also an assumption, not actually something the Google team said. For all we know every devops Googler could be running source-only Gentoo.

      1. Claptrap314 Silver badge

        Re: Mentions security. Builds from source. Doesn't connect dots.

        Convenience generally runs contrary to security. This being the exception makes it particularly important to point out.

        And no, we did not run Gentoo ourselves...

        1. sev.monster Silver badge

          Re: Mentions security. Builds from source. Doesn't connect dots.

          "And no, we did not run Gentoo ourselves..."

          So does that mean you were once a devops Googler? If so, use your connections to petition the Chocolate Factory to open source the Google Play and FCM codebase, so that I can build my own stripped versions that do just enough to facilitate Slack notifications. I just want to stay up-to-date with my channels without Google knowing everything about me, dammit.

          Let me restate: most reading el Reg are probably well aware of the security benefits, tradeoffs and methodology involved with self-compiled code and self-hosted binaries vs out-of-shop downloads. Lorenc directly mentioned one of the primary reasons—if not the biggest one—that self-compiled packages are more secure: provenance. That compiling your own packages is less of a security risk wasn't stated in so many words, but the implication was clear, and most of us reading know it already. That's why I said it was obvious, and reiterating it in the article would be pointless fluff for the majority of us, especially after Lorenc explained it.

  4. martinusher Silver badge

    Seems the right thing to do

    The essence of building a reliable system is control, so you'd probably want to build from known sources and not automatically incorporate 'the latest' until you've got a good idea that it really is what it says it is and that it does the job we're told it does.

    I'm used to building for industrial systems -- firmware -- where the SKU for a particular product includes the firmware version. Often firmware is certified for version 'x', and customers don't want changes; if they do, they just want changes to that version. They're not interested in the latest and greatest because they don't know what they're getting, and they don't want to have to re-certify it. (This goes for tools as well -- the build tools are part of the release package for a particular version.)

  5. ecofeco Silver badge

    And nothing of value...

    ... was communicated.

    (not dogging the article, just interviewee)

  6. Missing Semicolon Silver badge
    WTF?

    2 engineers?

    That's all that Google can afford to keep secure the kernel in use by what is probably the fat end of a million machines in the Google estate?

    Given the comically large amounts of cash washing about, surely they could afford more like 50 - and then feed all of those fixes back upstream, of course.

    1. Dan 55 Silver badge
      Devil

      Re: 2 engineers?

      If the gold membership is $100,000 then Google have got a bargain for two dedicated systems programmers at $50,000 a pop, or other companies are subsidising their work.

      That's how important it is to Google.

      1. doublelayer Silver badge

        Re: 2 engineers?

        That's not it. They pay $100,000 in donations annually, but now they're also subsidizing the work of these people. Still less than a platinum membership, but more than they used to be doing. Still, I have to wonder if that's all Linux is worth to them. Amazon is perhaps the most galling; they run a bunch of Linux servers for their cloud, which earns them a bunch of money and on which their store runs, and they're only silver members?

        1. James Hughes 1

          Re: 2 engineers?

          They have many, many more engineers working on Linux than just those two. Just check out the commits to the Linux tree. Old figures, but Google sign-offs on the tree were over 5% in 2017.

  7. reGOTCHA

    They don't trust the binaries so they compile themselves

    Do they trust the compiler? Or do they compile the compiler from source too?

    Someone gave me this link some time ago:

    Reflections on trusting trust - Ken Thompson

    https://dl.acm.org/doi/10.1145/358198.358210

    1. G Watty What?
      Thumb Up

      Re: They don't trust the binaries so they compile themselves

      That was really interesting. Feels like it predicted the SolarWinds issue. I appreciate it's not quite the same, in that it wasn't a compiler, but still a trojan horse in the source.

      Thanks for sharing.

    2. FIA Silver badge

      Re: They don't trust the binaries so they compile themselves

      It's more to ensure that they can compile them.

      Bitrot has bitten me before, and I only work on one small-scale project; at Google's size you probably just hit a point where you simply have to.

    3. doublelayer Silver badge

      Re: They don't trust the binaries so they compile themselves

      Probably. Once they have a large enough project, it becomes very important that they can rebuild it even if something external goes down. If the compiler breaks something (there are projects which only work with very specific versions of certain compilers), they'll need to have a copy of that compiler which they can use again. The easiest way to have that is to have a copy of the source at all the versions they use and the ability to compile them. Doing otherwise on critical projects can lead to wasted time.

  8. Hueyy
    IT Angle

    Programming away a whole class of bugs, like buffer overflows, by using (more) safe languages in a 40+ year old OS.

    Meanwhile introducing the same classes of bugs, like buffer overflows, in an entirely new OS built by Google, like Fuchsia.

  9. sitta_europea Silver badge

    I've never really understood the insistence on using unsafe memory access functions in C.

    I suppose it's easy if you're lazy and don't care too much about code being robust, but it's certainly not necessary.

    Three decades ago I engineered out all possibility of buffer overflows in my C code by creating a few library functions.

    It's not exactly trivial, as you might expect a multi-user application that runs on DOS not to be, but it's been working fine for 30 years and counting.

  10. sitta_europea Silver badge

    While finding and fixing kernel faults is probably never going to be easy, building the Linux kernel should be trivial for an organization like Google; I don't see that that's something to make a fuss about. You can just make your own package.

    For customers who use the old Intel NUC devices I have to build the kernels, because otherwise, with all the Spectre and other mitigations, the system runs about three orders of magnitude slower than it does with a custom kernel.

    You just have to set yourself up with a kernel config file (admittedly that can take a while, the first time) then hit the button.
