Decade-old bug in Linux world's sudo can be abused by any logged-in user to gain root privileges

Security researchers from Qualys have identified a critical heap buffer overflow vulnerability in sudo that can be exploited by rogue users to take over the host system. Sudo is an open-source command-line utility widely used on Linux and other Unix-flavored operating systems. It is designed to give selected, trusted users …

  1. DS999 Silver badge
    Facepalm

    How is this possible?

    Surely sudo is one of the more closely studied applications, since it is designed for privilege escalation. If it had a buffer overflow for years despite all those "many eyes" there's no hope for less closely studied or closed source stuff to be secure.

    1. trist

      Re: How is this possible?

      Hmmm, so even the unintended consequences of general C library and/or system calls that are meant to be generic and universally useful? I think this shows how easily it happens.

    2. cantankerous swineherd

      Re: How is this possible?

      not engineering but blacksmithing.

      1. trist

        Re: How is this possible?

        "not engineering but blacksmithing."

        So you want "developers" to start using fudge factors when building stuff?

    3. Sam Liddicott

      Re: How is this possible?

      Stop using C, peeps!

      It's too hard to get right enough everywhere, as we are seeing.

      1. Anonymous Coward
        Anonymous Coward

        Re: How is this possible?

        Stop using C... too true, that's why there are never any bugs in C++ applications by Microsoft or Adobe. Well, it's either that or they have the utmost highest standards.

        1. Sitaram Chamarty

          Re: How is this possible?

          I think he meant to include C++ in what he said.

          I hear Rust is becoming very popular...

          1. Sgt_Oddball

            Re: How is this possible?

            Give it another 5 years before the shine* comes off it and it'll become just another dev language.

            * sorry, not sorry.

        2. Wayland

          Re: How is this possible?

          I think he is implying that we should use something like Rust instead of C and C++. C is great because you can do what you want. The problem is you can also do a lot that you never intended. Pointers, C's greatest strength, really are a problem waiting to happen.

          It ought to be possible to write in a language that disallows buffer overflow yet compiles down to code that's just as efficient as C. Nothing wrong with a bug free program compiled from C, if you can achieve that.

      2. Robot W

        Re: How is this possible?

        The fact that your comment has only 2 upvotes and 18 downvotes highlights the problem.

        How can this not be a problem with the language? This bug was generically fixed at least 25 years ago. I.e., when was the last time that a programmer had memory corruption due to going past the end of a String (or Array) in Java, Go, Rust, Scala, Kotlin, Python, etc.? They can't, because the language/runtime prevents it from happening or raises an exception.

        Whereas in C, programmers (and code reviewers) are expected to get this right every time, further hampered by the fact that the standard C functions have different semantics as to whether they will null-terminate a string, whether you should always account for a null character, etc.
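
        A minimal sketch of that inconsistency (plain standard-library calls, nothing sudo-specific, for illustration only): strncpy() will not null-terminate when the source fills the destination, while snprintf() always does.

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            char a[4], b[4];

            strncpy(a, "ABCD", sizeof a);           /* a is NOT null-terminated */
            snprintf(b, sizeof b, "%s", "ABCD");    /* b is "ABC", always terminated */

            printf("%.4s %s\n", a, b);              /* a needs a length cap to print safely */
            return 0;
        }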

        Giving freedom to a programmer to write highly optimized code in the necessary places is okay, but making programmers manage string lengths in the mainline code is just poor design by modern programming language standards.

        Most of the languages above would have allowed the argument checking to be written in shorter, more correct code, and given that the arguments are parsed once on program start they would have no meaningful performance impact.

        The world has moved on; unfortunately C has not. It is perhaps also worth noting that the bounds-checking C APIs don't really help solve the problem, e.g. see http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1967.htm, where the concluding recommendation was that they should be removed from the C standard.

        Fuzzing, etc., can help, but still is not as good as making the problem impossible in the first place.

        But making a language that is really hard to always get right and then blaming the programmers when they occasionally get it wrong doesn't seem like a good path to reducing the frequency of severe security bugs in our programs.

        1. Electronics'R'Us
          Stop

          Re: How is this possible?

          Commandment 5:

          Thou shalt check the array bounds of all strings (indeed, all arrays), for surely where thou typest ``foo'' someone someday shall type ``supercalifragilisticexpialidocious''.

          This rule is perhaps one of the most important, and if it is not adhered to then very bad things can happen.

          The Ten Commandments for C Programmers
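
          The overflow the commandment warns about, in miniature (a deliberately broken sketch; running it is undefined behaviour):

          #include <string.h>

          int main(void)
          {
              char foo[4];

              /* Fine for "foo"; writes far past the end of the buffer for the
                 longer input -- exactly the case the commandment describes. */
              strcpy(foo, "supercalifragilisticexpialidocious");
              return 0;
          }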

          1. Anonymous Coward
            Pirate

            Re: How is this possible?

            Or 'thou shalt labour for thousands of hours doing, by hand and never reliably, the work that the fucking language should be doing for you, but can't because you are trapped in 1970.'

            In fact, that's the only commandment for C programmers, isn't it?

            1. Mage Silver badge
              Headmaster

              Re: How is this possible?

              And the developer of C++ didn't want the C backward compatibility. It's possible to effectively write C programs and compile them with a C++ compiler.

              Then there are the C libraries.

              The best practice C++ doesn't use C libraries or copy and pasted C constructs.

              1. Aitor 1

                Re: How is this possible?

                The fact that many people use char* in 2021 should be a good indication of the extent of the problem.

                Look at any active so-called C++ code in 2021, and chances are you will find:

                char *name = "name";

                Essentially, people not using strings.

              2. Anonymous Coward
                Alien

                Re: How is this possible?

                C++ is the solution to the problem (to any problem) in the same sense that sawing your leg off with a rusty hacksaw is the solution to a blister on your toe.

          2. Robot W

            Re: How is this possible?

            I doubt that the mistake here was that the coder didn't know that they should be doing bounds checking.

            The problem here is that:

            (1) It is hard to always get it right every single time.

            (2) It is hard for a reviewer to easily spot whether or not the coder got it right. [Noting that nobody had spotted this bug for 10 years in security critical code!]

            (3) You only need to get it wrong once and you potentially have a severe security flaw.

            Note, there is an assumption that a language doing the bounds checking makes it slower, but it doesn't. It just means that the compiler always puts in the bounds-checking code that the programmer should have been writing anyway. I.e. a decent compiler will spot that the coder has already checked the bounds and hence it doesn't need to do it a second time.
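
            Roughly what the automatic check amounts to (illustrative only): the compiler emits the same comparison a careful C programmer writes by hand, and can drop it where the surrounding code already proves the index is in range.

            #include <stdlib.h>
            #include <stddef.h>

            /* The check a bounds-checked language would generate around a[i]. */
            static int checked_get(const int *a, size_t len, size_t i)
            {
                if (i >= len)
                    abort();    /* a checked language raises an exception here */
                return a[i];
            }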

            It is also a fallacy to think that it is only poorly trained, or inexperienced, programmers that get this wrong, or folks that aren't smart enough. Everyone gets these sorts of things wrong at some point, the only difference is how often they make the mistake, and whether they find and mitigate the bug before attackers do. Otherwise the penalty is the same in all cases - your code is no longer secure, all for an issue that compilers can trivially get right for the coder.

            1. O RLY

              Re: How is this possible?

              "Programs must be written for programmers to read, and only incidentally for machines to execute." - Abelson and Sussman, Structure and Interpretation of Computer Programs

          3. Wayland

            Re: How is this possible? - Real programmers don't eat Quiche

            So funny that people are defending C and attacking those who point out the problems.

            I like C but then only because it's better than Assembly Language. I like Assembly Language but only for re-coding a small function that was too slow in C.

            We should not be writing big complex programs in C or C++. You really are just trying to look tough at that point.

            1. Electronics'R'Us
              Go

              Re: How is this possible? - Real programmers don't eat Quiche

              C actually works extremely well for embedded microcontrollers which can have quite large programs as the architecture often maps very nicely to C constructs.

              Certainly, a lot of discipline is necessary to write large C programs, but it is perfectly possible to write bug free code if you follow a decent coding standard and thoroughly test.

              Courses for horses and all that.

              1. Aitor 1

                Re: How is this possible? - Real programmers don't eat Quiche

                I am not saying it is impossible, I am just saying it is unlikely.

                I have yet to see a large project that is bug free, and I doubt there exists one in the wild.

                If you have one.. well, you don't KNOW about the bugs, but you should know there are bugs.

                This is a well-known issue, nothing new here, but if your language of choice is C/C++, chances are you have potential buffer/memory issues.

        2. Anonymous Coward
          Boffin

          Re: How is this possible?

          In other words, for many programmers the world is still a giant PDP-11: because the computer they think they are programming for has almost no memory, because it's really slow, and because compiler technology has not moved on since 1970 using a language with bounds-checking is simply not possible. And thus we are doomed to an endless cycle of 'here's a buffer-overflow bug with security implications ... OK we've fixed that one, there won't be any more, will there?'. And they will never learn anything, because learning stopped in 1970.

          1. naive

            Re: How is this possible?

            The PDP-11 is a holy machine which laid the foundation of the world we know today.

            People forget that Unix is based on the 80/20 philosophy: if it solves 80% it is good enough, since the remaining 20% will be 80% more work. C is exactly this, an elegant 3GL, light, generic enough to be easily translated into machine instructions on a wide range of differing machine architectures. Even though many C constructs can be translated 1:1 into PDP-11 machine instructions, it enabled Unix to be ported to virtually every CPU of 16 bits or more since the mid-70s. That is still a great feature compared to commercially more successful competitors which are welded to the x64 architecture. The absence of language constructs requiring the insertion of huge amounts of monkey-proofing code kept programs small and reduced disk and memory footprint, allowing more flexibility and speed. Anyone who takes some time to study CVE databases will quickly find out that an alternative OS written in proprietary Visual C++ and 32-bit 386 machine code does have more security flaws. In short, K&R C rocks.

            1. Anonymous Coward
              Alien

              Re: How is this possible?

              Look, I used PDP-11s. I learnt C on an 11/70 running a hacked-round 7th edition Unix (I'm not quite that old: gap-year job, machine bought for project which never happened, they let me loose on it to play). C was a great language for that machine: its model of the machine was really close to what the machine was actually like (both being pretty much what I would now call register machines, although I didn't know that then) and the whole toolchain was small enough (and the binaries were small enough) that you could actually write programs on it. It was lovely. It was certainly a lot nicer than the OSs & languages DEC wanted you to run on these systems, and it's not surprising Unix & C spread as a result.

              And the tradeoffs C & Unix made were completely appropriate for a machine like that: if you were going to be able to get anything done at all you really were going to have to compromise, because the machine really was not fast and really did not have much memory, at all. 80% was all you could hope to do.

              Modern general-purpose machines (outside tiny microcontrollers) are nothing like PDP-11s. Memory is now far away from the CPU, with different bits of it being different distances away. The CPU is executing many instructions at once in an order it is deciding itself, and there are lots of CPUs (aka cores) which may or may not share address space with others and where there may or may not be a coherent view of that address space if they do. And it's all hugely fast, and memory is absurdly plentiful.

              Except, that on top of all this there's an emulation layer which is trying to make the machine look like a giant PDP-11 and doing a more-or-less good job of it (often less, witness all the speculative-execution problems): because C was so successful generations of programmers have grown thinking that computers were giant PDP-11s, and so the hardware people have to make their machines look like that, and so people keep using C (in all flavours: C++ is not solving any problems C has), and the death spiral continues.

              And of course these same programmers still have mindsets where things like bounds checks are expensive, because they were expensive on a PDP-11 (a machine, of course, that almost none of these people actually used, but never mind that) and somehow optional. On a modern machine bounds checks are almost literally free especially when generated by the compiler: a bounds check touches no memory and processors have a hard time keeping execution units busy, so there almost always will be one available for a bounds check.

              And so, because once an 80% solution was all you could hope to achieve on a machine which had a few tens of thousands of transistors and a rudimentary compiler, we are now somehow stuck with it, and with its awful consequences, on machines with tens of billions of transistors. And so we will always be.

              1. Peter Gathercole Silver badge

                Re: How is this possible? @tfb

                I think you're being unfair to the people who write modern C compiler implementations.

                They do not write the compiler to generate code to look like it runs on a PDP-11. They write compilers that will use any and all registers available, as many instructions types as they can, use huge memory structures, and have various #pragma settings that allow the code writer to use automatic optimisation and parallel features of the processor much better than before. Much of this started when the Portable C Compiler came out of Bell Labs, which allowed you to tell the compiler a lot about the shape of the target processor, and this work has been taken on by succeeding generations of compiler writers over the last 40 years or so.

                Yes, it's still C. I just love the comments from the Julia developers who say that their code only takes twice as long to run as preceding 3G languages (specifically Fortran, not C but the same issues apply), but they're getting there. (I used to support a supercomputer site for whom saving one or two clock cycles in the code that was executed billions of times was considered valuable.)

                There is a point of contention here, though. The fact that modern processors are much faster, provide more instructions and features, and allow more memory to be used does not mean that you should squander those resources with more, and less efficient, code than is necessary. I'm not saying that code should be stripped of all protection, but there is a balance.

                Languages nowadays are a matter of fashion. They wax and wane over time, and until another language achieves the ubiquity of C, it will probably remain. There's just too much existing investment in it.

                It will be interesting to see whether RedoxOS is adopted to run any systems, and how it compares to Linux and Windows when it comes to performance.

          2. Persona Silver badge

            Re: How is this possible?

            I recall using Pascal back in the 70's. It had bounds checking. I think the Coral 66 I used next did too. Then along came C and writing programs that did things became so much easier. Back in 1990 I had to use Pascal again for a "security" project. It was a miserable experience but thanks to that choice of language and a stupidly rigorous test regime the code we wrote for HMG was very very secure. Then I went back to C & C++ and it was wonderful.

            There are many reasons besides fun to use C but it has always been a poor choice of language where security matters.

            1. Mikko

              Re: How is this possible?

              Pascal was so aesthetically unpleasant to program in that it gave bounds checking a bad name for decades. An accident of history, really, that the good ideas lay disused for so long.

              And don't get me started on those damn LISP parentheses and how they drove away generations of potential functional programmers.

              1. Anonymous Coward
                Black Helicopters

                Re: How is this possible?

                It is odd to be put off functional programming by Lisp's parens, since Lisps have never been functional programming languages: they are language programming languages.

            2. Peter Gathercole Silver badge

              Re: How is this possible?

              Pascal and C are, give or take a year or two, contemporary with each other. They were also based on previous languages (C on BCPL and Pascal on ALGOL).

              But their reasons for being were very different.

              C was intended as a systems programming language designed to allow efficient programming on quite small systems with limited memory addressing and instructions (take a look at the instruction set for the PDP-8; I think some microcontrollers have more instructions).

              Pascal from its inception was a teaching language. Its purest form was deliberately obnoxious in memory and string handling capabilities, while having strong typing and a very strong procedural bias. It came from a group of languages commonly called ALGOLoid languages, mainly from mainframes, and is seen by some as a cut-down version of ALGOL 66. It was always meant to be a whiny, very picky, unforgiving language to tell students what they were doing wrong.

              The purpose of Pascal was to make student programmers really think about what they were doing, but because of this it was really unsuitable as a general programming language. I mean you had to jump through tiny hoops to get well-formatted output to screens and printers, and it was almost impossible to do things like arbitrary arithmetic on structure references (while making it trivially easy to do typed arithmetic operations). Even things that were being taught for commercial programming, such as multi-format records in files, were difficult to do (look up variant records in the standard to see how it had to be done).

              The original intention was that programmers would learn to program using Pascal, but then move to other full-function languages, particularly ALGOL or CORAL, when they went to work.

              The tragedy was that rather than doing this, some people had the bright idea of corrupting the ideals of Pascal by adding all of the 'missing' features that Pascal needed to make it into a non-teaching, commercial language. Thus we had Turbo Pascal, Borland Pascal, and any number of extended Pascals, leading right up to Delphi (which I admit was a very popular language in its time, and quite well thought out).

              When talking historically about Pascal, you really need to indicate whether you are meaning the purest form of Pascal, or the corrupted version that many people learned when they should have been moving on to other languages.

              C (as long as you ignore C++) remained true to its design, and is still a highly efficient and effective, although sometimes dangerous, language for programming hardware systems.

              1. Peter Gathercole Silver badge

                Re: How is this possible? @me

                I feel so stupid. Algol 68. It was Coral 66.

                Must get more sleep to get the ol' memory working better!

          3. vincent himpe

            Re: How is this possible?

            bingo ! We are still stuck in a world where the compiler needs a ; to figure out the end of a statement and cannot derive when = means 'assign' and when it means 'compare'. Both problems were solved in languages much older than C. The problem is the people that developed the C parser. Their coding skills were only mediocre. That is now the standard we have to deal with... The same goes for null-terminated strings. You are banking on the presence of a marker. If that marker does not come, the thing goes wonky. Strings should be arrays with a header describing their size. You cannot read past the end, and the end is known before the first character is read. The same misery happened with integers. It depends on the hardware what size they are. That has now been solved with uint32, int32 and other types.
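
            A sketch of the length-prefixed string being argued for (type and function names invented for illustration): the size travels with the data, so the end is known before the first byte is read and there is no sentinel to run past.

            #include <stdlib.h>
            #include <string.h>

            typedef struct {
                size_t len;        /* header describing the size */
                char   data[];     /* bytes follow; no NUL needed */
            } lstring;

            static lstring *lstring_from(const char *s)
            {
                size_t n = strlen(s);
                lstring *ls = malloc(sizeof *ls + n);

                if (ls != NULL) {
                    ls->len = n;
                    memcpy(ls->data, s, n);
                }
                return ls;
            }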

            1. Peter Gathercole Silver badge

              Re: How is this possible? @Vincent

              There is nothing wrong with having different operators for test and assignment. It's only the lax style that was encouraged by languages such as Basic that made programmers think that it's a good idea to use the same operator.

              Having two different operators just makes it very clear what is happening when you can use tests and assignments in arithmetic operations. I admit that you have to know how TRUE and FALSE are represented in the language to benefit from this, and you can get some very hard to read statements, but having explicit delimiters can benefit both the compiler and the programmer.
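
              For illustration, the idiom that the distinct operators make both possible and unambiguous - an assignment used inside a test, spelled differently from a comparison:

              #include <stdio.h>

              int main(void)
              {
                  int c;

                  /* '=' assigns, '==' compares; the parentheses and the explicit
                     '!= EOF' make the intent clear to compiler and reader alike. */
                  while ((c = getchar()) != EOF)
                      putchar(c);

                  return 0;
              }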

              There is a huge raft of programming languages where test and assignment are represented by different operators. You just have a particular bias.

              The semicolon (or other delimiter - Python is picky about the end-of-lines themselves) at the end of a statement allows complex multi-line statements to be adequately delimited. It allows you to do complex things without having to assign values into variables, allowing the compiler to automatically optimise the storage and discarding of intermediate results that would otherwise need variable assignment. And it allows the compiler to pick up when nesting of blocks is incorrect, by counting the levels of block delimiters and comparing that to the statement delimiters.

              Sure, the compiler can work out where it thinks the end of a statement should be, but having used teaching languages that attempt to correct missing end-of-statement delimiters, I've seen the bizarre and sometimes complex problems that this can cause. No, I'm firmly on the side of making a language as concise as possible.

              Just imagine the problems if you tried to write a free-format natural programming language. How often does English allow ambiguous instructions? There's a whole profession (lawyers) who make their living trying to make sense of English!

        3. sreynolds

          Re: How is this possible?

          I don't think so. You cannot just focus on a single issue in order to put down an entire language that is responsible for all modern operating systems.

          1. sabroni Silver badge

            Re: How is this possible?

            You cannot just focus on a single issue in order to put down an entire language that is responsible for all buffer overflow vulns.

            FTFY.

            1. Peter Gathercole Silver badge

              Re: How is this possible?

              I don't think C is responsible for all buffer overruns. It's been perfectly possible to do it in many other languages, especially older ones that were intended as system programming languages. Most assemblers have no concept of bounds checking at all, although people programming directly in assembler in this day and age are probably working either on embedded processors, or trying to get every ounce of speed out of a system. Please note I count macro assemblers differently from simple assemblers.

              The lower the level of the language, the less likely it was to have array or buffer bounds checking. C has just survived longer than most.

        4. DS999 Silver badge

          This has been fixed in C forever

          There are bounds checked versions of all the calls. If you had asked me two days ago whether sudo is using any unsafe C library string functions I would have said "of course not, that would be insane!" Obviously someone would have grepped the source, found all those calls, and replaced them with safe alternatives. Guess my expectations for the open source security community are too lofty...
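
          A minimal example of the kind of substitution being described (nothing taken from the sudo source itself): snprintf() takes the destination size and always terminates, unlike sprintf().

          #include <stdio.h>

          int main(void)
          {
              char buf[16];
              const char *user = "whoever";

              /* sprintf(buf, "user=%s", user);  -- no size limit, overflow risk */
              snprintf(buf, sizeof buf, "user=%s", user);  /* truncates and terminates */

              puts(buf);
              return 0;
          }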

        5. MacroRodent

          Re: How is this possible?

          > This bug was generically fixed at least 25 years ago.

          Try 50 years or even 60. Simula 67 (from 1967, as you can guess) included language-supported dynamic strings, classes and managed memory allocation. And it certainly was not the first. The implementation had a reputation for slowness (actually one reason Stroustrup developed C++), but there is no reason a competent Simula 67 implementation, using techniques developed after the sixties and seventies, would be any slower than C++.

        6. Kevin McMurtrie Silver badge

          Re: How is this possible?

          C++ can be safe when used correctly - it has the support. Higher level languages only let developers move on to higher-level bugs. My favorite in Java is everybody copying StackOverflow examples to fix various digital signature failures by disabling all digital signatures. I know there's a FinTech company out there right now using SFTP without digital signatures because <sad face/> maintaining them looks hard.

    4. Anonymous Coward
      Anonymous Coward

      Re: How is this possible?

      No, it just shows that the idea that open source code is inherently more secure because of the 'community checks' is utterly fake, because the vetting that activists sold as being done simply isn't happening. Those 'many eyes' aren't reading and verifying code.

      Very few projects have that kind of control, and where they do it's not because they are open, just because they have the required paid resources.

      1. ovation1357

        Re: How is this possible?

        'utterly fake' is probably a bit disingenuous to be fair.

        Sure, your mileage may vary with open source software - there will be everything from well written/tested code all the way to spaghetti (or should I say Swiss cheese) code. And the level of peer review and vulnerability scanning is likely to correlate with how widely used each program is, especially whether or not it has been adopted for inclusion in the major distros (including non-Linux platforms).

        However, firstly if you think that closed source code is going to be better just because it came from a big and reassuringly expensive vendor then think again. I've seen the code behind some closed source abominations that would be ripped to shreds if shared. The big difference is that with open source at least you or anybody can inspect the code, contribute to it, patch it, fix it.

        With closed source, all you can do is pray that the vendor is doing its best to write safe code, that it quickly patches any vulnerabilities that get discovered, and pray it doesn't abandon the product or go bust.

        I suspect this bug is in a piece of code that looks safe and has passed a thousand experienced eyes and all sorts of automatic 'code sniffers', but then someone persistently chipping away at it has noticed a very specific set of circumstances where an almost unpredictable permutation of events opens up an unforeseen hole.

        Sudo, like many open source utilities, is available on all major Linux distros, BSD, macOS, Solaris, HP-UX, AIX and presumably also Windows WSL... There is simply no way that all of those vendors have chosen to adopt it without scrutinising the code.

        1. jake Silver badge

          Re: How is this possible?

          "I suspect this bug is in a piece of code that looks safe"

          Read it for yourself: https://blog.qualys.com/vulnerabilities-research/2021/01/26/cve-2021-3156-heap-based-buffer-overflow-in-sudo-baron-samedit

          The above link includes an explanation.

          Easy test to see if you are vulnerable, from that page: run the command sudoedit -s / .... if you are vulnerable, it'll return an error that starts with sudoedit:; if you have been patched, it'll return an error that starts with usage:.

          Versions prior to 1.8.2 are not vulnerable.

          1. Muscleguy

            Re: How is this possible?

            From a patched Sierra install on Mac sudoedit is an unrecognised command.

            1. Crypto Monad Silver badge

              Re: How is this possible?

              [this is macOS 10.14.6 with security update 2020-007]

              MacBook-Pro-4:~ $ ln -s /usr/bin/sudo /tmp/sudoedit

              MacBook-Pro-4:~ $ /tmp/sudoedit -s /

              Password:

              sudoedit: /: not a regular file

              MacBook-Pro-4:~ $ /tmp/sudoedit -s '\' `perl -e 'print "A" x 65536'`

              Segmentation fault: 11

              So in short, macOS apparently is vulnerable, but it's partially mitigated because it checks the password earlier in the process (so you need to know the local account password).

              1. Anonymous Coward
                Boffin

                Re: How is this possible?

                No, it's completely vulnerable (at least 10.14 is): linking sudo to sudoedit (in any directory) and then running it with a backslash and a long string will cause a SEGV. And it will do this even for a user who is not allowed to use sudo at all, which makes it even more terrifying.

                The whole 'sudoedit /' thing is some weird red-herring because someone got confused between '/' and '\' due to Windows braindamage: it's not mentioned in the report at all.

              2. Crypto Monad Silver badge

                Re: How is this possible?

                Still vulnerable even with security update 2021-001 applied:

                $ /tmp/sudoedit -s '\' `perl -e 'print "A" x 65536'`

                Segmentation fault: 11

                1. Crypto Monad Silver badge

                  Re: How is this possible?

                  Fixed by macOS security update 2021-002

                  $ /tmp/sudoedit -s '\' `perl -e 'print "A" x 65536'`

                  usage: sudoedit [-AknS] [-C num] [-D directory] [-g group] [-h host] [-p prompt] [-R directory] [-T timeout] [-u user] file ...

        2. Phil O'Sophical Silver badge

          Re: How is this possible?

          The big difference is that with open source at least you or anybody can inspect the code, contribute to it, patch it, fix it.

          "can" being the important word. I've seen several of occasions where a security bug is notified to the community (which could be one man & his dog in a shed) and the response is "meh, I'm to busy to look at that, it's not that bad and looks complicated". Now, certainly, you can fork & fix (and test) it yourself if you need to, but you then need to factor in the cost of doing that again for every new release that comes out, and big companies don't like that. The reason many big companies use FOSS is because they think it saves them the effort of doing it themselves.

          There is simply no way that all of those vendors have chosen to adopt it without scrutinising the code.

          Don't you believe it. In my experience they spend far more effort scrutinising the license to make sure they can't be sued; as long as it downloads & compiles, no-one looks at the code. They assume that "many eyes" have already done so. Our in-house code gets far more detailed code review and design scrutiny than any FOSS we ship does, sadly.

          1. Aitor 1

            Re: How is this possible?

            Do the fix and ask for a merge.

        3. Anonymous Coward
          Anonymous Coward

          "if you think that closed source code is going to be better"

          Never said that. I just said that few projects actually have the resources to vet the code properly, and that it does not matter whether the code is closed or open; it's just a matter of resources, and skilled resources usually like to be paid, especially when they have to do boring tasks like reviewing code. Being paid is again not a matter of being closed or open, it's just a matter of how the project works and is funded.

          The assertion that open source code will have a lot of unpaid code reviewers just because it is open is fake, especially now that there is simply too much code to review, but even before. It was classic wishful thinking, a kind of religious assertion: it would happen just because people believe so... you just need to have faith. It just never worked that way.

          "I suspect this bug is in a piece of code that looks safe"

          An unchecked - or wrongly checked - buffer size? Really?

          1. Michael Wojcik Silver badge

            Re: "if you think that closed source code is going to be better"

            Nothing to do with an unchecked size. It's an overrun due to a missing sentinel check in a special case in string traversal. See my other post above.

            This sort of bug happens very frequently in C, because humans are bad at constant vigilance.
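
            A simplified sketch of that pattern, based on the Qualys write-up rather than the verbatim sudo source: an escape-unfolding loop that steps past the terminating NUL when an argument ends in a lone backslash.

            #include <ctype.h>

            /* NOT the real sudo code -- an illustration of the bug class. If src
               ends in a single '\', the check below sees the NUL in from[1],
               advances onto it, copies it, and then keeps reading and writing
               past the end of the string. */
            static void unescape_into(char *dst, const char *src)
            {
                const char *from = src;
                char *to = dst;

                while (*from) {
                    if (from[0] == '\\' && !isspace((unsigned char)from[1]))
                        from++;             /* steps onto the NUL for a trailing '\' */
                    *to++ = *from++;
                }
                *to = '\0';
            }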

        4. oiseau
          Facepalm

          Re: How is this possible?

          .... better just because it came from a big and reassuringly expensive vendor then think again.

          Beat me to it. 8^7

          I cannot understand how this is still, after all the IT years gone by, a permanent assumption.

          It's downright stupid.

          O.

        5. Persona Silver badge

          Re: How is this possible?

          The big difference is that with open source at least you or anybody can inspect the code, contribute to it, patch it, fix it.

          ..... or find a bug in the source then use it as your (or insert name of national security service) private exploit for many years till someone else finds it and publishes the vulnerability.

          1. fnusnu

            Re: How is this possible?

            ..... or place a bug in the source then use it as your (or insert name of national security service) private exploit for many years till someone else finds it and publishes the vulnerability

            1. ovation1357

              Re: How is this possible?

              Yes - this is absolutely a risk and is known to have happened in the wild. However, the same organisations are known to pay proprietary vendors to insert bugs/backdoors so this isn't a great deal different, and again - with open source at least the code _can_ be inspected even if people don't.

              Agencies need to be pretty sly with any poison patches because anything blatantly obvious is likely to get picked up quickly. The art is to write something that looks completely innocuous but has a very subtle weakness. I guess nobody except the security agencies knows how prevalent these kinds of attack are; however, they know full well that any backdoor they add could well be discovered and exploited by bad actors as well - so I suspect they tread with caution.

          2. ovation1357

            Re: How is this possible?

            I don't doubt that this happens, but being closed source won't make proprietary code any safer:

            The big players of this game will be very well adjusted to looking for the very same weaknesses in closed code and will exploit them. They'll be analysing the underlying system library calls, they'll be picking through the assembly - no doubt they will be very knowledgeable about the weaknesses of some core OS components and bugs in compilers.

            Just look at the law-enforcement agencies paying huge sums to private security firms for exploits they can use against things like iPhones which are very closed and highly protected. These guys are pros - being able to see the source code will only be one weapon in their arsenal, but IMO is probably a minor detail to them.

            How about the deliberate back-doors that get placed into closed OSes and other software/firmware as a result of secret agreements between the agencies and the vendors? I don't have references to hand but I'm pretty sure that there has been evidence of this and not mere speculation.

            Personally, I think that if you understand that open source software is not perfect, then it's still a better bet than a black box of 'unknown'.

      2. Anonymous Coward
        Mushroom

        it just shows that the idea

        of using the word 'fake' disqualifies the writer from being sincere

        ergo me is not sincere?

      3. alain williams Silver badge

        Re: How is this possible?

        No, it just shows that the idea that open source code is inherently more secure because of the 'community checks' is utterly fake

        So would Qualys have found the bug if they did not have access to the source code?

        1. Anonymous Coward
          Anonymous Coward

          Re: How is this possible?

          Bugs are routinely found in all kinds of software, closed or not, using techniques other than reading the source code.

      4. carl0s

        Re: How is this possible?

        At least you can check it yourself before using it.

    5. Robert Sneddon

      Re: How is this possible?

      I'm sure lots of people think that the sudo code must have been closely studied over the years. That doesn't mean it has actually been closely studied; indeed that mistaken belief might have induced many people who could and would have gone through the code with a fine-tooth comb to decide to spend their efforts elsewhere in the open-source world.

    6. Anonymous Coward
      Anonymous Coward

      Re: How is this possible?

      It seems to be an annual occurrence with sudo!

      https://www.theregister.com/2020/02/05/sudo_bug_allows_privilege_escalation/

    7. Tom 38
      Boffin

      Re: How is this possible?

      For those interested:

      https://github.com/sudo-project/sudo/commit/c0eecf85c8b0920a9398920d5f5dae0ee2804b46

      1. KorndogDev
        FAIL

        Re: How is this possible?

        Thanks, this is indeed a very pleasant to read piece of code, the bug is so obvious, don't you think?!

        size_t cmnd_size = (size_t) (argv[argc - 1] - argv[0]) + strlen(argv[argc - 1]) + 1;

        1. Anonymous Coward
          Anonymous Coward

          Re: How is this possible?

          Oh, it appears to have assumed each argument is stored in order in a contiguous buffer. Lovely :-|

          Been working in kernelspace quite a bit lately but have yet to come across anything like as hokey as that. Have seen much worse in "safety critical" commercial code in the past.

          1. Anonymous Coward
            Anonymous Coward

            Re: Have seen much worse in "safety critical" commercial code in the past.

            Much worse than root privilege escalation with no creds, that's been in production for 10 years, on a utility that is used throughout Linux deploys worldwide?

            Go on then, we're AC, no-one's watching.

            I'd be interested in seeing the commercial code that's getting the same amount of use as sudo.

        2. ibmalone

          Re: How is this possible?

          Well, that's horrid. Regardless of the actual bug, just looking at it a clear problem is relying on too-clever pointer trickery. Even without checking the standard, my instinct would have been that there aren't guarantees on layout for argv[] members, and I can't find any, making this implementation-dependent even on occasions where it works as intended.
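
          For comparison, a layout-independent way to size the joined buffer (a sketch, not a proposed patch): sum each argument's length instead of subtracting argv[] pointers, which assumes a contiguous layout the standard does not guarantee.

          #include <stddef.h>
          #include <string.h>

          static size_t joined_size(int argc, char *argv[])
          {
              size_t size = 0;

              for (int i = 0; i < argc; i++)
                  size += strlen(argv[i]) + 1;   /* +1 for the separator / final NUL */
              return size;
          }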

    8. jake Silver badge

      Re: How is this possible?

      The same way a flaw was missed in a main steel support beam in a bridge that you have driven over daily for years.

      Humans are in the loop, they will always be in the loop, and they make mistakes. sudo has been patched, we've updated our systems, and we've moved on. Unlike the beam, which probably won't even be noticed until it fails.

      All complex systems have bugs. Some are worse than others.

      ::shrugs::

      1. yetanotheraoc Silver badge

        Re: How is this possible?

        System working as intended. There's a bug. Because it's open source:

        * any researcher can look at the source code and find the flaw.

        * any coder can submit a patch.

        * any maintainer can push that patch....

        ::shrugs:: indeed.

    9. Blackjack Silver badge

      Re: How is this possible?

      Also, didn't we deal with a similar bug last year?

    10. Anonymous Coward
      Terminator

      Re: How is this possible?

      Exactly. While I don't think the open/closed source thing is as important as it may seem, there is an astronomically tiny chance that very large software systems (in late 2018 the Linux kernel was growing by the entire size of the 7th edition kernel every four and a half days) written in languages which do not protect against certain classes of bugs don't have instances of bugs in those classes, because human beings are simply not up to avoiding that sort of problem.

      That, of course, is why we all now use languages which try to rule out certain classes of bugs ... oh, no, sorry, we don't do that, do we? Somehow we still have to program in languages which were appropriate for machines which were very small and slow even in 1970.

      (Yes, yes, I know there are going to be drive-by downvotes to this, because, oh I don't know why because and still less do I care: downvoting does not make you right.)

    11. chasil

      hardening-check

      I know that the article specifically says that ASLR was defeated, but I wonder if these other compiler/linker mitigations prevent (some of) these vulnerabilities?

      The "hardening-check" perl script is available from EPEL on redhat platforms. Here I use it to report mitigations in an old FWTK component that I use for an internal legacy system.

      $ hardening-check /home/fwjail/usr/local/etc/ftp-gw

      /home/fwjail/usr/local/etc/ftp-gw:

      Position Independent Executable: yes

      Stack protected: yes

      Fortify Source functions: yes (some protected functions found)

      Read-only relocations: yes

      Immediate binding: yes

      $ rpm -qi hardening-check | grep ^URL

      URL : http://packages.debian.org/hardening-wrapper

    12. fidodogbreath

      Re: How is this possible?

      I wonder how long the TLAs have known about this.

      1. jake Silver badge

        Re: How is this possible?

        Judging by the lack of anomalies in the system logs over the years, I'd say they've known about it exactly as long as the rest of us.

  2. Dyspeptic Curmudgeon

    Fedora as far back as Fedora 28, at least

    Fedora 28 uses sudo -V 1.8.23

    Fedora 32 uses sudo -V 1.9.2

  3. dvd

    My Mint install just got a patch.

  4. Gary Stewart

    I'm fairly sure Debian has put out a patch, since it has shown up on my computer running Devuan.

    1. Anonymous Coward
      Anonymous Coward

      Can confirm, just updated to 1.8.27-1+deb10u3 on my server.

    2. Archivist

      An honest question

      As a Debian user, I never enable it. Does that make me safe?

      1. Will Godfrey Silver badge
        Linux

        Re: An honest question

        Safe-er

      2. Claptrap314 Silver badge

        Re: An honest question

        You can find out by checking /etc/sudoers. Which is owned by root...

        1. Archivist

          Re: An honest question

          Good idea! Thanks.

  5. Pascal Monett Silver badge

    "has been hiding in plain sight for nearly 10 years"

    And someone at the NSA is seriously pissed right now.

    1. veti Silver badge

      Re: "has been hiding in plain sight for nearly 10 years"

      Meh, they know nothing lasts forever. I'll bet there's a few more where that came from.

      1. jake Silver badge

        Re: "has been hiding in plain sight for nearly 10 years"

        I seriously doubt it.

        1. Aitor 1

          Re: "has been hiding in plain sight for nearly 10 years"

          They probably have a bunch more..

      2. NetBlackOps

        Re: "has been hiding in plain sight for nearly 10 years"

        Feed it to a fuzzer and find out!

    2. Julz

      Re: "has been hiding in plain sight for nearly 10 years"

      Came here to say something similar. I guess the NSA and their ilk have got something else now and tossed this one out there to stop their rivals.

  6. Dazed and Confused

    RHEL/CentOS7

    I guess the update is available for RHEL7 but CentOS7 isn't showing it yet.

    1. anothercynic Silver badge

      Re: RHEL/CentOS7

      Yep. RHEL 7 got it.

    2. Fonant

      Re: RHEL/CentOS7

      CentOS 7 has it now. "yum clean all" helps.

  7. Lorribot

    Linux is more than just distros

    What about all those appliances out there? How would you even know if Sudo was included?

    About 2-3 years ago all we had was a couple of Kemp load balancers and a Netezza from IBM running CentOS. Now around 100 of our VMs out of an estate of 700 devices are running a flavour of Linux, and there are a lot of different flavours, with, I would imagine, a raft of differnet added-on open source libraries.

    We have no central management, patching or any idea what is actually installed, we rely totally on vendors supplying patches in a timely manner and many of these will have variable skill sets and willingness to support older stuff.

    Linux is great until something like this happens, and you have to go in and tell your security teams whether you have patches or not, and you have to spend days digging through support docs, finding information and patching all your appliances etc. on a system by system basis. Some of our appliances are running CentOS 4/5, which went out of support a few years ago. For the grunts at the coalface it is a complete pain to keep on top of it all.

    Windows by comparison is a complete breeze to manage, as it is only one flavour of OS, with lots of free and lots of expensive but easy-to-use tools, and lots of reporting to keep the well-paid people happy.

    Linux is great; it's all the other stuff that comes along for the ride and the custom distros that are a real pain, and where the quality starts to fall away.

    1. Steve K

      Re: Linux is more than just distros

      The typo “Differnet” in your post is a perfect description of this environment!

    2. ovation1357

      Re: Linux is more than just distros

      I'd argue that one of the features of an 'appliance' is that it's a self-contained image that is typically supported solely by its vendor - e.g. it might have CentOS or a completely custom build under the hood, but you're not supposed to really 'care' about, nor tinker with, the underlying OS - you get your patches/image updates from the appliance vendor and that's that.

      Of course in reality you end up with legacy Appliances that are no longer supported by the vendor... Aside from the fact that it's now time to replace them with supported ones you do, unfortunately, have a problem to solve yourself.

      I don't think Linux can be blamed, however, for your self-admitted complex, uncontrolled environment. That sounds much more down to poor planning and management. (I'm also presuming, from the fact you're owning up to having a mess, that you're the poor person who inherited it. You have my sympathy!)

      Although there's a huge number of Linux distros, there aren't really so many that you're likely to see in a business setting - I'm sure there will be edge cases but it's likely to be mainly RHEL/CentOS with maybe a bit of Ubuntu or Debian thrown in. But sudo goes beyond Linux - it's available (although not necessarily bundled) on at least Solaris and BSD as well. I don't know, but I'd hazard an educated guess that it also comes with WSL on Windows too.

      I bet you've got something still hosted on Windows 2003 or older - what do you do about new CVEs that affect that? After all you're on your own - MS has ended support and they own the code so often nobody in the community can help with a fix.

      At least with sudo, even if you had to compile it yourself, it _could_ feasibly be updated on all of your legacy systems no matter how old and unsupported they are...

      There's potentially a saving grace with this bug because in order to use sudo you first need to be logged into a shell as an unprivileged user, which is hopefully not something that's open to many people on your appliances. The write-up also says that this vulnerability affects systems using the 'default configuration' which implies that you might be able to mitigate this hack through a config change.

      There's a massive collection of tools for Linux, both free and paid, which will scan your estate for vulnerabilities and/or help you manage these machines. I don't think Windows is unique in this, and I'd go so far as to say it still lags behind in the automated configuration management game.

      The point here is that all OSes are vulnerable to security bugs; not all handle patching them as well, and some are more prone to flaws than others. 'Appliances' have to use an OS of some kind, and be it something really low level like VxWorks, embedded Linux/Windows, or a full version of OS/2, BSD, Solaris, Linux or Windows (undoubtedly plus others), they will all fall prey to bugs from time to time and will all need patches.

      Now is definitely not the time to be blaming fragmentation in the Linux ecosystem. You, Sir (or madam, for I make no assumptions here), have the onerous task of trying to discover what's actually running in your estate and prioritise updates to systems which are most at risk from being exploited. Good luck! (And I do mean that sincerely).

      I shall be doing similar as a top priority in the morning, although I believe my team and I are fortunate to be starting out knowing exactly what kit we're dealing with. We apply security updates automatically, so it should be a case of checking we're already patched and then dealing with the few stragglers, but let's wait and see.

    3. jake Silver badge

      Re: Linux is more than just distros

      "How would you even know if Sudo was included?"

      I asked perl to check for me.

      "We have no central management"

      Well, there's your problem ... I can fix that for you. If you have to ask "how much", you probably can't afford me.

    4. Anonymous Coward Silver badge
      Windows

      Re: Linux is more than just distros

      Your presumption is flawed. I've seen plenty of systems running embedded Windows which stopped receiving patches the day the master image was produced for the factory.

      It's just that Windows is crap for embedded stuff, so most manufacturers these days use Linux systems.

      Their attitudes haven't changed though - an appliance is generally not touched once it's been sold, whatever OS is on it.

      1. Anonymous Coward
        Anonymous Coward

        Re: I've seen plenty of systems running embedded Windows

        See! M$'s fault!!

        Surprised it took this long for MS to be fingered for this sudo vuln, but you got there so well done you! That really took some fucked up "logic".

    5. naive

      Re: Linux is more than just distros

      We are talking about a "privilege escalation", which requires somebody already having access at user level. How does that remark relate to Linux-based appliances?

      "Windows by comparison is a complete breeze to manage.." .. are you actually working in IT?

    6. Anonymous Coward
      Anonymous Coward

      Re: Linux is more than just distros

      "What about all those appliances out there? How would you even know if Sudo was included?"

      Meh, the appliances are probably just running everything as root anyway, "cos it makes things easier". That firmware code needed to be shipped yesterday, y'know! Security, QA and professionalism are for wimps!

  8. trist

    Only sissies use sudo

    Gave up on sudo about 10 years ago. Real men just use ssh and log in as root. They know what they are doing.

    I never understood why something needed a convoluted wrapper to run as root when you could always use the SUID bits and let the executable decide on what privs it needed.

    1. ovation1357

      Re: Only sissies use sudo

      I'm taking your post as being a bit tongue-in-cheek but are you aware that sudo goes beyond simply running commands as root?

      I doubt I know all its features but you can certainly limit not just the command but also the arguments so you can allow a user to act upon just a specific item without allowing them to act upon anything else.

      Setuid is fine for something like the passwd binary, which is designed to be run by any regular user but which acts upon a privileged file, but it's not necessarily an option when you need to allow, say, a web developer (or group of devs) to restart Apache/Nginx but not any other services.
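
      As a hypothetical sudoers illustration (group name and service path invented), restricting a group of developers to one exact command line:

      # webdevs may restart nginx as root, and nothing else
      %webdevs ALL = (root) /usr/sbin/service nginx restart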

      1. trist

        Re: Only sissies use sudo

        But can't you just add them to a group and change the permissions to 4x50 where x doesn't mean much on linux but might be useful on other un*xs and achieve the same thing? I have seen this done since the early 90s.

        I remember writing something like a wrapper for normal users a long time ago, and I can remember that I was being called, I don't know if this word can be used in the workplace any longer, a shirt lifter for doing so. This was before Urban Dictionary, and you can imagine the laughter when I started asking people what this meant.

        1. sev.monster Silver badge

          Re: Only sissies use sudo

          Imagine, if you will, that you have the need to allow Jane to run the command 'nginx reload' as the nginx user with group wwwdata. She does not, however, need to be able to kill nginx, start it from a stopped state, or really do anything else with the nginx binary. Also, to avoid exploits, LD_PRELOAD and other such env vars should not be inherited when running nginx.

          Now, you being the prestigious shell guru you are, immediately recognize that you could write a shell script:

          1. Set suid and guid so that the script runs as nginx and as group wwwdata

          2. Check original uid to see if it is Jane running the script

          3. Start a new login shell to discard environment vars

          4. Run the nginx command in the login subshell

          All in all this takes you a few minutes to write and test. Not too bad, easy and to the point.

          ...But now you have to add Michael, Jim, and Sharon to the list of users. Oh, and Sharon needs to be able to start and stop php-fpm7 too... At this point, your shell script repository is growing. Sure would be nice if you had a framework to do all this for you...

          jane, michael, jim, sharon ALL = (nginx:wwwdata) /usr/local/bin/nginx -s reload

          sharon ALL = (root) /sbin/service php-fpm7 start, /sbin/service php-fpm7 stop

          1. JakeMS
            Mushroom

            Re: Only sissies use sudo

            Yup, sudo is very useful on a system where such features are needed. But at the same time - and in many cases - it can just end up as a binary on a system that never gets touched.

            I always remove it on systems where there are no users who will be using it.

            My policy is simple:

            Is this package necessary to the operation of the system (y/n)?

            n --> Remove

            y --> Keep

            In my view, having lots of packages installed that you don't use is just an exploit waiting to happen.

            1. sev.monster Silver badge

              Re: Only sissies use sudo

              The argument here is that there is no situation where sudo is necessary because suid exists, not that you should be smart about which packages you have installed. Of course, the principle of least privilege and keeping a minimal attack surface are great reasons to not use sudo. People who don't need fancy tricks but want to get root quickly would be better off using su or even doas anyway.

    2. FatalR

      Re: Only sissies use sudo

      If you're only logging in to do maintenance tasks every so often, sudo is useless. And as we see now, one extra step/attack vector.

      For users, "sudo this, sudo that": people are so used to typing sudo that the mistakes it's supposed to protect against are void. Commands posted online are often written with sudo in front of them: copy, paste.

      Not many people actually tie sudo to an authenticated/centralised back end, and almost all uses of sudo allow any root command to be run, not tied to the specific tasks you want to give a non-admin user.

      $ sudo su

      sudo is pointless.

      1. doublelayer Silver badge

        Re: Only sissies use sudo

        Sudo has two uses. It provides granular privilege control on a shared system, and it enforces a password check for users running privileged commands, no matter what. Those are useful, but if you don't care about either, go ahead and remove it.

        "If you're only logging in to do maintenence tasks every so often, sudo is useless. And as we see now, one extra step/attack vector."

        If you're only logging in for maintenance, and nobody else logs in, then the attack can't work: it only works if you already have a shell. And how do you want to log in to run root commands? Directly as root, exposing the root account to external login attacks (which is why root login is usually disabled)? With a single password for the whole team, which can be leaked or changed? There has to be some way to get root. Why is yours so much better?

        "For users, "sudo this sudo that", people are so used to typing sudo the mistakes its supposed to protect from are void."

        Sudo cannot and does not protect you from not knowing what you're doing. If someone tells you to run a command and you do it without knowing what it's for, the problem is you. Whether you used sudo or su to get to root, or made a script that automatically has root, or any other mechanism, the problem is running the command.

        "Commands to use online are often written with sudo in front of them, copy paste."

        Because they need root access and that's how they run. Again, it's the fault of the user who doesn't check what they're about to do. Sudo is not a sanity checker for commands. It's a privilege management tool.

        "Not many people actually tie sudo to an authenticated/centralised back end, and almost all uses of sudo allows any root commands to be run, not tied to specific tasks you want to give a non admin user."

        So? It lets you give people root privilege in a restricted or unrestricted manner. If you want restrictions, you can have them. Take one of my personal servers: I have full sudo access from a management account. I don't have any users with restricted access right now, but I have allowed friends to have accounts for various purposes, and sometimes I have given them access to a few commands. They're not getting full root access, though. The easy way to do that is sudo.

    3. jdiebdhidbsusbvwbsidnsoskebid Silver badge

      Re: Only sissies use sudo

      Nah, SSH and logging in as root is the equivalent of politely knocking on the door and waiting to be allowed in. Real "men" use sudo to smash in through the walls, get the job done then get the heck out of there.

      As in, I'm going to use sudo here, can't be bothered to type extra commands and log in, I'll just take root access for a few seconds and it'll be fine. I know what I'm doing, what could go wrong? Ooops!

    4. Anonymous Coward
      Alien

      Re: Only sissies use sudo

      I can see it now 'Yes, nice audit person, we just let all our sysadmin people log in as root: they're all completely trustworthy'. 'Do you really? OK, well, here, let me just tear up your banking license for you. When you've put in place a process around elevated access which is even slightly sensible, and when you've submitted to a comprehensive audit of your estate because, right now, fuck knows what's on it, then you can apply for a new one. Bye now'.

      And, before you ask, yes, I used to work for a bank as a systems person, no, no-one has always-on root access in an environment like that, because that would be fucking insane, and yes I know someone quite well who currently does and about half of whose job is looking after sudo configuration for elevated access control, and yes, she has been in meetings all day about this problem.

    5. Dazed and Confused
      Trollface

      Re: Only sissies use sudo

      Real men just use ssh and log in as root.

      Nah! Real men boot with init=/bin/sh; that way you can avoid that systemd stuff too.

      1. jake Silver badge

        Re: Only sissies use sudo

        init=/bin/sh is for wimps. Real men use init=/usr/bin/elvis

        1. Dazed and Confused

          Re: Only sissies use sudo

          Ha!

          The first sysadmin I worked with used vi as his shell.

          1. jake Silver badge

            Re: Only sissies use sudo

            Sensible dude/tte.

            I've been using vi as my shell for decades, at least for certain things. I've mentioned it here on El Reg. I started doing it back when RAM was scarce and running a shell just to run vi seemed wasteful. Still seems wasteful, come to think of it ...

            1. Dazed and Confused

              Re: Only sissies use sudo

              This was back when csh was a newfangled thing and David Korn was yet to extend the Bourne shell; using vi gave you command history and also full recall of your output.

  9. jake Silver badge

    MacOS users:

    Eyeballing the issue, you guys should also be vulnerable.

    Can anybody confirm or deny? (I don't have a Mac available at the moment.)

    I'm not finger-pointing here, just a heads-up.

    1. Irony Deficient

      Re: MacOS users:

      It depends upon the version of macOS — e.g. Mavericks is old enough to be equipped with sudo 1.7.10p7, so it’s not vulnerable to CVE-2021-3156, but it lacks the security fixes of ’p8 and ’p9.

    2. Anonymous Coward
      Big Brother

      Re: MacOS users:

      10.14 is vulnerable:

      kingston$ ln -s /usr/bin/sudo /tmp/sudoedit

      kingston$ /tmp/sudoedit -s '\' $(racket -e '(display (make-string 65536 #\x))')

      Segmentation fault: 11

      kingston$ sudo -l

      Password:

      Sorry, user tfb may not run sudo on kingston.

      kingston$ sudo -V

      Sudo version 1.8.29

      Sudoers policy plugin version 1.8.29

      Sudoers file grammar version 46

      Sudoers I/O plugin version 1.8.29

      So note in particular that I'm not even allowed to run it because I'm not in whatever group it is that's allowed to use it (I have an administrator user for that which is different than my normal login account).

  10. RyokuMas
    Trollface

    Waitaminute...

    I don't get it... I'm sure I've seen plenty of posts on here whenever there's a new security issue found in Windows about how Linux doesn't have security issues...

    1. JakeMS
      Joke

      Re: Waitaminute...

      We don't have security issues, our OS is perfect. This report is fake news!

    2. Warm Braw

      Re: Waitaminute...

      I'm intrigued that most of the comments above have been about the merits of open source versus closed source and about the relative safety of different programming languages.

      The basic flaw is that Unix-like operating systems have an all-or-nothing privilege system whose consequences sudo attempts to mitigate. Any flaws in sudo simply expose problems in the underlying model so it's always going to be a risk.

      Windows has a more modern security model - but it's still anchored in timesharing concepts that go back to the 60s. I'd be inclined to argue that neither Windows nor Linux has intrinsic support for the type of security that's appropriate now - either for a personal device or a server offering dozens of different services to millions of unknown users - and that "bugs" of this kind are here to stay as a consequence.

      1. Anonymous Coward
        Boffin

        Re: Waitaminute...

        The basic flaw is that Unix-like operating systems have an all-or-nothing privilege system whose consequences sudo attempts to mitigate.

        Exactly so. The underlying security model of Unixoid systems wasn't fit for purpose in 1972, and it definitely is not fit for purpose now. (And no, I don't know what the fix is.)

    3. Norman Nescio Silver badge

      Re: Waitaminute...

      I don't get it... I'm sure I've seen plenty of posts on here whenever there's a new security issue found in Windows about how Linux doesn't have security issues...

      Would you be so kind as to provide links to three of those posts (or more, if you like)?

      No true Scotsman..., sorry, I'll start again, no experienced Linux techie would ever claim Linux doesn't have security issues. I expect posts claiming that to be either clueless fanbois, sarcasm (both unappreciated and explicit), or people with a tenuous grasp on reality. Linux, GNU, and FLOSS software in general definitely has security issues, but issues, once found, can be resolved and distributed by anyone: not just the copyright holder. With non-FLOSS software, even clear security problems might not be legally mitigable, and you could well be dependent on a software maintainer that requires some cold hard cash before resolving problems. Which is fine. You can choose to pay. Or not.

  11. Anonymous Coward
    Anonymous Coward

    What I don't understand is why a tool like sudo requires such a constant stream of updates in the first place. Don't add features to applications that need to be bulletproof!

    1. Claptrap314 Silver badge

      I've not kept up, but that is the scary part: I don't think much functionality has been added to sudo, if any, in the last decade or so. Fixing #*&#*$ amazing bugs? Yeah, that's starting to look like an annual event.

  12. Cynic_999

    Open source vs closed source

    Closed source cannot be scrutinized by the general public looking for security flaws, whilst open source can. But does this make it safer? My guess is that of the small number of people who actually do scrutinise source code looking for security flaws, the majority are looking for those flaws in order to exploit them rather than fix them.

    I also guess that of all the people who look at the source code, most are doing so in order to customise the program or to help them develop an unrelated program, rather than to look for security issues.

    Exceptions may be in programs specifically security/privacy related, such as encryption applications and the like, where there would be far more people wanting to ensure that they are genuinely secure and free from "back doors".

  13. odyssey

    Bounds checking is necessary but has a performance hit

    Some commenters think automatic bounds checking solves the problem fully. It doesn't. If you get length calculations wrong in Java, you get a StringIndexOutOfBoundsException, an ArrayIndexOutOfBoundsException, etc. Same for other managed languages.

    It doesn't solve the underlying problem, just mitigates it to a handled crash rather than a vulnerability. That's a lot better though, and most userspace programs should move off C to higher-level languages with better string handling. But if you get the code wrong, you still get an issue - it's just that in a safe language it's an exception that's not exploitable.

    1. doublelayer Silver badge

      Re: Bounds checking is necessary but has a performance hit

      Exactly. That's a benefit. If you get a clear error case, you A) don't have a security hole and B) get told exactly what the problem is so you can fix it. It won't automatically fix the code for you so that it works, but it prevents the bug from doing unintended (or intended) damage and makes it more obvious. It also makes the issue a lot easier to find by fuzzing, because some out-of-bounds accesses won't cause a segfault, but every one of them will trigger an exception.

  14. Anonymous Coward
    Anonymous Coward

    Q: Is CVE-2021-3156 architecture dependent? Are the exploits architecture dependent?

    I've had a quick dig, and didn't quickly find a clear answer.

    The coding errors themselves don't immediately look to me to be architecture-sensitive.

    What about the exploits?

    E.g. 32bit addresses vs 64bit addresses, x86 vs AMD64 vs MIPS vs ARM, etc.

  15. DaemonProcess

    strlen

    size_t cmnd_size = (size_t) (argv[argc - 1] - argv[0]) + strlen(argv[argc - 1]) + 1;

    Well the first obvious issue is they used strlen() and not strnlen() -doh! Isn't there a pre-processor to check for this?

    The second issue is they are subtracting pointer addresses. Taking the address of the last argument, subtracting the address of the first argument and then adding the size of the last one only - what about args in the middle? This code appears to be borked several ways.
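
    For contrast, a rough sketch (purely illustrative, nothing to do with the actual sudo patch, and the function name is invented) of gluing the arguments into one buffer while carrying an explicit running bound, so a miscounted length becomes a refusal instead of a heap overwrite:

    #include <stdio.h>
    #include <string.h>

    /* Join argv[0..argc-1] into dst, space separated, never writing more
     * than dstsize bytes. Returns 0 on success, -1 if it would not fit. */
    static int cat_args(char *dst, size_t dstsize, int argc, char *argv[])
    {
        size_t used = 0;

        if (dstsize == 0)
            return -1;
        for (int i = 0; i < argc; i++) {
            size_t len = strlen(argv[i]);

            /* +1 for the separating space or the final NUL */
            if (len + 1 > dstsize - used)
                return -1;              /* would overflow: refuse */
            memcpy(dst + used, argv[i], len);
            used += len;
            dst[used++] = (i + 1 < argc) ? ' ' : '\0';
        }
        if (argc == 0)
            dst[0] = '\0';
        return 0;
    }

    int main(int argc, char *argv[])
    {
        char buf[256];

        if (cat_args(buf, sizeof buf, argc - 1, argv + 1) == 0)
            printf("%s\n", buf);
        else
            fprintf(stderr, "arguments too long\n");
        return 0;
    }

    The point being that the length and the space left live in one place, rather than being reconstructed later from pointer arithmetic.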

    I suspect that sudo for a command was scrutineered (? is that a word) in more detail than sudo -e / sudoedit.

    This code is nearly as bad as David Korn's initial efforts in su, login, and passwd, which I had to upgrade for pam once upon a time. His code was a horror of #IFDEFs. At least the explicit length check I used had an 'n' in the function call, 25 years ago.

  16. Anonymous Coward
    Mushroom

    Redhat version stupidity

    Here's a nice thing. Redhat, in their vast wisdom, think it is a clever idea to backport changes to things, while not changing the version number. They famously do this for the kernel. Well it turns out they also do this for sudo, too. So if you have an old RHEL box for which sudo reports, say 1.7.x, you may still be vulnerable to this problem. You need to talk to your Redhat support person (if you have one) and find out which specific package versions are vulnerable.
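
    For what it's worth, the package changelog is more honest than the version string. Something like this (illustrative; exact release strings and advisory numbers differ per RHEL version) will usually tell you whether the fix for the CVE has been backported:

    $ rpm -q sudo
    $ rpm -q --changelog sudo | grep CVE-2021-3156

    If the second command prints nothing, assume the worst.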

    Oh Redhat, you are so wonderful.

  17. disk iops

    I'd love to have seen Theo's face when this one came to light. If 'millert' is who I think it is.

  18. jgard

    God, the content of these comments is so bloody predictable and depressing. A Linux / Unix utility hides a serious vulnerability for years and so many people here are blaming the language. A language that is behind many of the most critical and efficient systems ever written. To those of us who were, decades ago, trained in C, these comments aren't only biased, they're incredibly uninformed.

    The first things you learn in C are bounds checking and using malloc properly. These are basics, but people cannot bear to hear that the Linux / Unix community has missed this bug. Instead they try and fit Microsoft whataboutery into the argument, then blame the language.

    If this bug was in Windows, would commentards be blaming the language? Of course not! They would blame Microsoft and their supposedly 'crappy' engineers (even though 99% of readers don't have the smarts and experience to work there). I mean, can you imagine a bug in C code that runs Windows? Would C be blamed? Not a chance.

    Ken Thompson, father of Unix and grep, among others, initially coded both of them in assembly. Do you think Ken (a computing god) would blame the language? Absolutely not.

    I'm a huge Linux fan, I use it everywhere, but I have to be honest when an issue is revealed, and I don't understand why many people aren't more objective and rational in this debate. This isn't a failure of the language; it's down to process, tooling and individual factors.

    To blame C for this is not sensible, but to use it to discredit Windows as well is positively absurd. As the great Richard Feynman once said: “The first principle is that you must not fool yourself — and you are the easiest person to fool.".

    1. odyssey

      But it seems to keep happening. It happens dozens of times a year including in code maintained by very experienced developers. The sudo maintainers are not n00bs. At some point we need to acknowledge that the C language makes this kind of error too easy to make, and that for all userspace programs some safer alternative is a priority.

  19. bigtreeman

    doas

    Glad I use doas now instead of sudo ;)

    But at least when a Linux vulnerability gets discovered it gets patched and an update is quickly available.
