Rust projects open to denial of service thanks to Hyper mistakes

Security researchers have identified multiple vulnerabilities arising from careless use of the Rust Hyper package, a very popular library for handling HTTP requests. Security firm JFrog found that an undisclosed number of projects incorporating Hyper, like Axum, Salvo and conduit-hyper, were susceptible to denial of service …

  1. Anonymous Coward

    RTFM

    https://bulgier.net/Q209354.htm

    1. Anonymous Coward

      Re: RTFM

      " on your MSDN CD's,"

      CD's

      Right - so what belongs to this CD?

  2. emfiliane

    Maybe, just maaaaaaaaybe, the library should fail safe, instead of failing open. Make a limit the default and require anyone needing more to explicitly state that, instead of starting with no limit and an obvious resource exhaustion problem, but then burying the proscription against that in the individual function docs.

    Saying "don't do this" but then making it the default behavior is one of the most asinine security practices ever. What exactly did they expect to happen?

    1. Zygous

      The warning was hardly buried - it's right there on the documentation page (https://docs.rs/hyper/latest/hyper/body/fn.to_bytes.html) between the function's signature and the example of how to use it. It's not unreasonable to expect users to think about what they're doing when calling a function that clearly could allocate a load of memory.

      The article's mention of the content-length header is a little misleading: the to_bytes function doesn’t implement any length checks so it doesn't matter what's in the content-length header if the user's code doesn't check that itself.
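
      For illustration, this is roughly the kind of check the docs are asking for - a sketch against the hyper 0.14-era API, where the 1 MiB cap is an arbitrary number picked for the example:

      use hyper::body::{Body, HttpBody};

      const MAX_BODY: u64 = 1024 * 1024; // arbitrary cap for this sketch

      async fn read_body_capped(body: Body) -> Result<hyper::body::Bytes, &'static str> {
          // size_hint().upper() reflects the declared content-length;
          // a chunked body has no upper bound, so None is treated as too big.
          match body.size_hint().upper() {
              Some(n) if n <= MAX_BODY => {
                  hyper::body::to_bytes(body).await.map_err(|_| "read failed")
              }
              _ => Err("body too large or unbounded"),
          }
      }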

      1. Timop

        I am glad people always read manuals and terms & conditions first and definitely will not cut any corners.

        IMHO these checks should be enforced both by Hyper and by the people using it in their code. That is the only way to make sure these things happen on a scale of once in a million.

        1. fg_swe Silver badge

          Benign

          A deterministic (read: easy to debug) crash from a DoS attack is benign compared to Malware Injection and Reconnaissance For Months.

          The only exception to this would be military communication systems, where downtime could mean losing a war. As far as I know, they have their software engineers "embedded" or on very short call. Their most important systems are created by "themselves", which means they can fix any DoS issue on very short notice.

          1. garwhale

            Re: Benign

            And if the process crashing is driving a car, flying a plane, controlling a factory or operating on a patient?

            1. fg_swe Silver badge

              Re: Benign

              In this case the software engineering organization must PROVE this cannot happen. This usually means the process cannot allocate or deallocate heap memory (except during boot-up and stand-down), as the heap is too unpredictable for hard realtime systems.

              Of course the generation of this Proof might be hard to do by hand. If the language supports non-nullable pointers the proof is easier. A source code inspection by experienced engineers combined with all the testing of the V model might (or might not, if the code is too complex) be able to generate this proof. Any relevant source code change must trigger a regeneration of this proof.

              Also see SPARK Ada and similar.
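
              To illustrate the style in Rust (a sketch with invented names): allocate everything at boot-up, then never touch the heap in the control loop.

              // All capacity is created once at boot-up; the hot path only reuses it.
              struct SampleBuf {
                  data: Vec<f64>,
                  len: usize,
              }

              impl SampleBuf {
                  fn with_capacity(cap: usize) -> Self {
                      Self { data: vec![0.0; cap], len: 0 } // boot-up allocation
                  }

                  // Hot-path insert: fixed cost, no heap traffic, explicit failure.
                  fn push(&mut self, sample: f64) -> Result<(), &'static str> {
                      if self.len == self.data.len() {
                          return Err("buffer full"); // never grow in the control loop
                      }
                      self.data[self.len] = sample;
                      self.len += 1;
                      Ok(())
                  }
              }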

      2. LionelB Silver badge

        > It's not unreasonable to expect users to think about what they're doing when calling a function [remainder of sentence optional]

        Apparently it is.

      3. Plest Silver badge
        Happy

        Are you suggesting some people might have just googled how to use the package then just copy/pasted the code they found without running proper tests or even making vague attempts to validate the sample code? Heaven forfend!

      4. Anonymous Coward

        Richard12 rebutted this well below

        One big part of why we are moving to Rust is to get past these sorts of problems. "It's fine to hand the monkeys a loaded gun" isn't an argument we should be leaning on, and as Richard12 pointed out, people making libraries for Rust should also raise their game to make safe-by-default code.

        If people need to override the safeties so be it, let them read how to point a loaded gun at their foot in the docs, not blow their foot off and then read the docs to find out why they have a bleeding stump.

        It's also fair to expect that people bothering to build something in Rust as opposed to Perl or ANSI C do so because they want and need greater reliability, stability, or security, or are willing to put the extra work in.

        Not that you are wrong about where the buck should stop when you are writing code, but the best of us will brain fade and forget gotchas like that even after using them correctly for 2000 other lines. And even if I read the notes and understood them, there is no guarantee I will remember them when I come back to make a point rev edit on the same code 3 years from now. Or someone else may have to tweak the code you wrote.

    2. Michael Wojcik Silver badge

      the library should fail safe

      I'm inclined to agree, though with a DoS vulnerability I tend to give the library a bit more slack. And HTTP provides so many avenues for DoS it's hard to get too worked up about any one.

      There is the problem of where to set the default. In my HTTP server software, I picked a default maximum message size that was large enough for all our existing use cases; but I knew what those use cases were. (Applications using the server can override the default.) For a generic package, it's probably not a bad idea to pick a default size, but you know you'll be hearing a lot of complaints from people who didn't realize there was a default, or think your default is wrong.
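
      Roughly this shape, sketched in Rust - the names and the 8 MiB figure are placeholders, not my real defaults:

      pub struct MessageLimits {
          pub max_message_bytes: u64,
      }

      impl Default for MessageLimits {
          fn default() -> Self {
              // Large enough for the known use cases; applications can override.
              Self { max_message_bytes: 8 * 1024 * 1024 }
          }
      }

      fn main() {
          let defaults = MessageLimits::default();
          // An application that knows better states its limit explicitly.
          let big = MessageLimits { max_message_bytes: 64 * 1024 * 1024 };
          assert!(big.max_message_bytes > defaults.max_message_bytes);
      }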

  3. Richard 12 Silver badge

    So Rust is not memory safe then

    When you tell people they don't have to think about something, they won't.

    This is a bug in Hyper, not the developers using it.

    Everything has a maximum. "Unlimited" is never true.

    Either handle the case of "insufficient resources", set a default maximum or require library users to set a maximum.

    And then handle "insufficient resources" anyway, because it's going to happen sooner or later.

    Does Rust really just terminate the process because a memory allocation failed?

    1. Michael Hoffmann Silver badge
      Unhappy

      Re: So Rust is not memory safe then

      I concur and my jaw dropped reading the article. All the hyper around Rust is that it supposedly protects the developer from mistakes like this. I'm sorry, writing "don't do this" in TFM is not what I would associate with the future of the brave new world of magically secure systems programming.

      For those who can't be bothered looking up the function in question, it says

      "Care needs to be taken if the remote is untrusted. The function doesn’t implement any length checks and a malicious peer might make it consume arbitrary amounts of memory."

      To which I respond "then what did we gain from Ye Olde C Days?"

      1. b0llchit Silver badge
        FAIL

        Re: So Rust is not memory safe then

        "then what did we gain from Ye Olde C Days?"

        We have gained a new programming language with some advantages and some disadvantages. Programmers still need to be programmers and fools will still be fools. We will have a new body of code, which needs to be handled. This body of code will be just as vulnerable to the programmer's logic errors, omissions and short-cuts as code in other programming languages.

        Basically, everything changed and everything stayed the same.

        1. Anonymous Coward

          Re: everything changed and everything stayed the same.

          ... and everyone's guilty, but no-one's to blame?

        2. Will Godfrey Silver badge
          Facepalm

          Re: So Rust is not memory safe then

          As the French would say:

          Plus ça change, plus c'est la même chose (the more it changes, the more it stays the same)

      2. SimonLoki

        Re: So Rust is not memory safe then

        This is not an issue of memory safety. If the length in the header is too long, the process will panic and stop, which is defined behaviour. It is a problem because it can lead to a DoS, but it does not invoke undefined behaviour (for example, in 'Ye Olde C Days', a failure to check an input length before reading into a fixed-length buffer could cause a buffer over-run - this is not possible in safe Rust).
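
        A tiny sketch of that difference, where the index stands in for an unchecked length off the wire:

        fn main() {
            let buf = [0u8; 4];
            // Pretend this index arrived in a header and was never validated.
            let i: usize = std::env::args().nth(1)
                .and_then(|s| s.parse().ok())
                .unwrap_or(10);
            // Safe Rust panics with "index out of bounds" here: a defined,
            // immediate stop, never a silent overread as in unchecked C.
            let _b = buf[i];
        }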

        1. werdsmith Silver badge

          Re: So Rust is not memory safe then

          The looking-over-their-shoulders C programmers will jump on anything like this to comfort themselves.

      3. fg_swe Silver badge

        Rust Not Different Here From C, C++ or Java

        See http://sappeur.ddnss.de/discussion.html, section D9

        1. Anonymous Coward

          Re: Rust Not Different Here From C, C++ or Java

          Actually, I think this shines a light on one of the big differences: the Rust world would benefit from treating the memory-safety features as a good jumping-off point to write code that is secure, stable, maintainable, and reliable, even if it takes a bit more work or planning.

          Yes, Rust does not fix all programming issues for you automatically. That does not mean the community should let people slide on stuff because the core doesn't explicitly prevent it. That's a crap argument.

          Also, Java is an odd case, as it TRIES to handcuff its users in the name of security, yet the horrors in the JVM code give any given script kiddie an endless parade of LPE and code-execution bugs by attacking the JVM directly.

          Good Rust should be safe whenever it can be, unless you have a really good reason for it not to be. That way you empower programmers to write good tight code without handcuffing them, and make it easier to spot insecure operations during QA/Auditing.

      4. Adrian Bool

        Re: So Rust is not memory safe then

        > To which I respond "then what did we gain from Ye Olde C Days?"

        The gain is that, in Rust, this vulnerability can't be used for remote code execution or data leaks — the impact is "limited" to DoS. I fully agree that this API should be defined in a more robust manner though... DoS is still bad...

    2. Hans Neeson-Bumpsadese Silver badge

      Re: So Rust is not memory safe then

      Everything has a maximum. "Unlimited" is never true.

      I refer you to Einstein's quote about the universe and human stupidity.

    3. claimed

      Re: So Rust is not memory safe then

      Panic in Rust is not an old-school interrupt 'panic and shit the bed while the disk is still spinning'; it will unwind the stack etc., so it is still memory safe. The library developers have called panic, which is essentially a process exit. So no 'danger' (except your server goes down, oops).

      e.g. in an if branch:

      panic!("this will never happen")

      Still a crap design, I totally agree with safe defaults... why allow a trivial DoS? Just set the OWASP-recommended limit by default but allow configuring up to the platform max.
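
      To show the unwinding concretely, a sketch (assuming the default panic = unwind build setting; catch_unwind is only there to observe the result):

      struct Guard;

      impl Drop for Guard {
          fn drop(&mut self) {
              // Runs during unwinding, before the thread dies: files close,
              // locks release, memory frees.
              println!("cleaned up during unwind");
          }
      }

      fn main() {
          let result = std::panic::catch_unwind(|| {
              let _g = Guard;
              panic!("this will never happen");
          });
          assert!(result.is_err()); // panic observed; no memory unsafety involved
      }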

      1. fg_swe Silver badge

        Thanks

        Your reasoning is the proper one. A deterministic crash is much better than Silent Subversion. See http://sappeur.ddnss.de/discussion.html section D9

      2. Dan 55 Silver badge
        Meh

        Re: So Rust is not memory safe then

        It doesn't matter if Rust unwinds the stack or not before exiting when it panics; the OS recovers the process's memory when it exits anyway.

        1. claimed
          Happy

          Re: So Rust is not memory safe then

          Don't love the emoji

          Which OS? Not all do. The big boys, yes, but not all. So Rust adds memory safe guarantees which don't rely on someone else fixing my logic bombs... Contrary to the OP

          1. claimed
            Happy

            Re: So Rust is not memory safe then

            Try:

            shmget, shmctl, shm_overview

            Shared memory will not always be cleared by the OS (e.g. Linux), so you can create a memory leak that survives process exit and requires a machine restart... unless you're using Rust (and your drop handler correctly closes the resource), in which case it will be cleaned up on any exit, including panics

    4. This post has been deleted by its author

      1. fg_swe Silver badge

        Out Of Memory in C, C++, Java, Rust

        In all the above languages, you will get a deterministic crash if heap allocation fails. You either get a NULL pointer from malloc() or new, or some sort of OutOfMemoryException. Accessing a NULL pointer typically creates (some sort of) SIGSEGV and stops the program. OutOfMemoryException typically stops the thread.

        This is exactly what you want. A deterministic, debuggable crash from a programming error/cybernetic attack. Much better than Silent Subversion from e.g. a buffer overflow.

        How else could an out of memory condition be handled ?

        (this applies to Windows, Linux, BSD, HPUX, Solaris, AIX, but maybe not to embedded systems)

        1. Richard 12 Silver badge

          Incorrect

          C, C++, C# and Java do not crash on out of memory.

          C malloc() returns NULL, allowing the immediate caller to handle the error - this is why it's unsafe to blindly use the result of malloc.

          C++/C#/Java "new" throw an exception, unwinding the stack and automatically cleaning up (freeing memory) until the first appropriate place handles the error. This is why it is safe to blindly use the result of new.

          I think Python does the same.

          None of these languages kill the application unless the developer has decided there isn't a worthwhile way to handle the error.

          In the case of an HTTP transfer, the most sane option is generally to log that a transfer required more memory than is available, and cancel the transfer.
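
          And Rust can do exactly that when asked: the fallible allocation API (stable since Rust 1.57) returns an error instead of aborting. A sketch:

          use std::collections::TryReserveError;

          // Reserve the buffer for a transfer; on allocation failure the caller
          // gets an Err to log, and can cancel the transfer instead of dying.
          fn buffer_for_transfer(declared_len: usize) -> Result<Vec<u8>, TryReserveError> {
              let mut buf = Vec::new();
              buf.try_reserve_exact(declared_len)?; // unlike Vec::with_capacity, this can fail
              Ok(buf)
          }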

          1. fg_swe Silver badge

            Re: Incorrect

            planet* p = malloc(sizeof(planet));

            p->weight = 1E30;

            That will fail deterministically, debuggably and SECURELY (SIGSEGV plus core dump) on all IT operating systems I know of, if memory is exhausted. Of course you can (and sometimes should) check the return value of malloc, but if you do not do it, you will not have a security problem. That was my point.

            1. Richard 12 Silver badge

              Re: Incorrect

              That is simply wrong.

              The malloc() version invokes undefined behaviour. The C/C++ compiler is permitted to assume that any pointer which is dereferenced without a check is not null, and can optimise based on that assumption.

              The C++/C#/Java "new" version does not deref the pointer at all, it throws a memory allocation exception.

              I also find your belief that applications should crash instead of handling errors concerning.

              I sincerely hope you do not write serious libraries.

              You may wish to think about the consequences of refusing to handle errors.

              On the bright side, it does mean nobody needs concern themselves with Sappeur, and can file it along with brainf**k.

              1. fg_swe Silver badge

                NULL Pointers On Unix and Windows, Sappeur

                During my 20 years of software engineering experience on HP-UX, AIX, Solaris, Windows, MacOS and Linux I never had the problem of a NULL pointer not leading to a deterministic crash, exactly where the FIRST pointer dereferencing happens.

                This is because these operating systems by default allocate "invalid" MMU pages from address 0 to something like address 64000. In the example above, the planet struct would have to be bigger than 64K to lead to an undetected error.

                This is not the case in embedded systems, though. Sappeur currently targets "only" all kinds of Unixoid and Windows OSs with MMUs. From Solaris to ELBRUS Linux. One key assumption of Sappeur is that its smart pointers are initialized to NULL and will create a deterministic SIGSEGV (or Windows equivalent) on dereferencing a NULL pointer. This assumption is important for performance, safety and security reasons.

                So you are "right" that this mechanism is not safe and secure for "huge" Sappeur classes.

                In 10 years of programming in Sappeur I never had such a case. Creating a class with 64K size per instance is rather unusual and I never needed this. In case of arrays, which can of course be much bigger than 64K, the SPRArray._sz member is at the beginning of the data structure, well within the first few dozen octets. Each array access will be preceded by accessing _sz for index checking. This will then generate the deterministic SIGSEGV and a debuggable core.

                So in theory you have a point, but not in the real world I have seen. In a future version of Sappeur I might consider adding a check that classes cannot be larger than 64K, thereby eliminating the problem in principle. Larger classes would then require non-nullable pointers, something also to be added to the language.

                1. Dan 55 Silver badge

                  Re: NULL Pointers On Unix and Windows, Sappeur

                  Page sizes on x86, x64, MIPS, and ARM are 4K, not 64K. You appear to be arguing that there aren't many structures > 64K so it doesn't matter as it'll segfault anyway, there are many more structures > 4K.

                  Really not a good advert for your supposedly memory safe language.

                  1. fg_swe Silver badge

                    Re: NULL Pointers On Unix and Windows, Sappeur

                    Please see my other post with the test program in C. The invalid memory space starting from 0 is actually many gigabytes in size on 64 bit machines. This means that any NULL pointer to an object smaller than that will generate a SIGSEGV exactly where the bug is. A debuggable core file will be dumped. Sappeur Arrays are not affected by this limit.

                    Conclusion: for all remotely sane programs (object size lower than 1 000 000 000 octets), NULL pointers will generate a SIGSEGV.

                    If you do not believe me, please perform your own tests and prove me wrong.

            2. Richard 12 Silver badge

              Re: Incorrect

              To expand on this:

              The malloc version is not debuggable.

              The compiler is permitted to re-order things on lines after the null pointer deref to happen before that deref, and things before the deref to after it - because null pointers cannot be derefed.

              The CPU is allowed to do this as well, so even inspection of the raw disassembly won't help you debug.

              Thus code that is "impossible" to reach, may in fact be executed before the SIGSEGV occurs. And code that "must" have executed before, may not have done.

              Depending on what those lines do, this may result (and indeed has resulted) in serious security problems.

              1. fg_swe Silver badge

                Re: Incorrect

                As I wrote above, I have never seen what you describe happen in the real world. (On Unixes and Windows)

                And of course I had plenty of cases of forgetting to initialize a pointer during a development session and always got the deterministic SIGSEGV exactly at first dereferencing.

                You are right though that it is a problem in contrived examples.

            3. Dan 55 Silver badge
              FAIL

              Re: Incorrect

              This is not guaranteed to fail and C/C++ leaves the behaviour up to the platform.

              *NULL will SIGSEGV on OSes which don't assign a physical address to virtual memory location 0, but NULL->weight, i.e. *(0 + offsetof(planet, weight)), could point to a virtual memory location which is mapped to a physical address, and if it does the assignment would work.

              Then there are old and embedded platforms which do no virtual to physical memory mapping. If you dereference NULL, you dereference address 0, no problem.

              Surely someone who wrote a supposedly memory safe language on top of C++ should know this?

              1. fg_swe Silver badge

                Re: Incorrect

                Please see my other comments to this issue. You are right in theory, but not in practice.

                1. Dan 55 Silver badge
                  Facepalm

                  Re: Incorrect

                  Where practice means only using structures < 4K in size and hoping the OS and MMU will catch everything for your supposedly memory safe language?

                  Seriously?

                  Not sure what even to say to that.

                  1. fg_swe Silver badge

                    Re: Incorrect

                    Please use the following test program to see that in practice your concern is not an issue. Apparently the "memory guard space" is on the order of 140 735 371 892 940 octets (Linux 64 bit).

                    On MacOS, it seems to be about 6 000 000 000.

                    This also aligns with my practical experience writing software in C, C++ and Sappeur.

                    #include <stdlib.h>
                    #include <stdio.h>
                    #include <string.h>

                    struct MemTest
                    {
                        char buffer[650000];
                        char* str;
                        char buffer2[6500000];
                    };

                    int main(int argc, char** argv)
                    {
                        int x;
                        char* str1 = malloc(100000000);
                        printf("address of str1: %lli\n", (long long int)str1);
                        printf("address of x: %lli\n", (long long int)&x);
                        strcpy(str1, "abc");
                        printf("%s\n", str1);
                        free(str1);
                        str1 = NULL;

                        /* NULL struct pointer: mtp->str sits 650000 bytes past
                           address 0, yet the write below still faults on the
                           OSs discussed above. */
                        struct MemTest* mtp = NULL;
                        mtp->str = malloc(10);
                        return 1;
                    }

                    1. Richard 12 Silver badge
                      Facepalm

                      Re: Incorrect

                      Sigh. The above code proves nothing.

                      At best, it demonstrates that a particular implementation of malloc() and stack happens to often return pointers with large virtual addresses.

                      It says nothing at all about what is or may be at low (virtual) addresses, whether a process could deref them without crashing, or whether that crash would occur "in order" in a real application.

                      That is entirely implementation dependent.

                      For example, a fairly common layout for microcontrollers is that hardware and code occupy the lowest pages in the memory map, the stack grows down from the top of RAM and the heap grows up from the lowest RAM address. All of them jumping over address blocks that don't exist.

                      On larger systems, technologies like ASLR mean that behaviour is intentionally non-deterministic. So it might crash today while silently corrupting some code or variable tomorrow.

                      You'll often get lucky, but you've built a footgun by not following the rules of the language.

                      - And yes, this is one of the reasons why modern C++ code doesn't use malloc unless absolutely necessary.

                      1. fg_swe Silver badge

                        Re: Incorrect

                        I never claimed the "address 0 invalid space" exists in embedded systems; rather I specifically claimed it to exist for all sorts of modern Unix(es) and Windows. This invalid address space exists, has existed for a long time and will detect NULL pointers reliably and at zero runtime cost.

                        ASLR will randomize addresses outside the "invalid space" and does not matter here.

                        Again, please post a demo program for Windows, Linux or MacOS, which will prove me wrong.

                        1. Michael Wojcik Silver badge

                          Re: Incorrect

                          please post a demo program for Windows, Linux or MacOS, which will prove me wrong

                          Because those are the only environments for which there are C implementations?

                          The problem here is that it's obvious to anyone with a passing familiarity with the actual C language that you're wrong, and you're wrong in the worst way: you're arguing about your anecdotal experience with a handful of implementations, rather than discussing what the actual language (which is defined in ISO 9899, not in your fevered imagination) says.

                          Back in the old days on comp.lang.c there was no shortage of people like you. They were all wrong as well.

                          1. fg_swe Silver badge

                            Heresy ?

                            I get it, I am wrong in theory. But not in practice for the OSs/environments Sappeur currently targets. I specifically said so from the beginning. Windows, Linux, BSDs, MacOS, Solaris, HP-UX, AIX - they will all reliably generate a SIGSEGV (or equivalent) when accessing a NULL pointer. That's deterministic behaviour as required for Memory Safety.

                            The other environments (small embedded systems without an MMU) I currently do not target.

        2. garwhale

          Re: Out Of Memory in C, C++, Java, Rust

          No. If you allocate memory in chunks in a loop, each allocation should succeed until one fails, at which point it should return an error, not crash the process.

    5. fg_swe Silver badge

      FALSE

      Please look at http://sappeur.ddnss.de/discussion.html , section D9 for why you are wrong.

  4. fg_swe Silver badge

    Resource Limit Management != Memory Safety

    RAM allocation, database connection counts, file handles, the number of threads etc must all be managed by the application programmer. There is no sensible way an automatic runtime mechanism can do this for the app programmer. Except, of course, stopping the thread or program upon resource exhaustion.

    So - the application programmer must think about all the resources he allocates in his program. For example, an http server must reject too many parallel requests (HTTP 429 Too Many Requests). An application using database handles must limit the number of database connections by some sort of pooling and semaphores. No automatic mechanism on the runtime/language level can replace programmer reasoning here (except maybe some sort of database pool which blocks until a connection becomes free).
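
    For example, sketched with tokio's Semaphore, where MAX_IN_FLIGHT and the bare status codes are placeholders for whatever the application really uses:

    use std::sync::Arc;
    use tokio::sync::Semaphore;

    const MAX_IN_FLIGHT: usize = 256; // placeholder capacity

    async fn handle_request(sem: Arc<Semaphore>) -> u16 {
        match sem.try_acquire() {
            // Permit held for the duration of the work, released on drop.
            Ok(_permit) => {
                // ... do the actual work here ...
                200
            }
            // Saturated: shed load with 429 rather than queue without bound.
            Err(_) => 429,
        }
    }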

    Memory Safety is not the paradise of programming, it "just" eliminates an ugly kind of cancer.

    Software Engineering is a highly complex craft+science with lots of aspects. If it were simple, we would not earn good money on it.

    1. Anonymous Coward

      Re: Resource Limit Management != Memory Safety

      I feel like this post is a mix of hits and misses. While I agree that the programmer needs to be aware of these issues, and build the exception code to handle them correctly for their application, all the examples you give are firing back a standard return code.

      The core of what sticks in my throat in your arguments is the idea that the only way a piece of code could operate is by throwing a segfault, with the huge system hit that entails. Yeah, you can catch it and continue execution, but you really need to keep shit like that to a minimum if you want a stable and performant system. Doubly so when input is coming from off-box, as the overwhelming majority will be. And unless you mandate an external preprocessor to filter all of those requests and sanitize them, you create the exact problem raised, where attacker-controlled requests can flood the system hammering segfaults till the OS cries mercy, potentially taking down every process on the system.

      So yeah, I'd rather see them build this function in a way that helps the programmer build their code to catch and handle a regular return value to manage this kind of error, not poke the OS in the eye. Not like the Hyper team should be burned at the stake, but like all the rest of us, there may be better ways and we should learn from them and adapt.

      1. fg_swe Silver badge

        Re: Resource Limit Management != Memory Safety

        Again: you have a programming error, which can lead to RAM exhaustion. An attacker comes along and triggers a SIGSEGV from that. Program stops, core is dumped. Then you, the senior engineer, attach gdb and find the error location. Very soon you have found the bug, fixed it and compiled the new program version. System running again after 33 minutes.

        No information disclosed to attacker, no effectors manipulated, downtime 33 minutes. Great.

  5. Brewster's Angle Grinder Silver badge

    I love how sweetly the documentation asserts "Care needs to be taken if the remote is untrusted."

    If? IFFF?! It's a remote end. I can count on the fingers of zero hands the number of times a remote end is trustable.

    1. fg_swe Silver badge

      Intranet

      If you create an http server for a "known population of clients", then maybe you do not need to care about DoS attacks. Note that this is not true for subversion opportunities, as you must always assume one of your intranet machines is compromised.

      Actually, this is how things like Oracle are operated - they are locked behind a firewall as they would be easily hacked if exposed to the wild world of internet or even the entire intranet.

      1. b0llchit Silver badge
        Facepalm

        Re: Intranet

        Yeah,... security in depth is an ancient concept but rarely practised...

        No client can be trusted (not even the programmer). You need to take care of all the scenarios. You can (D)DOS yourself on your own intranet too.

        1. Arthur the cat Silver badge

          Re: Intranet

          You can (D)DOS yourself on your own intranet too.

          I did exactly that 3 days ago by overeager changes to my DNS recursive resolvers. Oops and much embarrassment (especially with 30+ years experience with DNS). Fortunately it was easily recovered from.

  6. Howard Sway Silver badge

    the same root cause – forgetting to set proper limits on HTTP requests when using the Hyper library

    Just bad library design. If not setting limits causes a problem, have a default limit of 0, forcing the user to set one. Although these sorts of checks that everybody has to do should really be inside the library - the whole point is that libraries exist to take away tedious work and wrap it up in nice, easy to use packages, not to make programmers worry about implementation details. Make the 90% use case the easiest, always.
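
    A default of 0 would look something like this - everything here is invented for illustration:

    struct BodyLimit(u64);

    impl Default for BodyLimit {
        fn default() -> Self {
            BodyLimit(0) // nothing passes until the user makes a choice
        }
    }

    impl BodyLimit {
        fn check(&self, declared_len: u64) -> Result<(), String> {
            if declared_len > self.0 {
                return Err(format!("{declared_len} bytes exceeds limit {}", self.0));
            }
            Ok(())
        }
    }

    fn main() {
        // The lazy path fails loudly on the first real request...
        assert!(BodyLimit::default().check(512).is_err());
        // ...so the user is forced to state a limit up front.
        assert!(BodyLimit(1024).check(512).is_ok());
    }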

  7. garwhale

    Rust processes should not crash due to memory allocation faults

    1. If the allocator cannot allocate enough memory, it should return an error, not crash the process.

    2. Failing to define the maximum length of something should be treated as an error, if no defaults are set.

    If (1) happens, then (2) should return an error.
