HTTP/2 'Rapid Reset' zero-day exploited in biggest DDoS deluge seen yet

A zero-day vulnerability in the HTTP/2 protocol was exploited to launch the largest distributed denial-of-service (DDoS) attack on record, according to Cloudflare. Surpassing 398 million requests per second, the attack is believed to be more than five times larger than the previous record of 71 million requests per second. …

  1. ZenaB

    So there's nothing particularly novel about this attack then, it's just the classic "tie up server resources until it can't function" DoS. The question really is wtf is generating those 300 million requests!?

    1. diodesign (Written by Reg staff) Silver badge

      Speed

      Well, it's novel in that it uses HTTP/2 stream resets and jams as many requests as it can down each TCP connection. It's really only limited by the attacker's sending rate, and that seems to be enough to overwhelm server-side software.

      C.

      1. Anonymous Coward

        Re: Speed

        Another point is that the method is efficient in that one client can tie up a large amount of server resources, but it is also in the class of direct attacks. Compare this to the class of reflection or amplification attacks that allow a similar number of hosts to leverage third parties to mount very large attacks.

        This attack is significant in several ways: the method does not rely on a misconfigured third party as amplification or reflection attacks do, and unless they can pipe it through a proxy server or something similar the attack is direct, so at a glance this is about as subtle as the low orbit ion cannon of yesteryear.

        Don't try this one at home skiddies, you may get a knock on the door.

    2. Grogan Silver badge

      What is generating those 300 million requests? The HTTP/2 clients in the botnet. It relies on the cancellation of requests while keeping the connection open... since the requests are cancelled, the clients can keep sending them at a high rate without running afoul of the concurrent stream limits.

      But yes, this is not clever; it's not so much an exploit as something inherent in the design of the protocol. It's not something that will be "fixed" so much as mitigated with new behaviour in the server daemons, along the lines of the sketch below.
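
      For illustration only, here's a rough sketch in Go of the sort of per-connection accounting a server daemon could do; the resetTracker type, the 100-resets-per-window threshold and the one-second window are all invented for this example rather than taken from any real server.

        package main

        import (
            "fmt"
            "time"
        )

        // resetTracker is a hypothetical per-connection counter: if a client
        // cancels streams faster than some threshold, the whole TCP connection
        // gets torn down (GOAWAY) rather than policing individual streams.
        type resetTracker struct {
            windowStart time.Time
            resets      int
        }

        // Made-up numbers, purely for illustration.
        const (
            maxResetsPerWindow = 100
            window             = time.Second
        )

        // onRstStream would be called for each RST_STREAM frame received.
        // It returns true when the connection should be dropped.
        func (t *resetTracker) onRstStream(now time.Time) bool {
            if now.Sub(t.windowStart) > window {
                t.windowStart, t.resets = now, 0
            }
            t.resets++
            return t.resets > maxResetsPerWindow
        }

        func main() {
            t := &resetTracker{windowStart: time.Now()}
            for i := 0; i < 150; i++ {
                if t.onRstStream(time.Now()) {
                    fmt.Println("abusive connection: send GOAWAY and close after", i+1, "resets")
                    return
                }
            }
        }

      Real servers obviously track rather more than one counter, but the principle is the same: it's the connection, not the individual stream, that gets punished.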

  2. ecofeco Silver badge

    I wondered what was going on

    Been seeing lag oddities in my world. Now I know why.

  3. Kevin McMurtrie Silver badge

    A cancel request?

    This sounds like a feature created by an overly eager intern. A more seasoned coder would demand that clients consume what they request.

    HTTP/1.1 actually got this right. You can send requests as fast as you'd like and the server will send responses as fast as it likes, in matching order. People only think that they need HTTP/2 for this because most HTTP/1.1 libraries are too crufty to handle pipelining elegantly.
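
    For what it's worth, here's a small Go sketch of what that pipelining looks like on the wire. It spins up a throwaway local server purely so the example is self-contained; the paths and Host value are arbitrary.

      package main

      import (
          "bufio"
          "fmt"
          "io"
          "net"
          "net/http"
          "net/http/httptest"
      )

      func main() {
          // Throwaway local HTTP/1.1 server; the interesting part is the raw client below.
          srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
              fmt.Fprintf(w, "hello from %s", r.URL.Path)
          }))
          defer srv.Close()

          conn, err := net.Dial("tcp", srv.Listener.Addr().String())
          if err != nil {
              panic(err)
          }
          defer conn.Close()

          // Pipelining: both requests go down the wire back to back...
          fmt.Fprint(conn, "GET /first HTTP/1.1\r\nHost: example\r\n\r\n"+
              "GET /second HTTP/1.1\r\nHost: example\r\nConnection: close\r\n\r\n")

          // ...and the responses come back in matching order.
          br := bufio.NewReader(conn)
          for i := 0; i < 2; i++ {
              resp, err := http.ReadResponse(br, nil)
              if err != nil {
                  panic(err)
              }
              body, _ := io.ReadAll(resp.Body)
              resp.Body.Close()
              fmt.Printf("%s: %s\n", resp.Status, body)
          }
      }

    Head-of-line blocking on that single, strictly ordered pipeline is of course exactly what HTTP/2's multiplexing was meant to fix.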

    1. Dan 55 Silver badge

      Re: A cancel request?

      This sounds like a feature created by an overly eager intern.

      Well, HTTP/2 is based on Google's SPDY, so the only thing they really thought about was how fast they could serve adverts.

    2. abend0c4 Silver badge

      Re: A cancel request?

      In the absence of ingress control (and even to some extent in its presence) DDoS is always going to be a risk, but you can mitigate the risk with backpressure mechanisms and avoiding where possible the amplification of potential attacks.

      Unfortunately this is a straightforward protocol bug that manages to make both mistakes at the same time. For backpressure to work, it has to be connected to the actual resource being consumed: if you dispatch a task on reception of a packet, then it's the completion of the task that reopens the request window, not its initiation. The effect of this bug is that you can essentially dispatch new tasks as fast as the wire accepts requests without any backpressure being applied.

      And because each of these simple requests can result in the fetching of significant amounts of data, the work involved in preparing the response is many times that involved in processing the request. Further, the bug is fully exploitable by a simple user application using the protocol stack provided.
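
      A minimal Go sketch of the completion-driven backpressure described above; maxInFlight and the sleep standing in for the real work are invented for illustration.

        package main

        import (
            "fmt"
            "sync"
            "time"
        )

        const maxInFlight = 4 // made-up limit, purely for illustration

        func main() {
            slots := make(chan struct{}, maxInFlight) // counting semaphore
            var wg sync.WaitGroup

            for req := 1; req <= 20; req++ {
                // Taking a slot here models dispatching a task on reception of a
                // packet: once maxInFlight tasks are running, reads from the wire stall.
                slots <- struct{}{}
                wg.Add(1)
                go func(id int) {
                    defer wg.Done()
                    defer func() { <-slots }() // the window reopens only when the work completes
                    time.Sleep(50 * time.Millisecond) // stand-in for preparing the response
                    fmt.Println("finished request", id)
                }(req)
            }
            wg.Wait()
        }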

      Designing protocols is hard and implementing them properly is harder still, particularly when the various parts of the implementation are not tightly coupled. We depend rather more than we might acknowledge on the good behaviour of "known implementations" safely tucked away in kernel space (e.g. TCP). It will be interesting to see how HTTP/3 - where the transport and application layer protocols will all effectively be in user space - changes the attack surface.

    3. CowHorseFrog Silver badge

      Re: A cancel request?

      The whole concept of multiplexing HTTP is unnecessary and of course opens up opportunities for the baddies.

      Why would any browser need to send 100 requests for anything? Simple answer: something is really wrong if it does.

    4. Anonymous Coward

      This can be handled without crippling the servers

      so I am less inclined to flame the engineers involved. When the majority of your clients aren't generating attack traffic, the gains can outweigh the cost of the few that are, provided the mitigations aren't expensive. So really the push is to get the mitigations outlined into the server codebases so that they work out of the box, and not just for customers with an over-the-top DDoS security provider.

      Forcing clients to choke down traffic they only requested because of instructions in the page the server sent them in the first place isn't great engineering. The whole idea behind this was for the server to get the client to spool up parallel requests to keep page loads fast. If the client optimizes the requests the server told it to make and nopes out of parts that it doesn't need, that's reasonable, not an attack. The server, after all, knows better than anyone what request pattern to expect.
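
      As an example of the kind of knob that already ships with server codebases, here's a rough Go sketch using the golang.org/x/net/http2 package; the port, certificate paths and the stream limit of 100 are placeholders. Note that a per-stream cap alone doesn't stop Rapid Reset, since a reset frees the slot, which is why the library releases that followed the disclosure added their own handling of rapidly reset streams.

        package main

        import (
            "log"
            "net/http"

            "golang.org/x/net/http2"
        )

        func main() {
            mux := http.NewServeMux()
            mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
                w.Write([]byte("hello\n"))
            })
            srv := &http.Server{Addr: ":8443", Handler: mux}

            // Cap how many streams one connection may have open at once.
            // The value 100 is arbitrary, for illustration only.
            if err := http2.ConfigureServer(srv, &http2.Server{MaxConcurrentStreams: 100}); err != nil {
                log.Fatal(err)
            }

            // cert.pem / key.pem are placeholder paths; browsers only speak HTTP/2 over TLS.
            log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
        }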

    5. richardcox13

      Re: A cancel request?

      > This sounds like a feature created by an overly eager intern.

      No. Because this has a completely reasonable use case. A page is loading (e.g. images below the fold) when the user navigates to another page; at this point the resources still outstanding for the first page will not be needed.

      Just opening lots of HTTP/1 sockets has its own problems, and cancellations by the client (closing the socket) allowed similar DDoS attacks. The difference is that the HTTP/1 attacks have had a couple of decades in which defences and mitigations have been built up.
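
      From the client's point of view that legitimate case is just ordinary request cancellation. A rough Go sketch, with example.com standing in for the real site: over an HTTP/2 connection the transport abandons the stream when the context is cancelled, rather than downloading an image nobody will ever see.

        package main

        import (
            "context"
            "fmt"
            "net/http"
            "time"
        )

        func main() {
            // Cancelling the context stands in for "the user navigated away".
            ctx, cancel := context.WithCancel(context.Background())

            // Placeholder URL for an image below the fold.
            req, err := http.NewRequestWithContext(ctx, http.MethodGet,
                "https://example.com/below-the-fold.jpg", nil)
            if err != nil {
                panic(err)
            }

            go func() {
                time.Sleep(10 * time.Millisecond) // pretend the user clicked through to another page
                cancel()
            }()

            resp, err := http.DefaultClient.Do(req)
            if err != nil {
                fmt.Println("request abandoned:", err) // expected once cancel() fires
                return
            }
            defer resp.Body.Close()
            fmt.Println("finished before the cancel:", resp.Status)
        }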

  4. TheMeerkat

    The problem here is that HTTP/2 is usually terminated at a proxy, the same place that in the past often just terminated TLS. The request is then sent on to a real server that often uses blocking IO and a new thread to process it, and which won't know that the proxy has dropped the connection until the process of fetching whatever it fetches is done.

    If you don't terminate HTTP/2 in a proxy but build it into your application instead, you would not be as vulnerable.
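
    Roughly what that looks like with Go's standard library, where the request context is cancelled when the stream (or the connection from the proxy) goes away; the path, port and certificate file names are placeholders.

      package main

      import (
          "fmt"
          "log"
          "net/http"
          "time"
      )

      func main() {
          http.HandleFunc("/slow", func(w http.ResponseWriter, r *http.Request) {
              select {
              case <-time.After(5 * time.Second): // stand-in for the expensive fetch
                  fmt.Fprintln(w, "done")
              case <-r.Context().Done(): // fires when the client or proxy gives up on the request
                  log.Println("client went away, abandoning work:", r.Context().Err())
              }
          })

          // Placeholder cert/key paths; Go's net/http serves HTTP/2 automatically
          // over TLS, so a reset stream cancels the request context.
          log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
      }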

  5. jvf

    what a bunch of assholes
