Just one bad packet can bring down a vulnerable DNS server thanks to DNSSEC

A single packet can exhaust the processing capacity of a vulnerable DNS server, effectively disabling the machine, by exploiting a 20-plus-year-old design flaw in the DNSSEC specification. That would make it trivial to take down a DNSSEC-validating DNS resolver that has yet to be patched, upsetting all the clients relying on …

  1. Anonymous Coward
    Anonymous Coward

    Well spotted, and well handled.

    It does raise some interesting questions about the processes followed for setting standards. This flaw seems to have been there for, what, 20 years? So whatever process was followed to validate and verify the standard didn’t work.

    By inference, the same process applied to other IETF standards should mean that we’re questioning those other standards too, just in case.

    That’s a bit meta, but it’s potentially unwise to assume that all other similarly evaluated standards are safe.

    1. Anonymous Coward
      Anonymous Coward

      Sure... because you've never made a mistake in your entire life?

      1. Anonymous Coward
        Anonymous Coward

        I check other people's work BECAUSE I know I make mistakes - so they probably do too.

    2. JBowler

      Only a very small subset of the IETF standards should be affected

      >By inference, the same process applied to other IETF standards

      As I understand it, the standard mandates resolving conflicting, or at least out-of-date, information rather than just throwing it out (which is what I believe most RFCs say, if they say anything). Most RFCs say very little about what happens when the data is invalid, incomplete, out-of-date or repeated; the implementation is free to attempt recovery if it can. That leads to bugs, but the standard approach is to mandate giving up if, at first, you don't succeed; a very good approach in my opinion.

      The most entertaining thing about this is that it requires a standards change to fix; all that can be offered in the patches is a violation of the standard! So the CVE has to be resolved by the standards authors, not by those who implement the standard. That's a wake-up call for all standards bodies.
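      For illustration, here is a minimal sketch of the kind of standard-violating mitigation being described: capping how many signature/key pairs a validator will try, instead of exhaustively trying every colliding key tag as the RFCs require. All names and the work budget here are my own invention, not any real resolver's code.

```python
from collections import namedtuple

# Toy stand-ins for DNSSEC records; key tags are not unique, so many
# keys can "match" one signature, which is what KeyTrap exploits.
Rrsig = namedtuple("Rrsig", "key_tag")
Dnskey = namedtuple("Dnskey", "key_tag")

MAX_VALIDATION_ATTEMPTS = 16  # assumed budget, not an actual resolver constant

def validate_rrset(rrsigs, dnskeys, verify):
    """Try signature/key pairs, but give up after a fixed work budget."""
    attempts = 0
    for sig in rrsigs:
        for key in dnskeys:
            if key.key_tag != sig.key_tag:
                continue
            attempts += 1
            if attempts > MAX_VALIDATION_ATTEMPTS:
                # The standard says keep trying every pair;
                # a patched resolver stops here instead.
                return "bogus"
            if verify(sig, key):
                return "secure"
    return "bogus"

# A KeyTrap-style response: many colliding-tag keys and signatures,
# none of which actually verifies.
sigs = [Rrsig(7) for _ in range(100)]
keys = [Dnskey(7) for _ in range(100)]
print(validate_rrset(sigs, keys, lambda s, k: False))  # prints "bogus"
```

      Without the cap, the nested loop would attempt 10,000 expensive signature verifications for one crafted packet; with it, the resolver does a bounded amount of work and moves on.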

      1. Anonymous Coward
        Anonymous Coward

        Re: Only a very small subset of the IETF standards should be affected

        > Most of the time RFCs deal very little with specification about what happens with cases where the data is invalid, incomplete, out-of-date or repeated; the implementation is free to attempt recovery if it can.

        That’s beginning to get into the differences between the IETF and the ITU. The ITU standards are very long-winded and formal, but generally pretty complete. Such long-winded fustiness was not for the hot young rebellious IETF (I deliberately exaggerate to indicate the magnitude of the gulf between the two bodies).

        1. R Soul Silver badge

          IETF v ITU

          This was generally true 30-40 years ago during the protocol wars: "protocol data unit" versus "packet", for instance. The terminology war is over now, but back then the IETF concentrated on stuff that worked while the ITU sat in its ivory tower dreaming about imaginary garbage that could never hope to work in the real world. That is still broadly how protocol development gets done at both organisations today.

          The IETF produces stuff that people need and use. The ITU doesn't.

          1. Anonymous Coward
            Anonymous Coward

            Re: IETF v ITU

            >Back then, the IETF concentrated on stuff that worked

            Well, I think it was more that the IETF adopted things other people had got going and that had gained popularity and usage. It's not a coordinated, designed approach to standards development; it's more an acknowledgement of a de facto reality.

            >The IETF produces stuff that people need and use.

            Like IPv6...

          2. Roland6 Silver badge

            Re: IETF v ITU

            > The IETF produces stuff that people need and use

            Like Class E IPv4 addresses, with RFCs disagreeing as to their status and thus how systems should handle them…

      2. sitta_europea Silver badge

        Re: Only a very small subset of the IETF standards should be affected

        "...The most entertaining thing about this is that it requires a standards change to fix; all that can be offered in the patches is violation of the standard! So the CVE has to be resolved by the standards implementers, not by those who implement their standard. That's a wake up call for all standards bodies."

        If I had a dollar for every standard ignored or just plain broken by Google or Yahoo (to name but two) I'd be very happy.

        1. Anonymous Coward
          Anonymous Coward

          Re: Only a very small subset of the IETF standards should be affected

          Or 10 cents for ones by Microsoft.

      3. Roland6 Silver badge

        Re: Only a very small subset of the IETF standards should be affected

        >” The most entertaining thing about this is that it requires a standards change to fix”

        This also highlights the difference between Standards bodies such as IEEE, ISO and ITU, and the IETF. The former will revise the Standard and reissue a complete revised Standard. The IETF simply issues an RFC saying it amends or corrects some previous RFC, leading to the proliferation of documents.

        This document from Cisco, listing all of the RFCs concerned with Voice over IP, illustrates the point nicely:

        https://www.cisco.com/c/en/us/support/docs/voice/voice-quality/46275-voice-rfcs.html

        I suspect many VoIP implementations only work because of the ready access to open source, rather than original development.

  2. Bebu

    DJB probably has a wry smile...

    I recall that years ago he wasn't a fan of DNSSEC. I don't recall the details, but I suspect its complexity would be high on his list.

  3. martinusher Silver badge

    Rule #1 of network implementation....

    Regardless of what a specification does or doesn't say, you have to build network code so that it accepts the maximum and demands the minimum. You should be able to fire packets at it containing anything, at any rate, and it should gracefully drop anything that doesn't make sense. (Some standards decree that you issue an error packet, but that's asking for trouble unless the rate at which they're generated is limited.)
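    As a rough sketch of that rule (all names and thresholds here are invented for illustration, not from any particular stack): parse defensively, drop garbage silently, and budget any error replies you do send.

```python
import time

ERROR_REPLY_BUDGET = 10   # max error packets per window (assumption)
WINDOW_SECONDS = 1.0

class PacketHandler:
    """Accept anything, drop what doesn't parse, rate-limit error replies."""

    def __init__(self):
        self.window_start = time.monotonic()
        self.errors_sent = 0

    def handle(self, packet, parse, reply_error):
        try:
            msg = parse(packet)          # may raise on malformed input
        except ValueError:
            self._maybe_error(reply_error)
            return None                  # gracefully drop nonsense
        return msg

    def _maybe_error(self, reply_error):
        # Reset the budget each window; never exceed it within one.
        now = time.monotonic()
        if now - self.window_start > WINDOW_SECONDS:
            self.window_start, self.errors_sent = now, 0
        if self.errors_sent < ERROR_REPLY_BUDGET:
            self.errors_sent += 1
            reply_error()
```

    The point of the budget is exactly the parenthetical above: an attacker who can make you emit one error packet per bad packet received gets a free amplifier unless you cap the reply rate.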

    1. Claptrap314 Silver badge

      Re: Rule #1 of network implementation....

      A thousand times, no! Postel's law is an absolute failure. Clear communication is an absolute requirement if correct function is desired, and Postel's entire premise is to be fuzzy. Postel's law gets implicated in major bugs more than once a year. This is not an accident.

      Document your expectations fully, and then loudly reject non-compliant access.

      In this case, you drop any and all packets that don't make spec. You should also grey list them and if a second one comes in, black list the sender (and don't respond). Full stop. Otherwise, you are opening up timing attacks, as documented here recently.
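      The grey-list/black-list policy described above can be sketched like this (thresholds and names are my own, purely to make the scheme concrete):

```python
class SenderPolicy:
    """First bad packet grey-lists a sender; a second black-lists them."""

    def __init__(self):
        self.greylist = set()
        self.blacklist = set()

    def accept(self, sender, packet_ok):
        if sender in self.blacklist:
            return False                  # black-listed: never respond again
        if packet_ok:
            return True
        if sender in self.greylist:
            self.blacklist.add(sender)    # second offence: black-list
            self.greylist.discard(sender)
        else:
            self.greylist.add(sender)     # first offence: grey-list only
        return False                      # the bad packet is dropped either way
```

      A real implementation would also expire grey-list entries and guard against spoofed source addresses poisoning the black list, but the core policy is just these two sets.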

  4. Anonymous Coward
    Anonymous Coward

    Report available here ---->

    https://www.athene-center.de/fileadmin/content/PDF/Technical_Report_KeyTrap.pdf
