Evil or benign? 'Trusted proxy' draft debate rages on

Discussion over the AT&T-authored Internet Draft “Explicitly Trusted Proxy in HTTP/2” continues, with commentary unpicking the likely implications of the draft. One question is whether or not there's as great a threat to user privacy as was stated by Lauren Weinstein in the blog post discussed in El Reg yesterday. According to …


This topic is closed for new posts.
  1. John Smith 19 Gold badge

    Finding the words

    " the draft proposes to register a new value in the Application Layer Protocol negotiation (ALPN) Protocol IDs registry specific to signal the usage of HTTP2 to transport "http" URIs resources: h2clr."

    IOW this only applies to stuff flagged "http", not "https", and the code they want to stick in the registry (which I suspect is nothing like the steaming PoS that Windows uses) is "h2clr".
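    For the curious, here's a minimal sketch of what that looks like client-side. `ssl.SSLContext.set_alpn_protocols` is the real Python stdlib call; the URL and the exact protocol lists are just illustrative, and "h2clr" is the draft's proposed ID, not a registered one:

```python
import ssl
from urllib.parse import urlparse

def alpn_offer(url: str) -> list[str]:
    """Illustrative: offer the draft's "h2clr" only for plain "http"
    resources; "https" resources get the normal end-to-end "h2"."""
    if urlparse(url).scheme == "http":
        # Draft-proposed ALPN ID signalling HTTP/2 transport of
        # cleartext "http" URIs (the trusted-proxy case)
        return ["h2clr", "http/1.1"]
    return ["h2", "http/1.1"]  # end-to-end TLS, proxy must pass through

ctx = ssl.create_default_context()
ctx.set_alpn_protocols(alpn_offer("https://example.com/"))
```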

    Of course if you layer security on top of the http connection and that includes some kind of encryption that could still b**ger up the results.

    1. Michael Wojcik Silver badge

      Re: Finding the words

      the registry (which I suspect is nothing like the steaming PoS that Windows uses)

      Well, no, of course not. They're talking about an IANA registry, which is a collection of information maintained by IANA, reserving and documenting a set of values and their meanings within a particular technical context.

      (Really, a minute of research would have revealed that, and spared you a silly comparison with something completely unrelated. Might as well say it's nothing like a wedding registry or the county Registry of Deeds.)

  2. Anonymous Coward
    Anonymous Coward

    Hi all,

    Working on an HTTP 2.0 implementation here, so I have indeed read the whole spec (and I hate the header compression protocol, even if I agree it is needed...).

    The Trusted Proxy is designed for insecure networks where the user tries to use HTTP 2.0 in clear text; there is no mechanism whatsoever to fake end-to-end encryption with the trusted mechanism.

    Of course, the real user-facing decision is not really at the spec level but at the browser UI level: a browser SHOULD NOT ever show a "secure" icon when using a trusted proxy to encrypt the HTTP 2.0 session. From a security point of view, trusted proxies are an effort to provide "wired equivalent privacy" at the protocol level on highly untrusted networks (public wifi, mostly).

    I am pretty sure that nobody in his right mind will implement clear-text HTTP 2.0 on public networks, and I do not expect this Trusted Proxy to see much real use.

    To be honest, the people most inclined to push this spec and implement Trusted Proxies are mobile ISPs and corporate networks (the whole "proxy everything" mentality is about corporate network security and bandwidth optimisation), and for most people, mobile ISPs and corporate networks can already inject their own Certificate Root Authority in bundled handsets and do the full MITM attack without ever touching HTTP 2.0...

    You should talk to browser vendors about what they expect to implement in the user-facing GUI when there is a "Trusted Proxy" in the loop; my expectation is "nothing, for us it is clear text".

  3. Anonymous Coward
    Anonymous Coward

    By the way (same anonymous commenter as before :P), the following passage does clarify a little what is only obvious when you read the spec in context:

    If the user does not give consent, or decides to opt out from the proxy for a specific connection, the user-agent will negotiate HTTP2 connection using "h2" value in the Application Layer Protocol Negotiation (ALPN) extension field. The proxy will then notice that the TLS connection is to be used for a https resource or for a http resource for which the user wants to opt out from the proxy. The proxy will then forward the ClientHello message to the Server and the TLS connection will be end-to-end between the user-agent and the Server.

    When the client asks for h2 in the ALPN negotiation (aka: when the user used an httpS URL), the proxy cannot do anything other than pass through, because its proxy certificate will never match the destination server certificate and the client will refuse the h2 connection (except if it is an Apple device of course, but they do not support HTTP 2.0 yet :-) )
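    As a toy model of the proxy side of the quoted passage (purely illustrative; the draft defines no such function, and the string return values are made up just to name the two branches):

```python
def proxy_action(alpn_protocols: list[str]) -> str:
    """Toy model of the draft's proxy behaviour, keyed on the client's
    ALPN offer:

    "h2clr" -> user consented to the trusted proxy for an http resource:
               the proxy may terminate TLS and cache/inspect.
    "h2"    -> user opted out (or the resource is https): the proxy
               forwards the ClientHello untouched and TLS stays
               end-to-end, because its certificate can never match the
               destination server's.
    """
    if "h2clr" in alpn_protocols:
        return "terminate-and-proxy"
    return "forward-clienthello"
```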

  4. Warm Braw

    Solution looking for a problem...

    This arises as a "fix" to the difficulties that result from the "TLS everywhere" approach in HTTP 2. That simply breaks existing caching systems that are widely used by ISPs and large organisations for "public" (i.e. http:) documents.

    As far as I can see, the only advantage of a "trusted proxy" to the end user (over plain-text transmission) is that it might, under some circumstances, stop the exact URLs visited by a group of users from being easily discoverable by anyone except the proxy operator.

    That seems such a minor win that the better solution is simply not to use TLS at all, except for https: (as is presently the case). It would be nice to have some sort of group security (so you could, for example, cache pages once for all members of a trust group), but that's a difficult problem to solve cryptographically and an even worse one to manage, and trusted proxies aren't a substitute.

    Rather than cover up a poor design choice with layers of pretence, why not simply be explicit that without https: you get no security or encryption? Architecturally simpler, easier to explain...

    1. Anonymous Coward
      Anonymous Coward

      Re: Solution looking for a problem...

      The advantage of the Trusted Proxy is that it is practically much harder for J. R. Hacker to intercept at the ISP or higher level than it is to force a local switch or wifi AP to fail open.

      This is not about protecting people from the NSA, but mostly about protecting wifi users from local snooping.

      For anything authenticated, end-to-end encryption is needed anyway. The trusted proxy spec is pretty much limited in scope to avoiding a few local info leaks, and yes, it should be treated as http by the browser: it's a small privacy increase that should be possible but should never give any increased sense of security.

  5. Trevor_Pott Gold badge

    I need to ask the obvious question

    Why is there any aspect of the HTTP2 specification that is unencrypted? Why are we even creating protocols that use anything other than strong encryption for any traffic whatsoever?

    Encryption in flight, encryption at rest and disaggregated, decentralized key exchange or just go the fuck home. It's 2014. The time for unencrypted data transmission is long past.

    1. Michael Wojcik Silver badge

      Re: I need to ask the obvious question

      Why is there any aspect of the HTTP2 specification that is unencrypted? Why are we even creating protocols that use anything other than strong encryption for any traffic whatsoever?

      Traffic explosion. Do end-to-end TLS everywhere and you've broken edge caching and other critical mechanisms for reducing redundant traffic. CDNs can't serve the same content to multiple user agents if each user agent has negotiated its own session key with the server.

      And, of course, most user agents (browsers) are configured not to do local caching of encrypted data, so you've just broken local caching as well.

      And that's not even taking into account the additional overhead of tunneling all web traffic in TLS.
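      The cache-busting point is easy to demonstrate. Below is a toy stream cipher (emphatically NOT real TLS, just XOR with a hash-derived keystream) showing that two user agents with different session keys produce different ciphertext for the identical page, so an on-path cache has nothing it can recognise or reuse:

```python
import hashlib

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (NOT real TLS): XOR the data with a keystream
    derived by hashing key + counter. XORing twice with the same key
    decrypts; different keys give unrelated ciphertexts."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

page = b"<html>same public page</html>"
alice = toy_encrypt(b"alice-session-key", page)
bob = toy_encrypt(b"bob-session-key", page)
# alice != bob: identical content, per-session keys, nothing for a
# shared cache to match on
```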

      I don't need all my web traffic encrypted. I don't even need most of my web traffic encrypted. When I'm reading the Register or other publicly-available material, confidentiality is a non-issue; integrity is minor (and insecure integrity measures are sufficient); authentication is usually not an issue; and non-repudiation doesn't apply. TLS would just be useless overhead.

      1. Trevor_Pott Gold badge

        Re: I need to ask the obvious question

        I don't care about edge caching. I pay the ISPs money so that they continually and constantly upgrade their network. Not so that they pocket every dollar they can and provide the minimum possible service.

        If you want to create a protocol that enables edge caching, you create a protocol where content requested from a central source is redirected to an edge-cached source and the stream encrypted from there. It should be one in which the request, the data and the transport are all encrypted.

        This requires the active participation of all three parties: data provider, cache provider, and requester. The requester sends an encrypted request for data to the data provider. The data provider receives the request and informs the cache provider that it has received a request with the following hash. The cache provider then determines whether it A) has that data in its cache, B) wants to cache that data, or C) whether the data provider should provide the data directly.

        If A), the encrypted data is served to the requester from the cache provider. If B), the data is sent to the cache provider by the data provider, which then sends that data on to the requester. If C), the data provider sends the data to the requester directly.

        In this manner, neither "what data is requested" nor the content itself is ever made visible to the cache provider, nor to the spooks. Everything is encrypted end to end, and only the data provider and the requester know what information is being exchanged.
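        The A/B/C decision described above could be sketched like this (everything here is hypothetical; the function, the dict-as-cache, and the string results exist only to pin down the three branches of the commenter's scheme):

```python
import hashlib

def route_request(url: str, cache: dict, cache_wants) -> str:
    """Hypothetical sketch: the data provider tells the cache provider
    only a hash of the requested resource, and the cache provider
    answers with one of the three branches A, B or C."""
    digest = hashlib.sha256(url.encode()).hexdigest()
    if digest in cache:          # A) already cached: serve from cache
        return "serve-from-cache"
    if cache_wants(digest):      # B) cache provider takes a copy first
        cache[digest] = "opaque-encrypted-blob"
        return "fill-cache-then-serve"
    return "serve-direct"        # C) data provider serves directly
```

        Note the cache provider only ever sees an opaque hash and an encrypted blob, never the URL or the plaintext.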

        If this is too technically or politically difficult to implement then we simply should not be using caching, full stop.

        The security and privacy of citizens is of far greater importance than the profits of the ISP. Nothing on the internet should be unencrypted. Ever.
