TCP is a wire-centric protocol being forced to cut the cord, painfully

The venerable Transmission Control Protocol (TCP) is one of the foundation protocols of the Internet, but it's not so hot in mobile environments, says Juho Snellman of Swiss telco software concern Teclo. The Register took an interest in Teclo's work after spotting this presentation to a SIGCOMM workshop. In an interview with …

  1. Anonymous Coward

    Multipath TCP?

    Is anyone using Multipath TCP outside research projects - particularly for things like WiFi / 4G handover?

    1. This post has been deleted by its author

    2. MrHorizontal

      Re: Multipath TCP?

      MP-TCP on mobile doesn't make much difference. It only really comes into its own when you have multiple connections aggregated. If each 1x1 WiFi and LTE stream were independent and multiplexed via MP-TCP, it would theoretically be advantageous, but since both already mux the streams, I doubt an extra layer 4 protocol would be an advantage.

      A far, far more efficient and modern protocol to use would be the Stream Control Transmission Protocol (SCTP). SCTP has many advantages: it's inherently multipath like MP-TCP; it can be run over a single UDP port, allowing a user-mode SCTP stack to be used during a transition phase; and it can use any of the congestion control algorithms used with TCP (e.g. Reno), maintaining the same or even stronger reliability than TCP while keeping latency advantages closer to UDP's (assuming a good kernel-mode stack). Last but not least, it would also make routing much easier, since data could be routed through any given node (if allowed), making it more useful for mesh networking (and thus brilliant for mobile, though not so good for the spies).
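Another SCTP advantage worth making concrete is multi-streaming: ordering is enforced per stream rather than per association, so one lost packet does not head-of-line block unrelated data the way it does in a single TCP byte stream. A minimal sketch of that difference (this is a toy model, not a real SCTP stack; the function name and chunk counts are invented for illustration):

```python
# Toy model: how much data an application can consume after one lost
# "chunk", for a single ordered TCP-style stream versus SCTP-style
# independent streams. Everything here is illustrative.

def deliverable_on_loss(chunks, lost_index, streams=1):
    """Chunks consumable before the lost chunk is retransmitted.

    chunks[i] is assigned to stream i % streams; in-order delivery is
    enforced only within a stream.
    """
    blocked_stream = lost_index % streams
    delivered = []
    for i, chunk in enumerate(chunks):
        if i == lost_index:
            continue  # lost in transit
        # Chunks on the blocked stream that follow the loss must wait;
        # chunks on the other streams sail through.
        if i % streams == blocked_stream and i > lost_index:
            continue
        delivered.append(chunk)
    return delivered

chunks = list(range(12))
print(len(deliverable_on_loss(chunks, 3, streams=1)))  # 3: TCP-style blocking
print(len(deliverable_on_loss(chunks, 3, streams=4)))  # 9: one stream stalls
```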

      SCTP is used in some VoIP situations as it makes RTP much simpler, but there's only one mature stack, made by the BSD people; it hasn't been properly adopted in Linux, and Microsoft chose to ignore it entirely, citing 'lack of demand'. When will these people realise that supporting something is what spurs the demand? Idiots.

      In the meantime, MP-TCP needs to be used as it's sort of compatible with traditional TCP, but it's a hack and not a proper solution.

      1. A Non e-mouse Silver badge

        Re: Multipath TCP?

        The other minor problem with SCTP is firewalls, access rules, etc. being aware of this additional protocol running over IP.

        I agree that, in theory, it shouldn't make a difference, as the ISPs should only be looking at the external IP packets. But theory and practice rarely agree :-(

        1. Anonymous Coward

          Re: Multipath TCP?

          if the ISPs only looked at external IP packets, life would be simpler, as you suggest :)

          I've been looking into SCTP over the last couple of days since these pointers, and have two further questions: why is SCTP not implemented more widely (Linux?), and why is there so little development work?

          Given the advantages described both here and in the small number of non-telco (non-SS7) research papers, you'd think it would at least be part of some Uni research projects?

    3. Anonymous Coward

      Re: Multipath TCP?

      Apple have been using Multipath TCP on iOS for almost 2 years now.

  2. Anonymous Coward


    "One of the surprises in trying to mobile-optimise TCP came from the amount of packet-ordering issues that arise in the networks, [...]"

    That should not have been a surprise. It's been a fact of life for at least 20 years - usually where there is IP-layer load-sharing infrastructure in the path. This used to cause problems when endpoints immediately assumed an out-of-sequence TCP packet meant intermediate ones were lost. The fast retransmit mechanism being invoked repeatedly was taken to indicate network congestion, and the overall transmission speed was deliberately slowed to a minimum.
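The reordering-as-loss confusion can be sketched in a few lines: a classic receiver emits a duplicate cumulative ACK for every out-of-order segment, and a sender that counts three duplicate ACKs (the RFC 5681 fast-retransmit threshold) concludes a segment was lost, even when it was merely delayed. The simulation below is an illustrative sketch, not a real TCP stack:

```python
# Sketch of why reordering used to be mistaken for loss. Segment numbers
# and the arrival pattern are invented for the example.

DUP_ACK_THRESHOLD = 3  # fast retransmit trigger per RFC 5681

def duplicate_acks(arrival_order):
    """Count duplicate cumulative ACKs produced by a simple receiver."""
    expected = 0      # next in-order segment the receiver wants
    buffered = set()
    dup_acks = 0
    for seg in arrival_order:
        if seg == expected:
            expected += 1
            while expected in buffered:   # drain any buffered run
                buffered.discard(expected)
                expected += 1
        else:
            buffered.add(seg)
            dup_acks += 1  # each out-of-order arrival re-ACKs `expected`
    return dup_acks

# Segment 0 merely delayed behind 1, 2, 3 -- nothing was actually lost:
dups = duplicate_acks([1, 2, 3, 0, 4, 5])
print(dups, dups >= DUP_ACK_THRESHOLD)  # 3 True -> spurious fast retransmit
```

With that single reordering event the sender retransmits segment 0 needlessly and halves its sending rate, which is exactly the slowdown described above.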

    1. Tom 7 Silver badge

      Re: Surprise?

      Bad implementations need fixing, not the protocol rewriting.

      1. Mike 125

        Re: Surprise?


        Indeed. It never sounds good when people start blaming their performance issues on the core protocols. I suspect this is a very big part of his problem:

        “In the kernel, for every new networking characteristic, someone adds more stuff – the data path becomes very complicated.”

        And yet his solution is to add yet more "heuristics"?

        Having said that - it's a nightmare area to be working in, given the market conditions.

      2. Anonymous Coward

        Re: Surprise?

        "Bad implementations need fixing not the protocol rewriting."

        The implementations are not necessarily bad; it's just that there are scenarios where they don't work efficiently.

        A Wikipedia article gives some insight into many attempted solutions to address the network congestion problem.

        One thing that appears to be missing from the IP protocol is a concrete way to indicate network congestion to endpoints by signals originating at the affected point in the intervening infrastructure.

        1. Gideon 1

          Re: Surprise?

          "One thing that appears to be missing from the IP protocol is a concrete way to indicate network congestion to endpoints by signals originating at the affected point in the intervening infrastructure."

          Erm, yes there is. Indeed, congestion signalling and control is the critical, core and often misunderstood part of TCP. It is achieved by ramping up the packet rate until the round-trip time starts to increase, as that is when packets start to fill the queues in the routers along the route. There is no point in having more than one or two packets in any queue. This maximises throughput while also sharing the bandwidth equitably with other traffic.

          1. Anonymous Coward

            Re: Surprise?

            "There is no point in having more than one or two packets in any queue"

            High-bandwidth links, especially with high latency, need a lot of packets in flight to achieve efficient throughput. Sharing of bandwidth may not be the required objective.

            The problem with round-trip-time algorithms is knowing whether an increase is due to queueing or to discards. How they guess the latter is what gets upset by out-of-sequence packets.

          2. This post has been deleted by its author

            1. Mike007

              Re: Surprise?

              "One thing that appears to be missing from the IP protocol is a concrete way to indicate network congestion to endpoints by signals originating at the affected point in the intervening infrastructure."

              There's always ECN, although some NAT routers discard every TCP packet carrying that "unknown option" of "I support ECN". Apparently Apple are enabling ECN by default, which might be interesting.

              "It is achieved by ramping up the packet rate until the round trip time starts to increase, as that is when the packets start to fill the queues in the routers along the route."

              Detecting congestion has nothing to do with monitoring latency. Increased latency will not cause TCP to slow down - it will assume there is still more bandwidth and keep increasing the data rate until the buffers fill completely and start dropping packets. Oversized buffers in networking equipment are a serious problem.
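That bufferbloat dynamic can be shown with a toy loss-based sender feeding a fixed-rate bottleneck: because nothing slows it down until a drop occurs, queueing delay climbs all the way to the buffer limit before the first loss signal arrives. All constants below are made-up illustrative values, not a model of any real TCP variant:

```python
# Toy loss-based sender against a fixed-rate bottleneck with a FIFO
# buffer. Illustrates the bufferbloat point only; numbers are invented.

LINK_RATE = 10        # packets drained per tick
BUFFER_PKTS = 200     # oversized bottleneck buffer

def run(ticks):
    cwnd, queue, history = 1, 0, []
    for _ in range(ticks):
        queue += cwnd                     # sender injects a window's worth
        queue -= min(queue, LINK_RATE)    # link drains at its fixed rate
        if queue > BUFFER_PKTS:
            queue = BUFFER_PKTS           # tail drop: loss finally signals
            cwnd = max(1, cwnd // 2)      # multiplicative decrease
        else:
            cwnd += 1                     # no loss seen: keep growing
        history.append(queue / LINK_RATE) # queueing delay, in ticks
    return history

delay = run(40)
print(max(delay))  # 20.0 -- delay climbs to BUFFER_PKTS / LINK_RATE
                   # before the sender hears anything about congestion
```

The bigger the buffer, the longer the latency ramp before the loss signal, which is exactly why oversized buffers hurt.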

    2. Steve Davies 3 Silver badge

      Re: Surprise?

      Does this mean a return of the PADs from X.25 days...?

      Runs for cover.

    3. Yes Me Silver badge

      Re: Surprise?

      The only surprise is that this collection of old news came out as if it was a new story.

  3. Tom 7 Silver badge

    "a lot of work and you might run into licensing issues"

    Hoisted by their own petard?

    1. Gideon 1

      Re: "a lot of work and you might run into licensing issues"

      QNX comes to mind, as that has all drivers and networking in userland.

  4. Graham Cobb Silver badge

    Giving back

    Interesting architecture. I do wonder why they bother with a Linux user-mode implementation (where they will need to build their own user-mode real-time OS inside their process to provide the scheduling they need, memory allocation, timers, thread management, etc) instead of just using an off-the-shelf embedded system. Maybe this is just proof-of-concept and they will go to a real implementation (or hope to license it to a probe vendor) later.

    And ambitious, to be able to do that level of modification of the packet flow and still allow the session to survive if their box fails.

    However, I am concerned that TCP improvement work is being done commercially (and presumably with intentions to be patented) instead of openly to benefit all TCP users. They are building off the work many, many companies and many, many people (I participated in the IETF process in a small way, many years ago) have given to develop and continuously improve IP and TCP and they should be giving back to that community.

    1. Pascal Monett Silver badge

      Given the scale of the issue, and the fact that they work on a Linux platform and would apparently like the code in the kernel, I have the feeling that they will be giving back, as soon as they are sure that they have a viable solution.

      At least I hope so. Anything that can improve mobile phone performance in calls or data is a godsend.

      1. Graham Cobb Silver badge

        I hope you are right, Pascal, but that wasn't my reading of the information in the article. They seem concerned about the licence issues around working in the kernel (so, presumably, are not planning on releasing their code under the GPL) and they seem to be building a probe-type box (complete with bypass) -- clearly not just a research implementation but a box to sell for big bucks as an add-on to network equipment.

        But I know nothing about their plans except what I read in the article.

  5. Warm Braw Silver badge

    TCP is a wire-centric protocol

    It isn't, really. However, its retransmission mechanism is really there to deal with routers dropping packets owing to congestion, not to fix problems in the link layer. When early connectionless networks were designed, wired links (using modems) were just as prone to intermittent packet loss. What was different was that the "wires" had their own protocols (DDCMP/SDLC/etc.) to manage the retransmission process, and so transport implementations were typically less aggressive about doing it themselves.

    There's nothing to stop a wireless network deploying its own link-level protocol; it's just that, since the advent of Ethernet, the reasons for the existence of datalink protocols seem to have been forgotten.

    Now, there may be cases where, even with management at layer 2, you get significant and unpredictable latency, and it's true that TCP can't cope with that - but nothing can. Under those circumstances you might want to throw data away rather than retransmit it, but in that case you want a different sort of layer 4 protocol altogether.

    You can't really build a network out of damp string and expect everything to be fixed at the transport layer. It's interesting to review Jon Postel's comments from 1977 on the initial attempts to specify TCP/IP. He said: "We are screwing up in our design of internet protocols by violating the principle of layering. Specifically we are trying to use TCP to do two things: serve as a host level end to end protocol, and to serve as an internet packaging and routing protocol." Unfortunately, I think we're now seeing the ultimate expression of the tendency he detected then - functionality has gradually been squeezed out of the lower layers (and that applies as much to the typical over-extended LAN as to a mobile network) in the mistaken belief that the solution to the resulting problems can be punted up the stack.

  6. Sebby

    He's certainly right that TCP could perform better with random, non-systematic losses or delays, though as said we've now got pretty good at fast recovery, and extensions like SACK make it easier to go very far very fast with minimal overhead.

    However, before we start dicking around with TCP, first kill one of the worst bloodsuckers of TCP performance in mobile environments or anywhere else: NAT. Mobile operators were quite happy roping people into their walled gardens back then, and, news flash, the Internet turned out to be important, so most of us are now talking to the Net by way of IP translation. Because of NAT we are using stupid tricks to keep TCP sessions alive, and wasting precious resources (energy, bandwidth) doing it. The state management and scaling issues are surely quite substantial in an increasingly mobile world, and time spent translating is time not spent shoving bytes around. Certainly the current situation leaves a lot of room for layer 4 manipulation, so I have to imagine that this research is way above the level of deployment. Still, NAT is evil and should be stamped out.
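To put rough numbers on those "stupid tricks": a phone holding one idle TCP session open through a NAT must send a keepalive before the idle binding expires. The timeout and packet sizes below are assumptions, since carrier-grade NAT timeouts vary widely:

```python
# Rough cost of keeping one idle TCP session alive through a NAT.
# Both constants are assumptions for illustration.

NAT_TIMEOUT_S = 300        # assumed idle-binding timeout (5 minutes)
KEEPALIVE_BYTES = 2 * 64   # probe + ACK with headers, roughly

per_day = 24 * 3600 // NAT_TIMEOUT_S
print(per_day)                    # 288 probes per day per session
print(per_day * KEEPALIVE_BYTES)  # ~36 KB/day -- trivial bandwidth, but
                                  # each probe also wakes the cellular radio
```

The bytes are negligible; the energy cost of waking the radio 288 times a day per session is the real waste.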

    Just my thoughts.

    1. A Non e-mouse Silver badge

      And where are you going to get all these IPv4 addresses so you can switch off your NAT boxes? You're looking at needing somewhere between 50 and 100 million IP addresses, so you'd need multiple class A address ranges just for UK mobiles. Hang on, I'm sure I've got a spare class A subnet down the back of the sofa....
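A quick sanity check of that arithmetic (80 million is an assumed mid-point of the 50-100 million range quoted above):

```python
# How many legacy class A (/8) blocks would publicly addressing UK
# mobile devices need? 80 million devices is an assumption.

SLASH_8 = 2 ** 24        # 16,777,216 addresses in a class A / a /8
devices = 80_000_000

blocks = -(-devices // SLASH_8)  # ceiling division
print(blocks)  # 5 whole /8s for one country's mobiles
# A single IPv6 /64 subnet, by contrast, holds 2**64 addresses.
```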

      IPv6 would be able to handle this - assuming the world actually implements IPv6!

  7. sjiveson

    That briefly mentioned work on improving Linux network performance

    Some of it anyway:

    1. A Non e-mouse Silver badge

      Re: That briefly mentioned work on improving Linux network performance

      More details over at LWN.Net

  8. Anonymous Coward

    Out of order packet handling (1985)

    "On caching out of order packets in window flow controlled network" (author: Raj Jain)

    Abstract: In window flow controlled networks, if a packet is lost, the destination has to decide whether to save (cache) subsequent out-of-order packets. Also, the source has to decide whether to send just one packet or to send all packets following it. This leads to four different types of caching schemes. Simulations show, against our immediate intuition, that regardless of whether the destination is caching or not, the source should retransmit only one packet. This paper describes the alternatives to, and provides justification for, schemes used in Digital Network Architecture and ARPAnet/TCP.

    From DEC, initially in the context of DECnet, but applicable also to the world of TCP. Thirty years ago.

    What's old is new again. Again.

    1. Warm Braw Silver badge

      Re: Out of order packet handling (1985)

      Worth pointing out, too, that Raj was a promoter of the "congestion bit" (or DEC bit as it became known) which came to TCP/IP as "Explicit Congestion Notification" (RFC 3168). Although it's not intended for that purpose, it could also in principle be a hint to the transport layer that packet loss was likely not the result of congestion. However, it's not widely deployed and it's not clear it makes a huge difference to congestion performance in real-world situations.

    2. Anonymous Coward

      Re: Out of order packet handling (1985)

      "[...] that regardless of whether the destination is caching or not, the source should retransmit only one packet."

      That can lead to the case where the transmitter is waiting for an acknowledgement before transmitting a further retried packet. Even waiting one round-trip time can seriously impair the throughput of a connection on a high-bandwidth, high-latency link.

      As you say, there is nothing new in these problems. Various workarounds have provided different strategies, but people often don't understand which is applicable to their situation. I remember people often increased the TCP window size to get more throughput on an under-utilised link - and were then dumbfounded that it actually decreased throughput significantly.
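Both observations come back to the bandwidth-delay product: the window must cover bandwidth × RTT to keep the pipe full, while a stop-and-wait retransmission scheme delivers only one packet per round trip. A worked example with illustrative link figures (the 100 Mbit/s / 100 ms numbers are assumptions):

```python
# Bandwidth-delay product: bytes that must be in flight to fill a link.
# Link figures are illustrative assumptions.

def bdp_bytes(bandwidth_bps, rtt_ms):
    """Window (in bytes) needed to keep the link full."""
    return bandwidth_bps * rtt_ms // (8 * 1000)

window = bdp_bytes(100_000_000, 100)
print(window)  # 1250000 bytes -- far beyond the classic 64 KB window
               # available without TCP window scaling (RFC 7323)

# Retransmitting single 1500-byte packets one per round trip caps
# throughput at 1500 bytes every 100 ms:
print(1500 * 8 * 10)  # 120000 bit/s on the same 100 Mbit/s link
```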

    3. Roland6 Silver badge

      Re: Out of order packet handling (1985)

      >What's old is new again. Again.

      I think you'll find that much that was written in the '80s (and, to a more limited extent, in the mid-to-late '70s) on the behaviour of network protocols is still relevant today. It's just that, because it was pre-Internet and hence set down on paper, it is highly inaccessible to those who believe everything is somewhere on the Internet...

  9. Panicnow

    Transmission Control Program!!

    I know I am probably the only one to remember this, but the RFC for TCP names the protocol the Transmission Control Program (protocol). Even Vint Cerf, the author of the RFC, gets this wrong.

    ( Go on, check the RFC!).

    More seriously, one always has to watch telco tinkering with protocols to make sure they are not removing net neutrality by other means.

    1. Warm Braw Silver badge

      Re: Transmission Control Program!!

      That's because it (along with IP) was a replacement for the "Network Control Program" that had previously powered the ARPANET. And NCP was indeed a "program": the host's software for talking to its separate Interface Message Processor (IMP).

  10. Daniel von Asmuth

    Why TCP over POTS? The Royal Mail handles packets

    The telephone networks have their suites of age-honoured protocols like SONET, so why on earth did they adopt TCP/IP? How about AX.25?

    1. Christian Berger

      Re: Why TCP over POTS? The Royal Mail handles packets

      Well TCP/IP is _much_ cheaper thanks to cheap Ethernet and cheap IP equipment.

      In any case, TCP/IP won't perform any better or worse than X.25 given the same optimizations.

      What we have here is a typical case of someone trying to sell some boxes not by addressing the problem (unsuitable mobile networks) but by building a new layer of complexity around it. WCDMA/UMTS was simply drafted in the early 1990s, and back then nobody cared about packet-switched networks. The vision was 64k ISDN channels, not packets.

    2. Anonymous Coward

      Re: why on earth did they adopt TCP/IP.

      Because ATM, SONET, etc. are designed from the ground up to be robust, reliable and predictable (attributes of circuit switching), and the necessary boxes are priced accordingly.

      Because TCP etc. is designed to be "oversold" (you sell more capacity than you can ever hope to deliver). That actually works surprisingly well most of the time, and it's one of the differences between circuit switching and packet switching. Consequently IP networking sells like hot cakes, and in many cases is dirt cheap. And nowadays people mostly expect computery things to fail from time to time. Which of course they do, especially when badly designed and implemented.

  11. Anonymous Coward

    Not this $hit again ...

    So, basically, we need to re-invent the PPP portion of TCP? It's already been done over wired links (2+ lines) and implemented successfully over WiFi (I used it in the design and implementation of an 802.11b network back in the early 00s). Can't we just enhance the protocol's sensitivity to timeouts and enhance its multi-path capabilities to resolve this problem? The main issue with a PPP setup is the amount of time it takes to time out a pipe or connection. With current PPP setups (DSLAMs come to mind as great concentrators), the first issue is that you have a large number of bundled tunnels (like a group of straws bundled with a rubber band). The second issue is the large amount of overhead (10%+ in some cases) that comes as a side effect. Just make it a variation of 802.11n's aggregation of multiple channels and be done with it. Remember, no matter how much wireless (cellular or otherwise) you deploy, it has to hit the wire somewhere.
