Starlink offers 'unusually hostile environment' to TCP

SpaceX's Starlink satellite internet service "represents an unusually hostile link environment" to the TCP protocol, according to Geoff Huston, chief scientist at the Asia Pacific Network Information Center. Huston's assessment appeared late last week in a blog post detailing his analysis of Starlink performance. The post …

  1. Mikel

    We love it.

    Jitter is a thing. It's gotten much better, as has latency. Early on, drops were common, but there haven't been any for over a year. I am not seeing the wide variation in bandwidth reported here. It's been sufficient for our use from the first day.

    I have had Starlink since early on and find it adequate for my home use. I'm not trying to do remote robotic surgery with the thing.

    1. The Man Who Fell To Earth Silver badge

      Re: We love it.

      Same here. I've had Starlink for almost 3 years, and compared to my only previous other option of 1.5 Mbps DSL (at 2/3 the price of Starlink), it's a Godsend.

      1. Doctor Syntax Silver badge

        Re: We love it.

        AIUI what he's saying is that these are the problems the developers of your protocol stack have to overcome/hide from you for you to get this service.

        1. John Robson Silver badge

          Re: We love it.

          "Problems"

          a latency of 80ms isn't a significant problem for all but the most demanding of applications.

          1. Anonymous Coward

            Re: We love it.

            Hear hear. My T-Mobile home internet gets a little flakey after I've run heavy traffic across it. 100 ms latency really doesn't cause much trouble, but 300+ with packet loss certainly does!

          2. Anonymous Coward

            Re: We love it.

            I suspect you'll only really notice it with realtime applications like a video conference. Streaming ought to work just fine as that tends to use fairly large caches anyway.

          3. Martin an gof Silver badge

            Re: We love it.

            a latency of 80ms isn't a significant problem

            I'm absolutely not a networking engineer, but as I understand it, it isn't the absolute latency that's the issue, it's the variability - the sort of jitter reported here is often indicative of a poor path, and steps taken to deal with that can have consequences for protocols at all levels of the stack.

            M.

            1. Jellied Eel Silver badge

              Re: We love it.

              I'm absolutely not a networking engineer, but as I understand it, it isn't the absolute latency that's the issue, it's the variability - the sort of jitter reported here is often indicative of a poor path, and steps taken to deal with that can have consequences for protocols at all levels of the stack.

              Yep. It's mostly an application thing, rather than a network thing. So as a neteng, it's being able to characterise the connection, then work with the client to understand if this is going to be an issue. Or working with a client to troubleshoot and explain why there's an issue. Sometimes that's just a fundamental issue relating to what they're trying to do. So for absolute latency, explaining why they couldn't get 100Mbps throughput on a TCP file transfer London-Tokyo, which is basically the long, fat pipe problem explained here-

              https://en.wikipedia.org/wiki/Bandwidth-delay_product

              TCP is ACK-clocked - the sender has to wait for ACKs before the next window of packets gets sent. From memory* that works out to around 70Mbps with typical latencies on that route. So solutions can be switching from TCP to UDP, ie an FTP will use TCP, TFTP uses UDP and works on a fire & forget principle, so it can saturate the link and give the expected 97Mbps or so goodput (because there are overheads for everything). This is common on satellite connections because they're typically higher latency than terrestrial connections. Satellite terminals usually include TCP helpers that spoof the ACK so packets keep flowing.
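              The arithmetic behind that is quick to sketch (illustrative figures only - the ~240 ms London-Tokyo round trip and the 64 kB window are assumptions, not measurements):

```python
# Back-of-envelope bandwidth-delay product / single-stream TCP throughput.
# Assumed figures: ~240 ms London-Tokyo RTT, 64 kB legacy window.

def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """A fixed-window TCP stream can't do better than window_size / RTT."""
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1e6

# Classic 64 kB window over that RTT: ~2.2 Mbps, however fat the pipe.
print(round(max_tcp_throughput_mbps(64 * 1024, 240), 1))  # 2.2

# Window (the BDP) needed to actually fill a 100 Mbps pipe at 240 ms:
bdp_bytes = int(100e6 * 0.240 / 8)
print(bdp_bytes)  # 3000000 bytes, i.e. ~3 MB of buffer
```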

              Another solution is using TCP Window Scaling-

              https://en.wikipedia.org/wiki/TCP_window_scale_option

              The throughput of a TCP communication is limited by two windows: the congestion window and the receive window. The congestion window tries not to exceed the capacity of the network (congestion control); the receive window tries not to exceed the capacity of the receiver to process data (flow control).

              which lets you tune the window/buffer size based on the latency/BDP. This gets FUN! for a number of reasons, like-

              Windows Vista and Windows 7 have a fixed default TCP receive buffer of 64 kB, scaling up to 16 MB through "autotuning", limiting manual TCP tuning over long fat networks.

              64 kB = 512 kb, which... was kinda OK when connections were T1s, but less OK on faster links. I think later versions of Windows have a higher default, but a lot of clients either didn't know, or hadn't configured this on their connections. But this is also where PDV (Packet Delay Variation) or jitter comes into play. This is mostly a buffering thing.

              So you tune your TCP window/buffer based on a latency of X, but when it varies, congestion control may kick in, or the buffers just overflow and packets get dropped. For TCP, that means they should be retransmitted, for UDP.. everything is down to the application to notice and react. A lot don't handle this well. It's also where satellite terminals can help, so they can spoof the scaling and improve performance independent of the OS. It's also an issue for streaming apps because they generally use latency/goodput to allocate their buffers, so if those are too low, when latency increases, streams may stop.
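              On Linux the window-scaling/buffer tuning described above is exposed through sysctls; a minimal sketch (the values are illustrative examples - size them to your own bandwidth-delay product):

```shell
# Illustrative tuning for a long fat (or jittery) pipe - example values,
# not recommendations; match the max buffers to your own BDP.
sysctl -w net.ipv4.tcp_window_scaling=1   # allow windows beyond 64 kB
sysctl -w net.core.rmem_max=16777216      # 16 MB max receive buffer
sysctl -w net.core.wmem_max=16777216      # 16 MB max send buffer
sysctl -w net.ipv4.tcp_rmem="4096 131072 16777216"  # min/default/max
sysctl -w net.ipv4.tcp_wmem="4096 131072 16777216"
```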

              And I guess it'll also be an issue for gaming, especially things like multi-player games where players may be from all over the world. They generally use buffers so everyone's kinda moving in real-time, but variations and high latency can leave players desynced - out of time & space with the rest of the players. They're dead but don't realise it until the game catches up.

              And finally, you get into the Dark Arts of tuning buffers on router interfaces. Not so much of an issue for most people because the satellite terminal is the router and it's already tuned, but can be necessary on routers behind satellite connections.

              *<cough>Sprint<cough> a client who should have known better before they escalated and dragged me out of bed to explain LFPs. Not sure they appreciated me asking them to hang on a mo while I yanked on the cables to drag Tokyo closer. One of those user education moments, even though the users were my peers..

              1. dinsdale54

                Re: We love it.

                The issue that forced me to learn about this (referred to in another post) was a customer backing up data from Tokyo to Singapore IIRC. Plenty of bandwidth but significant packet loss.

                My company ended up re-writing the data transfer protocol to use multiple TCP connections and making the send windows proportionally smaller. Packet loss then caused much smaller retransmits.
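                The arithmetic behind that fix is simple; a sketch with hypothetical numbers (a ~3 MB total window split evenly across streams):

```python
# Why splitting one fat stream into N thinner ones shrinks retransmits:
# a loss event stalls only one stream, which re-sends at most its own window.

def worst_case_retransmit_kb(total_window_kb: float, n_streams: int) -> float:
    """Per-stream window = total window / N (hypothetical even split)."""
    return total_window_kb / n_streams

print(worst_case_retransmit_kb(3072, 1))   # 3072.0 - one big stream re-sends ~3 MB
print(worst_case_retransmit_kb(3072, 16))  # 192.0 - 16 streams, 192 kB each
```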

                1. Jellied Eel Silver badge

                  Re: We love it.

                  My company ended up re-writing the data transfer protocol to use multiple tcp connectrions and making the send windows proportionally smaller. Packet loss then caused much smaller retransmits.

                  Yup. It can be a common problem with EVPNs or pseudowire services, ie Ethernet over a packet-based transport. They're cheap because they run over a packet-based network that can be oversubscribed, so that results in services behaving in ways that shouldn't really happen for true Ethernet. Or that do happen, eg when using TDM to transmit Ethernet frames, but then that's still a tuning issue with stuff like the interframe gap. The main risk with using multiple sessions is the potential for out-of-order packets and managing that. But you can avoid the 'train wreck' when sessions attempt to re-transmit or restart, increasing congestion and packet loss until finally stuff like BGP keepalives drop, and then every packet goes to the great bit bucket in the sky.

                  But latency/jitter is also an issue using stuff like LAG (Link Aggregation Group) or MLPPP (Multilink PPP), where you have to be careful and tune buffers if links that are part of the group have different latency or PDV. This is especially true if it's an attempt to create diversity, when diverse links are almost certain to have different latencies. If that's on metro connections, or short WAN links, the variability is generally manageable, but can still end up being a PITA.

          4. Michael Wojcik Silver badge

            Re: We love it.

            a latency of 80ms isn't a significant problem

            RFA. TCP implementations which are sensitive to packet loss may perform particularly poorly with a pipe that offers low latency and high throughput most of the time, but relatively frequently drops the occasional packet. Other implementations are less sensitive.

            This is not about latency, and it's not about applications. It's about the stack.

          5. Alan Brown Silver badge

            Re: We love it.

            Dropped packets are another matter entirely. Reno really doesn't like that.

            ECN has been around for a very long time and it surprises me how often it's not enabled or even ends up being filtered at some intermediate point
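            For reference, checking and enabling ECN on Linux is a one-liner (illustrative; 1 = request ECN on outgoing connections, while the default of 2 only accepts it when the peer asks):

```shell
# Inspect and enable ECN negotiation (Linux).
sysctl net.ipv4.tcp_ecn        # 0=off, 1=request+accept, 2=accept-only (default)
sysctl -w net.ipv4.tcp_ecn=1   # actively request ECN on outgoing connections
```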

      2. tony72

        Re: We love it.

        Let's also not forget that the Starlink constellation is far from complete. There are still a lot of V1s up there without the laser links, and of course there are only the V2 minis up there, no full-size V2s until Starship is operational. So I imagine performance is only going to get better as the constellation gets built out.

        1. jockmcthingiemibobb

          Re: We love it.

          You're assuming Musk isn't telling his usual pork pies, that the laser links work, and that they'll launch a lot more satellites. Bear in mind the early ones have already started falling out of the sky.

        2. MachDiamond Silver badge

          Re: We love it.

          "There are still a lot of V1s up there without the laser links"

          The laser links are so the sats can squirt data sideways when they can't see a ground station. It won't fix this issue and may cause a slowdown in places where the satellites have to use that function.

          Whether the constellation will be completed and kept filled is down to getting it done before the money runs out and getting enough customers paying full rate to be profitable, if possible. There's been prices quoted for places such as rural Africa where customers pay 1/3 that of somebody in Australia or the US. It can be argued that having some revenue is better than nothing as the sats go overhead no matter what, but it does mean installation of ground stations, paying to connect to the backbone and having people to work on the regulatory requirements. Is there profit in that or just PR?

  2. Khaptain Silver badge

    What's his definition of hostile

    the chief scientist used Speedtest every four hours from August 2023 through March 2024 and found the service "appears to have a median value of around 120Mbit/sec of download capacity, with individual measurements reading as high as 370Mbit/sec and as low as 10Mbit/sec, and 15Mbit/sec of upload capacity, with variance of between 5Mbit/sec to 50Mbit/sec."

    I have an ADSL link at home and I can only dream of those speeds.

    Hostile is when you are browsing on a maximum of 12Mb download and 0.8Mb upload - you really don't care about the latency as the line is so slow that a couple of ms just doesn't change anything.

    TCP was built for redundancy - I can't see what the actual problem is when you compare it to the benefit for those that live in out-of-reach areas. I feel like this guy is nitpicking or is looking for a research grant.

    1. bazza Silver badge

      Re: What's his definition of hostile

      This is the kind of analysis that gets done on all comms service providers. It's done because one can learn some significant commercial insights into the performance of the network, which can in turn indicate the likely profitability of the company (all "losses" are lost revenue), and therefore a view on what the share price should be. Also, one hopefully gets the attention of the network provider, who wants to hire the brain that came up with the results and recommendations to improve that network performance, improving the potential revenue of the company.

      Thing is, it's not going to work with StarLink. The approach they're taking is to launch so much capacity that any perceivable inefficiency is insignificant. Even the very idea of a LEO mega-constellation is already hopelessly inefficient; at any point in time, 2/3rds of it is over the ocean and not earning revenue. If the protocol is a bit inefficient, who cares - just launch another 1,000 (about 2 months of launching at present rates) and move on. StarLink is all about cracking a moderate-sized nut (the satcomms market) with an enormous sledgehammer, having made the sledgehammer reasonably affordable instead of a preposterous pipe dream. The bet they're taking is that what they have launched finds a sufficient market to earn a profit.

      Where protocols and their design / performance matter is that if one has tuned / bodged the protocol around one use case (e.g. domestic Internet), but the market turns out to be another (users on the move), that's potentially a very different set of protocols, and potentially a very different spacecraft and constellation design. Where analyses like this can therefore still matter is in understanding how flexible StarLink can be in serving different markets. Thing is, even then, I'm not sure. It's quite possible that the final capacity of StarLink is so vast that even if the constellation's (and its protocols') performance in serving different use cases is far from ideal, it still might not matter.

      1. A Non e-mouse Silver badge

        Re: What's his definition of hostile

        This is the kind of analysis that gets done on all comms service providers by some academic trying to grab headlines in preparation for their next grant application.

        FTFY.

        1. R Soul Silver badge

          Re: What's his definition of hostile

          Except in this case, it isn't. Geoff Huston is not an academic. He doesn't (need to) write grant applications to get funding for his excellent research work. Apart from those inconvenient truths, your comment is right in every way.

          1. Bebu Silver badge

            Re: What's his definition of hostile

            See Geoff Huston.

            He has been in networking a very long time possibly even before the Internet (in any form?)

            This particular type of link has unusual challenges, which he has attempted to quantify, and he suggests particular protocol (TCP) variants to address those challenges. Sounds to me like an engineer doing his or her job.

            The frequent handover between satellites is likely a lot more frequent than between 4G cells when speeding through a large city (except on a low-flying jet aircraft ;) but, combined with the high latency of all satellite links, is probably close to a perfect storm for TCP, or perhaps any virtual circuit technology. Although I wonder whether multipath TCP might help here, possibly maintaining several links through multiple satellites as they come and go - a bit like juggling balls in the air.

            1. Anonymous Coward

              Re: What's his definition of hostile

              "The frequent handover between satellites is likely a lot more frequent than between 4G cells when speeding through a large city (except on a low-flying jet aircraft ;) but, combined with the high latency of all satellite links"

              High latency? 30-80ms isn't high.

              The handover should be handled at the physical layer, below TCP. In this instance the physical medium is somewhat different from the more common copper/fibre - it's closer to a mobile system, but has much more hopping, as you suggest.

              That's probably not enough to make a noticeable difference to anyone other than "esports" twitch gamers, or robotic surgery...

              I regularly have video calls and desktop sharing sessions over Starlink and it's far better than many of my customers' fixed-line connections (though their insistence on using sufficiently nested remote desktop sessions that Inception looks like an early-reader book doesn't help).

              1. Anonymous Coward

                Re: What's his definition of hostile

                Thanks, that was the only application I could think of that could have a problem with latency, so if that isn't the case then it's impressive.

              2. Anonymous Coward

                Re: What's his definition of hostile

                Well, video games, video, and audio all work fine with UDP, and UDP will work just fine with Starlink. It's when you start using TCP that there are issues. TCP Reno will act much differently than TCP Cubic, especially over different mediums such as Starlink vs fibre. You're almost right when you say that it should be handled at the physical layer - it shouldn't be handled in either the physical or the network/transport layers. It should be handled, like it was in the past, at the datalink layer.
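                The Reno vs CUBIC difference can be sketched from the textbook window-growth formulas (simplified; units are arbitrary, and the constants C=0.4, beta=0.7 are the RFC 8312 defaults):

```python
# Illustrative shapes of Reno vs CUBIC congestion-window growth after a loss.
def reno_cwnd(t: float, w_loss: float, mss_per_rtt: float = 1.0) -> float:
    """Reno: halve the window on loss, then grow linearly (AIMD)."""
    return w_loss / 2 + mss_per_rtt * t

def cubic_cwnd(t: float, w_max: float, c: float = 0.4, beta: float = 0.7) -> float:
    """CUBIC: concave-then-convex cubic curve back towards the old maximum."""
    k = (w_max * (1 - beta) / c) ** (1 / 3)  # time to return to w_max
    return c * (t - k) ** 3 + w_max

# Right after a loss at window 100, CUBIC restarts near beta*w_max (70),
# while Reno drops to half (50) and probes back linearly.
print(round(cubic_cwnd(0, 100), 1))  # 70.0
print(round(reno_cwnd(0, 100), 1))   # 50.0
```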

            2. Roland6 Silver badge

              Re: What's his definition of hostile

              > “The frequent handover between satelites is likely a lot more frequent than between 4G cell when speeding through a large city”

              From conversations with those involved in GSM, they were reasonably happy that a car travelling at more than 60 mph could swap cells faster than the network could keep up, given 60 mph was in many places the maximum speed limit.

              Thus looking at this (Starlink), they are trying to directly use the Internet IP and associated routing protocols to manage the routing.

              Hence the questions are whether IP et al are correctly handling the very dynamic routing required (and whether this can be improved with better route prediction) and then what are the effects this is having on TCP/UDP packets in transit. Suspect there is much Starlink can learn from (and contribute to) the 3GPP.

              1. JT_3K

                Re: What's his definition of hostile

                Legality, ethics or morals aside, perhaps of interest here are the people doing "Cannonball" events across the US. It's been repeatedly noted that cell switching can't keep up with them and their cell service is non-viable. Calls aren't great, but it's my understanding that data comes in fits and starts and can't really be relied upon. It's led to some more inventive attempts (making use of spotter planes to identify upcoming police presence) communicating by VHF instead. Noting that this seems to be mentioned for people running at ~120mph+ and being supposedly useless at 150mph.

                (I always found the real-world extreme-case scenarios interesting)

                1. bazza Silver badge

                  Re: What's his definition of hostile

                  One of the major screw ups in recent years is that the hard-won advances made in the early days of mobile phone networks (GSM was particularly successful in terms of network design, deployment and dynamic management) got ****ed up by the likes of Apple, etc. Mobile phones had to comply with network design rules, and this included achieving the right kind of output power and the right kind of receive sensitivity so that, when you were 35km from a GSM cell tower, your phone would be able to make a connection to it.

                  That was fine in the days of phones, when all they were was a phone. There was room for a proper antenna, it could have the right gain, and all was well with the world.

                  Then, along comes Apple and invents the "Smartphone". This jammed all sorts of non-phone things in, like a mega battery, screen, big CPU, lots of Flash, GPS, Bluetooth, WiFi, and compounded it with an industrial design that ensured there was hardly any room for an actual cell network antenna. Apple's approach? Nah, ship it anyway. And where Apple led, others simply copied.

                  The rude awakening then was that, yes, whilst smartphones were mostly standards compliant, many weren't actually able to achieve the ranges that cell network standards engineers had envisaged. Things were especially bad in the early days of 3G (UMTS or CDMA2000), because most of the companies that deployed it early on didn't really put in that many cell stations, and the cell breathing effect inherent in DSSS signal modulations made the range problem worse.

                  Move forward to the modern day, and things are not much better. To provide an "adequate" service to customers, cell network operators have effectively had to anticipate underperformance by all mobile phones and space 4G, 5G cells a lot closer, with the inevitable consequence of one travelling between them more rapidly.

                  One of the attractions of GSM was that the network management flexibility could allow for phones that were obviously moving quickly to be parked on cells with big towers covering large ranges, whilst phones that weren't could be parked on micro cells, this being enabled by the TDM / FDM nature of GSM (an operator could mix / match big cells and microcells in the same geographic area, no problem). 3G threw all that flexibility into the bin, and I'm not sure that 4G got it all back.

                  The only country I've been to where they got 3G right was Japan. I think this was more or less an accident; their previous domestic 2G standard called for very small cells (so, not like GSM. And, interestingly, it couldn't cope with bullet trains!), and when they replaced it with 3G they simply swapped out the cell station cabinets. That meant they had an incredibly dense 3G network, and whilst that meant a lot of handovers for moving mobiles, the compensation was that a phone never had to transmit much power. I recall in the 2000's getting 25Mbps download speed on a bullet train at 200mph in a tunnel on a BlackBerry...

              2. bazza Silver badge

                Re: What's his definition of hostile

                GSM is actually built around the speed achieved by the French TGV train network (300kph), plus some room for speed improvements (in the trains that is, not GSM). This makes a lot of sense; GSM is a very heavily French influenced standard. The rate at which the protocol can send out timing advance / retard instructions to a specific mobile (and the size of those timing steps) is factored around a train not exceeding (I think, from very dusty memory) 350kph. Even the maximum cell radius - 35km - was chosen so that handovers of handsets between cells on a train travelling at 300kph would occur at a reasonable rate.

                Presently, the StarLink network architecture employs the satellites to form a connection between a user terminal and a ground station also in view of the same satellite. There's a bit of beam forming going on, which helps with frequency re-use. So, really, the conversation is between two units on the ground. As such that's effectively a static link, albeit one that gets periodically interrupted as the user terminal redirects its beam at the next satellite passing over. So I doubt that there's any "routing" as such going on.

                1. Anonymous Coward

                  Re: What's his definition of hostile

                  GSM is a very heavily French influenced standard

                  Ah, that explains why it goes on strike so often.

                  It all makes sense now :)

              3. Anonymous Coward

                Re: What's his definition of hostile

                "Thus looking at this (Starlink), they are trying to directly use the Internet IP and associated routing protocols to manage the routing."

                I don't think the satellite handoffs/routing are handled by TCP/IP, IIRC that's done with a proprietary L2 protocol. The article references issues that TCP has with the jitter resulting from the handoffs.

                I could be wrong about all the above...

            3. MachDiamond Silver badge

              Re: What's his definition of hostile

              "Although I wonder whether multipath TCP might help here possibly maintaining several links through mutiple satellites as they come and go"

              The dish would have to get bigger since the beamforming approach would need to be split up into more virtual dishes to communicate with more than one satellite at a time. If the dish is moving, all of the sats have to be within a narrow angle of view.

        2. Martin M

          Re: What's his definition of hostile

          Have you had a really, really bad experience with an academic at some point?

      2. Doctor Syntax Silver badge

        Re: What's his definition of hostile

        "at any point in time, 2/3rds is over the ocean and not earning revenue."

        You don't think shipping and aircraft which have no access to terrestrial networks are potential sources of revenue?

        1. Brewster's Angle Grinder Silver badge

          Re: What's his definition of hostile

          A trickle of revenue. But they're still going to make the sparsest rural region look densely populated. Unless those darn orca have started using satphones. (Vital for coordinating flashmobs on a yacht!)

          1. John Robson Silver badge

            Re: What's his definition of hostile

            A trickle, except that you can charge quite a bit for a single mobile service that works across all the major shipping routes of the world.

            Besides which they were/are also intending to use those birds for relaying data across the oceans faster than it can be routed by fibre under them...

            1. Brewster's Angle Grinder Silver badge

              Sharks vs aurora

              A rule of thumb I have is:

              quite_a_bit * few < not_quite_so_much * a_few_more

              It's not always true. But when you crunch the numbers, more often than not, a few (sexy) high-value events produce less income than lots of (unsexy) low-value events. The former is the exception, not the rule. Also, relying on high-value clients makes your income very granular and unstable; normal churn within the noise threshold can suddenly see your income drop precipitously. (You saved that money when the reverse effect had it soar, didn't you?)

              And neither sailors, nor shipping firms, are noted for being awash with cash or sinking under the weight of their profits. Maybe there is money in routing. But I don't see ocean travellers (nor aircraft) netting you much of a financial catch.
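              That rule of thumb with made-up numbers (purely illustrative - none of these figures come from Starlink):

```python
# 'A few high-value accounts' vs 'many cheap subscriptions' - made-up figures.
high_value, few = 5_000, 1_000      # e.g. premium maritime terminals
low_value, many = 100, 1_000_000    # e.g. residential subscribers

print(high_value * few)                      # 5000000
print(low_value * many)                      # 100000000
print(high_value * few < low_value * many)   # True - the unsexy crowd wins
```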

              1. Pascal Monett Silver badge

                You might want to communicate that analysis to Broadcom.

                They seem to be preferring the sexy kind of events . . .

              2. John Robson Silver badge

                Re: Sharks vs aurora

                I expect that the maritime revenue is going to be smaller than the static revenue, but it can still be significant.

                Starlink sort of relies on sparse density anyway, partly to limit congestion, but mostly because high density tends to invite cheaper, and technically better, solutions (fibre to the premises).

                1. MachDiamond Silver badge

                  Re: Sharks vs aurora

                  "but mostly because high density tends to invite cheaper, and technically better, solutions (fibre to the premises)."

                  It's not even "invite" - it's already in place, and there can be multiple providers. That's one of the downsides for Starlink. They'll max out in denser areas, so there's a limit to how many subscribers they can sign up without performance quickly going downhill. Anywhere that gets built up will invite alternative providers. There is a lot of dark fiber in the US. When there was a boom in laying fiber, there was also a lot of work done to get more data through that fiber, which led to excess capacity. Starlink can be a good option for that cabin in the woods, but even a smallish town in the middle of the US can be well served with wireless internet, which eliminates the need to install coax, fiber or copper pairs.

              3. Anonymous Coward

                Re: Sharks vs aurora

                Well, Inmarsat made quite a good living on both those markets, for decades. “Sailors” maybe not... *cruise ships*, however, with 3000 passengers each paying $15 per day for their internet, the entire load going down a single terminal... There are plenty of $5M+/yr single-terminal accounts sailing around today. Also, you’ve forgotten some ocean-adjacent markets which are rather lucrative: oil & gas (offshore or in extremely remote areas); governmental (aka army & navy calling home); and of course <redacted high-visibility individuals>. Less lucrative, but still a major earner overall, is disaster/warzone/clandestine press reporting.

            2. MachDiamond Silver badge

              Re: What's his definition of hostile

              "A trickle, except that you can charge quite a bit for a single mobile service that works across all the major shipping routes of the world."

              There's already service providers that serve that market, so Starlink just becomes another. Whether it works will be at the mercy of there being coverage via V2 sats with sidelinks to get data to and from a ground station. The pricing is constrained by what the established providers might do if pressed and what services are already being provided. It's not just internet being provided to the crew so they can watch Netflix in their off hours, but tracking of the ship, engine performance, log data, weather forecasts, nav updates, credentials for fuels and fees and loads of other stuff. There's nothing holding a company back from making that work via Starlink, but how much would that cost, and would it then lock them into only being able to use Starlink? The comms company may also be providing a whole package that includes the back-end, so Starlink would also have to do that for all of the business, and I don't think that a shipping company is going to want to pay for a commercial-level link just so the crew can stream movies. It's cheaper to provide a few gaming consoles and have a library of DVDs.

        2. StudeJeff

          Re: What's his definition of hostile -- ships at sea

          Earlier this year I had my first experience using Starlink. I was on a cruise with about 3,000+ other people and paid an extra $25 a day for Internet access.

          Wonder how much Royal Caribbean pays Starlink every year to be able to provide that service?

          How much do the other cruise lines pay? And all the other ships and aircraft out there?

          It might not be a huge amount of money, but it's a lot more than a drop in the bucket!

          Oh, and it worked just fine for me.

          1. X5-332960073452

            Re: What's his definition of hostile -- ships at sea

            Can't make a joke about that $25 being Zimbabwe dollars as they've just changed to the Zig.

            What a rip off, (I guess $25 US dollars) a day, 14 day cruise = $350 (captive audience !!)

            1. Geoff Campbell Silver badge
              Pirate

              Re: What's his definition of hostile -- ships at sea

              I have in the past been in a situation where I can only go on holiday if I can connect to the 'net a couple of times a day. Not a huge time commitment, perhaps ten minutes either end of the day, but I just needed to stay in touch with a few people. No connectivity, no holiday - just the normal joys of running a small company.

              In that context, $25 a day is fine.

              GJC

              1. MachDiamond Silver badge

                Re: What's his definition of hostile -- ships at sea

                "No connectivity, no holiday - just the normal joys of running a small company.

                In that context, $25 a day is fine."

                To check email twice a day, I'd be happy with a pay-as-you-go lash up and a visit to an onboard business center. Since becoming self-employed, I just turn on the auto-responder to let people know I'm offline for a period of time. I'm on holiday, damnit!

      3. rg287 Silver badge

        Re: What's his definition of hostile

        at any point in time, 2/3rds is over the ocean and not earning revenue

        Aside from shipping, aviation and remote islands.

        Shipping & Aviation are low density in terms of "terminals per sq. km". However, they will also be paying rather more than £75/mo (their "Mobile Priority" 5TB plan is £4,846/mo, and cruise liners and airliners would be on a custom plan). So even if their density is 60x lower than the sparsest midwest or Australian outback scenarios, it's comparable income. More than 2,000 aircraft cross the Atlantic per day. At £5k/mo/terminal, that's £10m/month. If they got Starlink onto 20% of the global fleet (so 5k out of ~25k airliners, IIRC) then that's £25m/mo, or £300m/yr. Which is more than a trickle. And that's just from the commercial aviation side (not including private Gulfstreams).

        Business terminals are also most likely being sold at cost or for profit - unlike the residential terminals which (at least initially) were being sold below cost.

        There's also all those pesky islands who lack a fibre connection and have been stuck with MEO or GEO-based services. Plenty of business to go around throughout the South Pacific (Polynesia is well connected, but most surrounding islands are single-point-of-failure at best), as well as the likes of the Falklands, St Helena & Tristan da Cunha in the South Atlantic, Antarctica and lots of the smaller Caribbean islands.

        There are also of course coastal islands which might sort-of manage on cellular or P2P wireless, but might also want StarLink as redundancy, or because it's faster than the ADSL they can connect to on the mainland.

        1. fibrefool

          Re: What's his definition of hostile

          "There's also all those pesky islands who lack a fibre connection and have been stuck with MEO or GEO-based services. Plenty of business to go around throughout the South Pacific (Polynesia is well connected, but most surrounding islands are single-point-of-failure at best), as well as the likes of the Falklands, St Helena & Tristan da Cunha in the South Atlantic, Antarctica and lots of the smaller Caribbean islands."

          It's illegal to use Starlink in St. Helena:

          https://www.sainthelena.gov.sh/2023/news/reminder-on-the-use-of-unlicensed-telecommunications-equipment/

          Fortunately though St. Helena has a submarine cable connection as of October last year:

          https://www.kentik.com/analysis/saint-helena-activates-long-awaited-subsea-connection/

          Which would make it an ideal location for a Starlink ground station - which may follow the OneWeb ground station that is already planned:

          https://www.datacenterdynamics.com/en/news/oneweb-to-build-satellite-ground-station-on-st-helena-island-in-partnership-with-sure/

          1. rg287 Silver badge

            Re: What's his definition of hostile

            https://www.sainthelena.gov.sh/2023/news/reminder-on-the-use-of-unlicensed-telecommunications-equipment/

            Well, it's illegal to use unlicensed kit. So StarLink is illegal unless they license it. Albeit the licensing seems to be more on "exclusivity because Sure brought fibre here" rather than any technical consideration. Having a St Helenese friend, I was aware of their shiny new fibre link (which has been a long time coming). Naturally, that remains a single point of failure. Seems like OneWeb might already be in there. One wonders if their ground station could become a "client" community station if the fibre was cut, allowing STH traffic to be routed via OneWeb to other ground stations (presumably in Africa). Seems sensible for extremely remote ground stations to be able to flip into a client mode if they're isolated.

      4. Mishak Silver badge

        2/3rds is over the ocean and not earning revenue

        Unless you sell that capacity for maritime and aviation use.

      5. Geoff Campbell Silver badge

        Re: What's his definition of hostile

        Starlink has a lot of high-revenue customers on boats and, I think, planes, and probably also small islands, so your calculation of satellites being useless over the ocean is not entirely valid.

        GJC

        1. Kapsalon

          Re: What's his definition of hostile

          ..... so your calculation of satellites being useless over the ocean is entirely not valid.

          FTFY

          1. Geoff Campbell Silver badge
            Pirate

            Re: What's his definition of hostile

            Well, I wouldn't go quite that far.

            Mind you, there's also the routing model of traffic going up to your nearest satellite, around the planet by frickin' lasers, then back down to Earth from the satellite nearest the destination, reducing latency and ground-based bandwidth requirements, so maybe I could be persuaded by your revision after all.

            GJC

      6. Anonymous Coward
        Anonymous Coward

        Re: What's his definition of hostile

        100% of them are in the sky and allow service to be sold to airlines (and private pilots???). The 2/3 that you claim are over the oceans CAN earn revenue from any sailing vessel that installs a Starlink antenna.

    2. doublelayer Silver badge

      Re: What's his definition of hostile

      It seems clear from the article: hostile means that the environment involves packet drops and jitter that are atypical of other connections, and since TCP was mostly designed around terrestrial or geosynchronous links, it doesn't handle them well yet. Certain versions of TCP will not make good use of that bandwidth because they'll constantly be in the slow additive-increase (AI) phase, hitting a multiplicative decrease (MD) every time your satellite changes, so your speed for each socket won't be as high as the link theoretically allows. You have plenty of bandwidth, but your software isn't using it. The solution is to improve the protocol so it can handle that hostile environment without being degraded.
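
      A back-of-the-envelope sketch of that AI/MD sawtooth (all figures below, an 80 ms RTT and one loss per 15 s satellite handover, are illustrative assumptions, not measured Starlink values):

```python
# Toy Reno-style AIMD model: a loss at every satellite handover halves the
# congestion window (multiplicative decrease); between losses it grows by
# one segment per RTT (additive increase). The window therefore sawtooths
# between W/2 and W, with mean 0.75 * W. Assumed, illustrative numbers.
RTT_S = 0.080        # assumed round-trip time: 80 ms
HANDOVER_S = 15.0    # assumed loss interval: one satellite pass
MSS = 1460           # typical TCP segment payload, bytes

rtts_per_cycle = HANDOVER_S / RTT_S        # ~187 RTTs between losses
w_max = 2 * rtts_per_cycle                 # peak window, in segments
avg_cwnd = 0.75 * w_max                    # mean of the W/2..W sawtooth
throughput_mbps = avg_cwnd * MSS * 8 / RTT_S / 1e6

print(f"avg cwnd ~{avg_cwnd:.0f} segments, ~{throughput_mbps:.0f} Mbit/s")
```

      Under those assumptions a single Reno-style socket tops out around 41 Mbit/s however much capacity the link actually offers, which is exactly the per-socket degradation being described.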

      1. abend0c4 Silver badge

        Re: What's his definition of hostile

        Back in the old days, when analogue circuits were prone to noise and drop-outs, we had datalink protocols that could correct more rapidly for the characteristics of individual links, and the upper-level protocols assumed a certain level of good behaviour. The relative reliability of digital circuits has generally meant that datalink protocols have largely disappeared. However, as a general proposition, I don't think you can punt the whole problem up to the transport layer: not only does it recover too slowly in general, but the notion that you need a different congestion algorithm for different underlying datalinks rather breaks the layering model, as it implies the transport layer can somehow know about the layers below it.

        Clearly datalink protocols are a much more straightforward proposition for point-to-point links than for a many-to-many network such as Starlink, but perhaps we've got into the way of expecting too little of the underlying network infrastructure.

        1. Jellied Eel Silver badge

          Re: What's his definition of hostile

          Very true. It's an education thing for users to understand their networks and how their applications will behave. I saw this a lot when we launched the first Ethernet services. Customers would complain they weren't getting the expected throughput on file transfers, so I had to explain many times how TCP works and the LFP (Long, Fat Pipe) issue. If apps could be switched to use UDP instead, that problem goes away, but then there's the risk of packet loss. And then of course there's the issue of MTU..
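
          The LFP issue boils down to the bandwidth-delay product: the sender's window must cover it, or the pipe sits idle. A minimal sketch with made-up example figures:

```python
# Bandwidth-delay product: the bytes that must be "in flight" to keep a
# pipe full. Example figures are illustrative, not from the article.
def bdp_bytes(bandwidth_mbps: float, rtt_ms: float) -> float:
    return bandwidth_mbps * 1e6 / 8 * (rtt_ms / 1e3)

# A 200 Mbit/s link at 80 ms RTT needs a ~2 MB window...
needed = bdp_bytes(200, 80)
# ...while the classic 64 KiB maximum (no window scaling) caps throughput:
capped_mbps = 65535 * 8 / (80 / 1e3) / 1e6   # ~6.6 Mbit/s

print(f"window needed: {needed:,.0f} B; 64 KiB cap gives {capped_mbps:.1f} Mbit/s")
```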

          But there are solutions that have been around for years, especially considering the Internet was essentially designed around unreliable data links. So 'obsolete' protocols like good ol' RIP and RIPv2 work much better over less reliable connections than OSPF. Sure, convergence is slower, but it's better than having routes constantly added/withdrawn by link-state protocols. There's also another protocol, whose name/RFC escapes me, that was designed for exactly this kind of link and that I've used for high-jitter connections and satellite services. It's supported in IOS and JunOS, but probably isn't in Windows.

          1. Anonymous Coward
            Anonymous Coward

            Re: What's his definition of hostile

            Are you referring to RFC 969 as the protocol you were trying to remember?

            1. Jellied Eel Silver badge

              Re: What's his definition of hostile

              Are you referring to RFC 969 as the protocol you were trying to remember?

              Nope, not that one. It was developed for 'unreliable' connections, and I think it was one of the predecessors to Reno & Cubic. One of those things I had to use occasionally when I had interesting clients. Like you want a connection WHERE?!?

      2. Mage Silver badge

        Re:geosynchronous connections

        Most of those spoof TCP, and usually only a VPN "built in" to the modem and Earth station works. It's hostile in a different way, due to the RTT being 4x the one-way time from Earth to a satellite ~36,000 km above the Equator (and even further nearer the poles).

        The environmental impact of Starlink is much higher.

  3. Michael Hoffmann Silver badge

    If that is "hostile", then consider me a surrender monkey!

    Starlink. Pry. Cold. Dead. Fingers.

    Even in the most recent NBN fibre plan, we are not included, less than 5 minutes from the nearest town along a major thoroughfare. And the backup fixed wireless hasn't budged upward in speed in years, despite all the waffle about upgrades and improvements (1.5 km LOS to tower).

    I shall submit to my hostile 300mbit/sec!

    That said, I'm not opposed if they can do some tweaking for improvements!

    1. doublelayer Silver badge

      Hostile isn't referring to bandwidth, but to the challenges that are unusual to Starlink-style services. It isn't a judgement on the service, more an observation of how TCP isn't working well enough under those challenges and could be improved.

      1. Kapsalon

        But... what were the problems experienced, this is a highly theoretical discussion so far.

    2. xyz Silver badge

      Me too... >300mbps, about 30ms latency, in a forest, up a mountain, no mains leccy and no mobile signal. Bit thirsty on power use but apart from that I can't fault it.

      Still wouldn't buy a Tesla though.

      1. Shalghar Bronze badge

        "Still wouldn't buy a Tesla though."

        Exactly this. Not due to Musk or the brand itself but due to the far too many product and design flaws as well as the severe quality issues in this 18650 laptop battery powered remote controlled toy car.

        And no, the "super cell" only worsens the issues: with a cell structure of this mechanical design used at high currents, the thermal problems inside the thicker "super" cell only get worse. Compare the liquid-cooled Tesla batteries with those from BYD, or the solid-state concepts currently popping up more and more. Some noname "smart"phones already have solid-state batteries (BV 8900 series, for example) and suddenly those "warning, may explode" stickers on the package are no longer necessary, not even for a device that's shipped with a fully charged battery.

        Every product advertises and justifies itself. No use buying because of brand name or CEO if the product you need is of poorer quality than any noname thing, or if another product fits your use case much better than $brandname stuff. Each tech and product line has its use case, and if Starlink is better than the alternatives on the spot, it's absolutely justified to choose it. Much more so if no alternative exists at all.

        Back in the "PDA" days, I would happily enjoy the Apple Newton, the only PDA which adapted to you instead of you having to succumb to the writing rules of any other device. This, however, would not mean I would buy a Macintosh, since my triple hardware (Acorn RiscPC, Amiga 1200 with 68040 card upgrade, Linux PC) was far more convenient to adapt to what I wanted, and had nothing of the Apple-esque "we know better than you and you will never have full control over 'your' device" misdesign philosophy. (And, come on, ONE mouse button, and the second key somewhere on the keyboard?)

  4. Anonymous Coward
    Anonymous Coward

    Less than 200m from a full fibre cabinet

    I can only dream of the top end starlink bandwidth, and my copper cable dsl from BT still gets to hit lower than the bottom end starlink numbers on occasion.

    1. Shalghar Bronze badge

      Re: Less than 200m from a full fibre cabinet

      Ever since many houses in my area got a state-subsidised fibre link, my copper cable connection has somehow got faster in general, but at the same time slower at peak times. I suspect that the overall speed for the town was not increased, and the fibre customers leech off a bit more bandwidth than they had before in the "all households have copper" times.

      Seems to be a priority issue more than a real hardware problem, and since fibre is so politically wanted, I have my suspicions.

      Strangely i seem to remember a BofH episode where Simon installed a traffic generator to slow down the network, only to pull out the recently renewed cabling and swap it for "new" on overtime (plus resell value of the old stuff). Managers never remember....

  5. Richard 12 Silver badge

    So move on, QUICly

    It's been known that TCP has really poor throughput under these conditions for decades.

    Mobile phones have the same problem, albeit generally not every 15 seconds.

    I suspect hardly any browser users are affected, as they're probably all using QUIC most of the time and not even noticing.

    1. Oh Matron!

      Re: So move on, QUICly

      for those old enough, when GPRS first came out, it wouldn't work when you were travelling faster than 70Kmph

      1. Michael Kean

        Re: So move on, QUICly

        To be fair, at 70,000 miles per hour I would not expect to be in range of a tower long enough to get the SYN ACK back.

        1. Michael Wojcik Silver badge

          Re: So move on, QUICly

          When I'm driving 70Kmph the A/C doesn't work either. Also the last time I tried to use my phone at that speed, it was a hypersonic stream of plasma and I couldn't read the screen.

    2. fibrefool

      Re: So move on, QUICly

      "I suspect hardly any browser users are affected, as they're probably all using QUIC most of the time and not even noticing."

      Except that QUIC typically uses the same TCP congestion control algorithms.

      but yes, with QUIC it'd be pretty easy to make some of the experimental changes Geoff is suggesting, to deal with the step changes in latency and the packet loss during switchover.

      1. Michael Wojcik Silver badge

        Re: So move on, QUICly

        Well, QUIC uses some congestion-control algorithm. It has SACK built-in, IIRC, and don't QUIC implementations generally use BBR or something similar to it?

        (I've avoided QUIC in any of my own products thus far, but I do need to go back to the specs as I'm sure we'll have customers asking for it at some point.)

  6. MONK_DUCK

    "The CUBIC TCP network congestion avoidance algorithm could also do a job, in harness with Selective Acknowledgement (SACK – aka RFC 2883)."

    Interesting analysis, though CUBIC TCP came out in 2007 and is used by all the major desktop OSes, and probably servers too. Likewise, Selective ACK has been around for decades, so as long as you are using a recently patched OS and not building your own TCP stack, you're probably fine.

    1. Michael Wojcik Silver badge

      There may well be embedded systems using other stacks on the network. Not everything is a PC.

  7. anothercynic Silver badge

    To be honest...

    ... This is quite the first world problem to have (but it's good research nonetheless).

    I can see that this would possibly have an impact on those on the edge of existing coverage... but no Joe Average runs real-time stuff on Starlink (other than games maybe, or stock trading perhaps)?

    1. Orv Silver badge

      Re: To be honest...

      It will negatively affect VoIP and video calls, too, but we seem to have basically given up on keeping the latency down on those.

  8. dinsdale54

    I see lots of comments about how Starlink is fantastic - which I agree with - but it's not what Geoff is discussing.

    I've put in more time than I ever wanted to dealing with high-latency, jittery networks with packet loss. You can get into all sorts of issues with large send and receive windows causing enormous retransmits at significant jitter or small amounts of packet loss. ~1% packet loss on a high-latency network will pretty much remove your ability to do large data transfers, regardless of the theoretically available bandwidth.

    The congestion control algorithm makes a big difference here. There are lots of them - which is a sign that none work well for all use cases. The author is making sensible suggestions as to which ones are well suited to Starlink.

  9. TheHinac

    WFH Work From Home

    The first thing I did when I bought Starlink for my father, who lives out in the middle of nowhere, was use VMware Horizon VDI to connect to work. Then I took calls over it about 120 miles from the servers and phone system, while my father watched the new Top Gun in 4K on Amazon Prime Video. No issues. No dropped calls, no crazy latency, no stutters on the remote desktop session. Just no problem. The only problem my father has had since I bought him Starlink 2 years ago is lightning struck the dish and fried it and the router. But that wasn't so bad, because Starlink sent me a free (used) system to replace it. I paid nothing for it. I did have to buy another 150-foot cable, an accessory I'd purchased back when I set it up so the cable could be run in a PVC pipe underground.

    So yeah. It's been great. I use it every time I visit for work and games.

    1. Pascal Monett Silver badge

      Re: WFH Work From Home

      Nice to know that in one specific case it is working well. I'm happy for your father.

      The engineer has to concern himself with global usefulness, not just specific cases, and I find his remarks insightful, although I am definitely not in any way associated with the industry.

  10. TJ1

    Starlink are actively working on this

    Earlier in 2024 Starlink published a document [0] summarising work they're doing on latency - the aim is for an average of less than 20ms. I wrote a reply to a similar article on Hacker News recently that details how Starlink works for those making uninformed comments about it [1] - I'll repeat it here since it makes the unique challenges clear:

    For those not aware of how Starlink operates: The customer antenna is called the User Terminal (U.T.) a.k.a. "dish" although all production models are rectangular - only the pre-production beta model is round and dish-like.

    The U.T. contains a phased array antenna that can electronically 'steer' the bore-sight (aim) of the transmitted (and received) signal at the current satellite that is in view. In ideal circumstances the U.T. antenna has approximately 110 degrees field of view (~ 35 degrees above each horizon).

    The satellites pass from west to east and take approximately 15 seconds to pass through the field of view of the U.T. The satellite forms a beam aimed at a fixed location on the ground - this is called a 'cell'. All U.T.s within that area share the radio link, which has a fixed bandwidth, so contention is managed by the satellite.

    The path length to a satellite directly overhead would be around 550km (in most cases the satellite is slightly north, or slightly south, of the U.T. but for round numbers sake assume 550km).

    The path length to a satellite appearing 35 degrees above the horizon (the slant range) is ~ 2568km.

    Satellites relay the packets from the U.T. to the (nearest) Earth ground station, so the path length and therefore travel-time will vary enormously over just 15 seconds.

    The round trip for the minimum case is 4 x 550 km = 2,200 km, and for the maximal case 4 x 2,568 km = 10,272 km. These equate to a travel time of between ~1.8 ms and ~8.6 ms per leg, so that gives a hard physical minimum of 4 x 1.8 ms ≈ 7.3 ms, rising to 4 x 8.6 ms ≈ 34 ms at maximum slant range.
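
    As a quick check on the propagation arithmetic (vacuum speed of light; real round trips add processing, queuing and scheduling delay on top):

```python
# One-way propagation delay for the quoted path lengths, at the vacuum
# speed of light. Real latency is higher: queuing, scheduling, processing.
C_KM_PER_S = 299_792.458

def one_way_ms(path_km: float) -> float:
    return path_km / C_KM_PER_S * 1e3

overhead_leg = one_way_ms(550)    # ~1.8 ms, satellite directly overhead
edge_leg = one_way_ms(2568)       # ~8.6 ms, at the edge of the field of view
rtt_min = 4 * overhead_leg        # ~7.3 ms hard physical minimum round trip
rtt_max = 4 * edge_leg            # ~34 ms worst-case slant path
```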

    As more satellites are added to the constellation so the gap between satellites decreases and the angle above horizon at which a satellite is acquired can increase thus shortening the maximum path and lowering the latency.

    Starlink has a publicly stated goal of less than 20ms round trip latency and published a report in March 2024 about the engineering efforts to achieve this [0]. Much of the effort that customers see focuses on two issues:

    1. reducing latency between ground station and Internet connection point

    2. scheduling the radio links between satellite and all U.T.s in its beam area

    Starlink balances contention by sometimes restricting and sometimes promoting activation of new U.T.s in each area - this is why, on occasion, a fully subscribed cell will impose a waiting list on new activations. At other times Starlink will, and does, dynamically change the monthly subscription cost. Recently some areas had their residential price reduced from US$120 to US$90, while others in congested areas had an increase from US$90 to US$120 (in the USA).

    [0] https://api.starlink.com/public-files/StarlinkLatency.pdf

    [1] https://news.ycombinator.com/item?id=40384959#40388007

    1. Anonymous Coward
      Anonymous Coward

      Re: Starlink are actively working on this

      Obviously a Muskite, or Musk himself!

      Couldn't you have written the facts without the condescension or the PR spin?

      Anyway. You disappoint. I expected his Muskiness to have invented FTL data speeds by now

    2. Richard 12 Silver badge

      Re: Starlink are actively working on this

      Tell me you didn't read the article without saying you didn't read the article.

      This is about how the rapidly changing latency and regular packet loss when switching to the next satellite results in the default TCP congestion control algorithms being unable to actually use the available bandwidth.

      TCP assumes packet loss only occurs when the link is oversubscribed, and that link latency is constant with rare step changes. The actual physical number doesn't matter much, it's the jitter.

  11. Anonymous Coward
    Anonymous Coward

    Hostile is relative

    Relative to a first world lightspan. I operate in what is effectively a geographically remote location, and 50 ms latency is great. I work with other, more remote sites on a variety of connection types, and Starlink rates among the better services. Starlink latency is about 1/3 of what I see with terrestrial cellular data connections, but its packet loss is a bit greater and more variable. As far as packet loss goes, anything below 5% is fine for near-real-time data streams and is passable for video and audio.

  12. Luiz Abdala
    Meh

    Gaming?

    The one thing Starlink fails at is probably gaming, because 80ms is in the upper limit of what is called acceptable, and packet loss is fatal. As in, headshot fatal, counter-terrorists win, chicken dinner battle royale fatal.

    But gaming - and someone said online surgery - is probably the only things that would be bothered by it.

    If you can set up a giant 60-second buffer for, say, Netflix streaming, nobody cares.

    A 15-second turnover looks great next to what I've witnessed in some congested Wi-Fi areas...

    1. MachDiamond Silver badge

      Re: Gaming?

      "The one thing Starlink fails at is probably gaming, because 80ms is in the upper limit of what is called acceptable"

      That's supposed to be Starlink's big advantage over satellite internet providers with birds in geosync that have existed for years.

  13. martinusher Silver badge

    To Be Expected

    TCP is a convenience layer that converts a packet-oriented network into a facsimile of a serial port. To do this it has to make numerous assumptions about the properties of a link and the behaviour of an application. It is at best relatively inefficient; often it's horribly so. It's really only suitable as a serial port replacement running a legacy terminal. Everything else invariably uses a user-level protocol to frame datagrams: inefficiency piled on inefficiency. It works OK if you've got a relatively low-latency, high-bandwidth link, but as soon as you put wireless into the mix, overall performance drops like a rock.

  14. stucard

    TCP options & other transports

    The Space Communications Protocol Specifications - Transport Protocol (SCPS-TP) is a group of CCSDS standardized, IANA listed, TCP options, that in our experience w/SATCOM and diverse other links (inc. per packet Concurrent Multipath Routing over several) has handled path losses, rapid wide variation in path characteristics, and out-of-order delivery, to remote sites, ships, and aircraft, extremely well. We have not yet tested it over StarLink but believe it should work much better there than TCP w/o those options (starting w/ Selective Negative Acknowledgements, SNACK). Another IETF standardized (but this one is not TCP interoperable) protocol that might work even better is the NACK Oriented Reliable Multicast (NORM) transport protocol, as it not only supports efficient multicast, but also is a Type II Hybrid ARQ/FEC that can dynamically adapt to widely varying path losses.

    1. FeepingCreature

      Re: TCP options & other transports

      If it's random uncorrelated drop, might it be genuinely viable to simply send every packet twice (with slight delay)?
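
      Under that independence assumption the arithmetic is straightforward; a sketch (in practice handover loss is likely bursty, so the copies would need enough spacing between them to be roughly independent):

```python
# Residual loss when each packet is sent `copies` times and drops are
# assumed independent with probability p: every copy must be lost, so the
# chance is p ** copies. Bursty loss breaks this assumption unless the
# duplicate is delayed past the burst.
def residual_loss(p: float, copies: int = 2) -> float:
    return p ** copies

doubled = residual_loss(0.01)   # 1% loss -> 0.01% with one duplicate
```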

  15. IGotOut Silver badge

    Ping?

    Sorry, but if this was an academic paper then we are screwed.

    From this I can assume...

    I summarize this as...

    For starters, ping is crap. At most, use it for checking connectivity. Any sort of traffic filtering or shaping is going to drop ping (or at least deprioritise it) if the network gets busy, or if QoS is in play.

    He was basing all the research on failed pings and speed tests. Speedtest is itself another variable.

    Then you have local conditions: weather, radio interference and so on.

    Please at least use Wireshark and actually SEE what is happening, rather than just presuming.

    1. Anonymous Coward
      Anonymous Coward

      Re: Ping?

      The paper is quite fine. It's discussing Starlink's impact on various TCP implementations. It mentions certain implementations that don't perform well and certain ones that do. From a networking perspective this all makes perfect sense. TCP Reno will perform very differently from TCP CUBIC on Starlink vs fibre. It's all about using the right "tool for the job".

  16. anonymous boring coward Silver badge

    TCP/IP was meant to be robust, so this is just a good test to ensure it hasn't drifted from that goal.

    1. mtrantalainen

      TCP is robust in the sense of "no data loss, even if the connection is dropping packets or packets are delivered in random order over the network". However, to implement that without huge latency, congestion-control algorithms are used to try to keep buffering at just barely enough to get maximum throughput.

      If the buffer is too large, you get bufferbloat which causes a lot of extra latency (up to 10x the actual latency of the connection) or reduced bandwidth.

      And the problematic part of a Starlink connection is that the congestion-control algorithm is selected by the sender of a data stream, so you cannot tell the sender that your connection has randomly varying latency. The receiver can only tell the sender to slow down; there is no standard way to say "increase the buffering even though latency is already spiking".

      Latency spikes are problematic because they look just like a router on an FTTH connection that's about to be fully overloaded, and in that case you definitely want to limit the bandwidth to avoid overloading the bottleneck router.
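
      The bufferbloat numbers are easy to reproduce: the queuing delay a standing buffer adds is simply its size divided by the drain rate. Illustrative figures, not measurements:

```python
# Queuing delay added by a full standing buffer: bytes queued / drain rate.
def queue_delay_ms(buffer_bytes: float, link_mbps: float) -> float:
    return buffer_bytes * 8 / (link_mbps * 1e6) * 1e3

# A 1 MB buffer ahead of a 20 Mbit/s link adds ~400 ms all by itself,
# several times an 80 ms path RTT, before any propagation delay is counted.
bloat = queue_delay_ms(1_000_000, 20)
```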

  17. Lee D Silver badge

    Almost like NASA went out of its way to make appropriate protocols for space-based communications.

    And that you could piggyback TCP on any of those if you needed to.

  18. HammerOn1024

    No Difference

    So I'm in my car using fixed cell stations that change every few minutes as I move through the network. From the user's perspective, my perspective, I can't tell as I'm chatting or webbing away.

    In Starlink's case it's the opposite, for most links anyway: I'm fixed, but the network is moving. From the user's perspective, my perspective, I can't tell as I'm chatting or webbing away.

    Yawn... next.

  19. Anonymous Coward
    Anonymous Coward

    ECN

    Did ECN ever catch on to become effective? I recall having to disable ECN on our Ciscos way back when 56k modems were hot stuff. Some networks didn't play nice if the ECN bit was set.

    1. Jellied Eel Silver badge

      Re: ECN

      Did ECN ever catch on to become effective? I recall having to disable ECN on our Ciscos way back when 56k modems were hot stuff. Some networks didn't play nice if the ECN bit was set.

      I.. don't think so? FECNs and BECNs were something the netheads stole from the bellheads and good ol' Frame Relay. One of those nice ideas that I also remember becoming a PITA because vendors didn't always play nicely with others. It became a bigger thing when Cisco introduced WRED, so in an all-Cisco environment it kinda played nicely; in a mixed environment, it didn't. And then there's what the OS and apps might do, or not do, with ECN-marked packets. It's one of those things that is part of overall traffic management, though, and maybe also touches on net neutrality issues. The biggest problem in the early days was that enabling features like WRED meant routers dealt with them at the process layer, which could hammer the CPU. A lot of this is done at the interface layer now, though: hardware queues rather than software.

    2. mtrantalainen

      Re: ECN

      The problem with ECN on the public internet is that if you honour ECN marks in packets and slow down, all the users without ECN support will simply take over any bandwidth you release, and the buffering issues remain because the buffers are filled by other users.

      In practice, ECN only allows slowing down your own connection but it cannot prevent bufferbloat in routers as long as there are many enough users without ECN support.

      There isn't any situation where you would want to turn on ECN when other people don't support it.

      In LAN networks where you know every client supports it, ECN can improve performance a lot. However, even there a single greedy non-ECN client (e.g. CUBIC) will acquire an unfair share of the network.

  20. Henry Wertz 1 Gold badge

    So no big deal

    So no big deal. Who is still using TCP Reno? Linux switched to CUBIC about 15 years ago, and newer distros are using BBR. (And before CUBIC, they were running something newer than Reno, if I recall correctly.) Windows had switched to CUBIC, or some close variant, by Windows 7, for the very reason that it got better and more consistent speeds in the face of non-congestion-related packet loss, as well as lower bufferbloat.
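
    On Linux it's easy to check which algorithm a host is actually using, via the standard sysctl interface (a sketch; it returns None on systems without that path):

```python
# Read the host's current TCP congestion-control algorithm from the
# standard Linux sysctl path; returns None where the path doesn't exist
# (non-Linux systems, containers without /proc, etc.).
from pathlib import Path

SYSCTL = Path("/proc/sys/net/ipv4/tcp_congestion_control")

def tcp_congestion_control() -> "str | None":
    return SYSCTL.read_text().strip() if SYSCTL.exists() else None

# Typical answers: "cubic" on most current distros, "bbr" where enabled.
```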

    Don't know what macOS uses.

    Not to say this analysis isn't useful and interesting -- it is.
