Re: We love it.
I'm absolutely not a networking engineer, but as I understand it, it isn't the absolute latency that's the issue, it's the variability - the sort of jitter reported here is often indicative of a poor path, and steps taken to deal with that can have consequences for protocols at all levels of the stack.
Yep. It's mostly an application thing, rather than a network thing. So as a neteng, it's being able to characterise the connection, then work with the client to understand whether it's going to be an issue, or to troubleshoot and explain why there is one. Sometimes it's just a fundamental problem with what they're trying to do. On absolute latency, that might mean explaining why they couldn't get 100Mbps throughput on a TCP file transfer London-Tokyo, which is basically the long, fat pipe problem explained here-
https://en.wikipedia.org/wiki/Bandwidth-delay_product
TCP is ACK-driven: the sender can only have a window's worth of unacknowledged data in flight, so it has to wait for ACKs to come back before the next batch of packets gets sent. From memory* that works out to around 70Mbps with typical latencies on that route. So one solution is switching from TCP to UDP: FTP rides on TCP, whereas UDP is fire & forget, so a UDP-based transfer can keep the link saturated and give the expected 97Mbps or so goodput. Because there are overheads for everything. This is common on satellite connections because they're typically higher latency than terrestrial connections. Satellite terminals usually include TCP helpers (performance-enhancing proxies) that spoof the ACKs so packets keep flowing.
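As a rough back-of-the-envelope, here's a Python sketch of that limit (the RTT and window sizes are assumptions, not measurements from that route): window-limited throughput is just window divided by RTT, and the BDP is the window you'd need to actually fill the link.

# Long fat pipe arithmetic: TCP keeps at most one window of unacknowledged
# data in flight per round trip, so throughput <= window / RTT.
RTT_S = 0.24          # assumed London-Tokyo round trip time, ~240 ms
LINK_BPS = 100e6      # nominal 100 Mbps circuit

def window_limited_bps(window_bytes, rtt_s):
    """Max TCP throughput when limited by the window, in bits per second."""
    return window_bytes * 8 / rtt_s

def bdp_bytes(link_bps, rtt_s):
    """Bandwidth-delay product: the window needed to fill the link."""
    return link_bps * rtt_s / 8

print("64 kB window caps you at %.1f Mbps" % (window_limited_bps(64 * 1024, RTT_S) / 1e6))
print("Filling 100 Mbps needs a %.1f MB window" % (bdp_bytes(LINK_BPS, RTT_S) / 1e6))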
Another solution is using TCP Window Scaling-
https://en.wikipedia.org/wiki/TCP_window_scale_option
The throughput of a TCP communication is limited by two windows: the congestion window and the receive window. The congestion window tries not to exceed the capacity of the network (congestion control); the receive window tries not to exceed the capacity of the receiver to process data (flow control).
which lets you tune the window/buffer size based on the latency/BDP. This gets FUN! for a number of reasons, like-
Windows Vista and Windows 7 have a fixed default TCP receive buffer of 64 kB, scaling up to 16 MB through "autotuning", limiting manual TCP tuning over long fat networks.
64kB = 512kb, which was kinda ok when connections were T1s, but less ok on faster links. I think later versions of Windows have a higher default, but a lot of clients either didn't know, or hadn't configured this on their connections. But this is also where PDV (Packet Delay Variation) or jitter comes into play. This is mostly a buffering thing.
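If you'd rather not lean on the OS autotuning, an application can also ask for bigger socket buffers itself. A minimal sketch in Python (the 3MB figure is an assumption based on the ~100Mbps/240ms BDP worked out above, and the OS may silently clamp whatever you request, e.g. to net.core.rmem_max on Linux):

import socket

BDP_BYTES = 3 * 1024 * 1024   # assumed BDP: ~100 Mbps at ~240 ms RTT

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask for send/receive buffers big enough to keep a full BDP in flight.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BDP_BYTES)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BDP_BYTES)
# Read back what the OS actually granted (Linux reports roughly double the
# requested value and clamps to the sysctl maximums).
print("rcvbuf:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
print("sndbuf:", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
s.close()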
So you tune your TCP window/buffer based on a latency of X, but when that varies, congestion control may kick in, or the buffers just overflow and packets get dropped. For TCP, that means they should be retransmitted; for UDP, everything is down to the application to notice and react, and a lot don't handle it well. It's also where satellite terminals can help, as they can spoof the scaling and improve performance independently of the OS. And it's an issue for streaming apps, because they generally use latency/goodput to size their buffers, so if those buffers end up too small, streams may stall when the latency increases.
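For a sense of how the jitter side gets characterised, here's a small sketch using the smoothed interarrival-jitter estimator from RFC 3550 (the RTP spec); the delay samples below are made up purely for illustration:

def rfc3550_jitter(delays_ms):
    """Smoothed jitter: J += (|D| - J) / 16 for each new delay sample."""
    jitter = 0.0
    prev = delays_ms[0]
    for d in delays_ms[1:]:
        jitter += (abs(d - prev) - jitter) / 16.0
        prev = d
    return jitter

steady = [240, 241, 240, 242, 241, 240, 241]   # well-behaved path
spiky = [240, 310, 225, 405, 238, 520, 242]    # the kind that upsets buffers

print("steady path jitter: %.1f ms" % rfc3550_jitter(steady))
print("spiky path jitter:  %.1f ms" % rfc3550_jitter(spiky))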
And I guess it'll also be an issue for gaming, especially multi-player games where players may be from all over the world. They generally use buffers so everyone appears to be moving in real time, but variation and high latency can leave players desynced, out of time & space relative to the rest of the players. They're dead but don't realise it until the game catches up.
And finally, you get into the Dark Arts of tuning buffers on router interfaces. Not so much of an issue for most people, because the satellite terminal is the router and it's already tuned, but it can be necessary on routers sitting behind satellite connections.
*<cough>Sprint<cough> a client who should have known better before they escalated and dragged me out of bed to explain LFPs (long, fat pipes). Not sure they appreciated me asking them to hang on a mo while I yanked on the cables to drag Tokyo closer. One of those user education moments, even though the users were my peers...