Why AI/ML?
Symbol rates, power levels, FEC parameters and the like have long been managed to optimise links without any AI/ML, so what does it bring to the party?
Every year the bandwidth that telecom networks carry increases by roughly 30 percent. To keep up, the interconnects on which these networks are built are going to need to get a whole lot smarter and more capable before long, BT's Andrew Lord said during his Hot Interconnects keynote earlier this week. Lord, a senior manager of …
Slashing most "user telemetry" would save a lot of bandwidth... many application won't move much more data in the future, even video which is one of the most heavy one on consumer'a side will reach a point were any increase in resolution/color depth/etc. is useless because no human eye can perceive it. Maybe shuffling around Zuck's metaverses could be heavier - but that's another good reason to save bandwidth (and probably mankind with it...)
Yeah, but people have been saying that for 30 years. I see no reason to imagine it's true now.
Even if you think existing stupidly-high-res streamers will remain forever happy with their current resolution (in itself, a risible idea once you write it down), there's the other 90% of the population who aren't even at that level yet. To say nothing of the other four members of each of their households. And all the stuff they want to download and watch in parallel.
And that's without even considering "new stuff". VR, IOT shit...
I wonder if the flexing they're doing now on the fiber interface cards is interacting with Skype's ML dynamic compression methods and giving us the weird sound effects we hear during a call. One such effect is a repeating, louder-softer-louder sound of someone slurping the last bit of a milkshake through a straw.
(Icon for, "I can't hear you over that slurping noise!")
I wouldn't be so sure that the 30% year-on-year growth in bandwidth demand is going to last much longer.
My guess is that it was sustained first by audio and then by video, but it's not at all clear to me what will keep demand growing once we reach the point where everyone can reliably receive 8K video.
Maybe Andrew Lord is just saying that it will take quite a bit of time and effort before we get to that point, in which case I'd agree, but it doesn't feel like that's the case. It feels like he genuinely believes that at some point I'll need multiple gigabits of personal bandwidth, and that I'll need them for sustained periods, not just in bursts for a big download every now and then.
I honestly can't imagine what for. Might be a failure of the imagination, sure, and I'd be happy to hear suggestions. But, well. 3D content? Metaverse and suchlike? The thing is, that sort of stuff is either rendered server-side, in which case it's just video, or client-side, in which case it's even less bandwidth than video. Unless we end up slinging trillions of animated voxels around? I don't see any benefit to that, though, compared to moving rendered video.
What else? Most other buzzwords floating around don't consume very much bandwidth compared to video.
While you're right that consumer video streaming has growth limits (~16 hrs/day @ 8K), we are not there yet. The majority of content is streamed at SD or HD, not even 4K, and the majority of viewing is still on broadcast rather than streaming networks. Aside from TV itself, internet browsing in general is still only starting its transition from text/photo to increasingly video-heavy. Likewise our working patterns are still only part way through the shift from physical meetings to Zoom/Skype.
Beyond video, in the short term, streamed gaming is 2-4x less efficient than TV at the same resolution, as compression has to be less aggressive to hit the required latency. In the medium term, streamed VR is even more of a bandwidth hog, since higher resolutions and even lower latency requirements make heavy compression a challenge, albeit most of that traffic is likely to be generated close to the edge on a metro network (again for latency reasons), so it's less relevant to the kind of core-network optics crunch this story is about.
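To put rough numbers on the argument above, here's a back-of-envelope sketch in Python. Every bitrate figure and the gaming penalty factor are illustrative assumptions for the sake of the arithmetic, not measured values:

```python
# Back-of-envelope per-household streaming demand.
# Every figure here is an illustrative assumption, not a measurement.

STREAM_BITRATE_MBPS = {
    "SD": 3,    # assumed typical SD stream
    "HD": 8,    # assumed typical HD stream
    "4K": 25,   # assumed typical 4K stream
    "8K": 100,  # assumed; real 8K services vary widely
}

GAME_STREAM_PENALTY = 3.0  # assumed midpoint of the 2-4x figure above


def household_demand_mbps(video_streams: dict[str, int],
                          game_streams: dict[str, int]) -> float:
    """Sum concurrent video and cloud-gaming bitrates in Mbps."""
    video = sum(STREAM_BITRATE_MBPS[res] * n for res, n in video_streams.items())
    games = sum(STREAM_BITRATE_MBPS[res] * n * GAME_STREAM_PENALTY
                for res, n in game_streams.items())
    return video + games


# Four people watching 8K while a fifth plays a 4K cloud game:
print(household_demand_mbps({"8K": 4}, {"4K": 1}))  # 475.0 Mbps, sustained
```

Even on these made-up numbers, one household sits at hundreds of megabits of sustained load, which is a different beast from the occasional bursty download.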
Don't think of VR as primarily a consumer gaming niche. When the headsets get small and comfortable enough, it's probably more relevant for business than for consumers. Think unlimited screen setups, photorealistic telepresence (see Meta's codec avatars R&D on YouTube, and Varjo's generation of real-time scans of your environment so colleagues can teleport to wherever you are), remote robotics operation for home-working blue-collar jobs, etc. It's a transformative technology that is currently dismissed by most as 'just gaming'.
VR headsets look to me to have more tech hurdles to clear than fiber bandwidth does. Video streaming is usually handled by CDNs, which sit closer to users. The real issue right now is live events, because multicast was never really widely deployed - DAZN, which tried IP-only broadcasts in Italy, failed badly and had to strike a deal with Sky to carry them on satellite too. One can try throwing more bandwidth at that, but it's probably a problem to be tackled a different way. Blue-collar workers remotely controlling robots is just not happening - we'll see AI controlling them first.
Firstly, we have about two decades in which VR will be available but AI will not yet have been rolled out to every niche it can eventually serve. These things take time, and there is plenty of market available for both technologies. Secondly, absent true general human-level AI, it will never make economic sense to automate 100%. Some percentage of activity is not just rare but actually unpredictable, and that's where you employ humans. Unpredictable stuff happening in toxic industrial environments is not a fun job to do in person, though.
My memory might be failing, but I'm sure that when I was at Homechoice/Videonetworks in 2000, some bigwig from BT said consumers would never need more than 2Mbps. They were charging £700 for a broadband connection at the time, and even then Premier League football was blurry at times.
An article about a British company and one of its (presumably) British employees, and yet it has the word "fibre" misspelled 23 times.
However, what really puzzled and annoyed me most was the curious mention of multiple dBs of spectrum margin. When did the measurement of spectrum availability start being quantified in dB, for <deity's> sake?
How long before I have to ask for a few more chips to go with my fish by specifying the requested extra in dB?
Grrr.
It's a standard tradeoff. If you have a signal modulated as, say, 64QAM, then that signal requires a certain signal-to-noise ratio to be received with a low enough error rate. If you step up to 128QAM to carry an extra bit per symbol, you typically need about 3dB more s/n ratio to get the same low error rate. Fibre systems run with more s/n ratio than strictly necessary, to cope with equipment and fibre route variations, and in some cases some of that dB margin can be traded for extra capacity if the stats on the link say it can be done reliably.
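A minimal sketch of that trade, assuming the usual rule of thumb that each extra bit per symbol costs roughly 3dB of s/n; the base SNR and margin figures here are made up for illustration, not taken from any real system:

```python
# Sketch of trading s/n margin for a denser constellation.
# BASE_SNR_DB and the default margin are illustrative assumptions;
# the ~3 dB-per-extra-bit increment is a rule of thumb, not a spec.

BITS_PER_SYMBOL = {"QPSK": 2, "16QAM": 4, "64QAM": 6, "128QAM": 7, "256QAM": 8}

BASE_SNR_DB = 8.0          # assumed s/n needed for QPSK at the target error rate
SNR_PER_EXTRA_BIT_DB = 3.0


def required_snr_db(fmt: str) -> float:
    """Approximate s/n needed to run `fmt` at the same low error rate."""
    extra_bits = BITS_PER_SYMBOL[fmt] - BITS_PER_SYMBOL["QPSK"]
    return BASE_SNR_DB + SNR_PER_EXTRA_BIT_DB * extra_bits


def densest_format(measured_snr_db: float, keep_margin_db: float = 2.0) -> str:
    """Pick the densest constellation that still leaves `keep_margin_db` spare."""
    usable = [f for f in BITS_PER_SYMBOL
              if required_snr_db(f) + keep_margin_db <= measured_snr_db]
    return max(usable, key=BITS_PER_SYMBOL.get)


print(densest_format(23.0))  # 64QAM: stepping up to 128QAM would eat the margin
print(densest_format(26.5))  # 128QAM: enough spare s/n to trade for capacity
```

The interesting bit is `keep_margin_db`: how much of the measured headroom you refuse to spend, which is exactly the safety cushion for equipment and route variations mentioned above.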
I guess the new owners have demanded that spell checkers be set to US English rather than British English. IMHO, Americanese has devolved into a different language and should not be called English.
When MTG becomes speaker of the house in January, they will make sure that everyone starts speaking Georgian (the state not the country).
What I want to know is when BT will commission the fibre that was laid in our area almost two years ago. The boxes are up the poles, but so far they are not offering a service. The Vermin network is groaning under the load.
H'mm. Interesting. Of course, in the UK the Highways people have been doing this for years. A slight problem has been the totally unanticipated death rate from smash-ups when someone breaks down and stops on the hard shoulder. I mean, who could have foreseen that? Why, they are now having to turn the smarts off and go back to the old but safe way of doing things. What a tragedy for progress. Can't help wondering if the analogy runs that deep and fibre overloads will turn out to decay gracelessly.
A few commentards mentioned the applications. Or how we control the infrastructure.
As each technological advance introduced a new infrastructure, we made the same mistake: initially we tried to control the infrastructure with a hierarchy.
As the volume of data overwhelmed it, someone introduced a distributed, and sometimes autonomous, way to control the infrastructure.
Mr Lord thinks this fibre is going to support the centralised internet.
But we already have the edge. And applications will be distributed. Don't try to control that with your PON thinking.
I don't have the answers. I think I know where we have to look and it's not in the mega-corporations controlling technology.
GAFA have stalled the development of the net. We could push on and switch to distributed systems and services. But that would make the government unhappy.
The obvious alternative is for the British Government to take charge of all aspects of the net in the UK: connectivity, content, domain names, the works. I'm pretty sure they would break it relatively quickly; they've broken everything else in the last few years.
We could then see some quite dramatic reductions in traffic and usage, and we wouldn't need all this expensive new stuff that we no longer have the chips or staff or cash to build. Put the Tories in charge and we might be back to 1990s levels by Christmas. That would save a fortune and heaps of expensive electricity.