
This is pretty smart of Avaya.
They get publicity and some real-world heavy-traffic testing, in an environment where they can do performance monitoring in a way no enterprise would ever allow on its traffic.
Olympics – summer or, in this case, winter – provide a great proving ground for telco technologies: a huge number of users of widely-varying technical literacy, lots of disparate device types, a fairly lumpy movement of users between indoor and outdoor venues, and of course, stringent security requirements. Vulture South spoke …
Ah, the Olympics... I can still remember some tradesman with a pair of sharpened, tempered bolt cutters, on stage at AT&T Atlanta Works (back when AT&T still owned Avaya, Lucent, NCR, OFS and Bell Labs), cutting through the fiber cable that ran between downtown and the lake... while the bigwigs were having a staged video conference. It failed over faster when they cut it than it did when they simply pulled the connection apart.
It was fun. Of course the free food, alcohol, and shirts were better.
vKontakte is the Russian answer to Facebook (and, if rumours are to be believed, a large source of less-than-legit MP3s).
WeChat is a text, image, voice and video messaging system from Tencent (the same people who brought you QQ).
Facetime is Apple's high-bandwidth and platform-locked answer to Skype.
Given that this is a US outfit, how much of that network is used to export Russian data as fast as possible? Just curious - it would seem strange if the NSA and friends passed up an opportunity to use a massively fast network to do some serious involuntary "sharing".
The hardware appears to be 100 Mb/s links to Megafon and Rostelecom and 10 Gb/s link pairs to a dozen or so spots. I did not see the spread to the WiFi access points described, but presumably they used standard many-port Gb/s switches for that, which generally run a couple hundred dollars each.
I agree that there is no conceivable scenario in which a 54 Tb/s switch could be overwhelmed by a few dozen 10 Gb/s links. I would expect the front panel to look rather sparsely populated.
I'm surprised that the internal network is provisioned for two orders of magnitude more bandwidth than the outgoing links. Perhaps they are schlepping uncompressed 4K video to their local editing booths? That video can get pretty large.
The other thing that surprised me was that text-based Twitter used more bandwidth than several video services. I would have expected even a few Facetime video links to generate more traffic than every Sochi visitor simultaneously banging on Twitter.
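For anyone who wants to sanity-check those numbers, here's a rough back-of-envelope sketch in Python. The 36-link count and the 10-bit 4:2:2 UHD format are my own assumptions, not figures from the article:

```python
# Back-of-envelope numbers for the comments above. These are assumptions,
# not figures from the article: "a few dozen" is taken as 36 edge links,
# and the video format as 10-bit 4:2:2 UHD at 60 fps.

EDGE_LINKS = 36        # assumed count of 10 Gb/s uplinks
LINK_GBPS = 10
FABRIC_TBPS = 54

aggregate_gbps = EDGE_LINKS * LINK_GBPS
print(f"Aggregate edge capacity: {aggregate_gbps} Gb/s "
      f"({aggregate_gbps / (FABRIC_TBPS * 1000):.1%} of the fabric)")

# Uncompressed UHD: 3840x2160, 60 fps, 10-bit 4:2:2 (20 bits per pixel)
width, height, fps, bits_per_pixel = 3840, 2160, 60, 20
uncompressed_gbps = width * height * fps * bits_per_pixel / 1e9
print(f"Uncompressed UHD feed: ~{uncompressed_gbps:.1f} Gb/s per stream")
print(f"Such streams per 10 Gb/s link: {LINK_GBPS / uncompressed_gbps:.1f}")
```

On those assumptions the uplinks fill well under one per cent of the fabric, while a single uncompressed UHD feed eats essentially a whole 10 Gb/s link - which would go some way to explaining the headroom.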
"I'm surprised that the internal bandwidth is provisioned for two orders of magnitude more bandwidth than the outgoing links. Perhaps they are schlepping uncompressed 4K video to their local editing booths? That video can get pretty large."
The article mentions that all the internal video does indeed travel over the same network.
Much more than just VLANs. The network uses Shortest Path Bridging (Avaya's marketing team call it FabricConnect) to provide a backbone with no blocked links anywhere, traffic balanced across all available shortest paths, and failover and failback on the order of tens of milliseconds. VLANs are only implemented at the very edge of the network to provide access.
There is some pretty funky stuff going on here; the IP multicast video feeds can be picked up anywhere on the fabric without any traditional multicast routing protocol (i.e. PIM) - FabricConnect handles multicast natively, with traffic taking the shortest path from a source to its receivers (in fact, unicast forwarding is basically treated as just a special case of multicast with one sender and one receiver). No rendezvous points, no unicast encapsulation, no duplication of traffic. Streams set up and tear down almost instantly.
In addition to the IPTV feeds, the fabric can also handle many-to-one multicast (which PIM is very bad at doing in a scalable way) for things like CCTV feeds. When you see a hundred CCTV feeds flick on in a fraction of a second, it's very impressive compared with a PIM network, where the feeds come in one at a time over tens of seconds because of the inefficiency of the protocol.
I've deployed a few SPB networks now - it's almost boring how simple it is to configure and how well it works.
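For anyone curious why a link-state fabric can light up streams that quickly, here's a toy illustration in Python. It is only a sketch of the general idea, not Avaya's implementation - the topology, costs and function names are all made up. Because every node holds the full topology (SPB distributes it via IS-IS), a multicast tree for any source and set of receivers is just the union of locally computed shortest paths, with no rendezvous point and no hop-by-hop join process:

```python
# Toy illustration of the link-state idea behind SPB, not Avaya's code.
# Every switch knows the whole topology, so it can compute the delivery
# tree for a multicast stream on its own, instantly, as a union of
# shortest paths - no rendezvous point, no receiver-by-receiver joins.

import heapq

def shortest_path_tree(topology, source):
    """Plain Dijkstra over a {node: {neighbour: cost}} adjacency map.
    Returns {node: predecessor} for the tree rooted at source."""
    dist = {source: 0}
    pred = {}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbour, cost in topology[node].items():
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                pred[neighbour] = node
                heapq.heappush(queue, (nd, neighbour))
    return pred

def multicast_tree(topology, source, receivers):
    """The set of links a stream actually needs: the union of shortest
    paths from the source to each receiver."""
    pred = shortest_path_tree(topology, source)
    links = set()
    for rx in receivers:
        node = rx
        while node != source:
            links.add((pred[node], node))
            node = pred[node]
    return links

# A made-up four-switch fabric with equal link costs.
fabric = {
    "sw1": {"sw2": 1, "sw3": 1},
    "sw2": {"sw1": 1, "sw4": 1},
    "sw3": {"sw1": 1, "sw4": 1},
    "sw4": {"sw2": 1, "sw3": 1},
}
print(multicast_tree(fabric, "sw1", ["sw2", "sw4"]))
# -> {('sw1', 'sw2'), ('sw2', 'sw4')}  (set print order may vary)
```

In a PIM-SM network, by contrast, the equivalent tree only exists once each receiver's join has propagated towards the rendezvous point and then the source, which is where the one-at-a-time behaviour described above comes from.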