Moore's Law on Acid
At this rate, we'll be seeing 1Tb/s switches soon enough. Only $20,000 for 36 ports. Gaming latency, we hardly knew ye.
Analyst firm IDC reckons the world's Ethernet switch market laid on 3.3 per cent growth year-on-year for the first quarter of 2017, up to US$5.66 billion. At the same time, however, the service provider router market bore out what you'd expect if you've been watching Cisco's indifferent performances of late – it slipped by 3.7 …
We'll move on to terabit eventually, but as it stands, quantum tunneling in modern semiconductors is a major problem preventing us from going there. If I recall correctly, Intel posted a while back that their research suggests we'll need a 7nm die process to create 1Tb/s transceivers. So for now we'll focus on 400Gb/s.
40Gb/s is accomplished with 4 bonded 10Gb/s links (think port channel, kinda). That means we need 4 wavelengths to accomplish 40Gb/s, or 10 for 100Gb/s. Using WDM equipment, a 40Gb/s transceiver can deliver 10, 20, 30 or 40Gb/s depending on which wavelengths are optically multiplexed.
100Gb/s using 25Gb/s transceivers can provide 25, 50, 75 or 100Gb/s over the same wavelengths.
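To put numbers on the lane maths above, here's a quick sketch (the function name is mine, just for illustration) showing how capacity steps with the number of multiplexed wavelengths:

```python
# Illustrative only: a bonded optical link's aggregate capacity is just
# the per-lane rate times the number of active wavelengths.
def link_capacity(lane_rate_gbps, active_lanes):
    """Aggregate capacity of a bonded optical link in Gb/s."""
    return lane_rate_gbps * active_lanes

# 40Gb/s transceiver = 4 x 10Gb/s lanes
print([link_capacity(10, n) for n in range(1, 5)])   # [10, 20, 30, 40]

# 100Gb/s transceiver = 4 x 25Gb/s lanes over the same wavelengths
print([link_capacity(25, n) for n in range(1, 5)])   # [25, 50, 75, 100]
```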
Long-range transceivers capable of service-provider-scale runs are very expensive. But compared to the rental cost of wavelengths, they cost nothing. I've seen invoices for 4 wavelengths along the Trans-Siberian railroad where short-term leases (less than 30 years) ran to millions of dollars per year. Simply replacing a switch and transceiver would boost bandwidth from 20Gb/s to 50Gb/s without any alterations to the fiber or passive optical components.
So, 40Gb/s makes a lot of sense in data centers where there are no recurring costs for fiber. But when working with service providers, with an extra million dollars spent on each end of a fiber, the hardware cost is little more than a financial glitch.
40Gbps was an OK interconnect for switches when there was nothing better, but it wasn't great for server connectivity when you were looking at upgrading from 10Gbps because you needed a little more bandwidth. Your 40Gbps cost per server was around 3x the cost of 10Gbps, when you would probably only utilise 25-50% in the medium term; versus 25Gbps costing 2x 10Gbps, delivering 40-80% of your bandwidth needs, and providing a capable Fibre Channel competitor on the storage side.
AWS and Google have standardised on 25Gbps in their DCs already, so it's likely the 25Gbps costs will come down while 40Gbps will remain high.
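The cost argument can be sketched as cost per utilised Gbps. The figures below are the illustrative multiples from the comment above, not real prices, and the function name is mine:

```python
# Rough sketch of the cost-per-useful-bandwidth argument: divide the
# relative hardware cost by the bandwidth you actually expect to use.
def cost_per_utilised_gbps(cost_multiple, port_gbps, utilisation):
    """Relative cost per utilised Gb/s (cost in multiples of a 10Gbps port)."""
    return cost_multiple / (port_gbps * utilisation)

# 40Gbps at ~3x the 10Gbps cost, 25% utilised (worst case)
print(cost_per_utilised_gbps(3.0, 40, 0.25))   # ~0.3 per utilised Gbps

# 25Gbps at ~2x the 10Gbps cost, 40% utilised (worst case)
print(cost_per_utilised_gbps(2.0, 25, 0.40))   # ~0.2 per utilised Gbps
```

Even at the pessimistic end of both utilisation ranges, 25Gbps comes out cheaper per useful bit.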
> But compared to rental of wavelengths cost nothing.
That sounds interesting. Are you meaning there are rental agreements for running light through fibre?
If so... why? It's not a public transmission system (aka wireless).
I'm guessing I'm just not grokking something, thus the question. :)
"That sounds interesting. Are you meaning there are rental agreements for running light through fibre?"
Yes - any modern DWDM system should have the ability to accept what are known as 'alien' wavelengths.
These are 'coloured' optical signals supplied from another carrier and then, after level adjustment, are combined in the optical filters with wavelengths generated by the host system and sent to line. The only processing done to these alien wavelengths by the host system is analogue adjustment, e.g. attenuation, amplification, filtering. The host system has no access to the data carried on the alien wavelengths as they are never demodulated into an electrical signal within the host.
This feature is useful if you need to get several wavelengths between data centres in different cities, but don't want to go to the trouble of laying and maintaining your own cross-country fibre network and duplicate what some other carrier has already built.
So, in your data centres you install a base-level DWDM box with one or more transponders and no amplifiers. The coloured output from these travels on dark fibre to a national carrier's site, where it is loaded onto a cross-country DWDM system as alien wavelengths. There obviously is a point at which you might decide it's better economics to build your own full DWDM system and lease dark fibre cross-country, but adding optical filters, amplifiers and intermediate amplifier sites adds considerably to the cost.
"40gb/s is accomplished with 4 bonded (think port channel kinda) 10gbs links. That means we need we need 4 wavelengths to accomplish 40gb/s or 10 for 100gb/s. Using WDM equipment, a 40gb/s trasceiver can deliver 10,20,30 or 40gb/s depending on which wavelengths are optically multiplexed."
I work with DWDM systems that do either 100 or 200 Gb/s on a single wavelength (same card, it's just a tick box to switch between them) and I believe there are now 400Gb/s systems starting to come into use - the only reason to use multiple wavelengths is to have multiple independent 100/200 Gb/s channels over the same fibre pair.
I hate it; it's pointless without actual numbers. "Wow, 700% growth, must be raking it in" - unless, of course, 100Gb was 10 ports last year, so this year it's 70 ports. Year-on-year growth values are pointless on their own, whereas the 10Gb could have been 1 billion ports last year and 1.2 billion this year.
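The point is easy to see if you compute the absolute change behind each percentage (port counts below are the hypothetical ones from the comment, not real figures):

```python
# Absolute change is what matters; the growth multiple hides the base.
def added_ports(last_year, this_year):
    """Ports actually added year-on-year."""
    return this_year - last_year

# Huge relative growth, tiny base: 10 -> 70 ports is 7x...
print(added_ports(10, 70))                        # ...but only 60 ports added

# Modest relative growth, huge base: 1.2x on a billion ports
print(added_ports(1_000_000_000, 1_200_000_000))  # 200,000,000 ports added
```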
The bits at 100Gbps are only around 3mm apart from each other if you lined them up along the cable. Think of how many are in a simple 10 metre cable waiting to come out the other end.
Boggles the mind.
Now you need the processing power to do something with those bits as they spill out all over the floor into a bit bucket.
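A back-of-envelope check of that 3mm figure: at 100Gb/s each bit occupies 10 picoseconds, and multiplying by propagation speed gives its length in flight. (The 3mm figure assumes the speed of light in vacuum; in silica fibre light travels at roughly two-thirds of that, so the bits are closer to 2mm apart.)

```python
# Bit "length" = propagation speed x bit duration.
C_VACUUM = 3.0e8         # m/s, rounded speed of light in vacuum
C_FIBRE = 2.0e8          # m/s, approximate speed of light in silica fibre
BIT_TIME = 1 / 100e9     # seconds per bit at 100Gb/s (10 picoseconds)

print(C_VACUUM * BIT_TIME * 1000)   # ~3.0 mm per bit in vacuum
print(C_FIBRE * BIT_TIME * 1000)    # ~2.0 mm per bit in fibre

# Bits in flight in a 10 m cable: length / bit length
print(10 / (C_FIBRE * BIT_TIME))    # ~5000 bits waiting to come out
```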