Farewell, slumping 40Gbps Ethernet, we hardly knew ye

Analyst firm IDC reckons the world's Ethernet switch market laid on 3.3 per cent growth year-on-year for the first quarter of 2017, up to US$5.66 billion. At the same time, however, the service provider router market bore out what you'd expect if you've been watching Cisco's indifferent performances of late – it slipped by 3.7 …

  1. Brian Miller

    Moore's Law on Acid

    At this rate, we'll be seeing 1Tb/s switches soon enough. Only $20,000 for 36 ports. Gaming latency, we hardly knew ye.

    1. Anonymous Coward
      Anonymous Coward

      Re: Moore's Law on Acid

      Nope, they're on to 400Gbps Ethernet next, though I'm sure there's some room somewhere that has people drawing up specs for 1Tbps Ethernet sometime next decade.

    2. CheesyTheClown

      Re: Moore's Law on Acid

      We'll move on to terabit, but as it stands, quantum tunneling is a major problem with modern semiconductors, preventing us from going there. If I recall correctly, Intel posted a while back that their research says we will need a 7nm die process to create 1Tb/s transceivers. So for now we'll focus on 400Gb/s.

    3. Anonymous Coward
      Anonymous Coward

      Re: Moore's Law on Acid

      "Gaming latency, we hardly knew ye."

      Throughput ≠ Latency

      1. Anonymous Coward
        Anonymous Coward

        Re: Moore's Law on Acid

        Not absolutely true. The higher the transmission speed, the less time it takes to serialise a given amount of data on to the "line", so that does result in lower latency.

        1. Anonymous Coward
          FAIL

          @AC - latency

          Yeah, the 1.2 microseconds that a full-sized 1500-byte packet is on the wire on 10GbE really hurts your reaction time; I can see why you want to upgrade to something faster.
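The 1.2µs figure above is simple arithmetic: serialisation delay is the number of bits divided by the line rate. A quick illustrative sketch (not from the thread; the function name is made up):

```python
# Serialisation delay: the time it takes to clock a frame onto the wire.
# delay (us) = frame bits / line rate, scaled to microseconds.
def serialisation_delay_us(frame_bytes: int, line_rate_bps: float) -> float:
    return frame_bytes * 8 / line_rate_bps * 1e6

print(serialisation_delay_us(1500, 10e9))  # 1500-byte frame on 10GbE -> 1.2 us
print(serialisation_delay_us(1500, 40e9))  # same frame on 40GbE -> 0.3 us
```

So quadrupling the line rate saves under a microsecond per full-sized frame, which is why throughput upgrades do little for gaming latency.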

    4. Ian Michael Gumby
      Boffin

      Huh? Re: Moore's Law on Acid

      If you can afford a 40Gb/s switch then what are you doing playing games?

      10GbE is still too expensive for the SOHO market. Not to mention your upstream (ISP) won't be bringing in data faster than 1Gb/s ...

    5. Anonymous Coward
      Anonymous Coward

      Re: Moore's Law on Acid

      I'm looking at a bunch of switches at the moment, deployed for at least 18 months in a customer's network, with 3T8 switching matrix cards in them - we can do you 1T8 if you only want a slow-speed network.

      1. Anonymous Coward
        Anonymous Coward

        Re: Moore's Law on Acid

        Can you translate 3T8 to English for those of us who aren't WAN jocks?

        1. Anonymous Coward
          Anonymous Coward

          Re: Moore's Law on Acid

          Umm - standard technical notation - 1k8 = 1.8 kilo whatever, 3T8 = 3.8 Tera whatever.

          Although I see Nokia are now talking about their new router in terms of 0.576Pb/s, just to raise the bar a little.

          1. Anonymous Coward
            Anonymous Coward

            Re: Moore's Law on Acid

            Umm, not in computer speak. It should be 3.8Tb/s (although 3.8Tbps is also widely used, and still much better than 3T8).

  2. Arikos

    1.2Tbps already inbound

    Saw a live demonstration last week of an implementation of Gen-Z at 1.2Tbps.

  3. CheesyTheClown

    It's about wavelength as opposed to transceivers.

    40Gb/s is accomplished with 4 bonded (think port-channel, kinda) 10Gb/s links. That means we need 4 wavelengths to accomplish 40Gb/s, or 10 for 100Gb/s. Using WDM equipment, a 40Gb/s transceiver can deliver 10, 20, 30 or 40Gb/s depending on which wavelengths are optically multiplexed.

    100Gb/s using 25Gb/s transceivers can provide 25, 50, 75 or 100Gb/s over the same wavelengths.
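The lane arithmetic in this comment is just rate × lanes; a small illustrative sketch (helper name is made up, lane rates are the commenter's):

```python
# Aggregate rate from bonded lanes/wavelengths, as described above:
# each multiplexed wavelength contributes one lane's worth of bandwidth.
def aggregate_gbps(lane_gbps: float, lanes_used: int) -> float:
    return lane_gbps * lanes_used

# A 40Gb/s transceiver built from 4 x 10Gb/s lanes:
print([aggregate_gbps(10, n) for n in range(1, 5)])  # [10, 20, 30, 40]
# 100Gb/s built from 4 x 25Gb/s lanes:
print([aggregate_gbps(25, n) for n in range(1, 5)])  # [25, 50, 75, 100]
```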

    Long-range transceivers capable of service-provider-scale runs are very expensive, but compared to the rental cost of wavelengths they cost nothing. I've seen invoices for 4 wavelengths along the Trans-Siberian railroad where short-term leases (less than 30 years) were involved, measured in millions of dollars per year. Simply replacing a switch and transceiver would boost bandwidth from 20Gb/s to 50Gb/s without any alterations to the fiber or passive optical components.

    So, 40Gb/s makes a lot of sense in data centers where there are no recurring costs for fiber. But when working with service providers, where an extra million dollars may be spent on each end of a fiber, the hardware cost is little more than a financial glitch.

    1. theblackhand

      Re: It's about wavelength as opposed to transceivers.

      40Gbps was an OK interconnect for switches when there was nothing better, but it wasn't great for server connectivity when you were looking at upgrading from 10Gbps because you needed a little more bandwidth. Your 40Gbps costs per server were around 3x the cost of 10Gbps, when you would probably only utilise 25-50% in the medium term. By contrast, 25Gbps costs around 2x 10Gbps while delivering 40-80% of your bandwidth needs, and provides a capable Fibre Channel competitor on the storage side.
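The comparison above can be made concrete as cost per usable Gbps. An illustrative sketch, using the commenter's multipliers and the midpoints of the utilisation ranges (all figures are estimates from the comment, normalised so 10Gbps cost = 1.0):

```python
# Cost per Gbps you actually use: relative cost / (link rate * utilisation).
def cost_per_usable_gbps(rel_cost: float, link_gbps: float, utilisation: float) -> float:
    return rel_cost / (link_gbps * utilisation)

print(cost_per_usable_gbps(3.0, 40, 0.375))  # 40G at ~37.5% utilised -> 0.2
print(cost_per_usable_gbps(2.0, 25, 0.60))   # 25G at ~60% utilised -> ~0.13
```

On these (rough) numbers, 25Gbps works out about a third cheaper per Gbps actually consumed, which is the commenter's point.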

      AWS and Google have already standardised on 25Gbps in their DCs, so it's likely the 25Gbps costs will come down while 40Gbps will remain high.

    2. Anonymous Coward
      Anonymous Coward

      Re: It's about wavelength as opposed to transceivers.

      > But compared to rental of wavelengths cost nothing.

      That sounds interesting. Are you meaning there are rental agreements for running light through fibre?

      If so... why? It's not a public transmission medium (like wireless).

      I'm guessing I'm just not grokking something, thus the question. :)

      1. Kernel

        Re: It's about wavelength as opposed to transceivers.

        "That sounds interesting. Are you meaning there are rental agreements for running light through fibre?"

        Yes - any modern DWDM system should have the ability to accept what are known as 'alien' wavelengths.

        These are 'coloured' optical signals supplied from another carrier and then, after level adjustment, are combined in the optical filters with wavelengths generated by the host system and sent to line. The only processing done to these alien wavelengths by the host system is analogue adjustment, e.g. attenuation, amplification, filtering. The host system has no access to the data carried on the alien wavelengths as they are never demodulated into an electrical signal within the host.

        This feature is useful if you need to get several wavelengths between data centres in different cities, but don't want to go to the trouble of laying and maintaining your own cross-country fibre network and duplicate what some other carrier has already built.

        So, in your data centres you install a base-level DWDM box with one or more transponders and no amplifiers. The coloured output from these travels on dark fibre to a national carrier's site, where it is loaded onto a cross-country DWDM system as alien wavelengths. There is obviously a point at which you might decide it's better economics to build your own full DWDM system and lease dark fibre cross-country, but adding optical filters, amplifiers and intermediate amplifier sites adds considerably to the cost.

        1. Anonymous Coward
          Anonymous Coward

          Re: It's about wavelength as opposed to transceivers.

          > Yes - any modern DWDM system should have the ability to accept what are known as 'alien' wavelengths. ...

          Awesome, thank you for explaining it. :)

    3. Kernel

      Re: It's about wavelength as opposed to transceivers.

      "40gb/s is accomplished with 4 bonded (think port channel kinda) 10gbs links. That means we need we need 4 wavelengths to accomplish 40gb/s or 10 for 100gb/s. Using WDM equipment, a 40gb/s trasceiver can deliver 10,20,30 or 40gb/s depending on which wavelengths are optically multiplexed."

      I work with DWDM systems that do either 100 or 200 Gb/s on a single wavelength (same card, it's just a tick box to switch between them) and I believe there are now 400Gb/s systems starting to come into use - the only reason to use multiple wavelengths is to have multiple independent 100/200 Gb/s channels over the same fibre pair.

  4. Anonymous Coward
    Anonymous Coward

    Percentage change year on year

    I hate it; it's pointless without actual numbers. Wow, 700% growth, must be raking it in. Unless, of course, 100Gb was 10 ports last year, so 70 ports this year. Whereas 10Gb could have been 1 billion ports last year and 1.2 billion this year. Year-on-year growth values are pointless on their own.
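The commenter's point is easy to see with the arithmetic written out (the port counts are the comment's illustrative figures; strictly, 10 to 70 ports is 600% growth, close to the 700% quoted):

```python
# Year-on-year growth: the same formula gives a huge percentage on a tiny
# base and a modest percentage on an enormous one.
def yoy_growth_pct(last_year: float, this_year: float) -> float:
    return (this_year - last_year) / last_year * 100

print(yoy_growth_pct(10, 70))          # 600.0 -- "explosive" growth, 60 ports
print(yoy_growth_pct(1.0e9, 1.2e9))   # 20.0  -- "modest" growth, 200 million ports
```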

  5. IJD

    The simple fact is that 40G ports cost about the same as 100G ports, maybe even more now since 100G volume is rising rapidly and 40G volume is falling. 40G rollout was delayed by several years due to the telecom crash and technical issues, so 100G has overtaken it.

    1. Anonymous Coward
      Anonymous Coward

      <<The simple fact is that 40G ports cost about the same as 100G ports>>

      Not if you're buying Cisco kit !

  6. Herby

    To keep things in perspective...

    The bits at 100Gbps are only around 3mm apart if lined up nose to tail. Think of how many are in a simple 10 meter cable waiting to come out the other end.

    Boggles the mind.

    Now you need the processing power to do something with those bits as they spill out all over the floor into a bit bucket.
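The ~3mm figure falls out of bit length = signal speed / bit rate; it holds for light in a vacuum, and shrinks to about 2mm in fibre, where signals propagate at roughly two-thirds of c. An illustrative sketch:

```python
# Physical length of one bit in flight: bit_length = signal speed / bit rate.
C = 3.0e8        # speed of light in vacuum, m/s
V_FIBRE = 2.0e8  # typical signal speed in optical fibre (~0.67c), m/s

def bit_length_mm(speed_mps: float, rate_bps: float) -> float:
    return speed_mps / rate_bps * 1000

print(bit_length_mm(C, 100e9))        # 3.0 mm in vacuum, as the comment says
print(bit_length_mm(V_FIBRE, 100e9))  # 2.0 mm in fibre

# Bits in flight in a 10m fibre run at 100Gbps:
print(10 / (bit_length_mm(V_FIBRE, 100e9) / 1000))  # 5000 bits
```

Five thousand bits queued up inside a single 10m patch lead does indeed boggle the mind.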

  7. Nimby
    Facepalm

    40 Gbps switches are unloved both in revenue and port shipments

    Or just simplify to "40 Gbps switches are unloved."

    It's pretty simple: you can have cheap, or you can have fast. Why would you want something that is neither?
