The internet's edge routers are all so different. What if we unified them with software?

Edge routers have been an essential part of the internet for decades, connecting access networks – enterprise LANs, mobile and broadband networks – to the global backbone. These devices often have cryptic names — MPLS VPN Provider Edge routers, S/P-Gateways in the case of cellular networks, and Broadband Network Gateways (BNG …

  1. smudge
    Alert

    is it desirable?

    I am not an expert in this area, so the detail of the article is beyond me.

    But as a security pro (retired), my first question would be - is this putting all our eggs in one basket? Could it create a dangerous monoculture, where one exploitable vulnerability - possibly in the protocols themselves - could be catastrophically dangerous?

    Will be interested to hear opinions.

    1. stiine Silver badge

      Re: is it desirable?

      Like SS7? If that's what you were thinking, then the answer to your question is: Yes, except the potential holes are wider and deeper.

    2. jake Silver badge
      Pint

      Re: is it desirable?

      Sometimes it is better to have Clues than it is to be called an expert. You not only appear to be in possession of a Clue, but also have more than an inkling about applying it.

      It's Friday, I'm buying.

      1. Anonymous Coward
        Anonymous Coward

        Re: is it desirable?

        It's Friday, I'm buying.

        Stated quietly so nobody hears it? :)

        Joking aside, I agree. The issue with the current focus on specialisms (which, I hasten to add, are IMHO essential to get things done right) is that it overlooks the crucial point that all of these need to be integrated at some level, and from my experience there is a serious dearth of people who can do this competently.

        In general, those who can tend to be older, pre-specialist era people - you know, the ones the bookkeepers get rid of because they cost too much and HR wants to lose them before they're (rightly) accused of ageism, aka the people who are capable of pronouncing the word 'interoperability' sans problems.

        Especially in personnel management you reap what you sow. Just ask Heathrow, for instance.

        1. The Kraken

          Re: is it desirable?

          Not desirable. One compromise and they’re all screwed… go figure.

          Diversity is better.

          1. bazza Silver badge

            Re: is it desirable?

            Agreed, in the short term, but in the long term I'm not convinced.

            Diversity" can be an illusion. Yes, the box and software may all have different labels, but that does not mean the difference goes all the way down. There's bound to be libraries, tools, OSes that are repeatedly used across the industry. If just one of those components is flawed, then all devices that have used that component are also flawed. And, these days, one has to also worry about commonality of chips and flaws in them; we've all seen both AMD and Intel's various flavours of secure enclave get broken repeatedly over the past few years.

            Diversity also means that there's a whole bunch of different teams developing and maintaining a hardware design and a software stack. Overall, per product, that won't be very many sets of eyes. Whereas, industry-wide commonality likely means a whole lot more eyes - internal and external - focusing on the problem. Over the years that will win out (so long as they don't keep fundamentally changing it). But, there's no money in it; the profit margins will become very small.

            The parallel is Windows, which gets attacked a lot but also gets fixed a lot, and Windows today is far better than it used to be. If there were several hundred different desktop OSes, there would be a lot of problems lying undiscovered by the manufacturers for a lot longer.

            Perhaps the answer is diversity, until one of them is reckoned to be "perfect"!

            1. Roland6 Silver badge

              Re: is it desirable?

              >The parallel is Windows

              The closer parallel is Intel and AMD over the x86 architecture. Effectively two different'ish implementations of the same specification.

              From what I can see, the ARM ecosystem has yet to achieve the same level of dual implementation and supply, although I suspect it will be part of ARM's future as they grow in the commodity server and workstation/PC spaces. And if their emulation of the x86 platform is good enough, they could become a third player in the x86 marketplace.

        2. Tom 7

          Re: is it desirable?

          " The issue with the current focus on specialisms (which, I hasten to add, are IMHO essential to get things done right) is that it omits the crucial issue that all of these need to be integrated at some level, and from my experience there is an serious dearth of people who can do this competently."

          This is a problem across the whole field of IT, and to some degree science. Specialisation is a necessity for those paying, yet the broader view is in reality just as important - but it isn't encouraged. Something about killing the goose that lays the golden egg with a restricted diet comes to mind.

    3. Jellied Eel Silver badge

      Re: is it desirable?

      It.. depends.

      Firstly, what is meant by 'edge'? In many networks, the 'edge' could be the customer edge, ie the router sitting in the customer premises. Increasingly, this isn't a router but a NID (Network Interface Device), and it doesn't really route, it simply forwards. One interface faces the customer, the other the service provider, so unless it's doing (or trying to do) something like multi-homing, it doesn't need to route.

      If it's the service provider's edge, ie the tin the customer's router connects to, that may not route either; it's often a switch that aggregates and forwards to something closer to the core. Many access/aggregation networks aren't really IP but layer-2(ish) Ethernet or GMPLS devices, mainly because they're still mostly forwarding. If the SP's wholesaling services, they may be more router-like, so able to support multiple 'ISP' customers dwelling inside their own VRFs.

      That's mostly a cost/complexity thing, because routers tend to be far more expensive than switches. Compare the cost of a 48-port switch vs a 48-port router, for example. Then look at licence and support costs. Then look at typical traffic flows: 'wanted' flows between customers connected to a common edge/aggregation router tend to be small - stuff like MS discovery protocols so you can use a neighbor's printer or file server.

      So it's really the same old problem with SDN, ie who benefits, and who pays? A practical application could be streaming, say a Netflix SDN that distributes content closer to the edge. That doesn't necessarily need SDN, though; it could be done with a practical mcasting solution. Which makes it partly a technical problem, but mostly a commercial one.

      Otherwise the problem with SDN is essentially extending the control plane, which any sane SP would not want to do. Control plane functions are stable, and SPs really want to keep those secure and under the SP's control, especially as that also flows through into boring little details like contracts and SLAs. So one thing holding SDN back is the lack of a killer app that justifies why service providers should hand over control of their networks to some third party. But SDN-like services already exist in the wholesale space via things like shared VRFs or more flexible NNI offers.
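      To make the forwarding-vs-routing distinction concrete, here's a minimal Python sketch. Purely illustrative - the classes and the toy routing table are invented for this comment, not taken from any real NID or router:

      import ipaddress

      class NID:
          """Two-port Network Interface Device: no routing decision at all."""
          def forward(self, frame, came_from):
              # Whatever arrives on one interface simply goes out the other.
              return "provider" if came_from == "customer" else "customer"

      class Router:
          """Picks a next hop by longest-prefix match on a routing table."""
          def __init__(self, routes):
              # routes: {"prefix": "next hop"}, e.g. {"10.0.0.0/8": "core1"}
              self.table = {ipaddress.ip_network(p): nh for p, nh in routes.items()}

          def next_hop(self, dst):
              addr = ipaddress.ip_address(dst)
              matches = [net for net in self.table if addr in net]
              if not matches:
                  return None  # no route: drop the packet
              # The most specific (longest) prefix wins.
              return self.table[max(matches, key=lambda net: net.prefixlen)]

      r = Router({"0.0.0.0/0": "upstream", "192.168.0.0/16": "aggregation"})
      print(r.next_hop("192.168.1.7"))  # aggregation - the /16 beats the default
      print(r.next_hop("8.8.8.8"))      # upstream - only the default matches

      The NID needs no table, no routing protocols and no state, which is exactly why it's cheaper: everything a 'router' charges extra for lives in that table and the protocols that populate it.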

    4. katrinab Silver badge
      Alert

      Re: is it desirable?

      Once you filter out the technobabble and corporate-speak, the article can be summarised as:

      "There are different types of router, and people select the one that best fits their needs".

      1. Anonymous Coward
        Anonymous Coward

        Re: is it desirable?

        "Set Technobabble to 11, Mr Sulu"

      2. Roland6 Silver badge

        Re: is it desirable?

        And our specification creates a one-size-fits-all router that can be centrally managed and configured to suit a particular need.

  2. DJV Silver badge
    Facepalm

    Yes, absolutely!

    Let's create one new system to replace all of them! What can possibly go wrong?

    /s

    https://xkcd.com/927/

    1. jake Silver badge

      Re: Yes, absolutely!

      In the world BX[0], Andrew S. Tanenbaum once said "The nice thing about standards is that there are so many of them to choose from."

      [0] Before XKCD.

  3. Inventor of the Marmite Laser Silver badge

    Standards are a wonderful thing. That's why we have so many.

  4. TRT Silver badge

    So what's in it for the vendors? I mean, they need a USP, right? So how do they distinguish themselves from alternative providers? Cisco have stood head and shoulders above others for years in many fields, but they do so by supporting standards AND by offering their own esoteric functionality. Which raises the question: how does the idea proposed here differ from the status quo?

    1. Anonymous Coward
      Anonymous Coward

      I don't think Cisco will go for this idea - fewer boxes to sell. It's IMHO harder to sell an expensive box that does it all than a truck full of boxes that nibble away at your budget more slowly.

      1. Anonymous Coward
        Anonymous Coward

        and then you have Cisco support contracts which chomp away at your budget more quickly.

      2. Anonymous Coward
        Anonymous Coward

        There's a rift in the SP market, with many providers moving to equipment based on merchant silicon. Cisco seem to be approaching this in three ways: trying to make their own Broadcom-based platforms as competitive as possible price-wise; licensing IOS-XR to work on third-party commodity hardware; and developing and selling their own silicon chipset (which no one is buying so far). Oh, and then there's the NSO orchestration suite, which is vendor-agnostic. So I'm seeing a lot of adaptation, both from Cisco and their competitors.

  5. An_Old_Dog Silver badge
    Unhappy

    Wheels Within Wheels, Clouds Within Clouds

    Some people love to dream up new systems. But the new systems seem rarely to replace/displace the older systems, leading to increasingly-large piles of needless complexity (and the bugs which breed within those piles of needless complexity).

    IPv6 not replacing IPv4, and UEFI not replacing legacy BIOS come to mind.

    1. Pirate Dave Silver badge
      Pirate

      Re: Wheels Within Wheels, Clouds Within Clouds

      "increasingly-large piles of needless complexity"

      That sums up the last 20 years of the IT industry. All that complexity needs a lot of mind-space to understand, which a lot of us simply don't have (I know I don't). But we're still paid $$$ to use/admin/extend things on top of all that wobbly complexity, even though we don't fully understand it, so we do the best we can and hope nobody finds the dark corners where our understanding falters and the weeds have grown the thickest. But those dark corners are exactly where the black-hats aim. For them, that dark corner may be a worm-hole directly into the center of the castle, whereas for us, well, we don't know what we don't know beyond a vague feeling of "there might be something there, but I've not got time to sort it out now".

      Personally, I blame marketing. They're the ones constantly searching for "new" and "different" since their job is pushing products in front of eyes, and society is always searching for the Next Great Thing.

  6. steelpillow Silver badge
    Joke

    Rules of the game:

    1. Don't mention NAT

    2. Start with a clean sheet of paper

    3. Don't mention NAT

    4. Ignore backwards compatibility

    5. Don't mention NAT

    6. There is no Rule No. 6

    7. I told you not to mention NAT

    8. What could possibly go wrong?

    1. Fred Flintstone Gold badge

      Re: Rules of the game:

      In answer to 8: IPv6

      :)

      1. Steve Davies 3 Silver badge
        Pirate

        Re: Rules of the game:

        I'll raise you OSI 7-layer network model.

  7. steelpillow Silver badge
    Boffin

    Open RAN

    This little essay rings many bells with the Open RAN initiative for the mobile edge. Exactly this kind of flexible redistribution of functionality over commodity hardware is at the heart of Open RAN. The only fly in the ointment is that several major Western governments have thrown their political weight behind it as the solution to their 5G woes, and such government support is usually a sure portent of disaster.

    1. theloon

      Re: Open RAN

      oh yes ORAN... which is another example of the industry expending huge effort on redefining a technology as 'Open' and then chasing its tail, never getting to a point where it is either more effective or more efficient...

      At the 'heart of Open RAN' is actually a lot of specialised hardware. Whilst you can buy it off the shelf, you still need to find people with the same skills as any other vendor to 'make it work', and then of course you also need your own USP to make your offering stand out from all the other Open vendors.

      And around we go...

      Perhaps the industry would be better served actually spending its time, money and brain power on something genuinely original?

      Ohhh Mesh anyone? lol

  8. Anonymous Coward
    Anonymous Coward

    Edge.....define "edge"....

    Is FB.com on the "edge"?

    Is my Ring doorbell on the "edge"?

    All I know is that all this jargon definitely leaves me on "edge"!

  9. DoctorNine

    Biology is informative

    In biological systems, monoculture is a risk, because then a single type of attack opportunity risks compromising the whole biomass. It seems to me that similar logic may apply here.

    1. jake Silver badge

      Re: Biology is informative

      As was neatly demonstrated by the Morris Worm back in 1988.

      Most of us learned from the experience. Modern kids? Maybe not so much.

  10. Anonymous Coward
    Anonymous Coward

    Except that every vendor wants you to buy their solution, and so there will still be a hundred standards, each version-incompatible going forward, necessitating buying the next model.

    Probably.

  11. Anonymous Coward
    Anonymous Coward

    that's not how the Internet was conceived...

    Just an IT Pro with a little knowledge. But I'm harmless, really.

    The authors seem to propose a "Utopia" that would be great. But their goal, laudable as it is, needs tempering with reality. Other comments have already covered the main objections (monoculture, etc).

    I'm just going to add that the Internet was very specifically engineered NOT to do these things.

    I understand that, at the time (1970s-1980s), the other big networking standards were SNA and OSI. SNA was a big, complex beast owned by IBM. OSI is commonly referenced as a model, but is in fact an entire protocol stack, led by ISO and ITU-T. If it had succeeded, we'd have OSI routers, etc.

    IP routers were explicitly required to be much simpler:

    RFC 1009 - Requirements for Internet Gateways - June 1987
    https://datatracker.ietf.org/doc/html/rfc1009

    1.1.2. Networks and Gateways: "The constituent networks of the Internet system are required only to provide packet (connectionless) transport."

    RFC 1812 - Requirements for IP Version 4 Routers - June 1995

    "This document enumerates standard protocols that a router connected to the Internet must use, and it incorporates by reference the RFCs and other documents describing the current specifications for these protocols. However, the specifications of this memo must be followed to meet the general goal of arbitrary router interoperation across the diversity and complexity of the Internet."

    1.3.4 Configuration: "In an ideal world, routers would be easy to configure, and perhaps even entirely self-configuring. However, practical experience in the real world suggests that this is an impossible goal, and that many attempts by vendors to make configuration easy actually cause customers more grief than they prevent."

    2.2.3 Routers: "Routers provide datagram transport only, and they seek to minimize the state information necessary to sustain this service in the interest of routing flexibility and robustness."
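    That last principle (datagram transport only, minimal state) is worth making concrete. A toy Python contrast - my own illustration, nothing here is from the RFCs:

    # Stateless forwarding, per the RFC philosophy: every packet is
    # handled on its own, so routers can reboot or reroute mid-flow
    # without breaking anything.
    def stateless_forward(packet, table):
        return table.get(packet["dst"], "default")

    # A stateful box (NAT, for instance) must remember every flow;
    # lose this dict and every active connection breaks.
    flows = {}
    next_port = 1024

    def stateful_forward(packet):
        global next_port
        key = (packet["src"], packet["sport"])
        if key not in flows:
            flows[key] = next_port  # allocate a public port for this flow
            next_port += 1
        return flows[key]

    print(stateful_forward({"src": "10.0.0.1", "sport": 5555}))  # 1024
    print(stateful_forward({"src": "10.0.0.1", "sport": 5555}))  # 1024 again - state

    The first function is what the RFC wants in the core; the second is exactly the kind of thing the Internet's architects were deliberately keeping out of it.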

    OSI was a rich protocol stack [for the era], incorporating email (X.400). But email benefits from directory services; enter X.500. Great! But how do you authenticate and encrypt? Enter certificates, X.509.

    It seems these standards were too complex, so X.400 was simplified to the SIMPLE mail transfer protocol (SMTP) and X.500 was simplified to the LIGHTWEIGHT directory access protocol (LDAP - which remains an internal network protocol, not really an Internet-facing one). And that was when the Internet was a TINY FRACTION of its current size.

    With this simplicity, the [IPv4] Internet flourished, and SNA and X.400 disappeared.

    The IPv4 Internet needed to be extended with IPv6, which began in the 1990s and wasn't complete until RFC 8200 in July 2017.

    Experience shows...

    - a "simple" Internet has succeeded, where a "complex" Internet (SNA; OSI) failed

    - building on this - IPv6 - has taken an enormous amount of time

    - while the IPv6 standard may have been completed over 5 years ago, IPv4 still dominates

    - where X.500 might have evolved to become a single sign-on (SSO) system and the "white pages" element might have evolved to become your presence on the Internet, these roles have been fulfilled by private enterprise (Google, Facebook, Twitter and Microsoft for SSO; Facebook and Twitter for Internet presence/blogging)

    - the modern Internet is largely funded - eventually - by advertising, whose motive is private profit

    - the Internet was more-or-less apolitical; no longer so; Russia and China are obvious examples, but also https://theconversation.com/fight-for-control-threatens-to-destabilize-and-fragment-the-internet-162914

    - the modern Internet is "hostile", hence the rise of "zero trust" and the likes of ZScaler as cloud VPNs overlaid on top. Routers aren't necessarily trusted TODAY. Imagine how you're going to trust them in the proposed reality.

    We do need a sophisticated Internet; one that accommodates an enormous set of requirements (billing, security, interoperability, etc). IMHO, this should be achieved through standards. Standards should be written "in the entire public interest", the Internet should implement them, and old standards should be withdrawn. Arguably, this operating model needs to be achieved and established FIRST. If all routers ran the same software, that might actually MAKE something like this possible, but I doubt it. And then you'd have the monoculture risks, of course.

    Assuming we could design, engineer and code this stack, who's going to fund it? Who's going to update all the endpoints so they're running the latest software? How do you handle out-of-date endpoints? Arguably, Google, Amazon, Meta, Apple and Microsoft (GAMAM) have achieved something conceptually similar - globally distributed complex endpoints kept (more or less) up to date - but they are private enterprises with their own motives, including advertising. This advertising isn't a blanket, mass-media message, but ultra-targeted, ultra-precise advertising achieved by unprecedented monitoring. Very much not in the public interest.

    Another "Internet" is the mobile phone networks. It seems to me that they have largely succeeded. They have standards evolved through "generations" (1G; 2G; 3G; 4G; 5G), they have enough interoperability (AFAIK), and they move forward (eg 1G has disappeared entirely; 2G and 3G are fast disappearing, etc). The dominant standards are GSM, vendor neutral and developed by ETSI. This "Internet" seems to be very sophisticated and works to everyone's benefit.

    The article seems to refer to backbone and enterprise routers [only]. But wouldn't you need residential routers and mobile broadband routers to have the same software?

    Sophisticated edge routers are a great idea. But I'm sceptical it's practical or achievable.

    1. Roland6 Silver badge

      Re: that's not how the Internet was conceived...

      >I understand that, at the time (1970s-1980s), the other big networking standards were SNA and OSI.

      The big networking standard was SNA; other contenders were X.25 (CCITT), XNS, DECnet, OSLAN etc. Basically the world was proprietary.

      In academia, TCP/IP and the Janet colour books were big things. (*)

      Practically the only part of "OSI" that existed was X.25 - okay I'm ignoring IEEE 802 LAN - because ISO OSI largely adopted the work of the CCITT.

      So OSI was big because it was being heavily promoted as non-proprietary etc. But it wasn't really until 1988 that there was anything available to buy that could be called a 7-layer OSI implementation. (Yes, MAP/TOP implementations based on OSI were available in 1986, but to OSI purists they weren't OSI.)

      (*) Obviously, with Unix coming from academia, it had TCP/IP bundled for free. The seeds of change were largely sown with the widespread adoption of Unix by many new workstation vendors...

  12. martinusher Silver badge

    Seems more like a PowerPoint-generating technology to me

    I'm no stranger to network management applications -- software that controls and monitors network components -- and it's easy to see how this could be conceptually extended into a control plane abstracted from the network dataflow, and how these components could be diced and sliced into all sorts of logical arrangements. The snag, as ever, is the implementation: ultimately you have some kind of interface to each unit, with all the problems that these interfaces bring (security, for example). This article suggests in a roundabout way that it would be really nice if every box conformed to a single logical model that presented a single type of interface to whatever's controlling it. Simple enough on paper -- or PowerPoint. In practice there's a lot of work there, and I'm not sure that, even if it was all fully operational, it would yield any advantage except to the software vendor.
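    For what it's worth, that 'single logical model' boils down to something like the following Python sketch. Entirely hypothetical - the class and method names are mine, not from any real product or standard:

    from abc import ABC, abstractmethod

    class ManagedBox(ABC):
        """One uniform control-plane interface for every edge box."""

        @abstractmethod
        def get_state(self) -> dict:
            """Report interfaces, routes and counters in a common schema."""

        @abstractmethod
        def apply_config(self, desired: dict) -> None:
            """Drive the box towards a desired configuration."""

    class VendorXRouter(ManagedBox):
        # Every vendor still has to write - and secure - this adapter,
        # which is where all the real work (and the attack surface) hides.
        def get_state(self) -> dict:
            return {"interfaces": [], "routes": []}

        def apply_config(self, desired: dict) -> None:
            pass  # translate the common schema into vendor CLI/API calls

    print(VendorXRouter().get_state())  # {'interfaces': [], 'routes': []}

    The abstraction itself is trivial; the per-vendor adapters underneath it are where the 'lot of work' lives.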

    What I like about the Internet is that it's essentially autonomous. For the most part, when I put a packet into the system addressed to somewhere, it turns up there. We should strive to keep this model; it has problems when traffic flows are excessive or highly asymmetrical, or when some entity tries to 'ban' a 'domain' (which to my old-fashioned nature merely means 'removing it from a look-up table', so it's not surprising when it doesn't actually disappear). If you want to build fancy structures on top of this, fine, but leave the underlying infrastructure alone.
