that's not how the Internet was conceived...
Just an IT Pro with a little knowledge. But I'm harmless, really.
The authors seem to propose a "Utopia" that would be great. But their goal, laudable as it is, needs a sobering dose of reality. Other comments have already explained the main objections (monoculture, etc).
I'm just going to add that the Internet was very specifically engineered NOT to do these things.
I understand that, at the time (1970s-1980s), the other big networking standards were SNA and OSI. SNA was a big, complex beast owned by IBM. OSI is commonly referenced as a model, but it is in fact an entire protocol stack, led by ISO and ITU-T. If it had succeeded, we'd have OSI routers, etc.
IP routers were explicitly required to be much simpler:
RFC 1009 - Requirements for Internet gateways - June 1987
https://datatracker.ietf.org/doc/html/rfc1009
1.1.2. Networks and Gateways
The constituent networks of the Internet system are required
only to provide packet (connectionless) transport.
RFC 1812 - Requirements for IP Version 4 Routers - June 1995
https://datatracker.ietf.org/doc/html/rfc1812
This document enumerates standard protocols that a router connected to
the Internet must use, and it incorporates by reference the RFCs and
other documents describing the current specifications for these
protocols.
However, the specifications of this memo must be followed to meet the
general goal of arbitrary router interoperation across the diversity
and complexity of the Internet.
1.3.4 Configuration
In an ideal world, routers would be easy to configure, and perhaps
even entirely self-configuring. However, practical experience in the
real world suggests that this is an impossible goal, and that many
attempts by vendors to make configuration easy actually cause
customers more grief than they prevent.
2.2.3 Routers
Routers provide datagram transport only, and they seek to minimize
the state information necessary to sustain this service in the
interest of routing flexibility and robustness.
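To make "datagram transport only" concrete, here's a minimal sketch in Python of the connectionless service those RFCs describe (the address is from the documentation range, purely a placeholder). There is no handshake and no per-connection state in the network; anything stateful, like a TCP connection, lives at the endpoints rather than in the routers.

    # A minimal sketch of connectionless (datagram) transport, the only
    # service RFC 1009 requires of the constituent networks. The address
    # below is from the documentation range (RFC 5737), purely illustrative.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP
    sock.sendto(b"hello", ("192.0.2.1", 9))                   # fire and forget
    sock.close()
    # No handshake, no session, no delivery guarantee. Anything stateful
    # (e.g. a TCP connection) is implemented at the endpoints, not in the
    # routers, which is exactly what RFC 1812 section 2.2.3 is getting at.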
OSI was a rich protocol stack [for the era], incorporating email (X.400). But email benefits from directory services; enter X.500. Great! But how do you authenticate and encrypt? Enter certificates, X.509.
It seems these standards were too complex, so X.400 was simplified into the SIMPLE Message Transfer Protocol (SMTP), and X.500 was simplified into the LIGHTWEIGHT Directory Access Protocol (LDAP), which remains an internal network protocol, not really an Internet-facing one. And that was when the Internet was a TINY FRACTION of its current size.
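To illustrate just how lightweight the "simplified" protocol is: SMTP is a handful of plain-text commands (MAIL FROM, RCPT TO, DATA), and Python's standard library drives it in a few lines. This is only a sketch; the server name and addresses are placeholders, not real infrastructure.

    # A sketch of SMTP's simplicity versus the X.400 stack: the entire
    # envelope exchange is a few plain-text commands. Server name and
    # addresses are placeholders for illustration only.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.org"
    msg["To"] = "bob@example.net"
    msg["Subject"] = "hello"
    msg.set_content("SMTP kept only the simple core of message transfer.")

    with smtplib.SMTP("mail.example.net") as smtp:
        smtp.send_message(msg)   # issues MAIL FROM / RCPT TO / DATA for you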
With this simplicity, the [IPv4] Internet flourished, and SNA and X.400 disappeared.
The IPv4 Internet needed to be extended with IPv6, an effort that began in the 1990s and wasn't finalized as a full Internet Standard until RFC 8200 in July 2017.
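That transition is visible even at the application layer. As a hedged sketch (the hostname is a placeholder): a dual-stack client still has to ask for both address families and try each in turn, which is the problem the "Happy Eyeballs" approach (RFC 8305) exists to paper over.

    # A sketch of the dual-stack reality: decades into the transition,
    # clients still query for both IPv6 (AAAA) and IPv4 (A) addresses and
    # try them in turn. Hostname is a placeholder.
    import socket

    for family, _, _, _, sockaddr in socket.getaddrinfo(
            "www.example.com", 443, proto=socket.IPPROTO_TCP):
        label = "IPv6" if family == socket.AF_INET6 else "IPv4"
        print(label, sockaddr[0])   # may print AAAA results, A results, or both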
Experience shows...
- a "simple" Internet has succeeded, where a "complex" Internet (SNA; OSI) failed
- building on this - IPv6 - has taken an enormous amount of time
- while the IPv6 standard may have been completed over 5 years ago, IPv4 still dominates
- where X.500 might have evolved into a single sign-on (SSO) system, and its "white pages" element might have become your presence on the Internet, these roles have instead been filled by private enterprise (Google, Facebook, Twitter and Microsoft for SSO; Facebook and Twitter for Internet presence/blogging)
- the modern Internet is largely funded, ultimately, by advertising, and advertisers' motive is private profit
- the Internet was once more-or-less apolitical, but is no longer: Russia and China are obvious examples, and see also https://theconversation.com/fight-for-control-threatens-to-destabilize-and-fragment-the-internet-162914
- the modern Internet is "hostile", hence the rise of "zero trust" and the likes of ZScaler as cloud VPNs overlaid on top. Routers aren't necessarily trusted TODAY. Imagine how you're going to trust them in the proposed reality.
We do need a sophisticated Internet; one that accommodates an enormous set of requirements (billing, security, interoperability, etc). IMHO, this should be achieved through standards. Standards should be written "in the entire public interest"; the Internet should implement these standards and withdraw old ones. Arguably, this operating model needs to be established FIRST. If all routers ran the same software, that might actually MAKE something like this possible, but I doubt it. And then you'd have the monoculture risks, of course.
Assuming we could design, engineer and code this stack, who's going to fund it? Who's going to update all the endpoints so they're running the latest software? How do you handle out-of-date endpoints? Arguably, Google, Amazon, Meta, Apple and Microsoft (GAMAM) have achieved something conceptually similar: globally distributed, complex endpoints kept (more-or-less) up to date. But they are private enterprises with their own motives, including advertising. This advertising isn't a blanket, mass-media message, but ultra-targeted, ultra-precise advertising achieved through unprecedented monitoring. Very much not in the public interest.
Another "Internet" is the mobile phone networks. It seems to me that they have largely succeeded. They have standards evolved through "generations" (1G; 2G; 3G; 4G; 5G), they have enough interoperability (AFAIK), and they move forward (eg 1G has disappeared entirely; 2G and 3G are fast disappearing, etc). The dominant standards are GSM, vendor neutral and developed by ETSI. This "Internet" seems to be very sophisticated and works to everyone's benefit.
The article seems to refer to backbone and enterprise routers [only]. But wouldn't you need residential routers and mobile broadband routers to have the same software?
Sophisticated edge routers are a great idea. But I'm sceptical the idea is practical or achievable.