Bloat
Bloat has become a huge problem in every field of the IT industry, and it has very real consequences.
We're already seeing some of them.
A retiring veteran of the Internet Engineering Task Force (IETF) has left the organization with a departing piece of advice: stop creating so many protocols. Ross Callon was one of just 21 engineers who attended the first IETF meeting in San Diego in 1986 and has missed only a handful of the 95 subsequent meetings it has held …
I agree that bloat (protocols and software) is a problem, but it's useful to ask why the wheel is so often re-invented.
Hardly anyone insists on writing their own strtok(), printf(), etc. They know that the standard libraries are reliable. As for other software, it's typically ill-considered, confusing or badly documented, so the right course is often to re-invent it. The result may be no better, but it's something you can understand and maintain.
Writing good library code is hard, as is designing a protocol to suit many purposes.
You would like to think that standard libraries are known, fixed and tested. Not in the IoT world, where the mbed implementation of gmtime() is broken! And it hasn't been fixed in over two years!
Says a lot about how well they develop and test IoT stuff, eh?
https://developer.mbed.org/questions/75856/Who-will-fix-the-mbed-system-gmtime-func/
https://github.com/ARMmbed/mbed-os/issues/1098
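For what it's worth, this sort of breakage is cheap to catch. A minimal sanity check along the lines below (the leap-day timestamp is just an illustrative choice, not the specific failure in the bug reports above) would have flagged it in a unit test:

#include <stdio.h>
#include <time.h>

/* Minimal sketch of a gmtime() sanity check.  1456747200 is
 * 2016-02-29 12:00:00 UTC, a leap-day value that a correct
 * implementation has to get right. */
int main(void)
{
    time_t t = 1456747200;
    struct tm *tm = gmtime(&t);

    if (tm == NULL ||
        tm->tm_year != 116 ||   /* years since 1900 -> 2016    */
        tm->tm_mon  != 1   ||   /* months are 0-based -> Feb   */
        tm->tm_mday != 29  ||
        tm->tm_hour != 12) {
        puts("gmtime() sanity check FAILED");
        return 1;
    }
    puts("gmtime() sanity check passed");
    return 0;
}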
Actually, I created "my own" strtok(), although I admit you said "hardly anyone". Also, I created it because, back at that time, there were no freely usable implementations of the C standard library, and a few of us decided to fix that. And (almost) finally, I did it for "completeness", not because I or anyone I knew would be so barking mad as to _use_ strtok(), other than as a "reductio ad absurdum" to illustrate that _real_ character manipulation uses (pointer,count,position) structures, and doesn't poke holes in existing strings that someone else might be looking at.
And (really) finally, when I wrote a test suite (for all the functions in <string.h>) and ran it against all five commercial libraries I had access to, _none_ passed. Yeah, I verified that the failures were real, not just bugs in my test suite. I think of this every time someone opines how there are so many high quality fully tested libraries out there for pretty much every purpose. I have trouble believing that the clowns who have had bugs in memcmp() or memcpy() can somehow write a flawless network stack.
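For anyone wondering what the (pointer,count) style mentioned above looks like, here is a minimal sketch of a non-destructive tokenizer (the slice / next_token names are invented for illustration, not from any library). Unlike strtok(), it never writes into the source string, so other code looking at the same buffer is left alone:

#include <stddef.h>
#include <string.h>

/* A token is just a view into the original string: a pointer and a count. */
struct slice {
    const char *ptr;
    size_t      len;
};

/* Return the next token starting at *pos (advancing *pos past it), or a
 * slice with len == 0 when the input is exhausted.  No static state, no
 * holes poked in the source string. */
static struct slice next_token(const char **pos, const char *delims)
{
    const char *p = *pos;
    struct slice tok;

    p += strspn(p, delims);          /* skip leading delimiters        */
    tok.ptr = p;
    tok.len = strcspn(p, delims);    /* length up to next delimiter    */
    *pos = p + tok.len;
    return tok;
}

Callers just loop on next_token() until len comes back as 0; since nothing is modified and there is no hidden static state, it is naturally re-entrant.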
"Hardly anyone insists on writing their own strtok(), printf()"
Hardly anyone insists on writing their own for no good reason, but there often are good reasons. printf() is a large function; on embedded systems with limited memory that can be an issue, and a cut-down version with limited capabilities can help. printf() implementations often use dynamic (heap) memory, which can be a major problem, and they are not always thread-safe. strtok() is rarely (never?) thread-safe, and you may not have strtok_r() available. Writing your own standard library functions may seem daft, and done for its own sake it is, but there are often good reasons that make it essential.
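To illustrate the cut-down printf point: a sketch like the one below (tiny_printf and the emit callback are invented names, not any particular library) handles just %s, %d and %x with no heap, no locale and no shared state, which is all a lot of embedded code ever needs:

#include <stdarg.h>

/* Emit a string one character at a time via the caller's output routine. */
static void emit_str(void (*emit)(char), const char *s)
{
    while (*s)
        emit(*s++);
}

/* Emit an unsigned value in the given base, optionally with a '-' sign. */
static void emit_num(void (*emit)(char), unsigned long v, unsigned base, int neg)
{
    char buf[32];
    int  i = 0;

    if (neg)
        emit('-');
    do {
        buf[i++] = "0123456789abcdef"[v % base];
        v /= base;
    } while (v);
    while (i--)
        emit(buf[i]);
}

/* Cut-down printf: %s, %d, %x only; no heap, no static state. */
void tiny_printf(void (*emit)(char), const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    for (; *fmt; fmt++) {
        if (*fmt != '%') { emit(*fmt); continue; }
        switch (*++fmt) {
        case 's': emit_str(emit, va_arg(ap, const char *)); break;
        case 'd': {
            long v = va_arg(ap, int);
            emit_num(emit, v < 0 ? -(unsigned long)v : (unsigned long)v, 10, v < 0);
            break;
        }
        case 'x': emit_num(emit, va_arg(ap, unsigned), 16, 0); break;
        default:  emit(*fmt); break;
        }
    }
    va_end(ap);
}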
Everyone agrees that community efforts are a good thing, but we all still want to do things "our" way. Because "our" way is best. And although our ways can be the best option for us, that does not automatically make them viable candidates to become an official standard for everyone.
I think xkcd shows a good example of this ;)
Just go into the NVO3 group to see what he is talking about. Three invented, unnecessary transport protocols with no use cases and the sole rationale "we need more evil bits in the header". No agreement for 5 years. No consensus. Complete and utter clusterf***... Or the L2VPN group, which despite reasonably good management by its chairs still managed to degenerate into an Alcatel featuritis. Or the remaining MPLS zombie meetings.
Or...
When he started to attend the IETF it was attended by engineers. It is now attended by Dumb and Dumber standards droids from one well-known Chinese vendor, and by product managers from Microsoft and VMware who will never allow consensus if it means the "internal" resources of their own company being reduced.
Anon. For obvious reasons.
WE NEED TO BUILD A WALL!
It's the same in the language area, really.
How many man-hours and lives are currently being burnt on the utterly insecure and unmanageable WOMBAT and square wheel that is Node.js? We will be carrying the technical debt with us for decades.
Well, we do get flashy mobile-enabled websites with colorful hexagons (these are à la mode for some reason) which will be torn down tomorrow...
He's right, up to a point (and I've known him personally since 1993). But he's wrong too: you can't stop people inventing new ideas; everything is binary, and binary trees are rather like fractals, so we can't stop this constant invention; we have to try to control and manage it. Just Say No is not really an option. (And you should see how much crud is in fact laughed out of the IETF every year.)
"He's right, up to a point (and I've known him personally since 1993). But he's wrong too: you can't stop people inventing new ideas;"
Well, we kinda do stop people inventing pointless new protocols and stuff, but only by ensuring they don't profit from them. For example, some IoT startup wannabe creates a whole load of proprietary stuff, and they're highly unlikely to develop into the next Apple or Google. In fact, the problem we have is that there already are Apple and Google: they have created their walled gardens, and no one else can get a piece of the action.
From the article:
"While diversity in approaches is inevitable and valuable, too many options damages interoperability," Callon observed according to a write-up of the talk in the IETF's most recent newsletter. "We have to be a little concerned about creating too many options because some vendors implement some, while some vendors implement others, and suddenly we don't have interoperability."
He's underdone it there; we should be gravely concerned about the decline of inter-operability. We have a lot of low-level standards that work (HTTP, IP, etc.), but compatibility with those is not enough to ensure level competition between clouds and online service providers. A lot of new stuff has been built in recent years with the deliberate intent of walling in customers. Part of the problem is that no service of a brand-new type can be inter-operable; it is inevitably in a class of 1. However, as soon as someone else starts up a similar service, there is zero value to the original provider in being inter-operable. And government is too lacking in technical understanding to see that these walled gardens are going to cost the consumer a lot of money. It's practically a license to gouge the market. Look at the "Apple Tax" you have to keep paying if you want to retain access to all that music you've bought in iTunes...
The industry has long since learned that bringing a new service type to market first is key; quality doesn't really matter, up to a point, but being first does. Lock those users in.
The IETF is just a place where people get together to try and get some traction for ideas they feel have technical (or, more latterly, marketing) merit. It's not really a surprise that if you let a thousand flowers bloom some will turn out to be persistent weeds. I think the problem is more that the supposed oversight body, the IAB, rather lost its bottle after the IPv6 debacle and hasn't really been able to fulfil its role of "constant gardener" ever since.
This was immortalised a bit over 20 years ago in RFC 1925 - The Twelve Networking Truths:
(12) In protocol design, perfection has been reached not when there is nothing left to add, but when there is nothing left to take away.
Ok, I agree with him in principle, but this:
"You can take an Internet Protocol (IP) packet and encapsulate it in an IP header. There are four options just for that: IPv4 in IPv4, IPv4 in IPv6, IPv6 in IPv4, and IPv6 in IPv6. Given all those options, it's hard to get one of them implemented and deployed everywhere."
is wrong on several counts.
First off, IPv4 and IPv6 are not different standards; they are different versions (editions if you will) of the IP standard.
Second, these four options are just the Cartesian product of the two versions. Want to make any list look combinatorially larger? Throw in a Cartesian product.
Third, each of these four options has clear uses, as defined by the capabilities of the host network and the desired capabilities of the virtual network.
There are a lot of unnecessary duplicate or overlapping standards out there. But the options for encapsulation of IP traffic over different versions of the standard don't deserve to be lumped in with them.
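To make the Cartesian product point concrete, the article's four cases are nothing more than two outer versions times two inner versions, as a trivial sketch shows:

#include <stdio.h>

/* Enumerate the IP-in-IP options: inner version x outer version. */
int main(void)
{
    const char *versions[] = { "IPv4", "IPv6" };

    for (int inner = 0; inner < 2; inner++)
        for (int outer = 0; outer < 2; outer++)
            printf("%s in %s\n", versions[inner], versions[outer]);
    return 0;
}

That prints exactly the four combinations the article lists; the number of options grows with the product of the independent choices, not because anyone sat down and designed "four protocols".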
Bloated is too nice a word. IPv6 was just a flat-out mistake, a political reaction by k1dd13z at the IETF who could not accept a working, cleaner approach, TUBA, when it was approved as the new IP. TUBA was a profile of OSI CLNP, and the mere taint of OSI, even though it was the part of OSI that worked (it came from DEC, not the CCITT), was enough to make them as apoplectic as a Republican facing Obamacare. So the B-team set out to write what became IPv6.
TUBA, of course, was originally proposed by Ross Callon.
Quick, how many languages were announced in each of the last five years? With "major industry backing" aka Google, Microsoft, Apple, Mozilla, AlsDeli, etc. (Heck, madly inadequate list)
This has been going on forever, and the reasons why have always been more social than technical, prestige rather than practicality. Back in the '60s, Texaco Oil had its own programming language, TexTran. Because they needed a Texas-sized language?
I have to wonder if this all couldn't be summarized as "I'm going to win this campaign because my orcs have _six arms_ with blades for fingernails and eyes that shoot out porcupine quills! And they smell so bad everyone closer than 20 meters is incapacitated! And Gygax helped so I'll get the respect I deserve!"
There is an assumption here that this isn't being done by design. You only have to look at how Google implemented their Privacy Checkup software tool - the layout of dialog boxes, text off screen, toggle switches, confirmation dialogs - to realise this is all by design, to make interoperability difficult and prevent something being achieved, in this case turning off Google's snooping. This simple example shows how it's become industrialised within Google, and applied to all aspects of their engineering.
... when companies introduce more protocols in order to keep their users locked in a private walled garden, and force other companies to pay for access... and maybe cripple the protocols implemented by competitors in the marketplace... and provide ammunition for IP lawsuits.
Ross has a good point, but it's endemic to the IETF way of doing business, and to the TCP/IP suite in general. It's all about specialized little protocols rather than looking for a general model. John Day recognized this over 20 years ago and began work on what became RINA, Recursive InterNetworking Architecture. It uses only two protocols and recurses them as many times as needed, no more no less, with many adjustable parameters for scope and requirements. Check out the IRATI and ICT-PRISTINE projects and the Pouzin Society sites:
http://www.pouzinsociety.org/
http://irati.eu/learn-rina-vs-the-current-internet-architecture/
It shrinks the complexity and code base requirement by orders of magnitude. And it's actually easier to adopt than IPv6, since it can support unmodified IP applications (as well as native ones using its single application protocol), or run inside an IP backbone if necessary.