
To me, it is people like this who matter in the world, not the politicians, not the tech-bro magnates, not the CEOs.
The people who "make things go."
David Boggs, a computer networking pioneer best-known for co-inventing Ethernet, has died. He was 71. Born in Washington DC on June 17, 1950, Boggs as a child liked to tinker with ham radios and was an amateur radio operator. He went on to study electrical engineering at Princeton University, and graduated in 1972. His big …
Back in the mid-1980s, it was not clear which Local Area Network design/technology would become dominant. I worked with over a dozen different networking technologies, including Corvus Omninet, Gateway G-Net, SMC ARCnet, Proteon ProNet 10, 3Com Ethernet, IBM PC Network Broadband, IBM PC Network Baseband, IBM Token-Ring Network, Orchid PC-Net. What clicked for Ethernet was SynOptics LattisNet running Ethernet over an unshielded, twisted-pair cable. After the emergence of structured cabling systems in the early 1990s, you have what you see today in wiring closets. Everyone has heard of Bob Metcalfe as one of the inventors of Ethernet, but far fewer people have heard about David Boggs, the co-inventor of Ethernet.
From what I remember, the two big competitors were Ethernet and IBM Token Ring. From a technical perspective, Ethernet should not have had a chance. Not only was Token Ring backed by the biggest computer manufacturer on the planet (look it up, kids), but the way Ethernet worked seemed counter-intuitive.
Rather than Token Ring's orderly passing of permission to send, you had each Ethernet station trying to grab the network at random. It seemed Ethernet could not scale.
Two things happened, however. Firstly, Ethernet was an open standard: unlike Token Ring, which was encumbered by IBM patents and a corporate stranglehold, pretty well anyone could make an Ethernet card. This is a big lesson: in the short term, protectionism may make the corporate balance sheet look good, but in the long term it is a technology death sentence.
The second thing is that, thanks to the scale and competition, costs came down. Switches and routers became cheap (I hope he got his cut from Cisco [probably didn't]), so we moved to star networks and scaling became far easier. (I don't know whether the latest standards still support the CSMA mode?)
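For anyone curious what "randomly grab the network" actually meant in practice, here's a minimal Python sketch of the truncated binary exponential backoff that classic half-duplex CSMA/CD used (constants are the 10 Mb/s ones; illustrative only, not anyone's production implementation):

```python
import random

# Sketch of classic Ethernet's truncated binary exponential backoff.
# After the nth collision on a frame, a station waits a random number
# of 51.2 us slot times drawn from 0 .. 2^min(n, 10) - 1, and gives
# up after 16 attempts. The widening random window is what spreads
# contending stations out in time and lets the scheme scale.

SLOT_TIME_US = 51.2

def backoff_delay_us(collision_count: int) -> float:
    """Random wait before the next transmission attempt."""
    if collision_count > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(collision_count, 10)            # exponent capped at 10
    slots = random.randint(0, 2**k - 1)     # uniform over the window
    return slots * SLOT_TIME_US

for n in range(1, 6):
    print(f"after collision {n}: wait {backoff_delay_us(n):.1f} us")
```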
Anyway, thanks David Boggs, the world owes you (as I send these packets down my local Ethernet link).
Back in the 90s, our uni lecturer was convinced that Ethernet was doomed and ATM was where it was all going. What changed was the move from coax, with all its myriad problems, and relatively dumb hubs to fully switched networks. That move helped reduce or remove the collisions which plagued early Ethernet networks and allowed it to scale up and out.
What is ironic is that Ethernet followed ALOHAnet which was a wireless protocol, making it wired. We've now taken Ethernet back to wireless with our wifi links. Everything turns full circle...
Did some work in the '90s with someone who made a lot of money in early network developments and invested big time in FDDI as the next big thing.
In the meantime, Ethernet just evolved, added new standards and squashed any attempt to supersede it. It's a bit like IPv4: if you were designing it today it would look very different, but it's good enough and so ingrained that it's virtually impossible to shift, warts and all.
Yes, I remember ATM (and FDDI) being the next big thing. I remember one glossy magazine stating around 1996:
ATM is the answer if the question is how do I service the telephony needs of a small country whilst also providing space heating and a light show
And all the while Ethernet just got incrementally better.
I worked on a corporate campus that deployed ATM so that it could eventually migrate to a VoATM phone system. It was used between buildings and within the executive building, but made it no further than the main telco room in our building.
We used FDDI between our building's telco rooms, switching to CDDI, 100BaseTX, 100BaseT4, and 10BaseT for connections to end devices. The larger 4K frame size of FDDI/CDDI gave a mild performance boost for some applications. The secondary fiber rings came in useful when one of the fiber runs got eaten.
Just around the time we finished our CAT5 wiring upgrade, corporate dropped the idea of ATM and pronounced Gigabit Ethernet to be the one true network standard. We received a couple of 3Com CoreBuilder 9300s and some 3900s soon after. I remember setting up an 8 Gbps aggregated link for fun and thinking, "one could push a lot of porn across that". All the lab and office machines were upgraded to Fast Ethernet, we nailed every hub we found to the bulletin boards in the break rooms, and we locked the ports to full duplex for good measure. And that was the end of tokens and LAN emulation in our building.
The early/mid '90s were great times for networking. A serious question then was, when we jump to a high-speed backbone (from 10b5), do we pick 100BaseFX, FDDI or ATM? Five years later everything except simple Ethernet was going the way of the Dodo, because 1Gb/s Ethernet had arrived.
Does anyone else remember the USR TotalSwitch from the mid '90s, with its switched line cards? That little beast allowed you to put switched 10b2 & 10b5 (yes, really!) on the same network as 10 & 100Mb/s UTP Ethernet, and it also had the newfangled VLANs!
It was a brilliant device with which to start the migration from Thick/Thinwire to Fibre/UTP and, as I remember, not at an outrageous price either.
I wish I'd kept the LAN magazine print run along with a few broadsheet issues of Computer Weekly with the 1,000s of real job ads placed by real companies (if only to show the youngsters what they missed).
>What changed was the move from coax<
His original design had a separate cable interface device (the transceiver).
Metcalfe was on record as saying that Ethernet had been invented, and invented, and invented again, and that all of the inventors who invented subsequent versions were inventors who invented modern Ethernet.
The two big things that made Ethernet dominant were its migration to phone cable and the arrival of hubs and switches.
Original Ethernet used a rather special tri-axial cable, expensive to install, and to access it you used an adapter box that stuck a needle-like tap into the internal conductors. Not convenient and definitely not cost effective. It was replaced by a coaxial cable system that was somewhat more cost effective but extremely fiddly to work with -- any disturbance to the cabling would bring the entire network down. Efforts were made to use phone cabling, starting with StarLAN, but in order for things to take off two things were needed -- one was affordable network adapters and the other was a hub that could isolate malfunctioning segments. All this was in place by the early 90s (for a given value of 'inexpensive'!), but the kit actually didn't work at all well once the traffic got over a few percent of theoretical capacity. This led to a market in testers that could generate reliable network traffic (and traffic errors), which caused a rapid uptick in development during the 90s of large 100BaseT systems. Prototype 1000BaseT adapters appeared in the late 90s (...and the rest was history).
It's worth remembering that back in the 90s processors were relatively slow and network hardware either did or didn't work correctly, so without specialized hardware testers you couldn't see what was going on. Another thing is that the winning technology wasn't clear at the time; Token Ring (and FDDI) were still contenders, as were other things like 802.12; it was only the availability of test gear that could quantify traffic that really sorted things out. (Another thing that aided large-scale adoption of Ethernet was the ridiculous per-node license fee for Token Ring -- it worked out to be about $300 a node.)
(We went through the same 'sorta works' phase with wireless in the 2000s. But that's another story.)
I, too, designed Ethernet gear, for Data General and 3Com. Also Token Ring, but the less said about that, the better.
10BASE5 cable is essentially RG214 coax. It works just fine as antenna lead for ham radio, and the N connectors are standard, too. At DG, we also dabbled in networking over CATV hardline, but that never went anywhere.
Every time I plug in an RJ45, I think of the old yellow cable, vampire taps and inflexible AUI cables with those crappy slide latches that tear off the first time you try to use them, and thank God for twisted pair ethernet.
When in Hawaii last Christmas, I made a pilgrimage to UHawaii and took a selfie with the marker commemorating ALOHAnet. Apparently, Abramson lived just long enough to see it installed (he died last year).
The history of Token-Ring networks is more complicated and has nothing to do, per se, with IBM's market dominance in the mid-1980s. First, Token-Ring network adapters were more "intelligent" than Ethernet adapters, which also made them more expensive to manufacture because they required more components. Second, a Swedish citizen, Olof Soderblom, was awarded a U.S. patent on token-passing network technology in the early 1980s. He proceeded to trot around the world, licensing his patent to manufacturers like IBM. His royalty fees contributed to making Token-Ring network adapters more expensive than Ethernet adapters. Madge Networks in the U.K. refused to pay a license fee to Mr. Soderblom for manufacturing their Token-Ring adapters. The English courts, both the lower and appeals courts, sided with Madge Networks, ruling that it had not infringed on Mr. Soderblom's patent in the U.K. There were other manufacturers of Token-Ring network adapters, like SMC/Western Digital, but overall far fewer than there were manufacturers of Ethernet adapters. Today, you cannot buy a PC that does not have an Ethernet interface embedded on the mainboard.
It does seem like the history of computing has a high number of technologies that priced themselves out of the market because their inventors/IP owners wanted to make everyone pay through the nose. Instead, users adopted or stuck with slightly poorer but much cheaper alternatives. The one that comes to my mind immediately is the LS-120 (or some such name) "super floppy" discs. A great idea, but the actual media were bloody expensive, prohibitively so. So people stuck with existing devices until the CD-ROM came along.
>a Swedish citizen, Olof Soderblom, was awarded a U.S. patent on token-passing network technology in the early 1980s. He proceeded to trot around the world, licensing his patent to manufacturers like IBM. ...
>Madge Networks in the U.K. refused to pay a license fee to Mr. Soderblom for manufacturing their Token-Ring adapters. The English courts, both the lower and appeals courts, sided with Madge Networks by ruling that Madge Networks had not infringed on Mr. Soderblom's patent in the U.K.
GEC had UK patents on token-passing network technology dating from the late 1970s. It was good for real-time work, i.e. it had deterministic behaviour, unlike CSMA/CD.
Thanks to Olof, whose claims of "inventing" Token Ring were rather tenuous, a TR adapter cost around 3x what an Ethernet adapter cost. And TR topped out at 16 megabits. Once 100-meg Ethernet arrived, TR was finished.
I worked on 16-meg twisted-pair TR for 3Com, and it was a nightmare EMI-wise. All those fast edges and the required clock jitter specs were not conducive to low emissions. Ethernet did twisted pair right, by redesigning the waveform to minimise emissions and match the characteristic impedance of the cable. TR never quite got that to work as well.
"Ethernet does not scale" is largely IBM FUD, trying to persuade people that there were good reasons to prefer their more expensive 4Mb/s system to 10Mb/s Ethernet.
Words like 'deterministic' get tossed around, conveniently ignoring the fact that it is only achievable on a LAN with a BER of 0. As soon as you accept the possibility of random noise corrupting a bit, determinism goes out the window.
Some theoretical studies of ALOHA have the performance tending to 1/e, which at 37% is remarkably convenient for token-pushers. However, ALOHA isn't CSMA/CD.
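For anyone who hasn't seen where that 1/e comes from, the standard slotted-ALOHA calculation goes like this (offered load G frames per slot, Poisson arrivals):

```latex
% A frame succeeds only if no other frame is offered in its slot:
S = G\,e^{-G}, \qquad
\frac{dS}{dG} = (1 - G)\,e^{-G} = 0 \;\Rightarrow\; G = 1,
\qquad S_{\max} = e^{-1} \approx 0.368.
% Pure (unslotted) ALOHA is vulnerable for two slot times,
% so S = G\,e^{-2G} and the peak halves to 1/(2e) \approx 0.184.
```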
It's true, to some extent, that Ethernet starts to break down in ridiculously worst-case scenarios, e.g. thousands of nodes on the same collision domain, all constantly sending minimum-size packets.
(Digression: for Gigabit Ethernet, 802.3 specified a somewhat complex half-duplex mode, because the idea of publishing an Ethernet standard without one was too contentious. When (I think) precisely nobody developed a half-duplex 'repeater', later standards dropped it entirely.)
So what I am going to do is what I usually do at this point, which is to refer you to the classic WRL paper - Measured Capacity of an Ethernet: Myths and Reality (Boggs et al). RIP David.
To my mind, the reason Ethernet won the LAN wars was that (a) it supported different hardware layers, (b) 10base2 used easily available, easy-to-make cables (50-ohm coax + BNC), and (c) price, of course.
Solder/crimp a BNC onto any bit of 50-ohm coax (or pick one off the shack wall if you're a radio ham), plug in a pair of NE2000s, and suddenly you can copy files around that don't fit on a floppy, network-print like the big boys, and play networked DOOM with your friends and family. You could even just about get away with unterminated 75-ohm coax from the TV set, a pair of resistors and some matchsticks, if you didn't mind a few packet errors and nobody moved the cable.
We tried it. It didn't work well, but it did work enough that we decided to invest in the proper cable.
For small offices / home users it just lowered the access bar so far.
I've got some coax stuff in the man shed that I think would probably still work. Largely because the coax is pure copper. I may be wrong but a lot of the stuff that came out early on seemed to be made of tinned iron - makes your average dunked biscuit look moveable.
Big thanks to David though - and a lesson for people today - the man was an engineer and learned whatever was relevant to the needs of the job at the time.
That and the subsequent 10baseT.
Around the turn of 86/87 we moved to a building that had a different coax network, which I think ran with an RF carrier. It had a lot of little boxes on it with RS-232 connections. Somewhere in the building was a room with a head-end system; I'm not sure whether we got our RS-232 lines to the server from there or whether we had more little boxes under our floor. The server itself didn't have many serial ports of its own, but it did have a 10base2 connector. The first step was an Ethernet-connected terminal server - a rack-mounted box that took 4 plug-in modules, each with a number of serial ports. It started off with only one module, which was sufficient for immediate needs. For ourselves, we got a length of 50-ohm cable pulled through to our desks and the wonders of an X-Term box and X-Term emulators on the on-trend 386 PCs.
In the next building a few years later it was Ethernet everywhere.
I remember going on site with a small team to do some development on a hush-hush naval project. Rather than swap floppies around, we set up our own small Ethernet network using T-junctions and terminators (no switches here, kids).
This improved productivity no end, and during the downtime it allowed us to play deathmatch on the recently released Doom - the first time we had ever played connected (33K modems were not up to the job). Not sure it did a lot for productivity, but it was great team building, and there's no better feeling than taking your boss out with a BFG.
I remember installing vampire taps on the thick coax in our testbed. Not to be recommended. Then we had strings of RG58 or equivalent running round the office. It was the move from hubs to switches that really made the technology work well and scale. I'm still boggled that we're talking about 100 Gbit ethernet now.
I remember hauling drums of thick coax onto the roof to string 10b5 across the site over various buildings, ducts and catenaries (it was a few years before they realised that lightning existed). I always referenced the NetWare manual for the figures for network lengths that combined 10b5 and 10b2 topologies ... ironic, as I was 'seconded' to a department that hated non-Microsoft stuff ... Soldering N-type connectors onto 10b5, on a roof, in winter, was fun :-)
I hadn't heard of "the value of a network is proportional to the square of the number of its connected users". I guess FB et al have more or less proved that to be true ...
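For reference, that's Metcalfe's law, and the square just comes from counting the potential pairwise links among n users:

```latex
V \;\propto\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\sim\; \frac{n^2}{2}
\quad \text{for large } n.
```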
Digging through my pile of junk, er, copious collection of curiosa recently, I found two 10b2<->TP converters. Curiously, they claim to do 10 and 100 Mbit on the TP side, possibly for upstream compatibility raisins, but otherwise they're not that useful.
On and off I've been working on fitting a WiFi transceiver inside a H4005 (AUI to 10b5 converter), with a short length of 10b5 coax acting as the transceiver's antenna. In the end it should work like hanging your VAX or whatever off a thickwire backbone as well as look like it except for the short and otherwise unconnected length of thickwire the H4005 would be clamped on to.
This rabbit hole is deep. I remember installing a proper connection (cut the cable and install 2x N connectors) rather than a simple vampire tap for the head of our dept (not IT back then; it was relegated to a division of the electrical estates crew, namely me and my mentor)... then putting a 'fanout unit' on the end of the AUI connector. And the boss arguing he wanted his OWN transceiver ... lol
All credit to the man. Ethernet was the breakthrough networking product, but...
What we use today has very little to do with the original invention of Ethernet. The clue to what it was is in the name: ETHERnet.
When you launched a packet on baseband Ethernet, it was effectively sent without knowing whether it was received (and the adapter had to listen back to the packet to even know it had been transmitted correctly and did not collide with another packet). And it was really broadcast, to every system on that segment of the network (and all other segments if you used simple repeaters). It really was launching the packet into the ether, though constrained to a cable.
Most of the technologies for the original 10base5 and 10base2 Ethernet, like CSMA/CD have fallen by the wayside. I suppose that the basic framing has survived, as has some of the packet addressing, but switched 10baseT and later effectively threw most of the baseband stuff away. Now, 'Ethernet' networks are a series of point-to-point links, mostly arranged as connected stars, with switches handling the routing of the packets beyond the initial link. No longer do packets reach all nodes (except for specifically broadcast packets).
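The bit that did survive essentially intact is the framing. A rough Python sketch of the Ethernet II layout (the helper and names are mine, just to illustrate; the hardware appends the 4-byte FCS):

```python
import struct

# Ethernet II framing as it still looks today: 6-byte destination MAC,
# 6-byte source MAC, 2-byte EtherType, then payload (FCS added by HW).
def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    assert len(dst) == 6 and len(src) == 6
    return struct.pack("!6s6sH", dst, src, ethertype) + payload

BROADCAST = b"\xff" * 6      # still reaches every node on the segment
frame = build_frame(BROADCAST, bytes(6), 0x0806, b"...ARP request...")
```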
Most of the network magic happens at higher levels, like ICMP, ARP and IP, and these are not synonymous with Ethernet (in fact they are independent of Ethernet, and can use pretty much any Layer 1 technology), and I don't know how much David contributed to the protocols above the physical layer that we use today.
When I worked at AT&T, they had an Ethernet implementation that did not use IP. Similarly, the original DECnet used a proprietary addressing mechanism that only used Layer 2 addressing.
The real reason why Ethernet (and many other Ethernet-like networks) gained traction was the relative simplicity of the physical installation. You ran a single cable, from one end to another, around the office, and tapped in wherever you needed it (well, there were rules about where you could tap in due to the wave propagation properties of the cable, which affected both 10base5 and 10base2 to different degrees). This was a major benefit compared to other communication technologies, which needed hubs or some other type of central controller, requiring a concentration of individual cables to each device.
It's funny that the replacement technologies, like 10baseT, went back to the individual cable to a hub/switch, but this became possible with structured cabling strategies.
I'm not dissing David's memory, more the implication in the article that Ethernet and TCP/IP are to be thought of in the same breath.
IIRC all three types of boxes existed. Repeaters just connected the media together (so collisions crossed the box and lengths were limited); switches received the packet and retransmitted it on the other ports (so collisions were local to each port, at the cost of not knowing whether the switch had run out of buffer space and dropped the packet); and bridges did clever stuff (the most important of which was the spanning tree algorithm to break loops).
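To make the distinction concrete, here's a toy sketch (names mine) of the MAC-learning forwarding logic that a switch/bridge does and a repeater doesn't; a repeater would simply repeat every bit on every port:

```python
# Toy learning switch: remember which port each source MAC was seen on,
# forward known unicasts to that port only, flood everything else.
class LearningSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table: dict[bytes, int] = {}

    def handle(self, in_port: int, src: bytes, dst: bytes) -> list[int]:
        self.mac_table[src] = in_port              # learn/refresh source
        if dst in self.mac_table:
            return [self.mac_table[dst]]           # known unicast
        return [p for p in range(self.num_ports) if p != in_port]  # flood
```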
You are right: non-switched hubs still used CSMA/CD. But that is why I specifically said "switched 10baseT and later". I have not seen an un-switched Ethernet hub for 25 years or so.
(actually, that's not quite true. I still think I have a 10base2 to 10baseT repeater somewhere which was not switched, but even that was rescued from the bin of one of the companies I worked for. I stopped using that when I stopped using 10base2 in the house, which was when I started using WiFi in about 2002)
"(and the adapter had to listen back to the packet to even know it had been transmitted correctly and did not collide with another packet)"
CANbus (and, I assume, derivatives) is like that. There's even a specific ACK bit at the end of the frame where ALL receivers on the same bus need to verify they received it, and if anyone didn't (only takes one) then the whole frame gets resent.
The wiring is usually shielded twisted pair, 120 Ohm, but the bus layout looks like T-tapped coax, with 120 Ohm terminators on both ends.
Why did the auto/truck industry invent their own thing (subsequently used on truck-like armored vehicles <cough>) when Ethernet already existed? I haven't the foggiest. We even have CAN-over-Ethernet bridge devices. Must be the devils in the details. I'm just a luser (see icon -->) who used to watch things happen in CANoe and not worry about how the bits move.
"CANbus (and, I assume, derivatives) is like that. There's even a specific ACK bit at the end of the frame where ALL receivers on the same bus need to verify they received it, and if anyone didn't (only takes one) then the whole frame gets resent."
No, it's the other way around. The ACK bit in the CAN frame means that at least one other device on the network has received the message without CRC errors. See here for example.
CAN is different to Ethernet in that there is no "to" field in the protocol. You just broadcast data from an address, and anything that wants that data listens for messages from that address. For example, one CAN node will measure engine speed and broadcast it on a particular address, and both the rev counter and the gearbox controller listen for that particular message. The engine speed measurement doesn't need to be sent separately to the rev counter and the gearbox.
>Why did the auto/truck industry invent their own thing
CAN bus goes back to 1983, before Ethernet was an obvious universal standard. It also has some characteristics that make it more robust in an automotive environment.
The bus arbitration works rather differently. In Ethernet, the sending station listens to its transmitted frame to check whether it has been stomped on by another station transmitting simultaneously, and randomly backs off if so. With CAN, the potential transmitters are synchronised and the frame begins with a (unique) ID. If several nodes transmit simultaneously, they continue to emit ID bits until the transmitting node with the lowest ID is determined, and that node gets to send the rest of its message without everyone having to delay and retry. That gives you a priority mechanism - low IDs get to grab the bus ahead of high IDs - that's intrinsic in the bus and doesn't require back-offs or an intelligent switch to reorder packets.
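A toy simulation of that arbitration (illustrative only): nodes clock out their IDs MSB-first, a dominant 0 overwrites a recessive 1 on the wire, and any node that sends recessive but reads dominant drops out, so the lowest ID always survives with no backoff:

```python
# CAN-style bitwise arbitration over a wired-AND bus (IDs assumed unique).
def arbitrate(ids: list[int], id_bits: int = 11) -> int:
    contenders = set(ids)
    for bit in reversed(range(id_bits)):          # MSB first
        sent = {i: (i >> bit) & 1 for i in contenders}
        bus = min(sent.values())                  # 0 (dominant) wins the wire
        contenders = {i for i in contenders if sent[i] == bus}
    (winner,) = contenders                        # exactly one survivor
    return winner

assert arbitrate([0x2A5, 0x100, 0x6F3]) == 0x100  # lowest ID gets the bus
```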
>and I don't know how much David contributed to the protocols above the physical layer that we use today.
Ethernet Blue Book, effectively contributed Phy, MAC and LLC to IEEE 802.3.
Above this it would have been XNS, which influenced LAN-specific protocols (i.e. protocols for single subnets) over the decades.
>What we use today has very little to do with the original invention of Ethernet.
I agree, the "Ethernet" over twisted pair that we use today is not the same as Ethernet over yellow peril/coax as specified in the Blue Book.
We looked at ethernet in the early 80s (82?) in the research lab where I worked. It needed a big thick coax cable with a minimum bend radius of about 2 feet, and you had to connect to it with vampire taps that could only be screwed into the cable at the wave nodes. Worse still, an ethernet transceiver with its MAC address burned into it cost about £1000. I recall we all laughed during our discussion of how impractical it was and judged it to be a non-starter. A little over 10 years later I had my first ethernet LAN in my house.