And it's been downhill ever since.
Thirty years ago this week the modern internet became operational as the US military flipped the switch on TCP/IP, but the move to the protocol stack was nearly killed at birth. The deadline was 1 January, 1983: after this, any of the Advanced Research Projects Agency Network's (ARPANET) 400 hosts that were still clinging to …
Thursday 3rd January 2013 12:08 GMT the-it-slayer
Depends on how you view it. IPv4 has been well used and is now becoming abused, with no spare allocations of addresses left. NAT again saved its bacon in the last 20 years. IPv6 should save the internet from crashing down on its head, but the adoption is just too slow and won't speed up until companies/ISPs get to a crunch point where the current infrastructure becomes untenable.
Thursday 3rd January 2013 09:20 GMT Anonymous Coward
Thursday 3rd January 2013 17:19 GMT asdf
Networking can be damn complicated, which is why competent specialists are worth the big bucks. No expert myself, but I can tell you that at least on my home router, Nichols and Van Jacobson's CoDel (in the form of fq_codel) absolutely destroys pfifo_fast under any kind of load. pfifo_fast should have died years ago, or at least not still be the Linux default, which it is (a TOS byte, really? What is this, 1997?). What's sad is it took us so long to get such a wonderfully simple (from the end user's view) and effective algorithm. What is also pathetic is how little money and expertise are actually being put into TCP/IP technology today. Where are Cisco and all the other big boys (Google has helped some, but not enough) to help fight the bufferbloat crisis? It's largely being tackled by a small number of very smart and very dedicated hobbyists who really could use some support.
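For anyone curious, CoDel's core control law really is that simple. A toy sketch (the 5 ms target and 100 ms interval are the published defaults from Nichols and Jacobson's paper; everything else here is illustrative, not the actual fq_codel source):

```python
import math

TARGET = 0.005    # 5 ms: acceptable standing-queue (sojourn) delay
INTERVAL = 0.100  # 100 ms: window over which delay must exceed TARGET

def next_drop_interval(count):
    """CoDel's control law: while queue delay stays above TARGET,
    successive drops come closer together, INTERVAL / sqrt(count) apart."""
    return INTERVAL / math.sqrt(count)

# Drops accelerate as congestion persists:
for count in (1, 4, 16):
    print(round(next_drop_interval(count), 3))  # 0.1, 0.05, 0.025
```

That square-root schedule is essentially the whole algorithm, which is why it needs no per-link tuning.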
This post has been deleted by its author
Thursday 3rd January 2013 11:22 GMT corestore
Thursday 3rd January 2013 21:37 GMT jake
Yep. Earlier, even, if you were in the wrong place & time :-)
My first ARPA connection was a dumb terminal in my dorm in 1975ish. At home, I had a PDP-11 based Heath H11A personal computer that I dialed into the ARPANET with in late 1979. Following that, I had an always connected, somewhat larger BSD based DEC system under Bryant street in Palo Alto in early 1982, controlled from home by the H11A. Later, in 1985, I had an AT&T PC7300 "UnixPC" at home, permanently connected to the DEC box which was configured as what we would now call a stateful firewall.
The dorm in question was at Berkeley, the home was in Palo Alto's Johnson Park neighborhood, a couple city blocks from the Bryant Street CO ... I was young, naive, and doing research into networking and OS design for a couple of my first degrees at the time ...
Last time I was in Palo Alto (Thanksgiving), the ten-pair solid-core bell wire that I pulled between home & the CO in 1982 for permanent BARRNet access was still in place, as was the two-pair line I pulled in 1985 for the upcoming T1 capability into NSFNet via BARRNet. A TDR run from my end, a small 5x8-foot closet under Bryant Street (99-year lease, $1/yr, I have my own electricity meter, they provide HVAC, halon, etc.), indicates that the complete run is actually still there. Seems they can't generate a remove order for wire that doesn't exist in the system ;-) Amazing what you can get away with in a reflective vest, hard hat, a white van full of well-used linesman's tools, and an official-looking clipboard ...
Thursday 3rd January 2013 12:17 GMT b166er
Thursday 3rd January 2013 12:21 GMT Anonymous Coward
Thursday 3rd January 2013 20:44 GMT Midnight
Friday 4th January 2013 02:54 GMT jake
That's just Usenet, and it was September of 1993, not March of 1994. AOL had email, ftp and other Internet services earlier than that. Not much earlier, but earlier. Search on "Eternal September" for more.
The Internet as we know it was born when the Delphi BBS managed to allow USAian consumer access to TheInternet[tm] (whatever that is) in early-mid 1992. I can't remember the exact date. Everybody blames AOL, but it was Delphi that started consumer Internet use. The folks who ran BIX are still kicking themselves for not following suit post-haste. I could tell you how I know that, but then I'd have to kill you ;-)
Thursday 3rd January 2013 12:28 GMT Anonymous Coward
Its the Internet, but not as we know it...
I think the story headline holds true for most El Reg readers, but unfortunately I don't think it holds true for the "Internet masses".
For me the Internet "essence" (to give it a name) has always been the fascination for that "awesome global network". And all CLI mind you. In the beginning it was using telnet to gain access to "digital cities" which were somewhat fun. Mostly Gopher based stuff, but still..
Later it was using Windows' Winsock and Netscape (the other alternative, Mosaic, wasn't as much fun). For me that was all using Win/OS2, and later (when I finally understood more about the way it worked) I even got OS/2 online. That was really nice.
But for me the real fun started when I finally got a good grasp of this "Unix" thing; I got sent out to a Sun Solaris course (which was the first Unix environment I fully learned, understood and grasped) and it didn't take me long to figure out that "Internet == Unix".
So when I started using Linux (ironically I only started using it to keep my Solaris knowledge fresh, man, did that take a change!) I also soon started messing with Linux to get my Internet access going at home. And that's where the real fun began.
For my parents the Internet started when I used to spend hours in the evening online (all using dial-up), but because I was using Linux I simply "shared" my connection with them as well. That was nice!
And then eventually we got ADSL. I "hacked" the modem/router to do bridging so that it wasn't the modem but my Linux box which would get the public IP address. That eventually led to hosting some websites on my own PC, and setting up a FreeS/WAN IPsec network with some of my IRC friends (Epic / Splitfire script FTW for me). That led to learning how DNS /really/ worked (root zones), and all of a sudden I could wake up on a nice Saturday morning, get an e-mail telling me about this new cool thing called "irssi", and simply go to (iirc, it's been years) 10.2.1.1, which put me on a US Linux box hosted by a good friend of mine :-)
I honestly don't remember the domain names we came up with. Something ending in ".irc" that's for sure 8-)
"If we have this vpn thing, why not try setup a tunnel to get lan data across? You know; GRE packets or some other global unused protocol"
Some friends even routed their netbios data over it (I didn't use Windows at all back then) so they could simply copy/paste stuff to each other.
That is Internet for me. But for the common masses? I don't think so...
And can you really blame them? Back then we hacked Linux to copy/paste our X509 keys, passwords, etc. all to setup the VPN. Nowadays I have a DrayTek modem/router on both my end as well as my parents end (both online using cable) and setting up the VPN only requires a few mouse clicks and some common understanding of what you're doing.
Opening up NetBIOS used to be some iptables hacking; now it's merely enabling an option.
How many people use Linux to really "hack" and setup a cool global network of their own using the Internet? Without using some kind of wizard I mean ;-)
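For anyone who never saw that "iptables hacking" era: opening NetBIOS/SMB to a trusted LAN looked something like the below. The interface name and subnet are placeholders for illustration, and a modern box would use nftables or, as noted above, a router checkbox instead:

```shell
# Allow NetBIOS name/datagram service and SMB, from the inside LAN only.
# eth1 and 192.168.1.0/24 are hypothetical; substitute your own.
iptables -A INPUT -i eth1 -s 192.168.1.0/24 -p udp --dport 137:138 -j ACCEPT
iptables -A INPUT -i eth1 -s 192.168.1.0/24 -p tcp --dport 139 -j ACCEPT
iptables -A INPUT -i eth1 -s 192.168.1.0/24 -p tcp --dport 445 -j ACCEPT
```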
Thursday 3rd January 2013 12:36 GMT Dodgy Geezer
Thursday 3rd January 2013 13:18 GMT Anonymous Coward
Thursday 3rd January 2013 13:30 GMT BossHog
Thursday 3rd January 2013 15:08 GMT rictay
We invent what we need
"...without TCP/IP we wouldn’t have the internet as we know it..." Not true. I was working in the computer/telecomms industries at the time (1980s) and there was a big effort in "convergence" of the two technologies. Also a big drive towards OSI - "Open Systems Interconnection" - where computers of differing manufacturers could 'talk' with one another. The Internet was in the very air we breathed, and if we didn't have TCP/IP then somebody else would have invented TCP/IP instead - we invent what we need.
Thursday 3rd January 2013 17:32 GMT asdf
Thursday 3rd January 2013 18:17 GMT Anonymous Coward
Re: We invent what we need
"Also a big drive towards OSI - "Open Systems Interconnection" where computers of differing manufacturers could 'talk' with one another."
In the late 1980s an OSI technical sub-committee meeting of the Big Twelve started by defining its mission of proving "inter-operability". We came up with - "the ability for computers to communicate successfully ...and do useful work". Then the committee chairwoman told us that another OSI committee was also debating the definition - and after 18 months were still no closer to a conclusion.
In recent years whenever TCP-IP connections closed in an ambiguous way it was always a reminder that OSI Transport had been much better at saying "what", "who" and "why".
One rarely hears Jack Houldsworth's pioneering name mentioned these days.
Thursday 3rd January 2013 16:54 GMT koolholio
Thursday 3rd January 2013 16:55 GMT Anonymous Coward
"Jacobson devised a congestion-avoidance algorithm to lower a computer's network data transfer speed and settle on a stable but slower connection rather than blindly flooding the network with packets."
Best of luck trying to do that today: four years in court arguing with MS, Apple, Google, etc. over who owns which bit of what part of which stack.
It was less "can do" back then and more "will do" in the good old days!
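The scheme in the quote above boils down to what's now called AIMD: additive increase, multiplicative decrease. A minimal sketch (window counted in segments; real TCP adds slow start, timeouts, fast retransmit and much more):

```python
def aimd_step(cwnd, loss_detected):
    """One round trip of Jacobson-style congestion avoidance:
    grow the window by one segment per RTT, halve it on loss."""
    if loss_detected:
        return max(1.0, cwnd / 2)  # multiplicative decrease, floor of 1 segment
    return cwnd + 1.0              # additive increase

# A 10-segment window grows slowly, then halves when the path drops a packet:
cwnd = 10.0
for loss in (False, False, True):
    cwnd = aimd_step(cwnd, loss)
# cwnd is now (10 + 1 + 1) / 2 = 6.0
```

The halving is what settles senders onto a "stable but slower" share of the path instead of flooding it.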
Thursday 3rd January 2013 17:13 GMT bed
Here in the UK...
Here in the UK, academia had also been playing with networks in the 1980s with the Joint Academic Network (JANET), using X.25 telecom links and a set of computer network protocols called “Coloured Books”. The name originated with each protocol being identified by the colour of the cover of its specification document. Confusingly, perhaps, the JANET naming convention was then the reverse of the Internet's; uk.ac.university, for example. TCP/IP and Internet naming conventions started to be adopted in the late 1980s, requiring various gateways, and were fully adopted after 1992. The joys of a 64K Kilostream link connection to JANET and the Internet. The innocence: no Access Control Lists on routers, telnet and ftp into anything from anywhere, until the script kiddies came along and made network security a career.
Thursday 3rd January 2013 17:59 GMT Not also known as SC
Thursday 3rd January 2013 18:38 GMT Anonymous Coward
Re: Barbed Wire fences
In my experience TCP-IP was usually resilient. However it was often difficult to get good throughput without a lot of tuning. The various TCP algorithms relied on "guesses" that congestion was happening.
For example - parallel link load-sharing that mis-sequenced the IP datagrams could cause a traffic stream to slow to a crawl in spite of plenty of bandwidth.
A common mistake was increasing TCP window sizes to give higher throughput, which merely caused the Delayed-ACK mechanism to depend entirely on "timeout" ACKs, and the traffic crawled.
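The window-tuning point above follows from a simple identity: TCP can keep at most one window of data in flight per round trip, so achievable throughput is bounded by window/RTT, whatever the link rate. A quick sanity check with made-up numbers:

```python
def max_throughput_bps(window_bytes, rtt_seconds):
    """Upper bound on TCP throughput: one full window per round trip."""
    return window_bytes * 8 / rtt_seconds

# The classic unscaled 64 KiB window over a 100 ms path:
mbps = max_throughput_bps(65535, 0.100) / 1e6
print(mbps)  # ~5.24 Mbit/s, however fat the pipe actually is
```

Which is why "just make the window bigger" was tempting, and why doing it without understanding the ACK clocking (as described above) backfired.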
Thursday 3rd January 2013 18:40 GMT Captain Save-a-ho
Thursday 3rd January 2013 18:55 GMT Herby
Re: Barbed Wire fences
Well, it does work over a variety of things. See RFC 1149, for instance. Sure, the delay is a bit long, but it was shown to work!
Yes, you can get telephones to work over barbed wire fences. Just make sure there are two strands and hook your EE-8 set between them and crank away. Dual purposes work great!
Friday 4th January 2013 01:07 GMT jake
Re: Barbed Wire fences
Yes, TCP/IP will work over *any* connection. A serial connection is just a Tx/Rx pair, after all ... and as a mentor of mine back when we were inventing TCP/IP used to put it, "Wire's wire!". Would take quite the line-driver to go any distance, though. I wonder if Gandalf sold one heavy duty enough. Bandwidth would suck the further you wanted to run it.
The party line on Noyo Hill (just outside Fort Bragg, CA) briefly ran over barbed wire when I were a nipper. A storm took out the telephone wire (trees down everywhere!), and the only wire long enough to repair a section of it available was barbed wire. It worked, but was noisy. Volcano Telephone Company (Calaveras County, CA) mentions barbed wire as a voice carrier here:
Thursday 3rd January 2013 18:36 GMT Anonymous Coward
Yes, yes, 30 years ago
Yes, yes, ARPANET, a 30 year old TCP/IP stack, and still a default MTU size of 1500 bytes or less.
Time to upgrade that old stinker of a transmission and connectivity protocol suite, wouldn't you think? In comparison, how many generations of telecommunication standards did we go through within the past 30 years (https://en.wikipedia.org/wiki/5G)? And we really could do with higher speeds and more security.
Get used to even less security over time: http://www.theregister.co.uk/2011/03/18/rsa_breach_leaks_securid_data/
Viruses' spread outruns their detection and containment: http://www.theregister.co.uk/2013/01/01/anti_virus_is_rubbish/
Thursday 3rd January 2013 19:34 GMT Anonymous IV
No mention of token ring?
I was always told (probably by IBM customer representatives) that Token Ring was a much more efficient protocol than Ethernet. What a pity that the implementation was so awful, with clunky Media Access Units and thick coaxial cabling. Ended up rather like BetaMax vs. VHS...
Thursday 3rd January 2013 20:19 GMT keith_w
Friday 4th January 2013 00:07 GMT Peter Gathercole
Re: No mention of token ring? @keith_w
You're confusing the physical MAC layer with IP.
Token Ring and Ethernet are comparable. IP can run over either, and many more physical networks as well. Although it does not directly follow the 7 layer OSI network model, it is a layered protocol (MAC, IP, TCP/UDP, application protocol), and provided it meets some basic requirements any physical layer can be used to transport IP.
Token Ring is exactly as routeable as Ethernet when running IP. Routing has nothing to do with the MAC layer, except in very simple protocols such as IPX or NetBIOS.
I have worked at numerous locations where there were multiple networks using Token Ring, Thinwire (10base2) Ethernet, Twisted pair (10baseT) Ethernet, ATM, FDDI and even SLIP and PPP all routed together using Layer 2 routers.
What makes Token Ring better than 10base2 or 10base5 bussed Ethernet is that it did not use CSMA/CD to arbitrate use of a network segment, so it works much better at higher utilisation rates. As soon as 10baseT switched Ethernets came along, that was no longer enough of an advantage, and Token Ring died.
If you look at network topologies, those with multiple tokens or a slotted ring (such as the Cambridge Ring) could carry much more data than Token Ring, but were more complex to set up.
If you had ever had to debug a token ring implemented with MAUs, when one system was running at the wrong speed and causing lost beacons (or beaconing), then you will be glad that TR eventually died!
Friday 4th January 2013 06:18 GMT jake
@ Peter (was: Re: No mention of token ring? @keith_w)
"I have worked at numerous locations where there were multiple networks using Token Ring, Thinwire (10base2) Ethernet, Twisted pair (10baseT) Ethernet, ATM, FDDI and even SLIP and PPP all routed together using Layer 2 routers."
Insert the proverbial newbie "me too!" here ...
"As soon as 10baseT switched Ethernets came along, that was no longer enough of an advantage, and Token Ring died."
Actually, thin-net Ethernet killed Token Ring long before switched Ethernet existed ... It was easier to install, cheaper (no IBM tax, non-proprietary BNC connectors, simple COAX), less bulky, and above all, it was pretty much implemented in hardware. Us BSD kids loved Ethernet, even in the thicknet "vampire tap" era ...
That said, I still have a contract for the maintenance of a largish FDDI "ring of trees" that I built for a local marketing chain in roughly 1994 which spans most of the Bay Area ... The silly thing refuses to die!
Monday 7th January 2013 18:14 GMT Peter Gathercole
Re: @ Peter (was: No mention of token ring? @keith_w)
It may have been that way in the US, but I was involved with a customer still installing new TR kit beyond Y2K. I admit it was mainly because the customer had a large investment in it, but when the organization split, the bit I went with dumped TR, and jumped straight to 100baseT.
In a lot of commercial organisations, being able to use a Premises Distribution System to organise your cabling for TR (and twisted-pair Ethernet, phone and RS232 terminal traffic) was a real benefit, and one that 10base2 thinwire Ethernet could not take advantage of. Thus Token Ring persisted.
I saw the benefit of a PDS when I saw 1MB/s AT&T StarLAN installed for the first time in the late '80s.
Wednesday 9th January 2013 08:50 GMT jake
@Peter (was: Re: @ Peter (was: No mention of token ring? @keith_w))
I feel sorry for you, re: customer still installing TR beyond Y2K ... Yes, I grok the heavy investment. Been there, done that, managed to convince most of 'em to drop TR before the mid-90s.
I was typing from Silly Con Valley, mid-late 1980s perspective. I was running Voice, Data, video & terminal traffic over 10base2, simultaneously, in 1985 (for small values of video, of course; think 2001 commercial digital cameras, if you weren't there in 1985). The Ethernet connected local boxen, T1 & T3 lines connected remote locations. Look up N.E.T. and ComDesign ... or ask anyone who was involved with IBM's internal network at the time. IBM, 3M, Sun and Cisco had a lot of our gear installed, world-wide.
Granted, if you didn't have "official" permission to do video over a given link & blocked the Boss's two-wire "hot line" conversation to his wife in Delaware, you got your ass chewed ;-)
TR capability, which was also provided by us, introduced too much latency "real time, in the field" for voice & video across several subnets. Worked great for telephone & terminal stuff.
StarLAN sucked from an infrastructure & maintenance perspective. By the late 1980s, they should have been pulling fiber while removing all that useless UTP copper. IMO, of course.
Anyway, have a homebrew on me, compadre. I suspect there is no one single right way of doing things, any more than there is any one single beverage of choice ... for which I am thankful. The world would be a very boring place otherwise.
Friday 4th January 2013 08:35 GMT Anonymous Coward
Re: No mention of token ring?
1. Token-ring and Ethernet are physical interconnects, not network protocols like IP. You are suffering from extreme layer confusion.
2. The 20% utilization for Ethernet is a furphy. (The actual figure was 1/e or 37%, but it's still a furphy)
This was based on queuing theory that assumed a random injection of packets.
In reality, virtually all real-world protocols relied on some sort of acknowledgement from the receiver that introduced a strong non-random element to the rate of packet injection. Together with the exponential backoff in the face of collisions, this meant that the utilization of Ethernet and token-ring were similar.
I remember being very puzzled when the original paper came out, as we were seeing 60%+ utilization on our Ethernet at the time, without any special effort.
Where token-ring was superior was fairness in sharing a heavily loaded bus, which was why the original FibreChannel was a token passing ring.
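For reference, the 1/e ceiling is the classic slotted-ALOHA-style result: with Poisson offered load G frames per slot, throughput is S = G·e^-G, maximized at G = 1. Whether or not that was the exact model in the paper being remembered here, its "random injection" assumption is precisely what the comment above says real, ACK-clocked traffic violates. The arithmetic:

```python
import math

def aloha_throughput(G):
    # Slotted-ALOHA model: a slot succeeds only if exactly one of the
    # Poisson(G) arrivals lands in it, so S = G * e^(-G).
    return G * math.exp(-G)

# Sweep the offered load; the peak sits at G = 1, giving 1/e ~ 0.368,
# the "37%" utilization figure cited in the comment above.
peak = max(aloha_throughput(g / 100) for g in range(1, 500))
```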
Friday 4th January 2013 03:14 GMT Anonymous Coward
Thursday 3rd January 2013 20:21 GMT Fred Goldstein
It was not the birth of the Internet
What happened on Flag Day was that NCP was turned off, except for hosts that were given permission to still use it, who got a few months' reprieve. But the irony is that before that, many users ran IP over NCP, in which case IP was an internetwork protocol, running atop a network protocol (NCP). After Flag Day, IP became the network protocol, and the Internet basically worked more like a catenet, flat rather than layered. Oops.
TCP/IP was a lab hack run amok. VJ's stopgap congestion hack (invented a year earlier at DEC, btw, which patched it into DECnet) was not a good solution, just a patch. But in the true IP style, it became holy writ. That's what's so funny about this: a lot of it was research projects that were never completed, which worked "well enough" but not really efficiently, so they remained in place. IPv6 is like the beer commercial in reverse: tastes worse and more filling. It's utter incompetence, proof that some people assume that "authority" is always correct even though it is obviously wrong.
Thursday 3rd January 2013 21:37 GMT John Smith 19
Friday 4th January 2013 06:33 GMT jake
Re: I doubt we'll know the *real* start of the Internet
The first pR0n was available between two different computers (probably via EBCDIC art) the first time a couple of teenage summer interns figured out there was another teenager on the other end of the link. But that wasn't the beginning of it ... See my post here:
Seriously. Think about it.
Thursday 3rd January 2013 22:10 GMT Donald Becker
OSI was a late-coming spoiler attempt
Don't put OSI into the same category as TCP/IP.
OSI and ATM were both primarily "spoiler" technologies. They were concocted and promoted by organizations that were far behind TCP/IP and Ethernet. The goal was not to introduce a better-designed network, but rather to press the reset button and have everyone start from scratch.
The OSI "layer" model remains only to classify protocols and describe products. That doesn't mean it ever helped design anything. We should remember the rest for what it was: an attempt to do evil by delaying progress.
ATM wasn't quite as bad. It was promoted by people who really did believe that the future was all about centralized control from central offices connecting you to centralized computers, for which you would be billed by a central billing service. You would dial up ("establish a circuit") an information service such as Compuserve, AOL or the Phone Company. That circuit would stay connected for the whole conversation ("session"), giving you fixed bandwidth billed in 6-second periods.
We are very fortunate that neither became the wide-area networking standard.
(I was fortunate enough to be at MIT in 1983, and to experience the extraordinary as a normal occurrence.)
Friday 4th January 2013 08:36 GMT Shaun Blagdon
Most of the comments on here are disrespectful and very self-absorbed in the writers' own personal experience of computing and the internet. A lot of them spew criticism of the TCP/IP protocol that was obviously learned after its invention. Saying "it's been downhill ever since", talking about how we should all be on IPv6?? It was never invented to handle that kind of scale; it was invented as a way of getting a few hundred computers networking, not the billions of devices it handles now. Can't we just celebrate the invention of a couple of very clever men who, simply and probably at the time unknowingly, took the world to a new level? Saying "the internet was born" on that day is incredibly disingenuous as well. If you want to go way back to the start of the internet, look at cave paintings; if you want to expand that, talk! Take it further: writing, and then the printing press. If you want to talk computers, look up Mr Babbage... Who discovered silicon is a useful semiconductor? Did anyone see that bloke Tim who featured in the Olympic Opening Ceremony? Let's just celebrate the invention of Vint Cerf and Robert Kahn, 30 years ago, and thank them for the step forward they showed us.
Friday 4th January 2013 09:35 GMT jake
"Most of the comments on here are disrespectful and very self-absorbed in the writers' own personal experience of computing and the internet."
Uh ... Duh? Are you new to TehIntraWebTubes[tm] in general?
Old Usenet suggestion: Read any given group for at least a month before posting.
Friday 4th January 2013 12:41 GMT rictay
re- ... the real start of the Internet? - a fest for pedants
Ah, a VERY debatable question as some old paper tape jocks might tell you that the email message format envelops the old "torn tape" message formats of the 1950s and 60s! Message starts or ends with "ZCZC" I seem to remember.
I certainly worked with global message switching in the 70s. In the 1960s we used 5-hole paper tape data transfer between data generator and computer with processed results by return.
So, some pedants could say we had the functionality, admittedly at 150 baud, of responsive network computing back in the 50s and 60s. On the other hand, if the physical transport layer doesn't matter, we could say that Inca messengers centuries ago, carrying information encoded on lengths of string, were also a form of responsive networking.
Oh gosh, look at the time, I must go shopping!
Friday 4th January 2013 19:33 GMT Donald Becker
A bit of misunderstood info about Token Ring above.
IBM used to market Token Ring as more efficient and more reliable than Ethernet. Their marketing talking points included a claim that Ethernet had a maximum of 37% utilization of maximum capacity. This was convenient when they were flogging 4Mbps TR against 10Mbps Ethernet.
They based this fraction on a flawed paper that modeled Ethernet as a CSMA network, ignoring the "/CD" part and modified pseudo-exponential backoff. IBM knew that this was bogus, and Ethernet users were seeing 98% utilization in real life, but it didn't stop IBM from loudly spreading FUD.
A second claim was that Ethernet was undependable and unreliable. It actually _relied_ on _collisions!_ to work, and the spec said that you could _throw away_ packets! Horrors! And you could never guarantee that a packet would be sent in a bounded period of time. But IBM failed to mention that it was just a difference in reliability and delay profiles. Losing a token in a TR network could be pretty common, and the result was massive disruption and delay. Even if you didn't lose the token, the "bounded latency" had such a high bound that it was mostly useless.
I'll tie this into the current discussion: there is a close analogy between Ethernet and TCP/IP. Both were cheap, over-provisioned packet-switched networks that only promised best-effort packet delivery. They supported high numbers of nodes, and had seemingly simple access and flow control rules that turned out to be surprisingly stable when scaled up.