A teensy bit of an improvement over the once-state-of-the-art 10Mbit co-ax Ethernet.
Don't things move on?
The rising tide of pandemic-driven digitization has lifted all data center boats. From the largest cloud service providers and hyperscalers down to small to mid-sized managed service companies, data center components spending has been significant with business network gear reflecting serious, steady gains. The switching …
Paper tape was much better, IMNSHO. Especially if it got tangled and you had to use the entire stairwell to slowly and carefully untangle it. I saw that with my own eyes on two separate occasions.
Once around 1968 in the old Electrical Engineering building at Manchester University, where the ATLAS was installed on the top floor, and once around 1973 in a DEC building in Reading, during a PDP-11/45 training course. (Or maybe the latter was a DECtape that had come unreeled - too long ago to be sure.)
The old audio cassette tape drive gets my nod, thank you. At least you can re-use them to make a sweet-ass mix tape for your vintage '80s Pontiac Fiero.
Though you can draw flip-book animations on your old punch cards at least, and I once got a Christmas present that was wrapped with a band of old punch tape. (That was from my great uncle, who also annoyed my parents by teaching me to shoot rubber bands around corners while explaining how a punch card worked, bless him.)
Meanwhile, in the real world, most users still struggle with a few Mbps. If they're lucky. We have a relatively fast connection for our area (Northwest Vermont). Ookla speed test just run: Download 4.86 Mbps, Upload 0. I'm too lazy to figure out what contortions are required to get the upload test to run. A few folks no doubt do better. Rural users (and they really do exist) stuck with a plain old telephone line don't do anywhere near that well.
Which leaves us with the eternal questions:
1. What are the people who write these articles smoking?
2. Where can I get some of it?
It's not aimed at home users; it's for completely different worlds:
* As you get closer to the network backbone, network connections have to get faster. So an Internet provider who is aggregating multiple home network connections over a (redundant pair of) link(s) needs those links to be many times faster than any individual home connection. This applies to connections within their network, and to connections to their upstream Internet providers.
* In a datacenter that has a bunch of servers connected to top-of-rack switches, which are connected to intermediate switches, which are connected to the core, the top-of-rack to intermediate connections need to be many times faster than the server to top-of-rack connections, and the intermediate-to-core connections may need to be faster still. The servers will be communicating within the data center, so their speed is not limited by Internet speeds. Modern servers will be connected to the top-of-rack switches at at least 10Gbps, and perhaps more, so a rack of 40 servers is theoretically 400Gbps, though you may be able to get away with slower uplinks since the servers might not all be sending data at the same time (depending on your use-case).
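The rack arithmetic above boils down to an oversubscription ratio. A minimal sketch, with illustrative uplink counts (the 4 x 100Gb/s figure is an assumption, not from any particular switch):

```python
# Rough oversubscription arithmetic for the rack example above.
# All figures are illustrative, not taken from any real deployment.

def oversubscription(servers: int, server_gbps: float, uplink_gbps: float) -> float:
    """Ratio of worst-case southbound demand to total uplink capacity."""
    return (servers * server_gbps) / uplink_gbps

# 40 servers at 10 Gb/s each into a top-of-rack switch with 4 x 100 Gb/s uplinks:
ratio = oversubscription(40, 10, 4 * 100)
print(f"{ratio:.1f}:1")  # 1.0:1 -> full line rate, non-blocking

# Trim the uplinks to 2 x 100 Gb/s and you get 2:1, which is often acceptable
# precisely because the servers rarely all transmit flat out at once.
print(f"{oversubscription(40, 10, 2 * 100):.1f}:1")
```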
I was working on a bunch of servers in the Atlanta area a few weeks back and everything was really sluggish. Their routers wouldn't respond to pings, but traceroute -T was showing over 250ms average times, twice what I'd expect.
Link from home to ISP: about 7ms, which is better than average.
The hop over the pond to New York: also looking good at about 75ms.
New York to Atlanta over AT&T's backbone: EEEeeeks! WTF.
Times from there to the customer: OK.
Can't recall when I last saw that sort of congestion on an exchange to exchange part of the route.
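The diagnosis above is just per-hop subtraction: the culprit segment is the one whose RTT jumps far more than geography can explain. A toy sketch, with RTTs made up to match the anecdote rather than real measurements:

```python
# Toy per-hop analysis of the traceroute story above.
# RTT figures are invented to match the anecdote, not real data.
hops = [
    ("home -> ISP",                     7.0),
    ("ISP -> New York (transatlantic)", 75.0),
    ("New York -> Atlanta (backbone)",  255.0),
    ("Atlanta -> customer",             258.0),
]

prev = 0.0
for name, rtt in hops:
    segment = rtt - prev          # latency added by just this segment
    flag = "  <-- congestion?" if segment > 100 else ""
    print(f"{name:34s} rtt={rtt:6.1f}ms  segment={segment:6.1f}ms{flag}")
    prev = rtt
```

The transatlantic segment adds ~68ms, which is about right for the distance; the backbone segment adding ~180ms on a domestic route is the smoking gun.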
There's someone who could use more bandwidth on their core network.
IDK where the manufacturers aimed it, but this is the Register, so I'm not betting on what is or isn't in people's home labs.
I haven't hit a 100Gb switch yet, but I have more than ten 10G ports lit in my house. Until I need to run stuff to a different floor it can stay on twinax. Damn WiFi and the internet; I mostly care if my servers can talk to the rest of my servers. The cheapest cable data plan suits me fine, but my LAN is another story.
> Meanwhile, in the real world, most users still struggle with a few Mbps
(ETA: Ninja'ed at least twice while writing this ;) )
You do understand this is talking about LAN/datacentre speeds, right? Not commercial internet connection speeds let alone home user internet connection speeds?
I have 1Gbps between my computers in my home (LAN). And I'd like to upgrade to 10Gbps. What my internet (WAN) connection speed is is irrelevant to that.
And I bet you are using either several-hundred-megabit WLAN or gigabit LAN between your computer and your internet gateway (modem/router).
You have to remember that these higher speeds (10Gb/s+) generally aren't a single serial 10Gb/s: they're multiplexes of slower lanes (e.g. 4 x 2.5Gb/s).
This multiplexing usually* isn't a problem for interconnects, but for an edge connection it definitely becomes an issue, depending on how the multiplexing happens (per host, per port, per UDP/TCP flow, etc.).
* - I saw an HPC user happily buy some 100Gb/s networking only to complain that they couldn't saturate it. The problem was this multiplexing issue. Once they re-configured their software, voilà...
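The per-flow case is the one that bites: switches typically hash a flow's addresses and ports to pick one member lane, so a single flow can never run faster than one lane. A minimal sketch of that idea (lane count, CRC32 as the hash, and all addresses are illustrative assumptions, not any vendor's actual algorithm):

```python
# Sketch of per-flow lane hashing, as on a LAG/ECMP or multi-lane link.
# CRC32 stands in for whatever hash real silicon uses; LANES is illustrative.
import zlib

LANES = 4  # e.g. a link built from 4 lanes, each at a quarter of line rate

def pick_lane(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Hash the flow tuple to one lane. Same flow -> same lane, always."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % LANES

# One big flow is pinned to a single lane, so it caps at one lane's speed:
lane = pick_lane("10.0.0.1", "10.0.0.2", 40000, 5001)
print(f"single flow always on lane {lane}")

# Many parallel flows (here: varied source ports) spread across lanes and
# can fill the whole pipe -- which is what the HPC user's re-config did:
lanes_used = {pick_lane("10.0.0.1", "10.0.0.2", 40000 + i, 5001) for i in range(64)}
print(f"64 flows landed on lanes {sorted(lanes_used)}")
```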
:) Not a damn clue. Magnets?
Not so much a circuit-level rundown, but the broad strokes are at:
https://grouper.ieee.org/groups/802/3/100GEL/public/18_01/sun_100GEL_01b_0118.pdf
I imagine single-link 400Gb is a good few years out, but the 100s are up and running over short cables, so they are probably already sampling to "preferred customers" with budgets several orders of magnitude above mine.