
I am of the opinion that there should be a levy put on the use of such headlines.
PCI-SIG, the industry consortium that oversees the Peripheral Component Interconnect Express specification, rolled out PCIe 5.0 on Wednesday, promising speedier data transfers between connected computer components. PCIe is a serial bus that connects peripherals like graphics cards, network cards and storage hardware to core …
Even more pedantically, 'bus' in the sense of the ones found inside PCs (and elsewhere) may not originate from omnibus. There is an electrical 'bus' (also 'buss') which is a shortened form of 'busbar'; the first computing 'bus' reference listed on the OED page was written 'buss', suggesting that the computer bus derives from 'busbar', not 'omnibus'.
https://en.wikipedia.org/wiki/Busbar, because Wikipedia is true.
"
The term busbar is derived from the Latin word omnibus, which translates into English as "for all", indicating that a busbar carries all of the currents in a particular system.
"
It borrowed the use of (omni)bus for the same reason it was used in busbar.
Contrast with the use of the -gate ending, borrowed from Watergate.
> Contrast with the use of the -gate ending, borrowed from Watergate.
Don't you mean Watergate-gate?
> but can the rest of your system (e.g. CPU & Memory) actually keep up with data flowing that quickly?
Well, change the numbers and this was the situation back in the '70s and '80s. There was a reason why the PC had DMA, and why PCs with NICs that had on-board CPUs and DMA were notably 'faster' than those with more basic NICs that did all the protocol processing on the PC's CPU.
...hence anonymity.
But 32 GT/s on a single data line sounds to me (naively) equivalent to 32 Gb/s.
x16 sounds to me like 2 bytes per transfer.
So I get to 64 GB/s. That means the author is wrong (but none of the first 11 commentards have noticed) or I'm missing a crucial component of understanding.
I'm going to go with the odds and ask someone to enlighten me.
I'm going to publicly display my ignorance by replying while logged in.
32 GT/s means giga-transfers per second in each direction, so a smidgen under 32 Gbps each way; let's round the two directions together up to 64 Gbps per lane.
I think the x16 is the number of lanes, so total throughput would be 16 times 64 Gbps, which is 128 GB/s (the bits-to-bytes conversion is subtly hidden in there).
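For anyone who wants to check the sums, here's a quick back-of-the-envelope sketch in Python (raw figures only; the small 128b/130b encoding overhead behind that "smidgen under" is ignored):

```python
# Back-of-the-envelope PCIe 5.0 x16 bandwidth (encoding overhead ignored).
gt_per_lane = 32        # giga-transfers/s per lane, per direction
lanes = 16
directions = 2          # PCIe links are full duplex

# One transfer moves one bit per lane, so GT/s is roughly Gb/s per lane.
total_gbit = gt_per_lane * directions * lanes   # 1024 Gb/s
total_gbyte = total_gbit / 8                    # 128 GB/s

print(f"{total_gbyte:.0f} GB/s aggregate, both directions combined")
```

So the author's 128 GB/s and your 64 GB/s are both right; they just differ on whether you count one direction or both.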
What really matters is whether it's supported by your CPU, motherboard, storage and graphics card, and whether it delivers real-world performance benefits as a result. Unless you're working in hardware design it's really all just a distraction; look at the real-world benchmarks and base purchasing decisions on those.
I'm known to break out into the song on occasion whilst driving about with the stereo cranked up and the windows down on a nice day. But wrong photo. Should be apple in this case. Though I do love me a nice blueberry pie, that looks more like a blueberry crumble.
Many bytes! Moar Bandwidth!
"Since v4 is only just shipping; this means the rule about tech being obsolete as soon as it is shipped is holding true."
v4 was approved in 2017 and is only just beginning to ship in PCs, and the reality is that outside of storage and very high speed interconnects/network adaptors, the requirements aren't there. Realistically, v5 is a similar length of time away from commercially available products such as motherboards and 400Gbps NICs.
Look outside of the PC market to things like blade servers, network switches and other devices that provide high speed access to shared resources and you will find the real drivers for these speed increases.
Looking at AMD's X570 motherboards, which support PCIe 4.x, they provide PCIe4 x8 connectors versus the previous generation's PCIe3 x16 connectors, as they don't believe that cards requiring more bandwidth are likely and that the PCIe lanes are better spent elsewhere.
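That trade-off is easier to see with numbers. A minimal sketch, assuming the per-lane transfer rates from the spec (8/16/32 GT/s for gens 3/4/5) and ignoring encoding overhead:

```python
# Approximate raw one-direction bandwidth per PCIe generation.
GT_PER_LANE = {3: 8, 4: 16, 5: 32}  # giga-transfers/s per lane

def link_gbps(gen: int, lanes: int) -> int:
    """Raw Gb/s in one direction for a given generation and lane count."""
    return GT_PER_LANE[gen] * lanes

print(link_gbps(3, 16))  # 128 Gb/s
print(link_gbps(4, 8))   # 128 Gb/s -- the same pipe from half the lanes
```

Same headline bandwidth, with eight lanes freed up for NVMe or whatever else.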
Intel isn't even going to have PCIe 4.0 until next year; they might as well skip it entirely and go straight to 5.0. Seriously, if they already prepared things for 5.0 with the 4.0 spec, why didn't they just call 4.0 a 4x gain instead of 2x and skip what we now call 4.0 entirely?
Probably because there's PCIe 4.0 stuff in the wild now? Just because your average gamer doesn't know about it or want it doesn't mean it's not used in industry.
E.g.
A motherboard with three 16-lane PCIe 4.0 slots that's been shipping for well over a year now: https://www.raptorcs.com/content/TL2MB1/intro.html
And the kind of card you plug into it, also shipping for quite some time already: http://www.mellanox.com/page/products_dyn?product_family=266&mtag=connectx_6_en_card
Or this: https://www.tech-critter.com/gigabyte-unveils-aorus-gen-4-nvme-aic-8tb-ssd/
Not exactly something you'd have in your bedroom as a teenager, but quite cool anyway.
One last comment: AMD wasn't first to this party. They're quite late, despite what they say for marketing reasons; it's complete BS, but AMD learned quite well from Intel. Copy the Management Engine and call it the "Platform Security Processor", then disclaim any responsibility for bugs in it. Lie like a rug if your product lags in any way. Make empty noises about doing things people want (like making the PSP optional), then quietly back away from those statements once enough chips are sold on that marketing-fuelled rumour and the resultant furore dies down. Work with OEMs to restrict supply of any technology that makes yours look old (like PCIe 4.0 SSDs when your chip is stuck on PCIe 3.0).
Old tricks from Intel's playbook.
Why PCIe 4? Because Intel lacks PCIe 3 lanes if you are using a lot of devices, particularly fast storage.
The mobile platforms had 12 lanes, which was bumped to 16 last year. Take out 4 for Optane, 4 for Thunderbolt/USB4 and 4 for a GPU, and your non-Optane storage sucks (the budget is tallied in the sketch below). Remove the GPU and add multi-gig WiFi or Ethernet and you have similar issues. Even on desktops with 24 lanes, you struggle if you add a fast GPU.
While I accept PCIe 3 has the bandwidth to meet the needs of everything but NVMe, existing solutions don't provide the number of lanes required. Doubling the bandwidth halves your traces/repeaters while offering similar performance. IO might be AMD's killer advantage in 2019...
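To make the squeeze concrete, a toy tally of the 16-lane budget described above (the allocations are this comment's example, not any real chipset's datasheet):

```python
# Hypothetical lane budget on a 16-lane mobile platform (example figures only).
TOTAL_LANES = 16
allocations = {"Optane": 4, "Thunderbolt/USB4": 4, "GPU": 4}

remaining = TOTAL_LANES - sum(allocations.values())
print(f"Lanes left for everything else: {remaining}")  # -> 4
```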
Security could well be very important too.
And with the Orwellian double-speak branded "Platform Security Processor", completely out of AMD's reach...
I for one don't need a digital nanny in my computers that I can't control, replace, or remove*. It's the Intel Management Engine all over again; it just hasn't been targeted as heavily for CVE hunting and zero-day technical analysis yet. I'd take anything other than Intel and AMD at this point; they're two sides of the same privacy-invading, DRM-shoving coin.
* From Wikipedia (https://en.wikipedia.org/wiki/AMD_Platform_Security_Processor):
"its functions include managing the boot process ... and monitoring the system for any suspicious activity or events and implementing an appropriate response". Given that this is firmware, not hardware, and could be regionalized by fiat, try the following substitutions:
"Managing the boot process" -- "Only allowing (regionally backdoored?) Windows 10 or (regionally backdoored?) specially approved, prebuilt Linux kernels to boot"
"Monitoring the system for any suspicious activity or events" -- "Detecting unauthorized open-source encryption routines and sending the keys to a central server, temporarily storing said keys in the BIOS Flash chip if needed."
Scared yet? No wonder China was so keen on using AMD's technology at one point, though they also seem to have since realized what a bad idea this is for state computers and are now pushing for wholly domestic chips that don't use any Intel or AMD technology...