
R/F Design
There must be some serious R/F design going on with those link speeds.
The Peripheral Component Interconnect Special Interest Group (PCI-SIG) has revealed a roadmap for PCI 5.0 to debut in 2019 at 128GB/s. And that's before it finalises PCI 4.0 at half that speed. The SIG met last week for its annual DevCon. The basic message from the event is that I/O bandwidth needs to double every three years, …
Yeah, there is.
I was working for a company on 400Gbps Ethernet solutions last year. The 50Gbps transceivers used on the link (eight of them are bonded together) run PAM-4 modulation. :)
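To put rough numbers on that (my own back-of-the-envelope sketch, not exact standard figures: the real lanes run a little above 25 GBaud to cover FEC and framing overhead):

```python
# PAM-4 carries 2 bits per symbol, so the analog channel only has to handle
# ~25 GBaud per lane to move ~50Gbps, at the cost of tighter voltage margins.
BITS_PER_SYMBOL = 2       # PAM-4 = four voltage levels = 2 bits per symbol
SYMBOL_RATE = 25e9        # assumed nominal symbols/second per lane (baud)
LANES = 8                 # lanes bonded into one 400Gbps link

per_lane_bps = SYMBOL_RATE * BITS_PER_SYMBOL     # ~50 Gbps per lane
link_bps = per_lane_bps * LANES                  # ~400 Gbps aggregate
print(f"{per_lane_bps/1e9:.0f} Gbps per lane x {LANES} lanes = {link_bps/1e9:.0f} Gbps")
```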
I am wondering how long it will be before PCI-E requires optical fiber. It was thrown about as an idea for PCIe 4, but maybe that's where they will be going for PCIe 5.
PCI-E already uses symmetrical (differential) lines, which makes the problem a bit easier, but those lines are properly engineered HF transmission lines. As far as I know the lanes don't even have to be the same electrical length, since each lane carries its own embedded clock and the receiver deskews them. So though this is HF engineering, a lot has been done to make the problem simpler and therefore more solvable.
However the problem is much more complex with memory interfaces, as you have more lines and all of them need to have the same delay.
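A rough sketch of why that is (my own numbers: the effective dielectric constant of ~4 for FR-4 is an assumption, real boards vary):

```python
# How much skew a millimetre of trace-length mismatch costs, versus the bit
# time of a parallel memory bus. PCIe avoids this by deskewing each lane at
# the receiver (every lane carries an embedded clock).
C = 3.0e8                         # speed of light in vacuum, m/s
ER_EFF = 4.0                      # assumed effective dielectric constant (FR-4)
v = C / ER_EFF ** 0.5             # propagation speed along the trace, m/s

skew_per_mm = 1e-3 / v            # seconds of delay per mm of mismatch
ddr4_bit_time = 1 / 3.2e9         # DDR4-3200 transfers a bit every 312.5 ps

print(f"~{skew_per_mm * 1e12:.1f} ps of skew per mm of mismatch")
print(f"DDR4-3200 bit time: {ddr4_bit_time * 1e12:.1f} ps")
# A few mm of mismatch eats a big chunk of the timing budget, hence the
# snaking, length-matched memory traces on motherboards.
```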
Looks like more and more things are going to consolidate on PCI-E connections in a system, whether via an internal slot or high-speed external connections like USB-C. Storage is increasingly going that way, and as Optane starts to blur the lines between storage and memory, I wonder if DRAM will end up going that way too, once things get fast enough?
Well, external PCI-E on a PC is a very bad idea, as it gives an external attacker really easy (DMA) access to your PC.
However, PCI-E is already common in some more exotic places like routers. High-performance routers (the kind that need kilowatts just to drive the fans) already use PCI-E to connect the interfaces, via a fabric of PCI-E switches. This might even be the first area where we see those new interconnects.
Half of the article refers to PCI rather than PCIe.
Bandwidth won't be 128GB/s, it'll be 4GB/s per lane. 16x is just 16 lanes, but you could easily use 32 lanes and get 256GB/s.
The whole point of PCIe is that it uses lanes and you configure lanes in the mobo/chassis to your requirements. Network cards are usually not 16x, for instance.
Not really, as the slot design isn't long enough to make a 32-lane slot. GPUs are the traditional use case for something that needs all that bandwidth because the GPU chews through tons of data when running full out, so it's understood that the max bandwidth quoted is for a max-sized (16-lane) slot and provides a consistent metric. And while most network cards wouldn't need 16 lanes, an adapter for the emerging Ethernet standards probably would need it. Also, the trend in CPU and motherboard tech is to provide more of these lanes to accommodate more devices using them such as NVMe solid-state drives (these currently top out at 4x via the U.2 connector, but a future spec may expand this to improve performance).
And nobody ever made a dual-slot card, I suppose?
The point was that most systems have a mix of 4-, 8- and 16-lane slots in order to balance the number of devices against bandwidth to devices. A lane or two is usually taken up by motherboard functionality too, such as USB, so my point stands that PCIe 5 is 4GB/s per lane and can be aggregated, while the article said something quite different in order to pointlessly sensationalise a quite bland subject.
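For what it's worth, a quick sanity check of how the two figures relate, assuming PCIe 5.0 ends up at 32 GT/s per lane with 128b/130b encoding (double PCIe 4.0's 16 GT/s):

```python
# The "128GB/s" headline is an x16 slot counted in both directions at once;
# a single lane carries roughly 4GB/s each way.
GT_PER_SEC = 32e9                 # assumed PCIe 5.0 signalling rate per lane
ENCODING = 128 / 130              # 128b/130b line-code efficiency

lane_one_way = GT_PER_SEC * ENCODING / 8          # bytes/s, one direction
x16_one_way = 16 * lane_one_way
x16_both_ways = 2 * x16_one_way                   # lanes are full duplex

print(f"per lane, one direction: {lane_one_way / 1e9:.2f} GB/s")    # ~3.94
print(f"x16, one direction:      {x16_one_way / 1e9:.1f} GB/s")     # ~63
print(f"x16, both directions:    {x16_both_ways / 1e9:.1f} GB/s")   # ~126, the '128GB/s' headline
```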
It starts to become very clear why AMD is happy to just use PCIe links as the interconnect for its new server chips. Currently the design uses 64 PCIe 3.0 links to connect 2 processors, which is as far as their published designs go. With PCIe 5.0 they could greatly increase the overall interconnect speed, or allow a 4-processor design using 16 links* between each pair with the same overall performance as now. I guess PCIe 4.0 will provide a shorter-term compromise: 4 possible processors connected at a higher overall speed, but lower processor-to-processor speeds.
* A 4-processor system would only need 48 PCIe links (16 to each of the other 3 processors). This would also allow the possibility of hypercube designs (many processors not directly connected to each other) if AMD designs the correct protocols for cross-processor communications.
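A rough check of the 'same overall performance' claim (my arithmetic, using the commonly quoted per-lane rates and ignoring protocol overhead):

```python
# 64 Gen3 lanes between two sockets today vs 16 Gen5 lanes per socket pair.
PCIE3_LANE = 8e9 * (128 / 130) / 8      # ~0.99 GB/s per lane, one direction
PCIE5_LANE = 32e9 * (128 / 130) / 8     # ~3.94 GB/s per lane, one direction

today = 64 * PCIE3_LANE                 # current 2-socket interconnect
proposed = 16 * PCIE5_LANE              # 16 Gen5 lanes between each socket pair

print(f"64 x Gen3 lanes: {today / 1e9:.0f} GB/s each way")      # ~63 GB/s
print(f"16 x Gen5 lanes: {proposed / 1e9:.0f} GB/s each way")   # ~63 GB/s
# So a 4-socket design at 16 lanes per pair matches today's 2-socket link.
```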