PCI Express 3.0 spec sneaks out

The PCI-SIG - the organisation behind PCI Express - has quietly released the base spec for version 3.0 of the bus standard. PCIe 3.0 was originally due to be released in 2009, but in August of that year it was delayed until 2010 in order, it was said at the time, to ensure compatibility with PCIe 1.x and 2.0. Come January 2010, …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Alert

    All meat no potatoes?

    From the article: "PCIe 3.0 ups the bus's base clock to 8GHz from 2.0's 5GHz. A change to the bus's line coding system, from 8/10-bit to 128/130-bit, further increases the amount of data that can be pumped down the PCI pipe every second. So while PCIe 2.0 can do 5bn transfers a second, PCIe 3.0 can do 8bn"

    What I was really looking for was bandwidth, and according to these guys* it's 32 GB/s for an x16 link vs. the prior 16 GB/s... which IMO makes the whole PCIe-based SSD idea that much more attractive (although lower prices would be nice:). The back-of-the-envelope sums below bear those figures out.

    * http://www.trentontechnology.com/support-center/technical-information/resources/163-trenton-technology-pci-express-3-pcie-2
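    (My own arithmetic, not from the article or the link - a quick sketch assuming the quoted signalling rates and line codes, and ignoring protocol overhead:)

    # Raw per-direction PCIe bandwidth: lane rate x encoding efficiency x lanes.
    def pcie_gbytes_per_s(gt_per_s, payload_bits, total_bits, lanes=16):
        return gt_per_s * (payload_bits / total_bits) * lanes / 8  # 8 bits per byte

    print("PCIe 2.0 x16: %.1f GB/s" % pcie_gbytes_per_s(5, 8, 10))     # ~8.0 GB/s each way
    print("PCIe 3.0 x16: %.1f GB/s" % pcie_gbytes_per_s(8, 128, 130))  # ~15.8 GB/s each way
    # The oft-quoted 16 GB/s and 32 GB/s figures count both directions at once.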

    1. foxyshadis

      Only RAMdisk can use that much bandwidth

      And you're usually better off putting that in spare main memory slots, not on PCIe. A single SSD will be quite happy on PCIe 2.0 x1.

  2. Robert Hill
    FAIL

    Falling behind...

    OK, how long has PCIe 2.0 been with us? More than a few years, depending upon when you want to start counting (spec release, mobo availability, common in add-on cards, etc.). So here we have a spec that doubles the effective bandwidth - but only after several multiples of 18 months. Given Moore's Law, that means PCIe is effectively falling behind the rate of progress in computing power.

    The only bright spot is that more chipsets are moving critical communications links off the PCIe bus onto direct links - but then again, given the sluggish pace of PCIe development, what choice do they have? The killer is disk and other large storage interfaces - those with minor storage needs (desktop PCs) can use SATA interfaces embedded directly in the chipset, bypassing the PCIe bus. But those with larger storage needs (e.g., add-in RAID cards) have to plug into the PCIe bus - which is NOT keeping pace with either storage or compute development.

    The PCIe 3.0 spec needed to be at least a quadrupling of bandwidth - it offers half that. Perhaps it is time to consider dropping the entire "legacy compatibility" mantra and designing a new bus from the ground up for current and future needs. They have done it in the past - PCI to PCIe! - so perhaps it is time to do it again.

    1. Giles Jones Gold badge

      Cheapness

      Anything used by PCs has to be cheap, which is probably why progress is slow. It's why PCs use USB and not FireWire. It's why USB 3.0 still requires a lot of CPU intervention.

    2. Reg Sim

      Hmm not so sure....

      When I look back at the old AT/ISA crap, PCI was a big step, and considering how long PCI was with us before PCIe, PCIe 2.0 came quite fast by comparison; PCIe 3.0, whilst delayed, is still an improvement.

      PCIe has always been designed for the mass market, so unlike server-only PCI variants it needs backwards compatibility. I mean, even now we are only really starting to see motherboards without legacy PCI, even though PCIe has been out forever.

      And as you say, things that NEED to go faster can currently connect directly to the chipset; fast things are always bleeding edge. I consider PCIe a mass-market product - an evolution rather than a revolution.

      I suspect that after the next PCIe step we will see a new interface enter the fray, but it will be years after that before PCIe falls off motherboards. IMO

    3. foxyshadis

      What's going to use it?

      I think you're a little overly enthusiastic about just how much bandwidth most applications NEED; no component but the CPU has been speeding up that fast. Disks certainly haven't; graphics cards barely manage to use all of a 2.0 x16 link, let alone dual x16 links. This is coming down the pipe now because of 4-port 10GbE cards (and 40/100GbE), which need more than the x4 allotted to most server slots, so the incentive to git-er-done is finally there. Otherwise, main memory is the one and only consumer that could flood the bus, and no one is insane enough to put it on PCIe.
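
      (Rough numbers, mine not the post's, assuming full line-rate traffic on every port and ignoring protocol overhead:)

      # Quad-port 10GbE vs the PCIe slot it sits in, per direction of traffic.
      nic = 4 * 10 / 8                        # four 10GbE ports = 5 GB/s each way
      pcie2_x4 = 5 * (8 / 10) * 4 / 8         # PCIe 2.0, 8b/10b, 4 lanes  = 2.0 GB/s
      pcie2_x8 = pcie2_x4 * 2                 # 8 lanes = 4.0 GB/s, still short of the NIC
      pcie3_x8 = 8 * (128 / 130) * 8 / 8      # PCIe 3.0, 128b/130b, 8 lanes = ~7.9 GB/s
      print(nic, pcie2_x4, pcie2_x8, pcie3_x8)

      So even a 2.0 x8 slot can't feed such a card flat out; 3.0 finally gives the same slot some headroom.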

      When the real need comes, such as PCIe-over-fiber interconnects between server memory modules, you'll see the next generation come out quicker, but given that the current generation is on the edge of technical feasibility today, it'll be insanely expensive.

      Good luck finding a RAID of current SSDs that can flood an x4 RAID card.
