Why you should start paying attention to CXL now

After more than three years of development, Compute Express Link (CXL) is nearly here. The long-awaited interconnect tech will make its debut alongside Intel's upcoming Sapphire Rapids and AMD's Genoa processor families, which means there's a good chance the next server you buy will support it. So …

  1. Doctor Syntax Silver badge

    "Just pop a CXL memory module into an empty PCIe 5.0 slot, and you're off to the races."

    Memory expansion boards. Everything old is new again.

    1. Anonymous Coward

      I have 4MB* free to play with.... so yeh old is the new new.

      * Android's garbage collector doesn't collect the garbage.

    2. Zippy´s Sausage Factory

      Thank you. I'm glad it's not just me getting PCMCIA flashbacks then.

      1. Doctor Syntax Silver badge

        PCMCIA? I remember S100 memory boards.

    3. Plest Silver badge
      Facepalm

      Yep! Next thing they'll be bringing back QEMM!!

  2. Anonymous Coward
    Anonymous Coward

    Imagine

    how many Chrome tabs you'll be able to have open with all that RAM. Maybe 30.

  3. DS999 Silver badge

    This will be adopted almost exclusively for VM farms

    Very few people will use it to expand memory in an individual server, because it is very rare to have already filled every DIMM slot with the largest available DIMMs. Sure, those 512GB DDR5 DIMMs cost a fortune, but most servers currently in use have DDR4 slots, and while the largest DDR4 DIMMs may have been out of reach on price when DDR4 was new, they aren't now.

    Being able to assign memory to individual VM hosts on the fly as needs dictate is the killer app for CXL memory. Without that need driving it, the capability never would have been developed.
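
    As a rough illustration of what "on the fly" could look like on the host side: a minimal sketch, assuming the pooled CXL capacity is surfaced to the Linux host as ordinary hot-pluggable memory blocks under /sys/devices/system/memory (the kernel's standard memory-hotplug interface). The sysfs paths are the stock kernel interface; whether a given CXL expander is exposed this way is an assumption made for the sake of the example.

      # Sketch: online any offline memory blocks exposed via Linux memory hotplug.
      # Assumption: the CXL-attached capacity appears as hot-pluggable blocks under
      # /sys/devices/system/memory; run as root on a host that actually has such blocks.
      from pathlib import Path

      MEMORY_SYSFS = Path("/sys/devices/system/memory")

      def online_offline_blocks() -> None:
          for block in sorted(MEMORY_SYSFS.glob("memory[0-9]*")):
              state_file = block / "state"
              if not state_file.exists():
                  continue
              if state_file.read_text().strip() == "offline":
                  # Writing "online" asks the kernel to hand this block to the
                  # page allocator, growing the host's usable RAM on the fly.
                  state_file.write_text("online")
                  print(f"onlined {block.name}")

      if __name__ == "__main__":
          online_offline_blocks()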

    1. J. Cook Silver badge

      Re: This will be adopted almost exclusively for VM farms

      Indeed, the largest cost of the servers we just bought was memory; it's nearly 60% of the cost of our reference configuration. It will be interesting to see where this goes.

    2. slimshady76

      Re: This will be adopted almost exclusively for VM farms

      This standard sounds like LVM or SVC for memory, and from a recent piece featured here:

      https://www.nextplatform.com/2020/09/03/the-memory-area-network-at-the-heart-of-ibms-power10/

      It looks like another main use would be pooling memory from several systems to share data while offloading computation to other physical nodes.

      It's more or less a standardisation of the one-off implementations of several memory-sharing strategies in the niche supercomputer market.

      I, for one, salute our memory socialization overlords.

  4. Corporate Scum

    There is a lot more here

    This layer ties together links that were tricky to extend. The memory pool has its uses, but the performance hit that comes with it will hurt that application harder than it will hurt extending the message-passing layers, tiered storage, and network I/O. It also seems to have some GPU-specific enhancements. All of these are of as much or more interest to those of us feeding I/O-bound applications and trying to pool resources between nodes.

    If the description of this new iteration lives up to the hype, I could build a box that could chew through a considerable number of packets. It will be nice to have something to work with that is actually designed to do this. We home-built a few things in years gone by, which were a PITA to support. Anyone else wondering what the inter-box links will look like?

  5. Hurn

    Does CXL run over anything other than PCIe?

    Given the short range of PCIe cables ("borrowed" [stolen] from IEEE/SAS specs), how are the PCIe-to-photonics / Fibre Channel interfaces coming along these days?

    If one wants a "real" mesh/network on a budget, one needs to support cables longer than 2 meters (metres).

    Trouble is, with current tech, one hits latency/delays going from PCIe to photons and back.

    CXL over PCIe over Ethernet (using encapsulation over RDMA/RoCE 3?) would suffer (even more) from latency.

  6. Danny 2

    Tim, nice but DIMM

    This article could be a joke / trap article for all I know. Sorry to admit that. Even the comments here are just over my head. Way over. I still like reading them and I don't want to keep polluting the comment section, but all I can add is that I once had to embed photos of SIMMs into Word documents to train my colleagues.

    I was not only unjustly made redundant, I am now wholly justly redundant. I didn't expect the IT industry to stop with me, I'm glad you didn't and I do salute your progress. Just didn't expect to be a technophobe at my age and am unsure what career to pursue now. Maybe typewriter tech, maybe PC gardener.

    I just watched '13 Lives' on Amazon - an English IT guy saves 13 lives. Sorry, spoiler alert. Tech angle: How much did paedo-guy Elon Musk pay to buy a US court? Just enough.

    I leave and heave a sigh and say goodbye

    Goodbye!

    I'm glad to go I cannot tell a lie

    I flit I float

    I fleetly flee I fly

    The sun has gone to bed and so must I

    1. Anonymous Coward

      Re: Tim, nice but DIMM

      Don't let redundancy get you down. However bleak things seem right now, these things have a tendency of working out for the best in the long run. All those cliques are cliques because they're based on truth: One door closes, a window opens. It's always darkest before dawn..... etc.

      1. Hero Protagonist
        Headmaster

        Cliques

        *Clichés

  7. Bitsminer Silver badge

    Goodbye Ethernet, I hardly knew ye

    ...instead of TCP and UDP over Ethernet, we're talking CXL running over PCIe.

    Really dude?

  8. MJB7

    Memory sharing

    There's no mention in this article of security. While I'm sure nobody in this day and age could be so dumb (*) as to completely ignore security, I look forward to a rich vein of CXL vulnerabilities over the next few years.

    *: I may be an optimist.

  9. herberts ghost

    Sorta sounds like Superdome

    One piece of data lacking is latency. If this technology were to be used for a database or similar application, what would be the implications of locking in a remote CXL node? High remote memory latency could cause a number of problems: how many parallel threads can I run before I am in lock hell? Processors typically have a fixed number of queue slots for outgoing writes; what are the implications of that when the targets are in high-latency CXL memory? What is the situation with error detection and correction on these CXL paths? Is there support for chipkill and other technologies that have proven important for large memory systems?

    I suspect the success of CXL will be workload dependent.
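
    To put a rough number on the lock question above: if the cache misses inside a critical section land in CXL-attached memory instead of local DRAM, the hold time (and therefore the ceiling on acquisitions of a contended lock) stretches roughly in proportion to the extra latency. A toy calculation; the latency and miss-count figures are assumed ballparks, not measurements.

      # Back-of-envelope: lock hold time when critical-section misses go to CXL.
      # All numbers are assumed ballparks, not measured data.
      LOCAL_DRAM_NS = 100     # assumed local DRAM miss latency
      CXL_NS = 300            # assumed CXL-attached miss latency
      MISSES = 8              # assumed cache misses inside the critical section
      OTHER_WORK_NS = 200     # assumed non-memory work while the lock is held

      def hold_ns(miss_latency_ns: float) -> float:
          return OTHER_WORK_NS + MISSES * miss_latency_ns

      for label, lat in (("local DRAM", LOCAL_DRAM_NS), ("CXL memory", CXL_NS)):
          hold = hold_ns(lat)
          # A single contended lock can be taken at most once per hold time.
          print(f"{label}: hold {hold:.0f} ns -> {1e9 / hold:,.0f} acquisitions/s")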

  10. anonymous boring coward Silver badge

    Pay attention? Why?

    Was that an ad?

    Zzzz...

  11. abufrejoval

    And how does a DOS like Linux know how to handle a dynamic heterogeneous fabric...

    ...when it's never even understood networks and treats GPUs like any other dumb device?

    Unix was born a DOS (disk operating system) and a God: sharing, delegation, coordination, submission, and social interaction with other instances are entirely alien concepts. Just see how accommodating it is when you pull some CPUs and RAM, or swap out GPUs! There is a reason VMs are much easier to migrate as a whole than in bits that match fluctuating resource demands.

    Yes, some cloud vendors will eventually be able to make that work with their kit; they already have so much bypass code that the Linux kernel is only used for boot I/O. But once scheduling has also gone the way of a library, the first memory to reclaim would be the Linux bootstrapper.

    Not that any other popular OS is naturally social....
