SmartNICs, IPUs, DPUs de-hyped: Why and how cloud giants are offloading work from server CPUs

The recent announcements from Intel about Infrastructure Processing Units (IPUs) have prompted us to revisit the topic of how functionality is partitioned in a computing system. As we noted in an earlier piece, The Accidental SmartNIC, there is at least thirty years’ history of trying to decide how much one should offload from …

  1. MikeLivingstone

    This is to replace soon to die peripherals like GPUs

    NVIDIA kicked off this trend with their DPU, and it makes sense for Intel and certainly Cisco to join in. It makes sense at a workload level, as this is an increasing DC overhead workload, but it also makes sense from a business-continuation standpoint for NVIDIA. It is no secret, but GPUs probably won't exist ten years from now, as really we do not need bigger monitors at higher resolution. Maybe 8K in some cases, but my 32-inch 4K is already more than I need and can usefully see; at 8K it would either have to be 64" or the pixels would become smaller, giving no benefit. GPUs will just get built into CPUs again, hence the interest in ARM and why NVIDIA needs DPUs. AI won't cut it for NVIDIA: the specialist vendors have superior technology and GPUs fail at scale.

    1. A Non e-mouse Silver badge

      Re: This is to replace soon to die peripherals like GPUs

      ...but GPUs probably won't exist ten years from now, as really we do not need bigger monitors at higher resolution.

      I thought the next big thing in graphics was real-time ray tracing - which requires an order of magnitude more GPU performance at high resolutions.

      1. jeffty

        Re: This is to replace soon to die peripherals like GPUs

        It's not so much a question of resolution as a question of use.

        GPUs on the same die as an x64 CPU have existed for over a decade - Intel and AMD have both produced CPUs (IGT/APU respectively) with this functionality. Integrated graphics (on the same motherboard) go back even further. Neither has replaced standalone cards.

        For everyday use (where they're just used as a display output at 1080p or 4K) they do the job nicely with no additional hardware needed, but for specialist use (gaming, animation, rendering, etc.) they lack the raw grunt or features needed, and a standalone card (or multiple cards, depending on your use case) is still the way to go.

    2. Anonymous Coward
      Anonymous Coward

      Re: This is to replace soon to die peripherals like GPUs

      "GPUs will just get built into CPUs again..."

      The trend is to separate logic, not to integrate. Makes sense for specialization, but this so-called "DPU" is really just a low-powered AIO CPU or a co-processor of sorts. Honestly, after reading a bit on these DPUs, I don't see how they are new, or anything other than putting a CPU where a CPU once wasn't. The description of them uses SoC, and if a DPU is just a SoC with more baked in than the previous SoC, is it not still a SoC?

      As far as Nvidia (of all companies) making GPU'$ obsolete... come on man.

  2. Warm Braw

    Welcome back, the Front End Processor

    There has always been a cost/performance trade-off between general-purpose and dedicated hardware. Do you want your costly compute hardware constantly servicing interrupts from teletypes and paper tape readers, or can you use some lower-cost logic to mitigate the overhead? Does your disk driver need to understand optimal command queuing, or can you leave it to the disk hardware?

    Data-centre hardware is a very different use case to traditional desktop computing. Clearly you don't need a GPU (at least not for display purposes). But exactly how much of a traditional general-purpose CPU do you actually need when you have very specific roles? Do you need all the elaborate virtual memory support? How much virtualisation is required? Would you be better off with multiple simpler individual CPUs than a small number of complex CPUs that can be virtualised as many? And when it comes to I/O, do you really want a "Smart NIC", or would you be better off using different network protocols than those that require the twiddling of unaligned bytes?

    Clearly manufacturers such as Intel have an interest in promoting dependency on new features, but the shots are ultimately going to be called by the bitbarn barons. I think there is a reasonable chance we're going to see something more like a "dumb CPU" rather than a "Smart NIC": the CPU simply executes the workload, not the "overhead", and perhaps has some sort of OOB configuration plane used by a remote "hypervisor" to set up the CPU for a specific job, with much of what now constitutes low-level I/O abstracted at a much higher level over optimised buses rather than traditional network interconnects.

  3. Steve Channell
    Happy

    Intel playing a 3-Com strategy

    Aside from mainframe front-end processors, the first to embed logic into a network card was 3-Com in the 1980s, but the cards failed to keep pace with the increasing speed of CPUs and achieved negligible market penetration. The reason to mention it is that 3-Com's apparent technology advantage discouraged other vendors from entering the Ethernet card business, allowing 3-Com to dominate the market with their dumb cards, at very high margins. Intel's strategy would seem to be to prevent other vendors from dominating the market.

    Discrete GPUs may not have a future for regular desktop graphics (browsing, word-processing, spreadsheets), but high-performance ray-tracing will still require powerful GPUs for the foreseeable future. NVIDIA is, however, currently disadvantaged by the time it is taking for games software to move to ray-tracing for 4K and 8K games.

    General-purpose GPUs will always have a place for AI and simulation.

    Offloading TLS encryption to a SmartNIC/DPU is the best way to reduce CPU load, but additional functions (RDMA, block storage) will require cooperation with OS providers - something NVIDIA can do while it waits for ray-tracing games to catch on.
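
    For a sense of what that offload looks like from the host side, here is a minimal sketch using Linux kernel TLS (kTLS), which is one common route to NIC TLS offload: the handshake still happens in userspace, and only the negotiated record keys are pushed down to the kernel - and, where the NIC driver advertises a TLS offload feature (e.g. the tls-hw-tx-offload flag shown by ethtool), down to the NIC itself. The key, IV, salt and record-sequence values are whatever the handshake produced, and the helper name enable_ktls_tx() is illustrative, not a real API.

      /*
       * Sketch: hand TLS 1.2 AES-GCM record encryption for an established
       * connection to the kernel (and to the NIC, if it supports TLS offload).
       * The handshake is assumed to have been completed in userspace, e.g.
       * with OpenSSL; key/iv/salt/rec_seq are the values negotiated there.
       */
      #include <string.h>
      #include <sys/socket.h>
      #include <netinet/tcp.h>
      #include <linux/tls.h>

      int enable_ktls_tx(int sock,
                         const unsigned char key[TLS_CIPHER_AES_GCM_128_KEY_SIZE],
                         const unsigned char iv[TLS_CIPHER_AES_GCM_128_IV_SIZE],
                         const unsigned char salt[TLS_CIPHER_AES_GCM_128_SALT_SIZE],
                         const unsigned char rec_seq[TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE])
      {
          struct tls12_crypto_info_aes_gcm_128 crypto = {0};

          crypto.info.version = TLS_1_2_VERSION;
          crypto.info.cipher_type = TLS_CIPHER_AES_GCM_128;
          memcpy(crypto.key, key, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
          memcpy(crypto.iv, iv, TLS_CIPHER_AES_GCM_128_IV_SIZE);
          memcpy(crypto.salt, salt, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
          memcpy(crypto.rec_seq, rec_seq, TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);

          /* Attach the kernel TLS upper-layer protocol to the TCP socket. */
          if (setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls")) < 0)
              return -1;

          /* Install the transmit keys: from here on, plain send()/write()
           * data is encrypted below the application rather than by it. */
          if (setsockopt(sock, SOL_TLS, TLS_TX, &crypto, sizeof(crypto)) < 0)
              return -1;

          return 0;
      }

    If the NIC or its driver does not support TLS offload, the same calls still work - the kernel simply does the AES-GCM on the host CPU - so the application code path is identical either way.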

  4. dave 93

    Crypto key generation

    Generating session keys on high volume servers has been 'offloaded' to a dedicated HSM for years...
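
    In software terms that offload usually shows up as PKCS#11 (Cryptoki): the application asks the HSM to generate the key and only ever gets back an opaque handle, so the key material never has to cross into host memory. Below is a rough sketch under those assumptions, using a vendor-supplied Cryptoki header and an already opened, logged-in session; error handling and the surrounding C_Initialize/C_OpenSession/C_Login calls are omitted, and the function name generate_session_key() is purely illustrative.

      /*
       * Sketch: generate a 256-bit AES session key inside an HSM via PKCS#11.
       * "cryptoki.h" stands in for whatever header the vendor SDK ships, and
       * calling C_GenerateKey() directly assumes linking against the vendor
       * module (real code usually goes through C_GetFunctionList()).
       */
      #include "cryptoki.h"

      CK_OBJECT_HANDLE generate_session_key(CK_SESSION_HANDLE session)
      {
          CK_MECHANISM mech = { CKM_AES_KEY_GEN, NULL_PTR, 0 };
          CK_ULONG key_len = 32;                /* 256-bit AES */
          CK_BBOOL yes = CK_TRUE, no = CK_FALSE;
          CK_ATTRIBUTE tmpl[] = {
              { CKA_VALUE_LEN,   &key_len, sizeof(key_len) },
              { CKA_ENCRYPT,     &yes,     sizeof(yes) },
              { CKA_DECRYPT,     &yes,     sizeof(yes) },
              { CKA_TOKEN,       &no,      sizeof(no) },  /* session object, not persisted */
              { CKA_EXTRACTABLE, &no,      sizeof(no) },  /* key never leaves the HSM */
          };
          CK_OBJECT_HANDLE key = CK_INVALID_HANDLE;

          /* The HSM generates the key; the caller only receives a handle. */
          C_GenerateKey(session, &mech, tmpl, sizeof(tmpl) / sizeof(tmpl[0]), &key);
          return key;
      }

    Whether the "HSM" is a PCIe card, a network appliance or a function block on a DPU, the calling code looks much the same; only the module behind the PKCS#11 interface changes.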
