The network is indeed trying to become the computer

Moore's Law has run out of gas, and AI workloads need massive amounts of parallel compute with high-bandwidth memory right next to it – both of which have become terribly expensive. If it weren't for this situation, the beancounters of the world might be complaining about the cost of networking in the datacenter. But luckily – …

  1. Anonymous Coward

    In the end, the network WILL be the computer

    And all costs will move to the network.

    And then back again to the device, as network traffic is rather energy hungry, and energy and latency are what ultimately have to be brought under control. It will be some time before energy is really "free" as in beer, and the speed of light will not change in the coming years.

    As has been said before, it is a pendulum.

  2. MatthewF

    "The Network is the Computer" memories... Sun Microsystems

    Ah, the golden days of Sun Microsystems at the turn of the century. Their advertising slogan at the time: "The Network is the Computer".

  3. HuBo Silver badge

    Way to go

    Yeah, Intel incorporated FPUs into its CPUs (486DX, 1989); AMD integrated the IO-MMU in there (2007); they (and competitors) later integrated many cores in-package, I/O controllers, GPUs, vector units (plus matrix units and NPUs), and to some extent NoCs ... the trend has definitely been towards combining ever more functionality that was previously realized through distinct external units into increasingly sophisticated integrated packages.

    NoCs and in-package networking (with DPUs, SerDes galore, and associative memory) sure sound like the next thing to integrate, especially with CPO, to deal more seamlessly with both the increasing number of in-package cores and the many similarly packaged external units in a system, plus myriad other devices (pooled memory, storage, etc.), imho.

    It seems to me that the current rack-scale approach, where scale-up requires external switches, could largely do away with those by using in-package networking chiplets, a bit like the last three diagrams here (based on POWER9⁴). And if IBM was using POWER10 CPUs as switches three years ago, I'd imagine that similar tech could readily be integrated via chiplet(s) into a contemporary, highly modular CPU/GPU/switch combo device that is easy and economical to deploy and scale.

    The perf-per-watt efficiency gains of going rack-scale could see a further uplift with this sort of tech, I think ... (maybe?)

    ⁴ – still waiting anxiously for POWER11, and now POWER12 too!

  4. Paul Hovnanian Silver badge

    Don't care

    It's just a bunch of hand-wringing by the bean counters. Whether it's all in one server blade or running across a data center on Fibre Channel to the storage racks over yonder, it's just a scaled-up version of the buses and interconnects inside the beige box under my desk. Or the Ethernet cabling I use to put X clients over there and displays on various desktops around my house. I paid for the whole kit, I installed it, and I maintain it.

    The only time "the network" becomes a problem for me is when I am obliged to pay rent for continued use of it to get my work done.

  5. Anonymous Coward

    Would Moore's Law be dead...

    ...if we spent engineering time and money on serious cooling solutions instead of shrinking process nodes? We seem to be focusing hard on air and water...both pretty good at carrying heat, but not the most efficient.

    1. druck Silver badge

      Re: Would Moore's Law be dead...

      Yes, liquid sodium is the way to go.
