IBM, Intel tease 2020's specialist chips: Power9 'bandwidth beast' – and Spring Crest Nervana neural-net processor

At the Hot Chips symposium in Silicon Valley on Monday, IBM and Intel each revealed a few more details about some upcoming processors of theirs. From Big Blue, an addition to the Power9 family, dubbed a "bandwidth beast," and from Chipzilla, a Nervana neural-network processor code-named Spring Crest.

  1. ArrZarr Silver badge

    From what i understood, the Cerebras chip basically wants to be as close to a brain as possible, but could somebody clarify whether a transistor is equivalent to a neuron, a core is equivalent to a neuron or whether it's not that simple?

    1. Anonymous Coward
      Anonymous Coward

      It would take a lot of transistors to fully model a neuron even if we knew exactly how neurons (and synapses, etc.) work, which we don't. But nowhere near a core's worth. So the answer to your questions is no, no, and yes.

      1. ArrZarr Silver badge

        Thanks for the answer. I guess if it were easy then it would have been done already.

      2. Jaybus

        Cerebras, like Nervana, really functions very differently from a biological neural net. These are clusters of vector processors optimized for performing many matrix operations in parallel. The real trick is the intra-chip networking, with all compute clusters interconnected by a crossbar switch and extremely high intra-chip bandwidth.

        It isn't really designed to be brain-like. There are, however, neuromorphic chips that are designed to model or approximate a neural network, such as TrueNorth, Loihi, and whatever is used in SpiNNaker. These are likely the future of actually running already-trained ANNs (inference) in devices, since they will use far less power.
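        As a rough illustration of why matrix operations parallelize so well across many small compute clusters, here is a minimal sketch in plain Python. The row-tiling scheme is an illustrative assumption, not a description of Cerebras's actual design; the point is only that each tile is an independent work unit.

```python
# Minimal sketch: a matrix multiply split into independent row tiles,
# the way an array of vector-processor clusters could each take a tile.
# Plain Python stands in for the hardware; the tile size is arbitrary.

def matmul_tile(a_rows, b):
    # One "cluster" computes its assigned rows of the product.
    cols = range(len(b[0]))
    return [[sum(r[k] * b[k][j] for k in range(len(b))) for j in cols]
            for r in a_rows]

def tiled_matmul(a, b, tile=2):
    # Split A's rows into tiles; each tile shares no state with the
    # others, so in hardware the tiles could all run in parallel.
    result = []
    for i in range(0, len(a), tile):
        result.extend(matmul_tile(a[i:i + tile], b))
    return result

a = [[1, 2], [3, 4], [5, 6], [7, 8]]
b = [[1, 0], [0, 1]]
print(tiled_matmul(a, b))  # b is the identity, so this returns a unchanged
```

        In real hardware the win comes from keeping each tile's operands in memory local to its cluster, which is where the intra-chip bandwidth mentioned above matters.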

  2. _LC_

    Same problem as Intel

    Stuck on 14nm until Anno Domini 202X, they won't be exciting anyone.

    1. Anonymous Coward
      Anonymous Coward

      Re: Same problem as Intel

      For POWER? IBM are hitting their roadmaps (*cough* SPARC and Itanium *cough*), and they have the challenge of moving from GF to Samsung given GF's dropping of new process nodes.

      Given that IBM already have good CPU scaling stories, increasing IO and memory bandwidth does make for a significant performance increase when combined with custom accelerators. And given the relative market share, the argument for avoiding high-risk designs on unproven process nodes with their customer base (i.e. low volume, cyclical purchases) is valid.

      And it's entirely likely POWER10 will arrive before Intel have a competitive process node (i.e. 7nm, given their issues with 10nm).

  3. fredesmite

    Too bad no one uses Power9

    Except IBM

    1. John Savard

      Re: Too bad no one uses Power9

      What's really too bad is that not just anyone can make x86 chips. So Oracle has to make do with SPARC, IBM has to make do with PowerPC, and so on and so forth, and there isn't a fully competitive microprocessor market, with only AMD permitted to compete with Intel.

      Is the world going to switch from Microsoft Windows running on x86 to Linux running on RISC-V? Not any time soon, I wouldn't think.

      1. returnofthemus

        Is the world going to switch from Microsoft Windows...

        Depending on the world that you live in, I'd say that it already has. The cloud was built on Linux, and Linux is ISA-neutral: you no longer have to choose the architecture for your applications, you can run them on the best architecture.

        1. Paul A. Clayton

          Re: Is the world going to switch from Microsoft Windows...

          Linux is not ISA-neutral. E.g., it effectively requires caches to support virtual address aliasing. It is also strongly oriented toward a ring-based permission system and linking translation and permission. The application environment for Linux is certainly not oriented toward use of capabilities (i.e., passing permissions in a fat pointer).

          1. returnofthemus

            Linux is not ISA-neutral...


            Let's examine this one.

            Introduced on the Mainframe in 1999 with a growing MIPS count, ported to POWER in 2000, with little-endian support added via the OpenPOWER Foundation in 2014. Then you've got Fedora SPARC for... well, you've probably guessed it already?

            Notwithstanding all those ARM-based so-called smart devices.

            So you've got Linux on the Mainframe, Linux on POWER Systems, Linux on SPARC and Linux on ARM, as well as x86 from both Intel and AMD.

            I'm not sure how you'd describe it, but I think much of the world would describe it as ISA-neutral.

            PS Microsoft's cloud was initially called 'Windows Azure'; why do you think they changed the name to 'Microsoft Azure'?

            No prizes for the right answer ;-)

        2. Anonymous Coward
          Anonymous Coward

          Re: Is the world going to switch from Microsoft Windows...

          "the cloud was built on Linux and Linux is ISA neutral, you no longer have to choose the architecture for your applications"

          Yes - you no longer choose the architecture, you use the architecture provided by the cloud providers. It will hasten the demise of some of the minor players, but the decisions will be economic rather than technical.

      2. Anonymous Coward
        Anonymous Coward

        Re: Too bad no one uses Power9

        The POWER architecture is open source via OpenPOWER and SPARC is open source via OpenSPARC.

        While they are used for certain applications, the high end chips are very expensive to develop and produce - these aren't 10-20mm cores like ARM.

        The reality is that x86 is good enough for most high-end applications, and ARM is good enough for power-conscious applications and general-purpose CPUs managing dedicated processors, so the requirement for other architectures makes them uneconomic. I suspect China will prove me wrong with MIPS or a similar architecture, but designing/enhancing/producing a CPU to compete with ARM/x86 is a multi-billion dollar process.

        That said, I think IBM have got a few more years to go with POWER (a 10th generation with 2-3 revisions, and then a likely 11th generation to get us towards 2030), unless quantum computing surpasses mainstream CPUs - even then it's likely to be too soon for mainframes and their customer base to take that leap.

        1. Michael Wojcik Silver badge

          Re: Too bad no one uses Power9

          "unless quantum computing surpasses mainstream CPU's - even then it's likely to be too soon for mainframes and their customer base to take that leap"

          I can't think of an algorithm in BQP that's particularly useful for the vast majority of mainframe workloads. Someone might naively think that typical mainframe workloads involve "a lot of searching" and so Grover's is applicable; but the kinds of complex-key searches that are common in those applications create a huge keyspace (due to combinatorial explosion), so even if you could get a QC implementation with sufficient qubits it would be hugely inefficient to perform those searches directly using QC.

          If we ever have economically feasible QC for business computing (and that's a big if), Grover's might well be useful for microarchitectural and kernel-level lookups and other cases where you might make use of a (probable) quadratic speedup in evaluating a function. But it's likely to remain practical only where the domain is relatively small, except for specialized applications against a small set of high-value data.
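          A back-of-envelope sketch of why a quadratic speedup doesn't rescue huge keyspaces. These are the textbook figures (~N/2 oracle queries for average-case classical search vs ~(π/4)√N Grover iterations), purely illustrative, not a claim about any real workload:

```python
import math

# Classical unstructured search over N items needs ~N/2 oracle queries
# on average; Grover's algorithm needs ~(pi/4) * sqrt(N) iterations.
def classical_queries(n):
    return n / 2

def grover_iterations(n):
    return (math.pi / 4) * math.sqrt(n)

# Even with the quadratic speedup, a combinatorially large keyspace
# still demands an enormous number of (slow, serial) Grover iterations.
for bits in (20, 40, 60):  # keyspaces of 2^20, 2^40, 2^60 candidates
    n = 2 ** bits
    print(f"2^{bits}: classical ~{classical_queries(n):.3g}, "
          f"Grover ~{grover_iterations(n):.3g}")
```

          For a 2^60 keyspace, Grover still needs on the order of 10^9 iterations, each of which must evaluate the search predicate coherently, which is the inefficiency the comment above points at.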

  4. John Savard

    What's Old is New Again

    OMI, with its fast 25 GHz data lines but fewer of them, reminds me of the fast but 16-bit-wide Rambus of the Pentium 4 days.

    1. Anonymous Coward
      Anonymous Coward

      Re: What's Old is New Again

      OMI is about buffering to allow longer paths between the CPU and RAM and allowing more RAM chips per memory bus (i.e. greater electrical load). Rambus was about decreasing the memory bus width in exchange for very high transfer speeds.

      The ideas behind OMI are not new (the approach has been used in many forms over the years to provide high memory capacities) but it has always had the significant drawbacks of being expensive and increasing latency. Given the latest POWER chips' focus on improved IO/memory bandwidth, the drawbacks are outweighed by the benefit of cost-effectively increasing memory bus width for large-capacity servers.

      OMI's advantage over some of the previous methods of increasing memory capacity/memory bus width appears to be around the effect on latency, although I suspect the latency hit may be measured against other POWER memory options, such as Chipkill, that already carry additional overhead.
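      The narrow-but-fast vs wide-but-slow trade-off reduces to simple bandwidth arithmetic. The 25 Gbit/s signalling rate comes from the thread above; the 8-lane channel width and the DDR4-3200 comparison point are illustrative assumptions, not OMI's actual spec:

```python
# Rough bandwidth arithmetic for a narrow, fast serial link versus a
# wide parallel bus. 25 Gbit/s per lane is from the discussion above;
# the 8-lane channel and DDR4-3200 figures are illustrative assumptions.
def serial_link_gbytes(lanes, gbit_per_lane):
    # lanes * bits-per-second, converted to GB/s (8 bits per byte)
    return lanes * gbit_per_lane / 8

def parallel_bus_gbytes(width_bits, mtransfers_per_s):
    # bytes per transfer * transfers per second, in GB/s
    return width_bits / 8 * mtransfers_per_s / 1000

omi_like = serial_link_gbytes(8, 25)       # 8 lanes x 25 Gbit/s = 25 GB/s
ddr4_dimm = parallel_bus_gbytes(64, 3200)  # 64-bit DDR4-3200 = 25.6 GB/s
print(omi_like, ddr4_dimm)
```

      Comparable raw bandwidth from far fewer pins is the appeal; the cost, as noted above, is the latency and expense of the buffer chip in the path.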
