Intel's Gelsinger talks up 'systems foundry' era of trillion-transistor chips

Intel CEO Pat Gelsinger sees a future where everything is a computer assembled from chiplets using advanced packaging technologies like Intel's own, as the chipmaker seeks to keep Moore's Law alive. Gelsinger was delivering a keynote at the annual Hot Chips conference, running this year in virtual form. As you might expect, …

  1. elsergiovolador Silver badge

    Heating

    It's interesting that Pat has not found a career in building heating systems.

    A PC with an Intel processor can heat an entire room during the winter.

    1. Caesarius
      Mushroom

      Re: Heating

      My first thought was "how are they going to get the excess heat out of these things?" I thought that was the limiting factor, but I may need to research this better.

      (Icon only applies for its intrinsic appearance)

      1. Vikingforties

        Re: Heating

        That was Intel's question too. It's probably why they're putting $700 squagillion into finding out the answer.

        I see a future where Data Centres send out their heavy workloads to Fish and Chip shops so that servers can heat the frying oil. Everyone's a winner in that scenario.

      2. Roland6 Silver badge

        Re: Heating

        >(Icon only applies for its intrinsic appearance)

        But it does bear some resemblance to an i486 without a heat sink...

        If memory serves me correctly, the i486 was the last Intel CPU without built-in thermal protection.

    2. martinusher Silver badge

      Re: Heating

      Well, they didn't name the conference "Hot Chips" for nothing.

      Curiously enough, in other semiconductor applications there's been a huge emphasis on low power operation in recent years, and it's not just because a lot of equipment is going to be battery powered.

  2. MOV r0,r0
    Stop

    Come on El Reg, it's Moore's law, not Moore's Law. If it were a "Law" it would have a formula you could plot on a graph and stuff. And Intel talks about it as if the economics that underpinned it had in any way been in operation for the last ten years. Forget Moore's law, we're in Amdahl country now.

    1. Snowy Silver badge
      Coat

      They are right, it is Moore's Law; after all, you capitalise names, and a name is about all it is now.

  3. CrackedNoggin

    The drivers behind this are that customers don't just want more chips, they want more powerful chips, because AI models are getting larger and data volumes are getting bigger.

    AI models aren't being computed purely on digital silicon because that is the best way - it's just because that's currently the only option. It's hugely inefficient - we know human brains can develop a language model using a fraction of that energy.

    Real breakthroughs do happen too.

    1. Filippo Silver badge

      My thoughts exactly. If you want to make an enormous AI model, then use a neuromorphic chip. The performance gains could easily be several orders of magnitude.

      I'm not sure why there isn't more pressure on this front. My best guess would be that the fundamental design of how neural networks work is still an area of very active research, so making a huge investment in one specific neuromorphic design carries a very large risk.

      Which is reasonable - although, at some point, the economy of scale has to tip. Probably as soon as someone figures out a really good and widespread use case for very large models.

    2. martinusher Silver badge

      Just last week I read an announcement about a new AI part that's apparently partly analog. It's designed to do the job that intense -- power hungry -- digital computations do at present, the goal being to build technology that can be incorporated into devices instead of having to be in the cloud as it is at present.

      So the work's ongoing. I expect in a decade or two we will come to regret making our devices smart (see the current series of "Non Sequitur" comics for suggestions).

  4. Binraider Silver badge

    I can't help but think a logical thing for Intel to do would be to get more support out there for developers to take advantage of the silicon.

    Processors get fatter and have more features, but in practice you are dealing with 5, maybe 10 layers of abstraction by the time you are grinding your problem in your API and IDE of choice. Can you rely on all those layers to take advantage of fatter silicon?

    NVIDIA caught onto this and produced the widely used CUDA API, which very definitely lets you exploit the hardware without too many headaches.

    By contrast, Itanium failed (amongst other causes) because software, particularly compilers, could not take advantage of the system's features.

    1. Korev Silver badge
      Boffin

      Most people can't write vectorised code, so Intel produces compilers which, along with the Math[sic] Kernel Library etc., do exactly this.

      Whether or not the Intel Compilers produce faster code than say GCC is always up for debate though.

  5. QLord

    1T Transistor Chips

    The only way Intel gets 1T Transistor Chips, is if TSMC is fabricating them.

    1. Si 1

      Re: 1T Transistor Chips

      Indeed. It’s a sad sight that Intel are reduced to putting together parts that were manufactured by other fabs. I’m still unsure whether Intel’s complete failure to keep up with process shrinks is down to complacency, or whether they just don’t have the best engineers any more.

  6. John Smith 19 Gold badge
    Unhappy

    7nm is < 34 atoms wide

    Tick tock, people.

    And let's skip the "EUV" BS. They're into X-ray territory already.

    The challenge is to find a way to do it at scale.

    I.e. whole chips or wafers at a time.

    A Smith-Purcell generator could do it to 3.5nm using crystals of metal salts as the grating.

    Beyond that things get kind of tough.
