AMD thinks it can solve the power/heat problem with chiplets and code

Semiconductors have been getting progressively hotter over the past few years as Moore's Law has slowed and more power is required to push higher performance gen over gen. Because of this, chipmakers are having to get creative about how they design and build chips so that even if they consume more power they are doing so in …

  1. Mister Dubious
    Boffin

    Long way to go?

    In 2021 AMD aimed to improve efficiency thirtyfold by 2025. As 2023 shuts down they've achieved a 13.5x improvement. El Reg calls this "just 13.5x" and opines that AMD "still has a long way to go."

    As I see it they need only another 2.3x improvement, which they should attain (if they keep on at the rate they've achieved over the past two years) sometime in the Spring of 2024, comfortably ahead of deadline.

    Or if they keep up the pace all the way through 2025 we should expect a 13.5 × 13.5 ≈ 182-fold improvement.

    Mathematics is FUN!
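    The compound-growth arithmetic above can be sanity-checked with a quick script. It takes the commenter's premise at face value: 13.5x achieved over roughly two years, with the same compound rate continuing.

```python
import math

# Premise from the comment above: 13.5x improvement over ~2 years
# (2021-2023), and the same compound rate continues.
rate = 13.5 ** (1 / 2)            # per-year multiplier, ~3.67x

remaining = 30 / 13.5             # factor still needed to hit the 30x goal
years_needed = math.log(remaining) / math.log(rate)
print(f"need another {remaining:.2f}x, ~{years_needed:.1f} years away")

# If the pace holds for the full two years to end-2025:
projected = 13.5 * rate ** 2      # i.e. 13.5 x 13.5
print(f"projected: {projected:.0f}x")
```

    The ~0.6 years remaining lands in mid-2024, consistent with the "spring of 2024" estimate, and the two-year projection reproduces the ~182-fold figure.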

    1. Grunchy Silver badge

      Re: Long way to go?

      “Moore’s Law is long gone,” “but we need to achieve 30x performance gain by 2025.”

      Hey, isn’t it true that if you double performance every year for five years, from 2021 to 2025, you get 2^5 = 32 times?

      Huh, sounds almost identical to “long gone” Moore’s Law.

      1. NoneSuch Silver badge
        Flame

        This approach doesn't change the fact Moore's Law is slowing down.

        Moore's Law is Moore's Law.

        Technology development is slowing, not the thing you measure it against.

  2. HuBo Silver badge
    Thumb Up

    Great job!

    AMD's done a great job of increasing the power efficiency of its chips. Already, 7 of the top 10 machines in the latest Green500 use MI250X accelerators ( https://top500.org/lists/green500/2023/11/ ), and the MI300X gives from 1.7x to 6.8x more performance than the MI250X at under 1.4x the power consumption, for even better juice efficiency (e.g. the tables in The Next Platform's coverage). For example, replacing Frontier's MI250X with MI300X should boost it to 2 exaFLOPS (FP64) in 31 MW, or 65.8 GF/W, which beats the current Green500 #1, Henri.

    Papermaster didn't mention using new transistor materials to further reduce leakage of the teeny-weeny little FETs, but that should surely be part of future efficiency enhancements IMHO.
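    As a rough check on the efficiency figure above, the round numbers quoted work out as follows (the small gap to the 65.8 GF/W figure comes down to the exact performance and power inputs assumed):

```python
# Rough FLOPS-per-watt estimate for the hypothetical MI300X-based
# Frontier mentioned above (inputs are the commenter's round numbers).
exaflops = 2.0                  # FP64 performance, i.e. 2e18 FLOP/s
power_mw = 31.0                 # power draw in megawatts

gflops_per_watt = (exaflops * 1e18) / (power_mw * 1e6) / 1e9
print(f"{gflops_per_watt:.1f} GFLOPS/W")
```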

    1. Mike 137 Silver badge

      Re: Great job!

      "reduce leakage of the teeny-weeny little FETs, but that should surely be part of future efficiency enhancements"

      Only up to a point. As the stored charge gets smaller the problem becomes one of susceptibility to interference. Quite a few years back some folks at Cambridge (UK) announced a transistor that could be switched by a single electron. We never heard much more about it, probably because one would never be certain whether it was switched by the intended electron or a stray one.

      In any case, leakage is not a significant limiter of efficiency in power terms. The primary source of power consumption is state switching, and it depends directly on level shift and frequency. This is why processor supply voltages and logic thresholds have progressively fallen as speeds have risen, but that can only go so far before the interference problem dominates (even within a chip). There have been several reports in recent years of unexpected interference within multifunction chips (e.g. microcontrollers) leading to malfunctions.
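      The switching-power relationship described above is the standard CMOS dynamic-power approximation, P ≈ α·C·V²·f. A toy sketch with made-up illustrative numbers shows why lowering the supply voltage pays off quadratically:

```python
# Illustrative CMOS dynamic-power estimate: P = alpha * C * V^2 * f.
# All figures below are invented for illustration only.

def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Switching power: activity factor * capacitance * V^2 * frequency."""
    return alpha * c_farads * v_volts ** 2 * f_hz

baseline = dynamic_power(0.1, 1e-9, 1.2, 3e9)   # notional 1.2 V at 3 GHz
dropped  = dynamic_power(0.1, 1e-9, 0.9, 3e9)   # same chip at 0.9 V

print(f"baseline: {baseline:.3f} W")
print(f"at 0.9 V: {dropped:.3f} W  ({dropped / baseline:.0%} of baseline)")
```

      A 25% voltage cut yields roughly a 44% power saving at the same frequency, which is exactly why supply voltages have been pushed down as far as noise margins allow.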

  3. Mike 137 Silver badge

    "as Moore's Law has slowed and more power is required to push higher performance"

    I can't see how the (imaginary) "Moore's Law" has any bearing on power versus performance. Moore actually wrote (in 1965):

    "The complexity for minimum component costs has increased at a rate of roughly a factor of two per year (see graph on next page). Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000."

    So he was talking about cost of production versus feature density, not about power or performance, and indeed at the feature densities he was discussing heat dissipation was largely irrelevant.

    1. Kristian Walsh

      Re: "as Moore's Law has slowed and more power is required to push higher performance"

      He was slightly out, but not by much - and for a technology prediction made in the mid-1960s, the accuracy is astonishing. The first RAM chips to break the 65,000 transistor barrier came not in 1975, but in 1977. The first >65k gate CPUs arrived in 1979 with the Motorola 68000 (the clue to the transistor count is in the name).

      In 1975, memories were running around 16k gates, and CPUs were at around 5k gates. CPUs lagged behind the “Moore’s Law” curve through the late 1970s and early 1980s, and only started to follow that pattern of doubling transistor counts every year once on-board cache became commonplace from the mid 1980s.

      The reason CPUs had fewer gates is that much more of a CPU is active at any given time than in a memory, and thus you need to space things out a bit more so that heat doesn’t become an issue.

  4. M.V. Lipvig Silver badge

    Glad to see

    They're FINALLY going to stop relying only on hardware for speed and start pushing back on software coding. Look at M$ alone - 20-30 years ago an entire operating system came on about 20 3.5 inch disks. How many disks would it take to hold Windows 11? The stack would probably be as large as a house, and take a month to load (just to find when it was done that disk #6 was corrupt, please reload from start).

    Imagine how fast Win7 would load on a modern computer, had they stuck to basics and just made it capable of running on the new hardware instead of making the OS use the computer's resources to hoover data for packaging back to Redmond. Yes, I get that sometimes you do have to start from scratch, but that's no reason to bloat up the software. That's no different than developing a new automotive engine that makes twice the power on the same amount of fuel, then stuffing another couple of thousand pounds of weight onto the car so mileage remains the same when the proper course would be to leave the body alone and (I can't believe I'm typing THIS) make a smaller engine.

    1. quxinot

      Re: Glad to see

      Nope.

      Writing elegant, fast software is not easy. Which means it's expensive to get written, and that's only if you can find people capable of putting out work of that level.

      And, as you note, the overwhelming function of software today is to serve advertising and trackers rather than simply blitz through a task - really, it's quite a shame. The computing experience hasn't gotten any faster in years: while processing and storage and so on have gotten faster, the software wastes more and more time calling home and loading vastly more information than the task at hand needs.

      1. nintendoeats

        Re: Glad to see

        Something else to consider is that a lot of software safety gains have come from performance improvements (and then been consumed by the complexity they enabled).

        Consider a checked array access. Without branch prediction, checking every array access for out-of-bounds indices is totally impractical. With it, the easily predicted jump costs effectively nothing.
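        For a concrete picture of what a checked access buys you: Python performs this bounds check on every list index by default, raising instead of reading garbage memory. In compiled languages the equivalent test is the cheap, well-predicted branch described above.

```python
# Every Python list index is a checked access: an out-of-range
# index raises IndexError instead of touching invalid memory.
data = [10, 20, 30]

try:
    value = data[5]
except IndexError:
    value = None    # the bad access was caught, not undefined behaviour

print(value)
```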

        Another example: shared state is the enemy of correct coding. If `malloc` is fast, and memory is plentiful, maybe you can just duplicate that data structure rather than having to manage multiple users of the same copy.

        So some of the efficiency loss in software is because of a shift to techniques that are marginally slower but a lot easier to manage in a large complex program. That said, I think for most uses these gains have already been made.
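        The duplication-over-sharing trade-off above can be sketched in a few lines (a toy `config` dict stands in for the shared data structure):

```python
import copy

# If copies are cheap, duplicating a data structure can be simpler
# than coordinating shared mutable state between multiple users.

config = {"threads": 8, "log_level": "info"}

# Shared reference: both names point at one dict, so a change made
# through either is visible everywhere - easy to get wrong.
shared = config
shared["threads"] = 4
print(config["threads"])     # the "shared state" hazard: now 4

# Cheap duplication: each user gets an independent copy instead.
config = {"threads": 8, "log_level": "info"}
private = copy.deepcopy(config)
private["threads"] = 4
print(config["threads"])     # still 8, untouched by the copy's change
```

        The copy costs memory and a little time, which is exactly the "marginally slower but easier to manage" trade described above.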

    2. Kristian Walsh

      Re: Glad to see

      To answer your question, Windows 11, at 3.5 Gbyte, would require 2,431 floppy discs... or 5 to 6 CD-ROMs.
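      The disc count above is easy to reproduce; the 2,431 figure corresponds to treating the 3.5 Gbyte install as 3,500 Mbyte:

```python
import math

# Back-of-the-envelope version of the disc count in the comment above.
install_mb = 3500                # ~3.5 Gbyte Windows 11 install
floppy_mb = 1.44                 # 3.5-inch HD floppy capacity
cdrom_mb = 650                   # standard CD-ROM capacity

floppies = math.ceil(install_mb / floppy_mb)
cdroms = math.ceil(install_mb / cdrom_mb)
print(floppies, "floppies or", cdroms, "CD-ROMs")
```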

      The main reason for what you call “bloat” really is additional functionality, plus resources for multiple languages and higher-definition displays. Yes, NT 3.5 came on a handful of disks (the Workstation installer is 25 Mbyte - less than 1% of the size of a Windows 11 install), but: it only just had a TCP/IP stack (built-in TCP was an advertised feature of 3.5!), had no SSH, no web-browser, no support for GPU acceleration (not even a thing back then), no management features, limited graphics drivers, limited command-line shell tools, no assistive technologies, no resources for languages other than English, a handful of fonts (all bitmapped, all Latin-only), and the list goes on...

      Then there’s resources: I remember back in 2001, a friend noting that the 256x256 RGBA icon file he had just included in his MacOS X app bundle was larger than the entire system ROM of the original Macintosh (in the System 6 days, the Macintosh OS was mostly in ROM; the System disk added the localisable resources and additional machine-specific drivers). Higher-resolution displays with more colours mean every graphical element gets bigger.

      The install isn’t what’s loaded, though. Resources and libraries don’t get loaded until they’re actually needed, so startup times are not much longer between 7 and 11: you can find out how fast Windows 7 loads on modern hardware by using a VM - it really isn’t significantly faster than 10/11. If you want to see just how I/O bound the OS startup process is, load something like Win 3.1 or Mac System 7.x from SSD on a modern emulator. Back in the days of MacOS 8 on spinning disks, I used to turn on my Mac in the morning, then go to the coffee machine, because there was no point in staring at icons appearing along the bottom of the screen for two minutes. I launched the same OS on an emulator recently, and the whole thing sprang to life in three seconds!
