AMD rips covers off 64-core Threadripper desktop monster, plus laptop chips, leaving Intel gesturing vaguely at 2021

AMD this week touted a bunch of new laptop and desktop silicon that put main rival Intel to shame. The raft of Radeon and Ryzen components – launched at this year's Consumer Electronics Show (CES) in Las Vegas, USA – will run between 10 and 20 per cent faster than Intel equivalents, according to AMD’s own testing. Included in …

  1. cornetman Silver badge

    AMD says bend over, and Intel says, erm OK.

  2. Shadow Systems

    Clean up on aisle 5...

    *Copious geeky drooling over that desktop AMD chip*

    <Homer Simpson>Me waaaaaaants...</Homer>

  3. J. Cook Silver badge

    128 hardware threads... that's a lotta CPU horsepower right there. I could see ESX servers being built with that monster, as long as it has a large enough pool of memory to back it up.

    1. RAMChYLD Bronze badge

      I'm looking at it from a different perspective

      Nothing says "power" like watching big projects such as glibc and the Linux kernel compile in seconds.

      128 threads. That's technically 128 instances of GCC running at the same time (make bzImage -j 128). You could probably even push it harder with 256 instances.
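
      In Python terms, picking the -j value from the hardware might look like this (a sketch: the factor-of-two oversubscription is a rule of thumb for hiding I/O stalls, not gospel):

        import os
        import subprocess

        threads = os.cpu_count() or 1   # 128 on a 3990X with SMT enabled
        jobs = threads * 2              # oversubscribe so cores stay busy
                                        # while some jobs wait on disk I/O

        # equivalent to typing "make bzImage -j 256" by hand
        subprocess.run(["make", "bzImage", f"-j{jobs}"], check=True)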

      This is the CPU for the impatient Linux developer.

      1. DuncanLarge

        Re: I'm looking at it from a different perspective

        Installing Gentoo becomes something that is done after you finish drinking your tea

    2. big_D

      They make a great development workstation for testing virtual machines.

      You'll run into problems with throughput and memory (it only supports a maximum of 256GB RAM). It also isn't designed as a server platform (motherboard etc.). I've been using a Ryzen 7 as a VM testbed for a couple of years and it is a great solution, but I wouldn't trust it in a live, production environment.

      1. Fenton

        That's why you have EPYC, which can support 4TB per socket.

        1. big_D

          Exactly. An EPYC is a great server platform.

          But, for me at home, testing a small network of VMs, it is overkill.

      2. Anonymous Coward
        Anonymous Coward

        WikiChip states 512GB per channel, and there are four channels, giving 2TB max memory. Last year's Threadripper maxed out at 256GB per channel, for 1TB total. This all depends on motherboard support, but 2TB should keep most desktop workstations happy.

        1. phuzz Silver badge

          "2TB should keep most desktop workstations happy."

          It'll certainly keep your memory seller happy: 256GB DIMMs do seem to be available, if you have £3000 per DIMM to throw around.

          Still, that's only £24,000 worth of RAM. Totally makes sense to pair it with a £3000 CPU.

      3. TeeCee Gold badge
        Facepalm

        Ryzen != Threadripper.

  4. Glen 1
    Paris Hilton

    "It is fabricated using TSMC's 7nm process, and sports 32KB of L1 instruction and 32KB of L1 data cache per core, 512KB of L2 cache per core, and 256MB of shared L3 cache..."

    It's been a while since I've been au fait with CPUs, but I'm surprised at the small size of the L1 cache. I suppose, as it's per core, it adds up, as well as depending on the speed ratios with the L2/L3 cache.

    For comparison, the Pentium Pro (1995-1998) had an L2 cache of between 512KiB and 1MiB over its life. The Pentium MMXs, the consumer-grade chips around at the same time, had 16KiB-32KiB. Am I missing something?

    1. Anonymous Coward
      Anonymous Coward

      L1 cache size is chosen with a particular latency in mind. For example, you might be able to manage a 2 cycle latency at 16KB but need 3 cycles if you go larger.

      That's the whole reason you have multiple levels of cache. Otherwise you'd just have a 16MB "L1" with a couple dozen cycles of latency and call it a day.
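
      A back-of-envelope sketch of the tradeoff in Python, with the hit rates and miss penalty invented purely for illustration:

        # average memory access time (AMAT):
        # hits cost hit_cycles; misses pay the penalty of going to L2
        def amat(hit_rate, hit_cycles, miss_penalty):
            return hit_rate * hit_cycles + (1.0 - hit_rate) * miss_penalty

        small = amat(0.95, 2, 14)   # 16KB-ish L1: 2.60 cycles on average
        large = amat(0.97, 3, 14)   # 64KB-ish L1: 3.33 cycles on average
        print(f"small: {small:.2f} cycles, large: {large:.2f} cycles")

      With these (made-up) numbers, the smaller, faster cache wins on average despite missing more often.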

      1. Glen 1
        Paris Hilton

        I was under the impression that the main limiting factors were die area and power budget, combined with diminishing returns depending on the size of the working set.

        What causes the latency to increase? Path length?

        Thanks in advance.

        1. Anonymous Coward
          Anonymous Coward

          Caches are complex: they usually have multiple "ways" to make them more efficient; there is snooping involved at the higher levels of cache, since there is no longer any such thing as a single-CPU system; and they must handle exclusive access for certain primitives used for locking.

          So yes, it is partly path length, but there is a lot more to it than that, and the bigger the cache, the more time all of these things take - thus more cycles of latency.
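
          For a feel of what the "ways" mean, here's how a lookup carves up an address for a 32KB, 8-way cache with 64-byte lines, as per the Zen 2 L1D - a sketch that ignores address translation and coherency entirely:

            LINE_BYTES = 64
            WAYS = 8
            CACHE_BYTES = 32 * 1024
            SETS = CACHE_BYTES // (LINE_BYTES * WAYS)   # 64 sets

            def split_address(addr):
                offset = addr % LINE_BYTES           # byte within the cache line
                index = (addr // LINE_BYTES) % SETS  # which set to look in
                tag = addr // (LINE_BYTES * SETS)    # compared against all 8 ways
                return tag, index, offset

            print(split_address(0x7FFE1234))

          Every lookup compares the tag against all eight ways in parallel; more ways or more sets mean more comparators and longer wires, which is where the extra cycles creep in.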

    2. diodesign (Written by Reg staff) Silver badge

      Cache sizes

      Your Pentium from the 1990s is, like, single core, right? So there's space on the die for cache. With 64 cores, you can't bung too much on without producing dies that smash your yield targets.

      Look at it this way: there's a total of 4MB L1 cache, 32MB L2, and 256MB of L3 in the 3990X.
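
      (Quick sanity check on those totals, assuming 64 cores and the per-core figures quoted above:)

        cores = 64
        l1_kb = cores * (32 + 32)      # instruction + data cache per core
        l2_kb = cores * 512
        print(l1_kb // 1024, "MB L1")  # 4 MB
        print(l2_kb // 1024, "MB L2")  # 32 MB, plus the shared 256MB L3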

      And despite leaps in processor technology, it's likely today's software still works comfortably within 32KB working sets anyway, bearing in mind the latency issue DougS mentions above.

      C.

      1. Sgt_Oddball
        Holmes

        Re: Cache sizes

        That odd moment when you realise you could fit and run a whole Linux distro in the cache alone... What a time to be alive! (Tiny Core Linux, for the curious.)

        In all seriousness I wonder if you could? (to hell with if you should)...

        1. katrinab Silver badge
          Paris Hilton

          Re: Cache sizes

          Or my old 486 from back in the day - you could fit the entire contents of its hard drive in there as well.

      2. hmv

        Re: Cache sizes

        Er ... Threadrippers aren't a single die; they're multiple chiplets (dies).

        And whilst the working set of a single piece of software may well fit into 32KB, I'd be very surprised if you're only running one piece of software - I'm currently running nearly 700 processes here and nearly 900 at home (on a first-generation Threadripper).

        1. Schmomonic

          Re: Cache sizes

          doctor_malcom_JPark.gif

    3. Tom 64

      Don't forget that those old Pentiums didn't even have an L3 cache.

    4. bazza Silver badge

      Small L1 cache sizes like this are annoying if you’re into DSP. For example, if you’re doing a lot of FFTs, it’s convenient if the twiddle factors (which remain constant), input, intermediate and output buffers (which may overwrite the input) all fit in L1. That way the fastest possible compute times are achieved. With only 32kB of L1 cache, the size of FFT that fits entirely in L1 is fairly small. This may be OK for some applications, but it can be an annoying limitation for others.
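
      As a rough sizing exercise - assuming single-precision complex samples (8 bytes each) and about three arrays' worth of working set, which is a crude model rather than anything cycle-exact:

        L1_BYTES = 32 * 1024
        BYTES_PER_SAMPLE = 8   # complex64: float32 real + float32 imaginary
        ARRAYS = 3             # twiddle factors, in-place data, scratch

        n = 1
        while 2 * n * ARRAYS * BYTES_PER_SAMPLE <= L1_BYTES:
            n *= 2             # largest power-of-two FFT that still fits
        print(f"largest comfortably-in-L1 FFT: {n} points")   # 1024

      A 1024-point FFT is small beer for a lot of DSP work, which is exactly the annoyance.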

      The Cell processors (as found in Sony’s PS3) were radically different. They had no cache at all, but had 8 math cores, each with 256kB of static on-chip RAM instead of L1, and huge bandwidth between those RAM chunks and external RAM. This was excellent because it was a lot bigger, but they were notoriously difficult to code for (unless you were already into that kind of thing). IBM, who’d developed it, sadly abandoned it, and it took 10-ish years for Intel to finally design a CPU with more maths performance than the Cell. It was a truly awesome piece of silicon and drew only 80W.

      Some POWER CPUs from IBM have 64kB L1.

      1. Korev Silver badge
        Boffin

        Great post Bazza, one tiny thing though:

        >Some POWER CPUs from IBM have 64kB L1.

        As do some of Intel's Xeons.

        Edit: Intel's new Ice Lake has changed its L1 config which might break quite a lot of optimisations

        1. bazza Silver badge

          Hi Korev,

          >As do some of Intel's Xeons.

          Do they? Thanks! Boy, I'm out of touch. Better get back to coding quick!!

        2. bazza Silver badge

          Edit: Intel's new Ice Lake has changed its L1 config which might break quite a lot of optimisations

          Yes it might!

          A thing Intel did get fairly right, I think, was to do the MKL / IPP libraries, and encourage adoption of those. Apologies if you're already deeply familiar with them - if so, ignore the rest of this post!

          Use those, let them work out the combinations of threads etc. that best suit the CPU architecture the program finds itself running on, and the developer only really needs to write single-threaded code to get good general-purpose performance. Optimising code for a specific CPU by hand can result in better performance, but one has to be pretty determined to buck the cost / benefit curve for that to be worthwhile.

          A difficulty with the approach is that it only works optimally on newer CPUs if one has a sufficiently recent version of the MKL / IPP library, one that knows how to optimise itself on them. This means you have to keep issuing software updates. The OSes don't provide these libraries as part of their default set of packages, so their software update mechanisms aren't doing this for you.

          Anyway, provided that this is addressed, a software dev doing it the Intel way can probably ignore the changes in L1 config. Unless Intel have fallen behind the curve too.
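
          NumPy makes a decent concrete example of the approach, since common builds of it link against MKL or OpenBLAS: the code below looks single-threaded, but the matrix multiply is dispatched to whatever tuned, multithreaded BLAS the interpreter was built with (sizes arbitrary):

            import numpy as np

            a = np.random.rand(4096, 4096)
            b = np.random.rand(4096, 4096)

            # handed off to the linked BLAS (MKL, OpenBLAS...), which picks
            # kernels and thread counts for the CPU it finds itself on
            c = a @ b
            print(c.shape)   # (4096, 4096), computed on every core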

          1. Korev Silver badge
            Boffin

            We tend to compile most of our stuff with the open source tools (GCC, BLAS etc.), as most software doesn't seem to expect ICC, MKL etc. It's only when we want to get something really fast that we'll go for MKL.

  5. eldakka
    Coat

    I hope Scott feels better...

    ...after vomiting up that load of tripe:

    Here’s the canned excito-quote for this range, attributed to Radeon head Scott Herkelman: “From heart-pounding esports competitions to eye-popping AAA blockbusters, today’s games demand more performance, higher framerates and lower latency than ever before… We’re committed to providing all 1080p gamers with the raw horsepower and incredible features that enable the absolute best gaming experiences with all settings maxed out.”

    1. Bronk's Funeral

      Re: I hope Scott feels better...

      Scott Herkelman didn't write that.

      Source: I write stuff like that.

  6. Will Godfrey Silver badge

    A bit much

    I have absolutely no need for such a monster and couldn't possibly afford one...

    but...

    I want one!

    1. Fading
      Thumb Up

      Re: A bit much

      I've been saying the same since the 3970X (only 32 cores - 64 threads) was released at the end of last year. There's nothing I do that particularly stresses my existing 7820X machine (well not for long) but maybe I could think of something......

      1. phuzz Silver badge
        Flame

        Re: A bit much

        I've just upgraded from a quad-core to an 8-core/16-thread monster... and most of the time all 16 threads are ticking along at <5% usage.

        Still, transcoding video using Handbrake does use all the cores at full speed, and lets you find out if your cooling system is up to the job.

        my computer goes a bit like this >>>>>

  7. Andre Carneiro

    It’ll be interesting to see BOINC scores when these monsters start turning up in the wild :)

  8. DuncanLarge

    The only thing I can say is

    jesus f*cking christ

    Just imagine buying one of these in 10 years' time off eBay for £100.

    1. Sir Runcible Spoon

      Re: The only thing I can say is

      By which time your phone could probably trounce it :)

      1. defiler

        Re: The only thing I can say is

        I watched a presentation by Sophie Wilson on CPU design progress. It was from 2016, and she said that the latest and greatest phone CPUs couldn't dissipate enough heat to run flat-out, and half the cores had to be off (on average, I guess).

        So, yeah. It's like the old F-15 vs Concorde thing: the F-15 will take it in a sprint, but Concorde has the legs to just keep going. And that analogy shows my age, but there you go...

        27nm - that was the other thing that stuck in my mind from that lecture. 27nm was the optimum feature size for cost; going any smaller required crazy interferometry. And her description of x-ray lithography was just astonishing.

  9. W. Anderson

    AMD must be more broad in software support

    As an ardent AMD CPU customer for my personal computers and small business servers in the 1990s and early 2000s, I am pleased that AMD can now offer chipsets that are proven superior to anything from Intel.

    However, the concern for me and many of my clients now seems to be that the company has geared all its development almost exclusively towards the Microsoft Windows realm, thus forgoing significant adoption in the technology spheres where almost all the deployments, and therefore the sales, are Intel based - data centers, cloud computing infrastructure, supercomputing research for medicine, aerospace, defense and climate change, web hosting, and mobile telecommunications data centers, all of which deploy Linux or BSD UNIX-like (*NIX) software.

    Even in the area of graphics and 3D modeling, I am aware that "every" major animation studio now uses Linux design workstations instead of Apple macOS, and even sets up its rendering farms on Linux instead of Windows Server - at least according to three of these large animation companies' official statements, and it is public knowledge that almost all animated movies, from Toy Story 1 through 4 and the Shrek series to Avatar and beyond, were made on Linux, not Windows.

    Even NASA, Airbus and Elon Musk's SpaceX use non-Windows software almost exclusively in their engineering, design and programming models.

    Why then is AMD so hung up on Microsoft Windows? - except maybe for the gaming base, which will never bring the company substantial earnings, especially compared with the industries mentioned above. Note: Microsoft Azure is 64 per cent plus Free/Open Source Software (FOSS) based, including *NIX, as stated by company tech executives during SuSE Linux Expert Days conferences in NYC in early and again in late November 2019.

    Financial success has been, and always will be, in enterprise and government sales, not niche enclaves.

    1. TeeCee Gold badge
      WTF?

      Just out of interest, why is "fast as fuck for multithreaded workloads" Windows specific?

    2. Fenton

      Re: AMD must be more broad in software support

      I think that is a little unfair; we just don't hear much about the Linux/enterprise side, given the rise of EPYC and the use of EPYC CPUs and Navi GPUs for Google Stadia, which is based on Linux.

    3. phuzz Silver badge

      Re: AMD must be more broad in software support

      Short answer: because there are a lot more desktops and laptops out there, running Windows in people's homes, than there are large animation studios.

      Sure, AMD probably make more margin (which is a polite way of saying "overcharge") on their server chips, but the low-end market outpaces that in volume, in spades.

      Financial success comes from the end user market, not government and enterprise.

      1. W. Anderson

        Re: AMD must be more broad in software support

        This commenter is basically very ignorant about the technology industry in regard to financial earnings.

        The US Defense Department alone just awarded a $10bn-plus contract for cloud computing, which at this time is almost 100% Intel server based. The vast majority of earnings for IBM, Oracle and other technology behemoths is in Intel data center servers. Wall Street and all USA/international banking and stock exchange earnings reports for all major companies spell this out.

        In what country is this person living? Even if they resided in North Korea, Bangladesh, Myanmar or another less developed nation, they would know this.

        The Register needs to attract more informed readers.

        1. phuzz Silver badge

          Re: AMD must be more broad in software support

          Thanks for that particularly passive-aggressive insult; I suppose it's more genteel than "omg ur a idiot".

          Anyway, a quick glance at AMD's most recent earnings shows that they made $1.28B in revenue from desktop and mobile CPUs, and only $525M from enterprise and embedded (which includes the CPUs for the PS4 and Xbox One; I can't find any figures that break out just the server/enterprise revenue).

          AMD are concentrating on where they make most of their money (ie, consumer), rather than enterprise, an area Intel completely dominate.

          So perhaps I should clarify my previous comment: AMD's financial success comes from the end-user market.

        2. magicaces

          Re: AMD must be more broad in software support

          Intel's latest earnings show a 50-50 split between desktop/laptop revenue and datacenter revenue of all kinds. So actually it's very balanced, and AMD doing better in the consumer market will position them better for enterprise and large corp/government sales. No one has trusted AMD for a decade, and that takes time to recapture, even with very good products, because Intel have such a huge market share and are making billions in all markets.

  10. Agent Tick
    Stop

    Why invest in AMD?.....

    .. when they can't get their drivers working? Their GPU drivers have been an escalating drama since May last year, and AMD have done nil about it!

    People mostly roll back to older driver releases to get working drivers again - that's AMD for you!

    1. Carpet Deal 'em
      Facepalm

      Re: Why invest in AMD?.....

      Even if AMD were the bottom of the barrel graphics-wise, they're still king of the hill on the x86 front, which is what this article is about (and where Nvidia doesn't compete).

      1. jason 7

        Re: Why invest in AMD?.....

        Been using AMD GPUs since 2009. Never ever had a driver issue. Solid.

        So many people are doing it wrong I guess.

        1. keithzg
          Trollface

          No driver issues for me either

          Although OP is presumably talking about Windows drivers, and I barely ever bother to boot into Windows :P

  11. jason 7

    It's nice to have all that power...

    ...I just wish I had some software that needed it or could use it properly.

    Yes, sorry I'm not you, that super uber power user that always needs double the power for...whatever.

    1. keithzg

      Re: It's nice to have all that power...

      Pro-tip: then don't buy this expensive and specialized CPU!

  12. Horridbloke
    Gimp

    Yes but...

    ... can it run the Crysis remaster?
