Third time's still the charm: AMD touts Zen-3-based Ryzen 5000 line, says it will 'deliver absolute leadership in x86'

On Thursday, AMD CEO Lisa Su presided over a webcast to introduce the chip designer's latest line of Ryzen processors based on its Zen 3 microarchitecture. "Zen 3 increases our lead and overall performance," said Su. "It increases our lead in power efficiency. And also now it delivers the best single threaded performance and …

  1. swm

    Go AMD. No longer second best.

    1. Anonymous Coward
      Anonymous Coward

      Except they're comparing a 12 core AMD part against a 10 core Intel part so not really.

      1. Boothy

        Yes, you're right, they are comparing the current top end Intel i9 10900K, against the 2nd to top Zen 3 part, so that is rather unfair.

        They really should be comparing against like for like, i.e. top of range Intel vs top of range AMD, so they should have used the Core i9-10900K vs the Ryzen 5950X instead of the slower 5900X.

        /s

        With a bit less /s: I would be curious about a Core i7-10700K vs the Ryzen 7 5800X though, as these are both 8 core, 16 thread parts.

        1. Anonymous Coward
          Anonymous Coward

          Once again, not really: Intel have a 12 core consumer CPU which could have been used, the Core i9-10920X, which looks to be priced not far behind AMD's soon to be released chips.

          1. Boothy

            The 10920X is classed as a high end desktop/workstation/HEDT part, whereas Ryzen 9 is a regular desktop part; the equivalent AMD HEDT parts are the Ryzen Threadrippers.

            Even if we do compare this Intel Workstation part against the AMD desktop part, the 10920X is the older Cascade Lake model, rather than the newer Comet Lake of the Core i9-10900K. So you've got two extra cores over the 10900K, but the chip itself clocks about 8% slower.

            So for heavily multithreaded tasks you might see perhaps a 12% increase over the 10900K (more cores, but slower clocks), but for single threaded tasks, or those more heavily dependent on one or two threads, like gaming, you're going to see a drop in performance compared to the 10900K, which is why the 10900K is touted by Intel as being the best gaming CPU.

            Assuming the quoted figures in the article are correct: "For general content creation, AMD compared its 5900X to Intel's 10-core Core i9 10900K, with the 5900X coming out ahead in video editing (13 per cent), rendering (59 per cent), CAD (6 per cent), and compiling (14 per cent)."

            So the 10920X could likely beat the Ryzen 9 5900X on the CAD task, but would still be slower for everything else, especially single or lightly threaded applications such as gaming.
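            Back-of-an-envelope, in Python (a rough sketch: it assumes perfect core scaling, which real workloads never achieve, and uses only the figures from this thread):

              # Rough estimate of 10920X vs 10900K multithreaded throughput:
              # 12 vs 10 cores, with the 10920X clocking ~8% slower.
              cores_10900k, cores_10920x = 10, 12
              clock_penalty = 0.92

              mt_uplift = (cores_10920x / cores_10900k) * clock_penalty - 1
              print(f"Estimated multithreaded uplift: {mt_uplift:+.0%}")  # ~ +10%

              # Against the quoted 5900X margins, only CAD (6%) falls inside
              # that uplift; the other three don't.
              margins = {"video editing": 0.13, "rendering": 0.59,
                         "CAD": 0.06, "compiling": 0.14}
              for task, margin in margins.items():
                  print(f"{task}: 5900X ahead by {margin:.0%} -> "
                        f"10920X closes the gap: {margin < mt_uplift}")

            That lands in the same ballpark as the ~12% guessed above, and backs up the CAD point.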

            You could use the 14 core Core i9-10940X, which could likely beat the 5900X at multithreaded tasks, but then you're back to a mismatch of core counts again, which seems to be what you don't like.

            Perhaps a Core i9-10980XE (18 core) vs 5950X (16 core) would be interesting? AMD clocks faster, but is 2 cores down on Intel.

  2. cb7

    Some observations:

    1. Usually, the chips with fewer cores clock faster. Here the opposite appears to be true. I suppose it's a way to encourage people to buy the higher end chip. Unless the lower end models can be overclocked easily. Time will tell.

    2. No Ryzen 3 chips were announced. I'm hoping they'll come sooner rather than later. Not everyone can afford a $299 CPU. The trouble is, most gamers don't need more than 4-6 cores, so releasing Zen 3 Ryzen 3's early will no doubt hurt higher margin Ryzen 5 sales.

    3. It would appear AMD has finally won the single core crown from Intel (at least according to the announced single core Cinebench score). Well done AMD.

    The Cinebench score seems remarkable given that Zen 3 tops out at 4.9GHz vs Intel's 5.3GHz.

    However, Intel's hitting those frequencies at 14nm whilst AMD's now on 7nm. I wonder what's stopping AMD clocking faster? And when (if?) Intel hits 7nm, there's a good chance they'll be on top again.

    Nevertheless, it's a great time to be building some kick ass machines :-)

    1. Spacedinvader

      1. They do: there's a TDP difference between the bottom two, and the quoted turbo is single core.

      2. moar cores!

      3. Intel suffers a little in AVX iirc but still, AMD is the nuts!

    2. Sgt_Oddball

      Hold up..

      2. No Ryzen 3 chips were announced. I'm hoping they'll come sooner rather than later. Not everyone can afford a $299 CPU. The trouble is, most gamers don't need more than 4-6 cores, so releasing Zen 3 Ryzen 3's early will no doubt hurt higher margin Ryzen 5 sales.

      I still remember when quad cores were niche and everyone said they were pointless as you only needed one or two cores.

      As 6-8 cores are becoming commonplace then you'll see them used more (even more so as games consoles are now moving to multi-core chipage).

      Give it a couple of years and it'll be 'nobody will need more than 8 cores'; another 5-6 and it'll be 'nobody needs more than 12 or 18', ad nauseam.

      1. chuckufarley Silver badge

        Re: Hold up..

        ...because I think there is another scenario to consider.

        As CPU designs get better, they become more efficient at processing. With the double digit gains in IPC there may come a day when reducing the number of cores is a good idea for mainstream computing chips. This is because, with a high enough IPC, CPU manufacturers are better off selling a four core CPU with 4x SMT than an eight core CPU with 2x SMT. A case in point (although at the enterprise level) is IBM's Power10 CPU with its 8x SMT. Once AMD and Intel get their designs dialled in, we could see dual core laptops supporting 16 threads.

        Remember that here in The Land Beyond Moore's Law we will see improvements to designs more often, because that's easier to do than shrinking a die. Am I right, Intel?

        1. Steve Todd

          Re: Hold up..

          You realise that SMT only helps when a core would otherwise stall, such as on a cache miss? The core can then suspend work on the current thread and work on another. SMT-4 and SMT-8 only make sense when you're working with huge data sets that cause many cache misses. As it is, you only get about a 30% gain from SMT-2 on these desktop class CPUs, so you're better off with 6 single threaded cores than with 4 SMT-2 cores.
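          The arithmetic in Python, for what it's worth (a sketch using the ~30% figure above; real SMT gains vary wildly by workload):

            # Effective throughput in "core-equivalents", using the ~30%
            # SMT-2 gain quoted above. Purely illustrative.
            smt2_gain = 0.30

            four_core_smt2 = 4 * (1 + smt2_gain)   # 4C/8T -> ~5.2
            six_core_no_smt = 6 * 1.0              # 6C/6T -> 6.0

            print(f"4C/8T with SMT-2 : {four_core_smt2:.1f} core-equivalents")
            print(f"6C/6T without SMT: {six_core_no_smt:.1f} core-equivalents")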

      2. Boothy

        Re: Hold up..

        Quote: "As 6-8 cores are becoming commonplace then you'll see them used more (even more so as games consoles are now moving to multi-core chipage)."

        I think 8 cores is going to be the new sweet spot for gaming PCs (arguably it's 6 cores currently).

        I think a major part of this is that the current gen Xbox and PS consoles have had 8 cores since 2013 (and of course the new gen consoles will again have 8 cores, although much faster Zen 2 cores this time), but it's taken years for developers to update game engines to really take advantage of those 8 cores.

        But that does seem to be changing now. Recent core count benchmarks of modern games (i.e. modern game engines) on PC are quite interesting these days, when paired with a decent GFX card on high settings.

        I saw one recently with an RTX 2080 at 1080p: with a 4 core CPU, the GPU was running at less than 70% utilisation across a mixture of games (AC Odyssey, Battlefield 5 etc.) as the 4 cores couldn't keep up. In Battlefield 5, for example, the 2080 was only 47% utilised (although this was still averaging over 60fps).

        An identical 6 core system [1] showed 30-40% more FPS than the 4 core, and 8 cores were another 20-30% faster than 6. But once you get to 10 cores the gain over 8 is less than ~8%, and past 10 cores you're down to just 1% or 2% gains, so very much diminishing returns.
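        Chaining those percentages together in Python (rough midpoints of the ranges above; a sketch, not a benchmark):

          # Cumulative FPS relative to a 4 core baseline, using midpoints of
          # the quoted per-step gains. Illustrative arithmetic only.
          step_gains = {6: 0.35, 8: 0.25, 10: 0.08, 12: 0.015}

          fps = 1.0
          print("4 cores : 1.00x (baseline)")
          for cores, gain in step_gains.items():
              fps *= 1 + gain
              print(f"{cores} cores: {fps:.2f}x")
          # -> ~1.35x, ~1.69x, ~1.82x, ~1.85x: the curve flattens hard past 8.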

        This can be mitigated by playing at higher res, like 1440p and 4K, as that pushes the bottleneck towards the GPU instead, alleviating the pressure on the CPU. And of course this all depends on what games you're playing, what the game engine is, and whether you actually want the high framerates etc. If you're not playing action type games, it's less of an issue.

        Currently it seems that if you have a mid range card like a 2060/2070, you need to pair it with a 6 core CPU as a minimum to get the best out of the card; for a 2080 or higher, 8 cores. 10 or more cores is likely overkill, and I can't see that changing any time soon, with consoles staying on 8 cores for likely another decade.

        1: The benchmarkers disable cores in the same system, so the same hardware is used each time at the same clock speeds, just with a different number of cores available.

        1. Anonymous Coward
          Anonymous Coward

          Re: Hold up..

          "I saw one recently with an RTX 2080 at 1080p, and with a 4 core CPU, the GPU was running at less than 70% utilised (or less) across a mixture of games (AC Odyssey, Battlefield 5 etc)"

          Yes, but can it run Crysis?

    3. The Dogs Meevonks Silver badge

      I'm waiting to build my mum a new PC as hers is really old and still on the AMD FX platform.

      I've been trying to actually find a 3300X as that's the perfect match for what she needs... throw in an old RX 580 GPU of mine, and the spare 32GB of 3200MHz DDR4 I have lying around... and all she needs is a motherboard, new PSU and some NVMe storage to make that thing fly.

      But can I actually get a 3300X... nope... been trying for 6 months. No one seems to ever have any stock at all... it's like it never actually existed.

      So if there is going to be a Zen 3 Ryzen 3... it won't be this year, and getting hold of one may be a case of rocking horse droppings laced with hens' teeth again.

      1. Mark #255

        Ryzen 3

        The Ryzen 3 3100 (Zen 2, like the 3300X) is generally available (I got one 6 weeks ago, and Scan still have them in stock), and has the same core count, thread count, TDP and cache; you lose 200MHz base, 400MHz turbo vs the 3300X.

        1. Kobblestown

          Re: Ryzen 3

          However, the 3100 has two CCXs with half the resources each, whereas the 3300X has a single complete CCX. The latter is better because core-to-core latency is much higher across CCXs, so the difference is bigger than the frequencies alone would suggest. Should still be enough for a mom PC though.
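          If you want to see the CCX penalty for yourself, here's a crude Linux-only ping-pong probe in Python (os.sched_setaffinity is a real call; the core IDs are assumptions about your topology, and Python's interpreter overhead swamps the raw fabric latency, so only the relative difference between the two runs means anything):

            import multiprocessing as mp
            import os
            import time

            ROUNDS = 100_000

            def responder(flag, core):
                os.sched_setaffinity(0, {core})      # pin the child to one core
                while True:
                    while flag.value != 1:           # spin until pinged
                        pass
                    flag.value = 2                   # pong back

            def ping_pong(core_a, core_b):
                flag = mp.Value('i', 0, lock=False)  # shared int; one writer per direction
                child = mp.Process(target=responder, args=(flag, core_b), daemon=True)
                child.start()
                os.sched_setaffinity(0, {core_a})    # pin ourselves to the other core
                start = time.perf_counter()
                for _ in range(ROUNDS):
                    flag.value = 1
                    while flag.value != 2:
                        pass
                elapsed = time.perf_counter() - start
                child.terminate()
                return elapsed / ROUNDS

            if __name__ == "__main__":
                # Core IDs are an assumption: check `lscpu -e` for which logical
                # cores share a CCX on your part (SMT siblings complicate it).
                print("same CCX :", ping_pong(0, 1))
                print("cross CCX:", ping_pong(0, 2))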

      2. cb7

        B550M and 3300X bundles still in stock here:

        https://www.ebay.co.uk/itm/114433660318

        Buy, then stick the board back on eBay if you don't need it.

      3. Anonymous Coward
        Anonymous Coward

        There are some AMD Ryzen based mini PCs floating around on Amazon that have more than enough oompf for what you need, maybe an idea? You're looking at €450 for enough power for most normal use, and they're silent.

      4. Tom 7

        I'm not sure what your mum needs all that grunt for. We have a family PC that is used for all the general browsing, kids' school Zooms etc. It's on a 4 core Intel i3-8100 @ 3.60GHz but it really is bored shitless most of the time. I've got it running F@H just to use up some of the PV.

        1. JClouseau

          Mum's rig

          Ditto here, the "big" PC is (drum roll) a Phenom II X6 1065T (a 2011 design I believe) coupled with a GTX1060. And a SATA SSD for the system.

          It's more than OK for browsing and office work, and the kid isn't complaining about Fortnite, Rocket League or Battlefront performance.

          The real bottleneck at some point was the HD5770, but now I don't feel compelled to "refresh" (i.e. replace) it. I can also proudly contribute to the Vulture's COVID F@H effort.

          Hell, I'm still running Win7

          So unless OP's mum is a serious gamer, or an AI researcher, I see no reason to replace everything.

          Now I really love AMD, but am I the only one confused by the complete lack of synchronization between the "Zen" architecture numbers and the Ryzen CPU references?

          This "Ryzen 5000" thing based on "Zen 3" lost me for good.

    4. Boothy

      1. From what I understand, this is a thermal and silicon quality thing.

      AMD heavily bins their chips for the higher end CPUs, saving the best quality silicon for the top end.

      That's why the top 3 CPUs all share the same 105W TDP despite ranging from 4.7GHz and 8 cores to 4.9GHz and 16 cores: better quality silicon in the faster CPUs helps keep thermals down. (And conversely, worse quality silicon in the slower running chips means they run comparatively hotter.)
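      To put numbers on the binning pressure (simple arithmetic; splitting the TDP evenly across cores is a simplification, as real chips shift power between cores):

        # Per-core power budget at the shared 105W TDP, for the three parts
        # mentioned above (8 cores at 4.7GHz up to 16 cores at 4.9GHz).
        tdp = 105  # watts
        for name, cores in {"5800X": 8, "5900X": 12, "5950X": 16}.items():
            print(f"{name}: {cores} cores -> ~{tdp / cores:.1f} W per core")
        # ~13.1W, ~8.8W and ~6.6W per core: the 16 core part has half the
        # per-core budget of the 8 core part yet clocks higher, hence the
        # need for the best silicon.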

      2. AMD seem to have stated that, so far, no Zen 3 based Ryzen 3s are coming (or more specifically, no Zen 3 variants other than the four already announced), at least not this year. I guess they expect that if you're a budget gamer, you'll just buy a Zen 2 part instead, perhaps upgrading to Zen 3 at a later date.

      For gamers with a less restricted budget: modern game engines (Battlefield 5, recent Far Cry and Assassin's Creed games etc.) scale quite well through to 8 cores these days, if paired with a decent card like a 2070 Super/5700 XT or higher. A 4 core system can drop 30%+ FPS compared to a 6 core system with the same GFX card, although beyond 8 cores it's very much diminishing returns.

      Obviously if you're using a lower end card, (or mobile version), then it's less of an issue, and 4 cores would likely be fine, as the GPU would be more of a bottleneck then. Or if you just play older titles of course.

    5. R3sistance

      "The Cinebench score seems remarkable given that Zen 3 tops out at 4.9GHz vs Intel's 5.3GHz.

      However, Intel's hitting those frequencies at 14nm whilst AMD's now on 7nm. I wonder what's stopping AMD clocking faster? And when (if?) Intel hits 7nm, there's a good chance they'll be on top again."

      I suspect power leakage is a big part of the issue, and that becomes more of a problem the smaller/denser the transistors get, more so below 14nm. FinFETs replaced planar MOSFETs at these nodes for reasons like this, and there is some* validity to Intel's stance of not rushing further down to smaller silicon sizes. But I'm not an expert in this area, so I'm not certain this is the reason.

      *a very, very small amount of validity; Intel is still trailing massively behind the gains that smaller silicon is offering.

      1. Boothy

        From what I've seen, TSMC's 7nm node was designed around low power usage rather than raw speed, as their main market was mobile SoCs (iPhone/Android chips etc.) rather than desktop CPUs.

        AMD are basically factory overclocking everything beyond the TSMC specifications, and using thermal monitoring to control this.

        I think this is likely one of the reasons AMD have put so much effort into improving IPC through architectural changes: they know they can't hit the same clock speeds as Intel on the 7nm process they use.

        Plus of course, Intel's 14nm is very very mature now, so they've managed to squeeze everything they can out of that node.

        Be interesting to see what Intel does do once they hit 5nm, i.e. they'll likely improve power efficiencies, but will this also hit speed issues like they did with 10nm?

        For ref, Gamers Nexus have got Zen 2 to run at ~5.6GHz, but it takes extreme cooling (i.e. liquid nitrogen) to get that high.

      2. Anonymous Coward
        Anonymous Coward

        How can you claim they're trailing massively when AMD are only just about to release CPUs that narrowly beat Intel at single thread?

        1. Anonymous Coward
          Anonymous Coward

          Intel haven't released a major update to their architecture for years now, and are still stuck on 14nm, again for years now.

          Intel keep tweaking their architecture, (and giving it a new name and 'Gen' number) but it's still basically the same architecture it was 5+ years ago with just minor changes here and there (security mitigations, better thermal management which let them clock a bit faster etc).

          Intel's 10nm has basically been a failure, and their next node is going to be years away, assuming it even works.

          So far, Intel show no sign of doing anything really new, so next year we'll most likely see yet more minor tweaks and changes, again.

          AMD on the other hand have released complete new architectures and/or improved nodes on a regular basis, and show no sign of slowing down.

          With Zen 2, the only thing Intel beat AMD on was single thread performance; for IPC, multithreading and thermal efficiency, AMD were already well ahead of Intel.

          AMD has now beaten Intel with Zen 3 on single thread performance, Intel's last holdout, so there is basically nothing left, technically, on which Intel can compete with AMD (they could still compete on price).

          Zen 4 is also due out next year, and brings a major architecture change, new memory with DDR5, and a node improvement all at once. I doubt Intel can do even one of those things, let alone all of them in one year.

          Unless Intel have some magic rabbit they can pull out of a hat somewhere, it's unlikely they can keep up with AMD now, let alone pass them again.

          1. Anonymous Coward
            Anonymous Coward

            Eating humble pie, not rabbit

            TSMC's lead in this process generation is pretty authoritative. Intel may have to tap out and eat humble pie by outsourcing some fab work for the next few years; that may already be in progress. AMD has enough history and pull to keep making process tweaks on top of TSMC's process to optimise performance for their needs, and Intel can buy into the same opportunities.

            The biggest fear would be China going into Taiwan and kicking the T out of TSMC. Intel has one or two worldwide manufacturing sites, so I'd try for the bold play of a JV: TSMC-run fabs at Intel's current sites, removing them from Chairman Xi's reach.

  3. chuckufarley Silver badge

    I wonder if...

    ...the manufacturing yields will take a hit due to the new chiplet design? Most applications won't even notice, but a few will. I have to use containers to isolate some workloads on Zen 2 CPUs because they lose about 10% of overall performance if a process has to reach across the Infinity Fabric too often to get its data. By ensuring that the tasks stay pinned to specific sets of cores I can crunch more numbers, but with a higher management overhead.
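    If anyone wants to try the pinning without going full containers, a minimal Linux-only sketch (os.sched_setaffinity is a real call; the core IDs and the workload binary are assumptions you'd adjust for your own box):

      import os
      import subprocess

      # Pin this process (and its children) to one CCX's cores so the
      # workload never reaches across the Infinity Fabric for its data.
      # Cores 0-3 as one CCX is an assumption: check `lscpu -e` for how
      # your part enumerates cores (SMT siblings complicate the numbering).
      FIRST_CCX = {0, 1, 2, 3}

      os.sched_setaffinity(0, FIRST_CCX)   # 0 = the current process
      print("Pinned to cores:", os.sched_getaffinity(0))

      # Children inherit the affinity mask.
      subprocess.run(["./crunch_numbers"])  # hypothetical workload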

    Not that I'm going to recommend people currently on Zen 2 run out and invest heavily in Zen 3 just because of this. On the other hand, if you've been putting off upgrading for a while, then this might just be your new kit.

    1. cornetman Silver badge

      Re: I wonder if...

      > The manufacturing yields will take a hit do to the new chiplet design?

      Their yields at TSMC have probably become sufficiently good that it is no longer the issue that it was.

      Note though that we still don't have figures on how big these 8 core complexes are and they are still using the modular design, just stuffing more cores in the module. So it might be that these complexes are not too much bigger.

      Another point might be that they have cleverly figured out how to utilise more defective cores usefully on lower level SKUs through redesign of the layout. Just spitballing there.

      1. Boothy

        Re: I wonder if...

        Quote: "...just stuffing more cores in the module. So it might be that these complexes are not too much bigger."

        I got the impression from the AMD presentation that the core count per module hasn't changed between Zen 2 and 3, i.e. it still maxes out at 8 cores. What they've done is rearrange things (plus some other inner core improvements). Previously the modules split the 8 cores into two blocks of 4, with each block having its own 16MB L3, so you had 2 x 16MB of L3 in each chiplet. They've now rearranged things so the L3 is a single 32MB block down the centre, used by all 8 cores (4 each side of the L3 block).

        From what I've seen so far, the chiplet layout itself is the same: one IO chiplet, and either 1 or 2 CPU chiplets depending on model, with up to 8 cores in each chiplet. Exactly the same as Zen 2, and still on the same 7nm (i.e. this isn't the 7nm+ which had been rumoured).

        Although the chiplets themselves might grow a little in size, as I understand they've added more to the maths side of things, so there are likely a few more transistors in there now than in Zen 2. But we'll likely need to wait for someone to de-lid one to know for sure.

    2. Boothy

      Re: I wonder if...

      Quote: "The manufacturing yields will take a hit do to the new chiplet design?"

      Why? The updated chiplet is still 7nm rather than say the new 7nm+ or 5nm, and 7nm is quite a mature process now. Plus the architectural changes mostly seem to be rearranging things, like merging the two separate L3s (used by 4 cores each in Zen 2) into a single double sized L3 block, with all 8 cores in each chiplet using that one L3 now.

      The only thing that might impact yields is if the new Zen 3 CPU chiplets are larger than the existing Zen 2 ones, but as far as I know, other than some additional maths processing capabilities, the basic CPU core design hasn't changed much from Zen 2. So we are perhaps looking at a single digit % increase in size, if that.

      If there hasn't been a die size change, then I'd expect the yields to be the same as Zen 2.
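      For a feel of how little a few per cent of die area would matter anyway, the classic Poisson yield model in Python (D0 is an assumed defect density, as TSMC's real numbers aren't public; the ~74mm² Zen 2 chiplet size is the commonly reported figure):

        import math

        # Poisson yield model: yield = exp(-D0 * area).
        D0 = 0.1                        # defects per cm^2 (assumed)
        area_zen2 = 0.74                # cm^2, ~74mm^2 chiplet (reported)
        area_zen3 = area_zen2 * 1.05    # suppose a 5% larger die

        for label, area in [("Zen 2", area_zen2), ("Zen 3 (+5%)", area_zen3)]:
            print(f"{label}: ~{math.exp(-D0 * area):.1%} yield")
        # ~92.9% vs ~92.5%: a 5% area bump costs well under half a point.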

  4. The Dogs Meevonks Silver badge

    I'm kinda excited by this news, but not enough to rush out and buy a new CPU. I only built my current gaming rig back in April, with a 3800X, X570 MB and 32GB of DDR4 3600MHz... and that's more than quick enough for now, especially as that 3800X will do a 4.4GHz all core clock at 1.23V.

    I'm more interested in the GPU side at the moment... I do have a 5700XT, which is a solid 75fps 1440p card... but I've also bought a 144Hz 1440p monitor to go with the 2x 75Hz ones I already had, and it would be nice to take advantage of that. So I'm leaning towards the new 6xxx series GPUs and how they compare with the 30xx nvidia offerings. I'd like to be able to take advantage of DLSS rather than raytracing though... if I had an HDR monitor, I might be more interested in the RT side... Maybe in a couple of years I'll get a 4K HDR 240Hz monitor to play with.

    1. R3sistance

      DLSS is a nice innovation, but it requires game devs to implement it, and to implement it they have to go to nvidia and get nvidia to train their AI to enhance the upscaled images. It will not be available in all games; more to the point, don't expect it in games associated with AMD, like Borderlands 3. However, the most notable title of this year, Cyberpunk 2077, will definitely have DLSS 2.0 support.

      Right now, I wouldn't expect even the 3090 to hold a constant 240Hz at 4K with DLSS; without DLSS, a constant 120Hz at 4K should be achievable at native/rasterization. At native I expect the 3080 to potentially hit 120Hz, but not constantly, and the 6000 series to probably top out at 100-110Hz; I doubt it'd hold 120Hz. This is all based on estimates, not testing or empirical numbers, and I'm going off the rough average demands of games today, not next year or the year after.

      8K on the 3090 is a joke: it's ONLY achievable with DLSS; without it, expect console level FPS (20-35FPS) at 8K in most games. Most games aren't optimised for 8K anyway, so this should be expected. 8K will remain niche for more than a few years to come; 1440p is hardly widely adopted and 4K isn't much better. 1080p remains the dominant resolution for now.

  5. John Savard

    Not This Year for Me

    I'm not too discontented that I broke down last year and bought a 3900X. Sure, this year's chip is even better, but it needs a fancy cooler. So I seem to have gotten on the train at the right time, and someday I'll upgrade again when there are more fantastic improvements.

    It's great that their single-thread performance now beats Intel's, but last year it was just so slightly behind that it hardly made a difference.

    Last year was the transition from being significantly behind Intel to near-parity; this year is just from being ever so slightly behind to being ever so slightly ahead, so it seems to me that this year is less exciting.

    Unless the 'wider floating-point unit' means these chips have AVX-512. Then I would save my money and run out and buy one.

  6. RM Myers
    Unhappy

    How long will it take

    For the buyers with bots to buy all the new Ryzen CPUs on day one and/or crash the retailers' websites? It is 2020, after all. Personally, I'll be sticking with my 3 month old 3700X and trying not to suffer too much buyer's remorse. No matter when you buy CPUs, there is always something better coming in the next 6 months. That has been one result of AMD's revitalization.

    1. Sorry that handle is already taken. Silver badge

      Re: How long will it take

      I don't think mainstream CPU availability has ever been an issue but demand outstripping their supply projections would certainly be a "nice problem" for AMD to have.

    2. Dave K

      Re: How long will it take

      I've never seen the point of that. I have a first generation Ryzen 7 1700X. It's three years old now, but performance from it is absolutely fine for my needs, still handles all games and heavy workloads I throw at it. Sure newer chips are a bit quicker - and that's a good thing, but I've never seen the point of throwing hundreds of pounds at a new thing which is a "bit quicker" each year.

      Saying that, my previous PC (Core i5 750) lasted me for nearly 8 years before I replaced it. So in comparison, my current PC is barely broken in...

    3. Andy Tunnah

      Re: How long will it take

      They could come out with 100% IPC gains and I'd still feel chuffed with my 3700X. I can't think of a single thing that would make me not think it was the best CPU I'd ever bought, in the sense of the gains I achieved vs the one before it, and how much more "breathing space" it gave me doing... well, everything!

      Plus because I was previously using Intel it's so nice using a chip that has such a reasonable TDP that I can have my fans ticking along at 200rpm at idle, and barely audible under load.

    4. The Dogs Meevonks Silver badge

      Re: How long will it take

      I'll be sticking with my 3800X system that I built back in April... It's a good little CPU that I've got on a 4.4GHz all core OC @ 1.23V. So it's nice and cool on the H115i AIO; with fans/pump on the quietest settings it idles 18ºC above ambient, and about 20ºC higher than that under heavy loads.

      I can't see any reason to upgrade a CPU like that for a couple more years... even if that system had a 3090 in it, I doubt you'd be anywhere close to a CPU bottleneck.

  7. guyr

    Why skip a number series?

    The last generation of desktop processors was the 3000 series, so why has AMD jumped to 5000? I see on this page:

    https://www.amd.com/en/products/ryzen-processors-laptop

    AMD appears to have allocated the 4000 series to mobile chips. Is that a one-time thing, or are they going to allocate even-numbered series to mobile and odd-numbered series to desktops?

    And then there is this head-scratcher. On that page: "AMD Ryzen™ 3000 Series Mobile Processors with Radeon™ Graphics", under which is ... wait for it ...

    AMD Ryzen™ 9 4900H, AMD Ryzen™ 7 4800H, etc. How's that for clear marketing???

    1. Steve Todd

      Re: Why skip a number series?

      They made a b*lls-up of the marketing of the Zen 2 mobile processors by calling them the 4000 series. At least they didn't compound that by naming Zen 3 CPUs in that range, and their naming is very much clearer than the mess that is Intel's.

      The 4000 series naming is fairly clear, with U series being standard low power, H series being high performance and HS being slightly slower than H, but being designed for compact chassis. Trying to mix much higher performance desktop SKUs, with a different CPU architecture, into the same range wouldn’t have helped.

      1. RM Myers
        FAIL

        ...their naming is very much clearer than the mess that is Intel’s.

        Faint praise indeed. Is it even possible to have a less clear naming scheme than Intel's?

        Is Intel trying to intentionally confuse the consumer? If so, they have succeeded spectacularly.

        1. Greg 38

          Re: ...their naming is very much clearer than the mess that is Intel’s.

          Yes, it is intentional, to confuse the consumer. I'm a former fab engineer at Intel's development site in Oregon (10 yrs ago). I asked one of the product engineers about the convoluted naming system at the time. He confirmed it was intentional, as a means of focusing on the "Core 2" name and away from the raw MHz that had been the previous convention.

    2. IGotOut Silver badge

      Re: Why skip a number series?

      It's most likely the same as cars.

      For decades BMW had the 3, 5, 7, 8 series.

      When they wanted a new range of smaller cars they could drop in the 1 and 4 series.

      It would have been a bit odd to have the smallest car become the 9 series.

      1. Nursing A Semi

        Re: Why skip a number series?

        Odd numbers as the saloons, even numbers the coupes.

    3. Andy Tunnah

      Re: Why skip a number series?

      4000 are APUs, they split em off to make a clear line, instead of mixing it up.

    4. Boothy

      Re: Why skip a number series?

      I think they realised they'd made a mess....

      Existing:

      Ryzen 1000 (desktop) = Zen

      Ryzen 2000 (desktop) = Zen+

      Ryzen 2000 (APU) = Zen

      Ryzen 2000 (mobile) = Zen

      Ryzen 3000 (desktop) = Zen 2

      Ryzen 3000 (APU) = Zen+

      Ryzen 3000 (mobile) = Zen+

      Ryzen 4000 (APU) = Zen 2

      Ryzen 4000 (mobile) = Zen 2

      Plus, making things worse, they've already used numbers like 4800 for mobile parts.

      So you have an existing 4800U, 4800HS and 4800H; are you going to add a 4800X to that as a desktop part, where the 4800X is Zen 3 but the 4800U, 4800HS and 4800H are all Zen 2?

      I think the idea behind Ryzen 5000 is that they plan to have all 5000 branded items as Zen 3.

      So we have the confirmed:

      Ryzen 5000 (desktop) = Zen 3

      and the potential:

      Ryzen 5000 (APU) = Zen 3

      Ryzen 5000 (mobile) = Zen 3

  8. Richard 12 Silver badge

    Well done AMD

    Intel annoyed me by dropping the socket unexpectedly immediately after I upgraded my machine, forcing me to buy a new mobo to upgrade the CPU at all.

    So... might as well switch to AMD.

    1. Steve Todd

      Re: Well done AMD

      AMD have said that this is the last generation to support the AM4 socket, but then it's going to be at least 12 months before Zen 4 and the next socket (presumably AM5).

      1. 9Rune5

        Re: Well done AMD

        And, for those of you coming from Intel motherboards: The BIOS contains microcode updates for the CPU. Since BIOS size is limited, any given motherboard can only support a range of Ryzen CPUs.

        So "same socket" sounds nice, but...

        Still love my old Ryzen 2700X to bits though. Rock solid. It will take a lot to sway me back to Intel again.

  9. StrangerHereMyself Silver badge

    Security

    I buy AMD Ryzen because I'm fed up with all the security vulnerabilities in Intel processors, such as Meltdown and Spectre. Many of these cannot be fixed in software, or the fixes come with enormous performance penalties. Intel is hoping people will simply forget about them and get on with their lives, but I haven't forgotten, and I'm avoiding Intel parts for at least a couple of years until they get things sorted out.

    1. NetBlackOps

      Re: Security

      In my case, it isn't so much the vulnerabilities as the slowdown I saw on each and every one of my Intel systems (9 in all) as the patches were applied. That's why I switched to AMD. Not that they're totally immune; it's just that the percentage of lost performance has, so far, been much smaller. A zippy i7 slowing to a crawl when it used to scream isn't something I ever want to see again.

      1. StrangerHereMyself Silver badge

        Re: Security

        I forgot to mention another reason I shun Intel: the infamous Intel Management Engine (IME), a processor running its own operating system within my processor, with complete control over it, able to silently overwrite its memory and registers.

        I just want to make it clear to Intel: I DON'T WANT THIS!!

        I do NOT want you to quietly invade and surreptitiously alter my CPU and operating system with no way for me to intervene.

        By voting with my feet I hope I can make them understand.

  10. Andy Tunnah
    Thumb Up

    Watch the gap!

    Unsure why, but they've left out a 5700X that would surely fit in the gap between the 5600X and 5800X, much like the 3700X did; and much like that, it'd be the absolute sweet spot of price and performance.

    AMD can seemingly do no wrong in the CPU space, and I take a certain joy in Intel getting a good kicking, considering all the years of lacklustre and overpriced chips they flogged. I had a 2700K from 2011 and wanted to upgrade it for years, but every new Intel was a fart's worth of performance increase for a shite load more power usage, with TDPs so far off actual power draw that overclocking ends up needing you to patch right into the local transformer. So seeing AMD come back around, with TDPs that have the same number of digits as the actual power draw, the 3700X was a legitimately exciting upgrade. If I need another one before the decade is out I'll be very surprised!

    Now to wait on those new Radeons. Considering nvidia's previous gen pricing and power usage (30% more performance for 50 bloody percent more money?!), the only reason they bestowed on us such a heavy performer as the 3080, thirsty as it is, is that they're aware of the capabilities of the new Radeons. The 3080 would have been a 1200 quid card, easy, if nvidia weren't scared of what's on the horizon.

    It might be a right balls up of a year but if you're going to be in lockdown you might as well have some fantastic looking games while in there!

    1. R3sistance

      Re: Watch the gap!

      I don't think the price drop of the 3000 series has anything to do with AMD; I'm more inclined to say it's due to the 2000 series probably not selling quite as hotly as anticipated, with pricing being one of the reasons given internally for that. There were rumours of AMD saying their RX 6000 series will be competitive against the 3080, but going by similar things from the rumour mill, that may be a slight exaggeration and it'll land midway between the 2080 Ti and the 3080.

      If the 3070 is around or slightly better than the 2080 Ti, then with the rumour that the AMD card will be around $550 (compared to $400 for the 3070 and $800 for the 3080), it's slightly better price-to-rasterization-performance. However, also going by those rumours, the 6000 series only beats Turing for ray tracing, which isn't a great boost, and AMD's supposed competitor to DLSS is questionable at this stage compared to DLSS, which is much more mature and hardware accelerated with Tensor cores. A hybrid approach is expected, which means performance outside of rasterization will likely be closer to the 3070...

      But take it all with a grain of salt: until 3rd party reviewers can get their hands on these cards and actually test these things out, it's hard to be certain how much of this information is correct. The other thing that is a boon for AMD right now is nvidia's lacklustre supply/production of the Ampere chips, as well as a very well known bug in the Windows driver which was causing multiple issues for RTX 3080 users but is now patched. The flip side being that AMD has a worse reputation regarding driver and card stability.

      Overall I don't think the 6000 series is going to be the Ampere killer that a lot of people would like it to be, and AMD will need to surpass itself given what's expected from nvidia's Hopper, a 5nm design with MCMs, with production expected to start next year at TSMC.

  11. I am the liquor

    Sounds ideal for all the javascript

    When I see these stories about the latest CPUs, I start thinking it might be time for an upgrade. But as a non-gamer, I have to admit my 10-year-old Intel is fast enough.

    Then, on the odd occasion I turn off NoScript in the browser, the CPU fan starts making like Concorde warming up for take-off, and maybe I do need some extra CPU power after all.

    Seems like there's something wrong when browsing a shitty web site is the most computationally demanding task of the day.

    1. R3sistance

      Re: Sounds ideal for all the javascript

      Honestly, without a workload that demands an x86-64, I wonder if there is any point staying with one. For low usage like internet browsing, I am pretty sure we are entering the age of ARM laptop/desktops which will be cheaper and serve those purposes with much lower power draw. Heck a phone or tablet does this just fine. While PC games are still far off, the ARM based (nvidia Tegra) Nintendo Switch does show future potential on the gaming front too.

      And I wouldn't be surprised if in the future we see the same from RISC-V either.

      1. Tom 7

        Re: Sounds ideal for all the javascript

        Has anyone written any games that run on the Google Coral USB jobbie? There's a couple of teraflops in there that might be usable for something and make old PCs usable.

    2. Tom 7

      Re: Sounds ideal for all the javascript

      I have found that one reason for JavaScript eating up CPU is the adblocker stopping stuff from loading, so crappily written JS sits looping when the things it expects are missing. It doesn't matter what CPU you have when that's happening.

  12. Silverburn

    Agnostic in compute surplus

    As an agnostic gamer, it's increasingly coming down to things like stability, price, supply availability, support and ease of install.

    I feel we have more compute than we need right now, regardless of vendor. Hell, most 2020 games will still chug along nicely on a 4.2GHz 6600K, with decent enough RAM and GPU.
