AMD is now following More's Law: More chips, more money, more pressure on Intel, more competition in the x86 space

AMD on Tuesday said it had made it through a healthy second quarter of 2020 during which its Ryzen and Epyc microprocessor lines doubled their revenues. Just after rival Intel admitted this month it is stuck in 7nm hell and has ousted its beleaguered engineering supremo, AMD published encouraging financial figures for the …

  1. Anonymous Coward
    Anonymous Coward

    No room for a Chimpzilla crack in those earning reports?

    Seems like ol' red may be due for an M$-style Reg rebranding! Or will this just be a few more sunny days before the hardware market puts them back into another antitrust-legislation-proof headlock? I remember hearing an Intel foundry tech sarcastically quip that they let AMD make just enough to stay out of bankruptcy to keep the government off their back.

    I wouldn't be surprised if they got up to their old tricks again either. Probably waiting to see which way the wind blows in November. Then again, both Apple and AMD have all of their fab eggs in a Taiwanese basket. If I were in TSMC's shoes I'd be packing a couple of those fabs up and shipping them some place like Texas or Germany, before Chairman Xi jumps the gun before the new guard is sworn in.

    1. Steve Davies 3 Silver badge

      Re: No room for a Chimpzilla crack in those earning reports?

      TSMC already has a number of fabs outside Taiwan, including at least one in the USA. But you are right: TSMC needs a 7nm (going to 5nm) fab outside S.E. Asia.

      With Apple going to their own CPU design (based on ARM), I can't really shed a tear for Intel. They have engineered their own demise. Being unable to get volumes out of 10nm and using an architecture that is well past its sell-by date, where else is there to go but down?

  2. darklord

    Again seems history repeating itself

    AMD have been playing catch-up since the early 90s and still haven't managed to topple Intel after nearly 30 years. I don't think Intel are worried.

    Too much software wouldn't run on AMD architecture then, and I suspect it's still the case today. Shame really, as when you could use them they weren't bad in an underdog kinda way. Provided you had a reason to use it apart from price. Like software which would run on it.

    1. Sgt_Oddball Silver badge
      Headmaster

      Re: Again seems history repeating itself

      If AMD has always been playing catch-up, why did Intel licence x86-64 from AMD?

      Whilst I would admit Bulldozer was a misstep, I've yet to see any evidence of incompatibility between software and AMD Ryzen/Threadripper/Epyc chips (for the record, I'm running a Ryzen on my desktop and haven't come across anything that hasn't worked yet).

      Though I'd always be interested to see if you have any sources for that remark.

      1. ST Silver badge
        Devil

        Re: Again seems history repeating itself

        > If AMD has always been playing catch-up, why did Intel licence x86-64 from AMD?

        So that Intel could make a profit off it. Unlike AMD.

        1. Nick Ryan Silver badge

          Re: Again seems history repeating itself

          More that Intel's Itanium play failed spectacularly and around the same time AMD introduced the x86-64 instruction set which proved to be rather more popular instead. Backwards compatibility tends to win quite often, even if it means sticking with a quite horrible CPU instruction set.

          1. ST Silver badge

            Re: Again seems history repeating itself

            > Intel's Itanium play failed spectacularly [ ... ]

            Yes, in the end it did fail miserably. A few good things came out of it, though.

            But, Itanium's failure didn't make a dent in Intel's profits.

            And, in the interest of full disclosure: Intel wasn't solely responsible for Itanium's failure by any stretch. Itanium was supposed to be a joint project.

            A lot of well-deserved blame for Itanium goes to HP, and to Sun Microsystems. Instead of producing something constructive, Sun decided to play the obstruction game. Because they believed it was in their best interest (SPARC64) at the time. Sun played that game a lot, back then, and it always ended in failure. Undermining Itanium didn't save SPARC.

            1. sw guy

              Re: Again seems history repeating itself

              Trying to replace history by fantasy does not help Intel, you know.

              Sun, as a competitor, was not involved in Itanium.

              Some other competitors believed the Itanium tide would sweep away their own CPUs and gave up before the fight even started. In the end, they no longer had their own CPU, and not a good enough one as a replacement.

              BTW, amongst these competitors there was Alpha, whose demise was a big boost for AMD, as some of the design team went there after their boat was sunk by management.

              1. ST Silver badge
                FAIL

                Re: Again seems history repeating itself

                > Sun, as a competitor was not involved in Itanium.

                Bullshit.

                Yes, Sun was involved in Itanium. Sun had a port of Solaris to Itanium, back then.

                Several groups ported operating systems for the architecture, including Microsoft Windows, OpenVMS, Linux, HP-UX, Solaris, Tru64 UNIX, and Monterey/64. The latter three were canceled before reaching the market. By 1997, it was apparent that the IA-64 architecture and the compiler were much more difficult to implement than originally thought, and the delivery timeframe of Merced began slipping.

                Which means Sun had an Itanium compiler. A.k.a. the Sun compiler. You can read the long list of contemporary references to Solaris Itanium on the Wikipedia page.

                As usual, after promising everything under the Sun (pun intended), Sun backed out of their commitments to deliver Solaris Itanium, because of SPARC.

                Just like Sun had a port of Solaris to IBM PowerPC, and backed out at the last minute, again:

                Solaris 2.5.1 included support for the PowerPC platform (PowerPC Reference Platform), but the port was canceled before the Solaris 2.6 release. [ ... ] A port of Solaris to the Intel Itanium architecture was announced in 1997 but never brought to market.

                I still don't see how Itanium is of any relevance to AMD, or vice-versa. AMD was never part of the consortium that worked on Itanium.

                Sadly, ElReg seems to have been overrun by clueless bullshit artists who will write just about anything to promote falsehoods.

                1. sw guy

                  Re: Again seems history repeating itself

                  Sorry for misunderstanding

                  From the post I answered to, I assumed you meant Sun participated in designing the chip(s).

                  Anyway, Sun was not alone in having a compiler and an OS port for Itanium (hint: the company I worked for had them, too). But a CPU unable to reach its announced performance, or arriving too late, had a big influence on the poor reception by potential customers. Plus the price. Plus abysmal performance on legacy binaries. Plus the need to recompile whenever the CPU version changed if you wanted maximum performance...

                  1. ST Silver badge

                    Re: Again seems history repeating itself

                    > [ ... ] CPU not able to reach announced performances [ ... ]

                    Yes the performance of the first iteration Itaniums was quite terrible. OK, let's be more precise: the performance of software running on the first iteration Itaniums was terrible. But that's to be expected: Itanium was a completely new architecture.

                    Writing a compiler backend with an efficient codegen for a VLIW CPU was, at the time, (a) super-bleeding-edge new, with not a lot of prior experience to draw from, and (b) very difficult. It still is.

                    Itanium wasn't designed to be an instant replacement for RISC CPU's. It was more of a forward-looking future development direction. In the end, Itanium's underlying motivation proved correct: RISC CPU's - and SPARC is no exception - did hit a performance wall. Most of these RISC architectures died along the way.

                    A lot of concepts from Itanium's instruction scheduling were re-targeted by Intel in their Core/Xeon x86_64 architectures. So, in final analysis, Itanium wasn't a complete waste of time. There's quite a bit of Itanium under the hood in today's Intel x86_64 cores.

                    1. Anonymous Coward
                      Anonymous Coward

                      Re: Again seems history repeating itself

                      The poor performance of first-generation Itaniums was down to more than just the architecture - it was fundamentally a bad design, as VLIW relied on memory/cache throughput to deliver performance and Merced lacked it. In native performance tests, Merced failed to outperform the 2-3 year old CPU's that it was supposed to replace. Performance when emulating other CPU's was so poor the "feature" had to be dropped completely.

                      Itanium 2 was where the limitations of the VLIW architecture became clear - SPARC and POWER were able to scale to performance levels Itanium 2 couldn't match. Intel then focussed on software optimisation to deliver on its contracts with HP because it couldn't pull any more performance from the hardware. You're right about VLIW optimisation being difficult, but acknowledge that it was a return to architectures that were severely performance-limited, as it was difficult to scale throughput and clock speed with complex instructions.

                      I'm interested in where you think Itanium delivered performance/scheduling features to x86 cores - Pentium 4 was already beyond anything delivered in Itanium/Itanium 2. Itanium did deliver "enterprise HA" features to Xeon architectures, but as they previously existed in PA-RISC and Alpha, and Itanium's competitors exceeded the features found in Itanium, I would be cautious about whether there were any real advantages gained from Itanium here.

                      As for VLIW beating RISC - the RISC vendors died because they couldn't compete financially with the pace of x86 development and thought partnerships (HP and DEC) or specialisation (MIPS) were the best alternative. x86 has been a CISC front-end on a RISC processor since P6 (pre-Itanium) and if we just look at enterprise CPU's, 99.99% of the market is RISC/RISC with limited specialist instructions (i.e. AVX) or CISC front-end on RISC like z or x86.

                      VLIW was one final attempt to provide an alternative to RISC, and it failed spectacularly.

                      1. ST Silver badge
                        Stop

                        Re: Again seems history repeating itself

                        > [ ... ] VLIW relied on memory/cache throughput to deliver performance and it lacked this [ ... ]

                        That's not true at all. Not for Itanium and not for VLIW CPU's in general, either.

                        Itanium's particular flavor of VLIW delegated the responsibility for static parallel instruction scheduling to the compiler. Unlike super-scalar CPU's that schedule their instruction pipeline dynamically (read: at run-time), in silicon.

                        Both approaches (VLIW vs. super-scalar) attempt to deal with ILP.

                        It is possible to design a hybrid super-scalar/VLIW CPU. Not sure what the advantages would be. You end up with a super-scalar CPU anyway.

                        Static parallel instruction scheduling was Itanium's single point of failure. Itanium compilers were really bad at parallel static instruction scheduling. Mainly because there was very little - to zero - experience in writing that type of scheduling backend. That, in turn, killed any performance benefit that could be derived from Itanium's VLIW architecture in the first place. That's the real reason why software compiled for Itanium performed so poorly. And that's the real reason Itanium software had to be re-compiled every single time there was an Itanium iteration update, or a compiler upgrade.

                        Yes, in hindsight, Itanium's designers shouldn't have delegated parallel instruction scheduling to the compiler, simply because it turned out to be almost impossible to do correctly. In the end, Itanium instructions ended up being scheduled just like on an in-order CPU. Goodbye parallelism, which was the point of having a VLIW CPU in the first place. But, the thinking at the time was that it would be possible. Maybe with another 5 years of sustained effort it would have been.
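                        For a concrete flavour of what "static scheduling in the compiler" means, an IA-64 bundle groups instructions the compiler has already proven independent, with explicit stop bits marking where a dependency chain begins. The sketch below is illustrative only, not exact assembler output:

```
{ .mii                       // one 128-bit bundle, template .mii
  ld8  r4 = [r5]             // memory slot
  add  r6 = r7, r8           // all three may issue in parallel:
  add  r9 = r10, r11 ;;      // the compiler guaranteed independence;
}                            // the ";;" stop bit ends the group, so the
                             // next group may safely consume r4/r6/r9
```

A superscalar CPU discovers this same independence at run time, in silicon; here the burden sits entirely with the compiler, which is exactly the failure mode described above.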

                        You're wrong on the P6 assertion. Current x86_64 core microarchitecture (Core2) has nothing to do with P6 (i686). The list of differences is simply too long to type here. I'll just mention one major difference: the length of the instruction decoding pipeline.

                        Please stop repeating the incorrect drivel about x86 being a CISC front-end on a RISC backend. It's not, and not by a long shot. Who comes up with this stuff?

                        RISC micro-architectures are load/store architectures. x86/x86_64 is definitely not a load/store architecture. It never was, and probably never will be.
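                        To illustrate the distinction (syntax sketched from memory, not exact assembler output): a load/store ISA can only touch memory through explicit loads and stores, while x86 lets arithmetic reference memory directly:

```
; x86 (Intel syntax): a memory operand folded into the arithmetic op
add eax, [rsi]          ; load from [rsi] and add, in one instruction

; load/store RISC (ARM64-style): memory only via loads/stores
ldr w1, [x2]            ; explicit load into a register
add w0, w0, w1          ; arithmetic is register-to-register only
```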

                        1. Anonymous Coward
                          Anonymous Coward

                          Re: Again seems history repeating itself

                          So your argument that VLIW is good comes down to "it could be good if someone could write a parallel instruction scheduling compiler but no one can" - I could be Jesus if someone would just make me water-walking shoes.

                          What happens when the compiler fails to correctly schedule the next instruction? Where does the correct instruction get fetched from? And why did the significant changes between Merced and McKinley focus on cache and memory bandwidth to try to address performance issues, if memory wasn't an issue in Merced?

                          For x86 being a CISC frontend that is translated to RISC micro ops since the P6, I'm basing it on Intel technical documents. Yes, the internal RISC core has evolved over time and never been publicly exposed but it is still a RISC core. Arguing over whether it fits some textbook description of RISC then exposes all the non-RISC exceptions in other RISC processors which effectively creates a circular argument outside of "pure" experimental processors.

                          1. ST Silver badge

                            Re: Again seems history repeating itself

                            > I could be Jesus if someone would just make me water walking shoes.

                            I don't know how this relates in any way to VLIW and Itanium.

                            I explained why the performance of software compiled for Itanium was poor. If you don't like the explanation, or don't understand it, invent another one for your own use, and move on.

                            > For x86 being a CISC frontend that is translated to RISC micro ops since the P6, I'm basing it on Intel technical documents.

                            No, you're not. This legend about Intel being a RISC machine hidden inside a CISC machine is a piece of poorly manufactured fiction being peddled on web sites such as Ars Technica or AnandTech. These are not reliable sources of information. I've been reading this online story for at least 10 years. It is just as incorrect today as it was five or ten years ago.

                            Same goes for your P6 theory.

                            If you have links to Intel documents stating anything remotely close to your RISC inside CISC architecture theory, or your P6 theory, please provide the URL(s) here.

                            1. Anonymous Coward
                              Anonymous Coward

                              Re: Again seems history repeating itself

                              "I explained why the performance of software compiled for Itanium was poor. "

                              You did - but that wasn't the full story, as Itanium's memory and cache architecture were significant issues in Merced. Discussing the theoretical issues with VLIW is fine, but history shows that VLIW performance increased significantly between Merced and McKinley, where the most significant changes were doubling the speed AND width of the memory bus and adding 1.5MB of level-3 cache, as well as increasing level-2 cache size from 96KB to 256KB - growing the transistor count from 25 million for Merced to 221 million for McKinley. Not theory - actual implementation details.

                              For the micro-op architecture of x86 processors, the trace cache has been used as the main indication to back up Intel's claims, but in more recent generations of x86 Intel has started documenting micro-op behaviour, despite Intel's earlier secrecy around the trace cache in P4/Netburst. For places to start:

                              Agner Fog's CPU blog: https://www.agner.org/optimize

                              And information on more recent x86 CPU's: https://www.uops.info/paper.html

                              If the vendors (Intel and IBM) claim the units are based on CISC instruction decoders providing micro ops for an underlying RISC architecture, why the disbelief? The architecture wars ended 20 years ago and RISC won.

                2. Anonymous Coward
                  Anonymous Coward

                  Re: Again seems history repeating itself

                  "I still don't see how Itanium is of any relevance to AMD, or vice-versa. AMD was never part of the consortium that worked on Itanium."

                  Itanium affected AMD's fortunes by causing Intel to abandon its plans for 32-bit x86 to allow for a premium 64-bit Itanium. The combination of the Itanium architecture AND implementation underperforming and AMD producing a 64-bit x86 design effectively ended the premiums available for other 64-bit chips by eliminating the performance tax for crossing 4GB. I know x86 could do >4GB via PAE, but there was a performance and complexity hit.

                  As for the "non-native" OS's everyone announced in the late 90's - I would suggest they were never able to deliver a similar environment to the native OS on the preferred architecture (i.e. Windows, Solaris, OpenVMS, Tru64). While some had to continue because Itanium was the only choice, the vendors that could abandon Itanium quickly did so. Could they have been successful? Only if Itanium was able to deliver performance and maintain the 64-bit premium.

                  Solaris on Itanium was never delivered because Itanium never threatened SPARC for performance, and by the time there were usable Itanium platforms with Itanium 2 in 2002, AMD was about to release 64-bit Opterons in 2003. That delivered a usable workstation platform for Sun while forcing Intel to concentrate on x86 development, culminating in a Pentium 4 replacement (Core 2) in 2006 that largely relegated Itanium to an also-ran. Or maybe a never-was...

                3. Anonymous Coward
                  Anonymous Coward

                  Re: Again seems history repeating itself

                  > Sadly, ElReg seems to have been overrun by clueless bullshit artists who will write just about anything to promote falsehoods.

                  Have you heard the expression - When you are in a hole, stop digging?

                  Because it's pretty obvious to those of us who have been clock-cycle counting since, well, the days of the Am2900 bit slice, you really don't have a clue what you are talking about. The people you are disagreeing with are mostly correct, and your points read to me at least as mostly the rather facile opinions of someone who has not been in the trenches shipping product for quite a few decades by this stage. Which the people you are berating most certainly have. You seem to know just enough about the subject to get the wrong end of the stick of every argument.

                  But, hey, what do I know. I'm just a guy who had to hand optimize out of order asm execution for a 5 p.u PPC because the compilers available would not do it reliably. About 25 years ago. As for VLIW. Works great for DSP, GPU's and vector processing, crap at everything else. If you have written VLIW asm you would know why. Do you actually know x86 asm? Like good enough to write compiler codegens? Machine code level. Because if you did you would not have made some of the sillier statements written above.

                  Someone who has written so much asm for so many architectures for so many decades that they still think 68K is as good as it gets.

      2. Jou (Mxyzptlk) Bronze badge

        Re: Again seems history repeating itself

        Well, nested virtualization with Windows 10 on AMD currently requires an Insider Dev Channel build, and you need to create the "-Version 9.3" machine with PowerShell. Until June (build 19640) it did not work at all. It will probably enter the mainstream with the 21H1 build.

        See "AMD nested virtualization?" at GitHub

        Quite annoying if you have to use your old and slow Ivy Bridge CPU just for that...
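        A minimal sketch of the workaround described above, as a config fragment. The cmdlet names are the standard Hyper-V ones, but the VM name and sizes here are invented, so treat this as illustrative:

```
# On an Insider Dev Channel build (19640 or later):
New-VM -Name "NestedTest" -Version 9.3 -MemoryStartupBytes 4GB -Generation 2
# Expose the virtualization extensions to the guest:
Set-VMProcessor -VMName "NestedTest" -ExposeVirtualizationExtensions $true
```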

        Other than that I am quite happy with my Ryzen 3900x CPU, though my previous Ryzen 2700x was a bit disappointing since I expected more.

        1. Anonymous Coward
          Anonymous Coward

          Re: Again seems history repeating itself

          MS announced nested virtualisation support for Intel and AMD in 2015. AMD support didn't work.

          After five years as the most requested feature to add (with five times as many requests as the next most popular), working support on AMD hardware has finally resulted in a fix.

    2. Anonymous Coward
      Anonymous Coward

      Re: Again seems history repeating itself

      It would be daft to count out Intel, but they have been having plenty of issues of their own recently, with full roll-out of 10nm chips delayed by years and now delays with their 7nm process. In comparison, AMD has all its ducks in a row at the moment.

    3. commonsense

      Re: Again seems history repeating itself

      You mean in the same way that everything ran just fine on IA-64? Oh wait...

    4. Esme

      Re: Again seems history repeating itself

      I've never had any problems whatsoever with AMD CPU's (which I've been using for a good many years)- everything runs just fine. Heck, the PC I'm using right now has an AMD CPU - no issues whatsoever. Methinks your past software woes were likely down to something else!

    5. Anonymous Coward
      Anonymous Coward

      Re: Again seems history repeating itself

      If you want to make sweeping statements comparing AMD and Intel: AMD has been a company that excelled at processor design, while Intel have often trailed in CPU design but excelled in the production and optimisation of chip fabrication.

      This has then led to Intel dominating the industry financially - while Intel have made a number of missteps over the last five years, they still have the revenue and funds to compete, an area where many of their competitors have failed.

  3. Pascal Monett Silver badge
    Thumb Up

    Go AMD !

    The little company that could. AMD deserves a bit of time in the sun, Intel has made its life hell for long enough.

    But we still need Intel around, lest AMD start slacking off.

  4. Fenton

    Non AMD compatible

    The only piece of Enterprise software I know that does not run on AMD is SAP HANA, but this is purely an artificial limitation imposed by SAP (I have heard AMD themselves are running HANA on Epyc).

    To get this market, and in light of Intel's delays, AMD really need to think about the 4-socket market for very large memory footprints, e.g. larger than 8TB, where they can then also start competing with IBM and Power.

    AMD have always been great innovators: first to dual core, first to SIMD instructions, first to integrated memory controller, and first to x86-64. I see them as a modern-day Digital/Sun: great innovators, not necessarily good at execution/marketing, but Lisa sure is turning that around.

    1. Anonymous Coward
      Anonymous Coward

      Re: Non AMD compatible

      I'm sure I read on this site that some software complains about the high core counts of AMD's latest and greatest. The core limits, entirely coincidentally, are higher than Intel currently provide.

      Even as a massive AMD fan though, I have to acknowledge that AMD is a lot smaller than Intel. $2 billion per quarter revenue when Intel's is more like $20 billion is uncomfortable reading. More frightening is that the price-earnings ratio on AMD is over 20, whereas Intel's is about 3. Speculators have run AMD's share price up to crazy levels.
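      As a quick sketch of what those multiples mean (the prices and earnings-per-share figures below are placeholders chosen only to mirror the ~20 vs ~3 contrast, not the actual 2020 quotes):

```python
# Price-to-earnings ratio: share price divided by annual earnings per share.
# A P/E above 20 means buyers pay over twenty times annual earnings.
def pe_ratio(price: float, eps: float) -> float:
    return price / eps

amd_like = pe_ratio(66.0, 3.0)     # -> 22.0: paying 22x earnings
intel_like = pe_ratio(48.0, 16.0)  # -> 3.0: paying 3x earnings
print(amd_like, intel_like)
```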

      1. Nick Ryan Silver badge

        Re: Non AMD compatible

        Luckily share prices are artificial valuations that don't necessarily reflect an organisation's performance, more the external artificial market of trying to get the best returns from them: short term, long term, or just stock manipulation. Where the share price can cause problems is if it drops too low and predatory purchases of shares can be made, leading to takeovers or damaging external board influence.

      2. Bitsminer

        Re: Non AMD compatible

        AMD is not crazy dumb, they introduced Ryzen and Epyc at very attractive prices. They are now raising prices (retail). Compare 3900X vs 3900XT price: a minor uptick in performance, zero change in manufacturing cost, but a total margin booster in price of 15% (newegg.com). Anandtech had quoted AMD as having no price change in their announcement on 7 July.

        I would challenge your PE numbers: check yahoo.

  5. Anonymous Coward
    Anonymous Coward

    How long before Dell and HP see the light?

    I'm very glad that AMD have pulled themselves out of the pit they found themselves in for the previous decade. It'd be very good now if the likes of HP and Dell began to offer some serious AMD desktops etc., as at the moment Dell is only offering 7 desktop models with Ryzen, and they are in their Alienware range.

    Not good for those looking to purchase for corporate use, which I suspect (or at the very least hope) both companies would like to address. Given their Intel 'rebates' though, I think it would take consumer pressure to make them shift to team red in any reasonable volumes.

  6. IGotOut Silver badge

    They also have..

    The Xbox Series X and PS 5 under their belts to ramp up those figures.

  7. jmch Silver badge

    What do stock analysts do?

    Following this call, AMD stock jumped 10%, and following Intel's call from yesterday's article, theirs fell 10%. Is everything going on in these companies such a huge secret that the stock market was so taken by surprise?

    Because surely anyone competent following either or both stocks could have made a killing, but if MOST analysts were competent, the moves would already have been priced in?

  8. RM Myers Silver badge
    Happy

    This is an extremely interesting situation

    AMD clearly is winning the performance-for-price battle for desktop PC's (Ryzen 3xxx), single-socket workstations (Threadripper), and now even laptop PC's. However, despite being a solid generation and a half behind the AMD CPU's from a fab standpoint, Intel is still extremely competitive, and the new 10xxxx CPU's still seem to be top of the heap for gaming, although by an extremely narrow margin. This implies Intel's designs are probably better than AMD's from a performance standpoint, but are held back by their issues in fabbing 10 and 7 nm. But AMD doesn't do their own fabbing, TSMC does. Thus, Intel actually seems to be losing to TSMC, not AMD.

    Now, does that matter to buyers - hell no. I recently built a new Ryzen 3700X based desktop, and I couldn't have cared less about the part AMD versus TSMC played in making Ryzen more compelling than Core. But it does imply that over the next 5-10 years Intel could still take back the lead from AMD.

    We live in interesting times, for better or worse. Time to get more popcorn!

    1. nautica
      Happy

      Re: This is an extremely interesting situation

      "...We live in interesting times...".

      Indeed.

      There is an old Chinese--exceedingly veiled--curse, which most assume is meant in a complimentary fashion; as a wish to the recipient for happiness and contentment:

      "May you live in interesting times."

      Indeed.

    2. Boothy Silver badge

      Re: This is an extremely interesting situation

      Quote: "Intel is still extremely competitive, and the new 10xxxx CPU's still seem to be top of the heap for gaming, although by an extremely narrow margin. This implies Intel's designs are probably better than AMD's from a performance standpoint, but are held back by their issues in fabbing 10 and 7 nm."

      Not really, the main reason Intel have these small margins over AMD on some benchmarks, is due to clock speed. Gaming for example tends to lean heavily on a single thread [1], so if you're gaming on an AMD that's peaking out below 5GHz, but a similar Intel can hit 5GHz+, then Intel end up with an advantage [2]. That advantage is then lost on productivity workloads, where AMD pull ahead due to the extra cores you can get at the same price point as the Intel chip.

      The upside for Intel on 14++++++++ is that the node is very mature, so very stable, and so can be clocked very high. The big downfall of being stuck on 14++++++++ is that to reach these clock speeds, the chips consume large amounts of power, and so generate lots of heat. Not too much of an issue in a decent desktop, but really bad in a laptop. The new Ryzen mobile chips, for example, have been outperforming the best mobile Intel chips by 45% or more (in productivity tasks), yet are only pulling half the wattage.

      TSMC's focus has been on lower power consumption rather than clock speed; most of their 7nm output goes into mobile devices, so the chips are efficient but don't clock as high. Likely one of the reasons AMD have been pushing core count so much is that they knew they couldn't compete on pure speed alone.

      Also, from an architecture point of view, AMD's chips have better overall IPC (instructions per clock) than Intel's [3], which demonstrates, overall, that AMD has the better architecture. What has let AMD down a bit has been bottlenecks, but this keeps being improved upon with each iteration, with updated fabric and increased cache sizes etc.
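      As a back-of-envelope illustration of the IPC-versus-clock trade-off above (the IPC and clock figures are invented purely for illustration, not measured numbers for either vendor):

```python
# Single-thread throughput scales roughly with IPC x clock frequency.
def throughput(ipc: float, ghz: float) -> float:
    return ipc * ghz  # billions of instructions per second

high_clock = throughput(ipc=1.00, ghz=5.1)  # lower IPC, higher clock
high_ipc   = throughput(ipc=1.08, ghz=4.6)  # higher IPC, lower clock
# A clock advantage can outweigh an IPC advantage on single-thread work:
print(high_clock > high_ipc)  # -> True (5.10 vs ~4.97)
```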

      Zen 3 is rumoured to likely beat current Intel chips on these last few benchmarks, such as gaming, as Zen 3 is meant both to be faster (improved 7nm+ node) and to have improved IPC over the Zen 2 design; combined, these improvements should give a good boost in performance. Looking forward to seeing what the 4950X can do!

      Addendum...

      1. Modern game engines are now multi-threaded (most of them anyway), but those threads are typically for discrete functions, such as one thread for AI, another for sound, etc. So workloads are not evenly distributed, and one of those threads (typically GFX-related, but it depends on game type) tends to be the bottleneck, and so games tend to favour core speed over core count. Although this is gradually changing, and as the new consoles are basically under-clocked Ryzen 3700 parts (although as a SoC rather than chiplets), I expect future game engines to be more and more optimised for core count over clock speed, and potentially more optimised specifically for AMD, especially as it's no longer considered 'the budget option'.

      2. Worth mentioning that gaming benchmarks are also rather artificial. If you're looking at charts comparing one CPU against another (Intel vs AMD, top end vs budget, etc.), the results are unrealistic for most actual end users, because CPU testing is generally done with the best GPU the reviewers have around, typically an RTX 2080 Ti. This is done to remove as much of any GFX-card-related bottleneck as possible, and therefore show only how good the CPU is.

      But most people purchasing/building a system will be on a budget, and so won't be buying an RTX 2080 Ti, and so the GFX card becomes the bottleneck, depending on factors like detail level and resolution. Once the GPU becomes the bottleneck, it doesn't really matter much which CPU you have, as long as it's at least fast enough to keep up and not become the bottleneck itself.

      3. There have been benchmarks run where otherwise-equivalent AMD and Intel systems (same core count, memory, M.2 drive, GFX card, etc.) have been fixed to run all cores at the same speed. So: same number of cores in both systems, at the same clock speed, with the same GFX card. AMD basically ran the Intel into the ground.
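      The discrete-subsystem threading described in point 1 can be sketched as below. This is a hypothetical engine tick with invented subsystem names and workload sizes; the point is that the workloads are uneven, so the heaviest thread bounds the whole frame, which is why single-thread speed still matters:

```python
import threading
import time

# Hypothetical engine tick: each subsystem gets its own thread, but the
# frame only finishes when the slowest subsystem does.
def run_frame(workloads: dict) -> dict:
    elapsed = {}

    def subsystem(name: str, units: int):
        start = time.perf_counter()
        sum(i * i for i in range(units))  # stand-in for real work
        elapsed[name] = time.perf_counter() - start

    threads = [threading.Thread(target=subsystem, args=(n, u))
               for n, u in workloads.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # frame time = time until the last thread finishes
    return elapsed

times = run_frame({"ai": 10_000, "audio": 5_000, "render_prep": 200_000})
print(max(times, key=times.get))  # usually "render_prep", the bottleneck
```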

  9. Michael Hoffmann
    Thumb Up

    AWS...

    ... made a big announcement at last re:Invent of AMD instances with a 10% lower price tag. The smugness at the AMD stand was palpable.

    The big lagging indicator was "certified AMIs" - but I started shifting test and stage workloads over almost immediately without any issues whatsoever. Apart from telling the manglement and beancounters "and it's 10% cheaper".

    Spot instances are *really* sweet: in my region the price tag has been as low as 20-25% of on-demand (Intel generally around 25-33%).

    I'm due for a refresher on my own dev/play box - but the horrible exchange rates, shipping costs and other covid snafu are making that a "will not pass the missus" proposal :( It would have been my first AMD machine...
