AMD comes out swinging, says: We're the Buster Douglas of the tech industry!

Some metaphors do bear close examination. Just ask plucky, perennial underdog AMD, which this week compared itself to Buster Douglas, the short-lived heavyweight boxing champ who shocked the world when he dethroned Mike Tyson in 1990. AMD was celebrating a win of sorts by convincing Lenovo to integrate its Ryzen and Epyc …

  1. Peter2

    If that pending 32-core/64-thread Ryzen/Epyc server CPU does in fact offer anything approaching a 50% performance increase while being cheaper than the Intel part, then it's certainly going to be very attractive.

    Assuming that a well known competitor doesn't do something very anti-competitive, that is.

    1. Anonymous Coward


      I purchased half a dozen servers in 2010, powered by 2x 16-core Bulldozers; they were the best servers I've ever bought. They cost 50% less than Intel, delivered more punch, and were more reliable than Xeons. They were so good that even when we had a refresh in 2015, these servers stayed operational.

      Of course, all of this depends upon your workload, but we certainly got our money's worth from them.

      1. fobobob

        Bulldozer wasn't out until late 2011; are you sure they weren't 12-core Magny Cours CPUs? Those were pretty good, in my experience. The Bulldozers were also okay; the extra cores did help, especially considering the lesser role of FPUs in a typical web hosting environment (BD had 8 full-width FPUs per 16 cores, MC had 12/12). However, there did not seem to be anywhere near the ~1/3 improvement one would expect from the extra cores. The Piledriver servers being deployed where I was working were definitely a step up from the BD chips, but still not overly impressive compared to the MC machines that were several years older.

        One thing that may have interfered with my perception was the eventual discovery that basically all of the BD machines had been fitted with only two DIMMs per socket, wasting HyperTransport bandwidth serving memory accesses to RAM-less NUMA nodes (each chip is two distinct nodes, each with two memory channels, stitched together with HT). The older MC servers were all fitted with four DIMMs per socket, or in some cases eight.
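        The misconfiguration described above can be illustrated with a small sketch (a hypothetical helper, not from this thread): given how many DIMMs each NUMA node ended up with, flag the nodes that have no local memory at all, since every access from those nodes has to cross HyperTransport to a neighbour.

```python
# Hypothetical sketch: flag NUMA nodes left without local DIMMs.
# On a two-socket Magny Cours/Bulldozer box, each package exposes two
# NUMA nodes with two memory channels each; populating only two DIMMs
# per socket can leave one node per package memoryless, so all of its
# memory accesses cross HyperTransport to the sibling node.

def memoryless_nodes(dimms_per_node):
    """Return the NUMA node IDs that have no locally attached DIMMs."""
    return [node for node, dimms in dimms_per_node.items() if dimms == 0]

# Two sockets, two nodes each; only one node per socket got DIMMs.
layout = {0: 2, 1: 0, 2: 2, 3: 0}
print(memoryless_nodes(layout))  # -> [1, 3]
```

        On a real Linux box the per-node memory layout can be inspected with `numactl --hardware`, which would show 0 MB against the starved nodes.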

        Experience in this case is systems monitoring at a major hosting company circa 2012-2013, ~ 50% of which was bludgeoning people's misbehaving SEO spamblogs. Servers in question were grievously overloaded 'shared' servers, often with well over 1000 users.

        Pedantic footnote: I guess Magny Cours was the codename for the specific CPU line; the architecture should be referred to as K10 when comparing against Bulldozer (whose 16-core Opterons were codenamed Interlagos).

        1. Anonymous Coward

          I'm fairly certain this was late 2010, but it could have been early 2011. Definitely no later than that.

  2. This post has been deleted by its author

  3. Wiltshire

    AMD's UK press conferences will be at Hampton Wick.

    Followed by an AMD-sponsored pro-celebrity cricket match at Lord's.

    The bowler's Holding, the batsman's Willey.

    The umpire is Hampton.

    1. This post has been deleted by its author

  4. James 51

    Just release Raven Ridge already. I'm starting to get impatient.

    1. Anonymous Coward

      Hear, hear.

  5. Anonymous Coward

    "Chumbawamba's famous tune "I Get Knocked Down""

    It's called "Tubthumping".

    1. Excellentsword

      True, but the title "Tubthumping" is meaningless in this context, whereas the lyrics are not.

  6. Michael H.F. Wilkinson

    I will certainly be looking at the AMD chippery for a new compute server here. My older 4x 16-core Opteron machine has been great: no issues whatsoever (uptime 869 days, last time I looked), and it still packs quite a punch processing multi-gigapixel images. If the performance per euro (I'm based in the Netherlands, after all) is better than Intel's, I have no reservations about going for AMD.

    1. hughgrection

      So you don't seem at all concerned with security, never doing kernel updates?

      As long as your workload isn't overly reliant on memory or I/O, AMD is probably a good choice. Intel is easier to tune an application for (not as many NUMA nodes).

  7. 2Fat2Bald

    It's kinda interesting...

    I've long used AMD stuff, and it's fine, honestly. I've used AMD since the days of the first Am386DX-40 I bought. Back then people said "Get the Intel; it's faster". And it was, slightly, but it was also more expensive.

    I think for some people the biggest disincentive to AMD is simply "It's not an Intel" and the perception that it's a cheap knock-off of a "proper" CPU. Like not buying an Android because it's a knock-off of an iPhone and "not a proper smartphone" (different scale, but similar logic).

    I do wonder how much Intel would actually like AMD to go away. I suspect they don't want that, as they don't want to be a monopoly (bad things often happen to monopolies, like being open-sourced); what they really want is a dominant market position that isn't *quite* a monopoly.

    1. Griffo

      Re: It's kinda interesting...

      I think Intel would very much like AMD to go away. And here's hoping they don't.

      Remember when Intel said x64 was impossible and everyone needed to migrate to their new iTanic CPU architecture? Thank God for AMD.

  8. BinkyTheMagicPaperclip

    Difficult to tell

    The Coffee Lake processors are out, and although several review sites are trying to spin them as a notable improvement (and to be fair, on average they are a bit better than the previous generation), it is hardly stunning. It's nice that Intel now ships six cores by default, but Coffee Lake doesn't always beat the high-end processors of the prior generation, or comprehensively destroy Ryzen.

    If AMD manage to improve Ryzen a little, the next generation of processors will be very interesting.

    1. hughgrection

      Re: Difficult to tell

      Especially if AMD manages to get more cores on a die and scale up within a compute cluster. Intel still has an advantage here, since AMD doesn't manage more than 8 cores to a die at the moment, so anything more complex (think databases) with a lot of threads and RAM/storage needs will run a lot better on Intel.

  9. Anonymous Coward

    AMD has rebuilt itself properly

    AMD is so much more than Ryzen on so many fronts that it has shocked the PC industry, especially Intel, which is in damage-control mode. That's good news for consumers and enterprises, as AMD is delivering not just good but outstanding products and customer support. Software companies have come to understand that AMD is on the uptick and Intel is waning, with a clear intent of exiting the CPU market in the not-too-distant future. It's gratifying to see AMD recover from the blatant, outrageous and illegal antitrust violations that Intel got away with worldwide. Payback may be Hell for Intel.

    1. Peter2

      Re: AMD has rebuilt itself properly

      Intel has never shown any intent that they would like to exit the CPU market.

      They would certainly like to stop spending huge amounts of money on R&D and just farm a huge profit from chip sales, as they have been able to for the last few years, but that's normal business. It's also why we don't want AMD going down: competition from AMD forces Intel to put more money into R&D.

      If AMD had never existed, then I think chip speeds today would probably be at around the levels we had 15 years ago. It's undeniable that the competition between Intel and AMD has been great for faster processors; we've seen pressured R&D on a scale previously seen only when nation-scale resources were poured into winning a war.

  10. Anonymous Coward

    I'm glad to see them hitting their stride again after the ATI merger, but it's not a level playing field, and the game is rigged. The new product line is a big jump, and I'm sure their marketing people are excited to have a product they can push again, but the game was over when Meyer spiked the lawsuits against his former employer. The twenty-year holiday of US antitrust action has ensured Intel can just play king of the hill every ten years when AMD makes a big push. Back in the mid-2000s they might have had a shot to break out, but it's unfair to dump on them when Intel has been leveraging its absolute control of the PC and server CPU market to ensure that AMD's engineering spend is a tenth of what it could be. The relationship between the two companies is now locked in place, with Intel using some pretty dirty tactics to keep AMD in a permanent headlock.

    So instead of competition and innovation, we've endured bundling, price gouging, and a decade wasted watching Intel try to expand its stranglehold into mobile, embedded, and storage. Infuriating, because I have ended up paying the "Intel tax" on anything other than entry-level equipment, where 20% more performance costs 400% more money. And AMD is stuck, because Intel will pile on price cuts to make sure they can't take enough margin on even their best parts to really stay in the game.

    It's sad for me to see how this whole saga has played out for AMD. I spent my entire childhood savings to buy my first DOS/Windows box, a 386. Intel and AMD were still punching on even terms, and the two companies regularly traded leads in the PC race. Back in the mid-2000s Intel tripped on its dick with Netburst and Itanic, and AMD punched back with some great hardware. That competition literally revolutionized CPU architecture.

    Without competition, what have we seen from Intel? Lazy architecture, tiny improvements to single-threaded performance, and eye-watering prices for parts with decent memory bandwidth or high core counts. Marketing BS like Bronze, Silver, and Gold masking a strategy of artificial market segmentation and complacency. Complacency that has led the entire PC industry into a decade of decline.

    1. Anonymous Coward

      I wouldn't exactly call Core 2 (2006) to the current i7 a "decade of decline".

      1. admiraljkb

        "I wouldn't exactly call Core 2 (2006) to the current i7 a "decade of decline"."

        Remember that Core/Core 2 was simply a souped-up, decade-old P6 (Pentium Pro) architecture, spawned from a black-ops project that saved Intel's bacon after the P4/Netburst debacle. Meanwhile, work was under way on the more modern (AMD K8-like) Nehalem arch, which is the lineage of the current Intel chips. Since Nehalem and its tick follow-up Westmere, the performance upticks haven't been spectacular. Westmere roughly coincides with AMD's architectural equivalent of the Netburst debacle: BULLDOZER...

        So yeah, Intel (so far) hasn't made any huge improvements since the Magny Cours era on the AMD side of things. :) Having competition again is good; it drives higher compute densities in the datacenter, so I don't have to pay for more racks on the floor and bigger cages to house them. :)

  11. Anonymous Coward


    I know both AMD and Intel have bought into whatever MS deal keeps them from shipping proper Win 7 drivers. But whichever company abandons that will start selling a lot of processors. It's hard to want to buy either one if you are stuck with Win 10 or a kludgy Win 7. Faster processors are nice, especially if cheap, but if you have to endure an OS you don't want, why bother? Earlier-gen CPUs work fine.

  12. aberglas

    SGX -- More than speed

    It is tough being in a market where you compete only on price/performance, so Intel is trying to add features. One is SGX, which enables code to run in "enclaves" that cannot be accessed by the OS or anything else. A bit spooky, but novel and riddled with patents.

    If you have an application that relies on SGX, you will not go AMD even if it is faster and cheaper.

    1. admiraljkb

      Re: SGX -- More than speed

      AMD has Secure Memory Encryption and Secure Encrypted Virtualization, which encrypt the RAM and keep processes and VMs completely separate, respectively. It is also the path of least resistance: SEV doesn't require code changes to anything other than the hypervisor (which is happening presently), and implementing SME in software doesn't require any heavy lifting either.

      SGX appears to be the equivalent of AMD's long-deprecated 3DNow! instructions (or probably closer to Intel's Itanium) in terms of being set up for non-adoption and deprecation. I don't see it surviving, as it requires too much work to implement. Currently Intel's initial attempt is borked in silicon and still being worked out; Cascade Creek is now the earliest release arch for SGX to work, sometime in 2018. To date, I'm not aware of any SGX apps in the wild, given no CPU support. In this case, Intel will likely do what they did with AMD's AMD64 and NX instructions and adopt SME/SEV from AMD, deprecating SGX like they did with IA64. They're cross-licensed after all, since both use several patents from each other.
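      For what it's worth, on Linux you can tell whether a part supports AMD's memory encryption by looking at the CPU flags: supported kernels report "sme" and "sev" in /proc/cpuinfo. A quick sketch (the function and sample flags line are illustrative, not from the thread):

```python
# Hypothetical sketch: check a /proc/cpuinfo "flags" line for AMD's
# memory-encryption features. On Linux, supported Epyc parts report
# "sme" (Secure Memory Encryption) and "sev" (Secure Encrypted
# Virtualization) among the CPU feature flags.

def memory_encryption_features(flags_line):
    """Return which of SME/SEV appear in a cpuinfo flags string."""
    flags = set(flags_line.split())
    return {feature: feature in flags for feature in ("sme", "sev")}

# Illustrative flags excerpt from a hypothetical Epyc system:
sample = "fpu vme de pse tsc msr sse sse2 ht syscall nx lm sme sev"
print(memory_encryption_features(sample))  # -> {'sme': True, 'sev': True}
```

      In practice you would feed it the real flags line, e.g. the output of `grep ^flags /proc/cpuinfo`.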

  13. Adrian Tawse


    "it helps to keep the monopoly's commission at bay". Just what commission does the monopoly possess?
