AMD laughs at Intel with Opteron Bulldozers

There is no such thing as the last laugh in the server chip business. But you can get the next laugh, and Advanced Micro Devices thinks it is going to get that sequential chuckle on rival Intel in the x64 server racket with next year's launch of the "Bulldozer" family of Opteron processors. The reason is simple. This year, AMD …

COMMENTS

This topic is closed for new posts.
  1. alain williams Silver badge

    So how much faster is my web server?

    It would be nice to see some sort of real guesstimate of how fast this new chip will run various typical applications, e.g. web server, database server, ...

    Chip vendors wax lyrical about how fast their CPUs are, but forget to say that the CPU will typically be waiting on disk or RAM to give it something to chew on.

    1. Michael C
      Alert

      web server?

      Unless you're talking about virtualizing a few servers (or dozens) on an architecture, do you seriously need more than 2 or 4 cores for a web box? The app server behind it doing the actual processing, maybe, but Apache or IIS is really low yield... We have dozens of cores of app and DB that run pegged most of the day behind a pair of dual-core Xeon web front ends that do very little real work...

      When scaling a web farm needs more than 4 cores per web server, it's time to start talking blade farms (or VM or VIOS) and larger-scale load balancing, not more cores in a single server... This simplifies load management, logging, security auditing, downtime issues, and more at about the same price. Most web apps don't even take proper advantage of larger-scale systems.

      1. Anonymous Coward
        Happy

        All-in-one server instead of web server?

        Good question, Michael! I was reading Alain's comment as if "web server" meant a LAMP-type setup with everything running on the same box.

        In either case it's a pretty simple question/answer IMO... is the server in question experiencing high CPU load (i.e. does it ever peg at 100%) today? If not, then you could double the core count and it really wouldn't matter, as lack of CPU capacity is probably not a major issue for you. In many ways the issue of memory is similar... you might be able to get some incremental improvements with increased speed and/or number of channels, but if you're only using 6 GB of memory on your server, putting in 24 GB isn't really going to do anything, all other things being equal.*

        If it does peg, that still doesn't mean more processor capacity is necessary. I have seen numerous examples of high CPU load being caused by performance bottlenecks elsewhere and/or software problems. If everything else is right (disk I/O, memory capacity, etc.) and you're seeing high CPU utilization alongside poor performance, then by all means... throw more CPU at the problem. Over the years, I've found genuine lack-of-CPU-performance issues to be quite rare, but I'm sure other people have different experiences.

        * Please note the "all other things being equal" caveat: you could of course reconfigure your system to exploit that additional memory in a number of ways to increase performance, but that is a different discussion IMO.
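
        A rough sketch of the "is the CPU actually the bottleneck?" check described above, assuming the psutil package is installed (the sample count and threshold are arbitrary choices for illustration):

          import psutil

          SAMPLES = 30          # number of one-second samples; adjust to taste
          PEG_THRESHOLD = 95.0  # percent utilisation we treat as "pegged"

          pegged = 0
          for _ in range(SAMPLES):
              # per-core utilisation over a one-second window
              per_core = psutil.cpu_percent(interval=1, percpu=True)
              if max(per_core) >= PEG_THRESHOLD:
                  pegged += 1

          mem = psutil.virtual_memory()
          print(f"samples with a pegged core: {pegged}/{SAMPLES}")
          print(f"memory in use: {mem.percent}% of {mem.total / 2**30:.1f} GiB")

          # If no core ever pegs and memory is nowhere near exhausted, more cores
          # or more RAM alone probably won't help; look at disk I/O or the software.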

  2. DRendar
    WTF?

    FLOPS per second.... EH!?

    "The six-core Xeon 5600s do 24 flops per cycle and there is a less-cored version of Sandy Bridge that will do 32 flops per cycle. The current twelve-core Opteron 6100 chip can do 40 flops per cycle."

    Urm... FLOPS = Floating-point Operations Per Second

    How does a chip do "Floating-point Operations Per Second - per cycle"?

    Does it run at 1 Hz?

    I'm fairly sure you meant Floating Point Operations Per Cycle?

    Or perhaps you meant FLOPs per cycle? (FLoating-point OPerations, per cycle)

    Either way you need to capitalise your acronyms.
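
    To make the distinction concrete, a quick illustrative calculation (the 24 and 40 ops-per-cycle figures are from the quoted article; the 2.5 GHz clock is just an assumed example):

      # "flops per cycle" only becomes FLOPS (operations per *second*)
      # once you multiply by a clock rate. The 2.5 GHz clock is illustrative.
      def peak_flops(ops_per_cycle, clock_hz):
          """Theoretical peak floating-point operations per second."""
          return ops_per_cycle * clock_hz

      for name, ops in [("six-core Xeon 5600", 24),
                        ("twelve-core Opteron 6100", 40)]:
          gflops = peak_flops(ops, 2.5e9) / 1e9
          print(f"{name}: {ops} ops/cycle x 2.5 GHz = {gflops:.0f} GFLOPS peak")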

    1. Anonymous Coward
      Boffin

      Janet and John talk about new multicore processor architectures...

      The article cannot be expected to both inform and educate.

      A basic level of education and common sense needs to be supplied by the reader.

      "flops" can only sensibly refer to "floating point operations" within the context given.

      It is up to you, the informed reader - getting to page 2 of a complex technical article - to deduce this for yourself.

  3. Anonymous Coward
    Anonymous Coward

    One clarification

    From the article:

    "As AMD said earlier this year, the Bulldozers, core for core, are expected to offer about 50 per cent more oomph with a 33 per cent increase in core count over the Opteron 4100s and 6100s - and do so in the same power bands of 65, 80, and 105 watts"

    Should this be:

    "As AMD said earlier this year, the Bulldozers, socket for socket, "

    My understanding is that part of the performance increase comes from 33% more cores (i.e. 16 vs 12), with the rest from the architectural changes, but there seems to be a lot of confusion regarding this.

    1. tpm (Written by Reg staff)

      Re: One clarification

      Absolutely correct. My dyslexia is showing through again.

    2. Anonymous Coward
      Boffin

      Bad attempt at maths

      To me an interesting question is how much of an improvement there is core for core. So at 133% of the number of prior cores you get 150% of the performance, which *I think* comes out to a 12.8% increase in performance on a per-core basis... maybe?
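
      For what it's worth, the arithmetic spelled out (the 12.8% comes from the rounded "133%"; the exact 16/12 core ratio gives 12.5%):

        throughput_ratio = 1.50      # 150% of prior throughput (claimed)
        core_ratio_rounded = 1.33    # "133% of the number of prior cores"
        core_ratio_exact = 16 / 12   # 16 cores vs 12 cores

        print(f"rounded: {throughput_ratio / core_ratio_rounded - 1:.1%} per core")  # 12.8%
        print(f"exact:   {throughput_ratio / core_ratio_exact - 1:.1%} per core")    # 12.5%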

      That said, understanding the improvement on a per-socket basis is important... but in a world where more and more enterprise software licensing has been moving to PVU (Processor Value Unit) licensing (i.e. a combination of core counts and performance ratings for particular processor families, versus physical machine or socket licensing), that 33% increase in core count, and the specific performance per core, is important to understand.

      To put it in plainer English: if I get charged 100 PVU per core on processor X and 100 PVU per core on processor Y (typical in some of the licensing I've seen), and the per-core performance of X > Y, then (assuming no significant cost differences) I will go with processor X. A proper architecture design/CBA can be difficult these days when dealing with PVU licensing - it's not always just hardware performance per dollar or per watt... we have to throw in performance per PVU (which in fairness *shouldn't* be an issue as the vendors *should* be consistent in rating cores, but unfortunately that doesn't seem to be the case).
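
      A purely hypothetical illustration of the performance-per-PVU point (the PVU ratings and performance scores below are made up; the logic is just total performance divided by total licensing units):

        processors = {
            # name: (cores, per-core performance score, PVU per core) - hypothetical figures
            "processor X": (12, 110, 100),
            "processor Y": (16, 90, 100),
        }

        for name, (cores, perf, pvu) in processors.items():
            total_perf = cores * perf
            total_pvu = cores * pvu
            print(f"{name}: {total_perf} perf units / {total_pvu} PVU "
                  f"= {total_perf / total_pvu:.2f} perf per PVU")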

      1. JF-AMD

        Actually, your math is wrong

        That "12.8% improvement" has been debunked dozens of times on the web.

        You can't do a "per core improvement" when looking at a fully utilized server. The performance numbers that we are quoting are for total throughput of 16 cores vs. 12 cores.

        To give you an idea of how that scales, look at two examples:

        AMD Opteron 4-core to 6-core - 50% more cores, 33% more throughput

        Intel Xeon 4-core to 6-core - 50% more cores, 33% more throughput

        Compare to:

        AMD Opteron 12-core to 16-core - 33% more cores, 50% more throughput

        Obviously there is some architectural improvement that is boosting overall throughput. As for single-threaded performance (the 12.8% often quoted): if you look at only 1 thread running on a 12-core and only 1 thread running on a 16-core, you will see significantly more than a 12.8% performance increase.

        When people try to make this comparison it is like trying to determine how long it will take you to get to the office at 3AM if it takes 30 minutes during rush hour. I can guarantee you that when traffic is lower at 3AM you can get there a lot quicker.
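
        Expressed as arithmetic, using only the figures quoted above:

          examples = [
              # (label, old cores, new cores, throughput gain)
              ("Opteron 4 -> 6 cores",   4,  6, 1.33),
              ("Xeon 4 -> 6 cores",      4,  6, 1.33),
              ("Opteron 12 -> 16 cores", 12, 16, 1.50),
          ]

          for label, old, new, gain in examples:
              core_gain = new / old
              per_core = gain / core_gain
              print(f"{label}: {core_gain - 1:.0%} more cores, "
                    f"{gain - 1:.0%} more throughput, "
                    f"per-core throughput x{per_core:.2f}")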

  4. David 132 Silver badge
    Stop

    Turbo on Intel chips

    From the article:

    "(To use Turbo Core mode on Intel chips, you have to shut down all the cores but one on the chip, and you only get a nominal increase in clock speed during the time the cores are sleeping)."

    I think this is incorrect; you may want to double-check your facts (or perhaps take press releases from AMD with a pinch more salt?). My understanding is that on the Intel processors the speed of the core(s) will be automatically increased, the only constraint being that power consumption/thermal dissipation must remain below the maximum allowed for that processor. It doesn't matter how many cores are in use at the time, and no cores are shut down while others get the speed boost. There's a summary page for the Xeon X7560 processor here (http://ark.intel.com/Product.aspx?id=46499&processor=X7560&spec-codes=SLBRD) that shows a 400 MHz / 18% clock speed boost - I wouldn't call that "nominal", and in my experience it leads to a commensurate improvement in performance for workloads running on the faster cores.
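
    A minimal, Linux-only sketch of how to watch this behaviour in practice (it assumes the sysfs cpufreq interface is present; paths may differ on other setups): read each core's current frequency under load and compare it against the rated maximum.

      from pathlib import Path

      def read_khz(path):
          # cpufreq sysfs files report frequencies in kHz
          return int(Path(path).read_text().strip())

      base = Path("/sys/devices/system/cpu")
      for cpu in sorted(base.glob("cpu[0-9]*")):
          freq_dir = cpu / "cpufreq"
          if not freq_dir.exists():
              continue
          cur = read_khz(freq_dir / "scaling_cur_freq") / 1e6   # GHz
          top = read_khz(freq_dir / "cpuinfo_max_freq") / 1e6   # GHz
          print(f"{cpu.name}: {cur:.2f} GHz (max {top:.2f} GHz)")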
