AMD's 'Seattle' 64-bit ARM server chips now sampling, set to launch in late 2014

AMD's 64-bit ARM-based server chips, code-named "Seattle", will come to market late this year – but don't expect to see them wrapped up in any SeaMicro Fabric Compute Systems at launch. "We've reached a significant milestone in our ambidextrous strategy," said AMD CEO Rory Read, using the company's term for its two-fisted x86 …


This topic is closed for new posts.
  1. Anonymous Coward

    "The industry's first at 28nm"

    They probably will be the first to 28nm, but by the time they launch Apple will be shipping A8s made in TSMC's 20nm process.

    1. Charlie Clark

      Re: "The industry's first at 28nm"

      Looks like a typo to me; both ARM and x86 have been <= 28nm for a while now.

      However, more impressive than the geometry is AMD's ability to integrate x86, ARM and GPU cores. If this works well then they will have very desirable products.

      1. Anonymous Coward

        Re: "The industry's first at 28nm"

        Name one ARM CPU made in a smaller process than 28nm. That process has only been available from foundries in the last few months. At any rate, Apple still has the only 64-bit ARM shipping.

        Intel is the only company shipping x86 smaller than 28nm (22nm, though perhaps a few 14nm are slipping out now) but that's irrelevant to a discussion of ARM.

  2. LeoP


    Apple

    So Foxconn is now branching out into chip making?

    1. Anonymous Coward

      Re: Apple

      Eh? Apple currently gets their designs fabbed by Samsung, although all the rumors point to them second-sourcing parts from TSMC soon.

      1. Anonymous Coward

        Re: Apple

        The rumors seem to point to TSMC being the sole source for the A8 (with Samsung still making the A7 for the "last year" models). But maybe TSMC hasn't proven all that nice to work with, so they want to go with GloFo and preserve Samsung as a second source, since they've never worked with GloFo before and GloFo may have trouble handling Apple's volumes.

  3. Anonymous Coward

    Not exactly

    Samsung and GloFo will be making 14nm/20nm FinFET Apple chips...

    AMD's starting to tap into some areas of good potential. They need to execute, however, and that has always been an issue over the past decade.

    1. Hans 1

      Re: Not exactly

      >AMD's starting to tap into some areas of good potential. They need to execute however and that has always been an issue in the past decade.

      Now, now, now, not so fast: Opteron hurt Intel pretty hard back in the day with SledgeHammer, forcing them to shift their vision in quite the opposite direction. It's only when Intel caught up with the 'Core' CPUs that they took the lead back.

      Now, I guess this will be wait and see ... I am pretty excited about these 64-bit ARM CPUs - dreaming of a desktop-grade Raspberry Pi.

      1. Bronek Kozicki

        Re: Not exactly

        I'm not so excited about ARM on the desktop, but in the server farm (where more computing power per watt yields significant savings, as does more functionality per chip) this has great potential. Also, competition is a good thing, as x64 demonstrated.

        1. P0l0nium

          Re: Not exactly

          No ARM server part demonstrated thus far can touch Intel's Avoton for "computing power per watt".

          Avoton C2730 = 5.75 Specint2006 per core per watt

          Opteron A1100 = 3.2 Specint2006 per core per watt

          This thing (in fact, NO ARM server part) is not going to sell on performance per watt.

          Which leaves "cheapness" as its main selling point.

          Good luck with that! :-)

          1. Paul A. Clayton

            Re: Not exactly


            You do realize that the Opteron A1100 has significantly higher per-core performance than Avoton, and that power use scales super-linearly with core performance?

            One might argue that its higher per-thread performance is unnecessary for the workloads where it will be used, but that is not obvious.

            (It might also be noted that the A1100 is using a Cortex-A57 primarily to more quickly enter the market. Since Cortex-A57 is a more generic high performance 64-bit ARM design, later designs by AMD may use custom cores more suited to the targeted workload.)

            Price is a significant consideration. While higher revenue per mm-squared allows Intel to invest more in processor design and (especially) manufacturing technology, the performance and power-efficiency return-on-investment is sub-linear for processor design. This means that a company can be profitable with lower prices even with lower volume in an area with high fixed costs.

            Alternative market targets can also provide a significant advantage. A one-size-fits-most design will be less optimal than a design more focused on a specific workload. If one is only seeking 10% of a somewhat large and diverse market (claims of ARM having 25% of the server market by 2019), there is usually opportunity for specialization. This may be particularly significant in components outside of the core.

            Intel's volume advantage (which allows more aggressive binning and reduces the impact of fixed costs which are significant for processors) and process technology advantage make competition difficult. However, it is not a foregone conclusion that Intel x86 is unassailable in the server market.

            1. P0l0nium

              Re: Not exactly

              A1100 is an 8 core part delivering 80 Specint2006 consuming 25W.

              Avoton C2750 is an 8 core part delivering 97 Specint2006 consuming 20W.

              Tell us more about this "significantly higher per core performance" of which you speak :-)
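              A quick sanity check of the figures quoted in this thread (a minimal Python sketch; the Specint2006 totals and TDPs are simply the numbers cited above, not independent measurements):

```python
# Per-core and per-watt Specint2006 from the figures quoted in this
# thread (8-core parts; totals and TDPs as cited, not measured).
chips = {
    "Opteron A1100": {"cores": 8, "specint": 80, "watts": 25},
    "Avoton C2750":  {"cores": 8, "specint": 97, "watts": 20},
}

for name, c in chips.items():
    per_core = c["specint"] / c["cores"]
    per_watt = c["specint"] / c["watts"]
    print(f"{name}: {per_core:.1f} Specint/core, {per_watt:.2f} Specint/W")
```

              At these numbers the Avoton comes out ahead both per core (about 12.1 vs 10.0) and per watt (about 4.85 vs 3.2).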

              The only reason to buy an ARM based server is "novelty value" ... Su said as much with:

              "There's general interest in ARM, there's interest in trying out the new workloads" without making any performance claims.

              And X-Gene are reduced to touting their analog credentials in a bid to make their "fabric" sound more attractive to investors, so that they don't go bust like Calxeda.

              There are comedians that would pay good money for this material!

              1. P. Lee

                Re: Not exactly

                Sometimes compute performance is not the overriding issue. That's one reason why ARM has done so well in the mobile space - it's very efficient at doing very small amounts of work, or nothing at all.

                While virtualisation is still all the rage, it provides in-house techies with something similar to Heartbleed - RAM you thought was private is not.

                Compliance can be an issue with PCI-DSS-type scenarios - you have to run different functions on different hosts - you aren't allowed to virtualise, so lots of smaller hosts are required.

                Another issue with virtualisation is that most servers are now overpowered for any given application. In that case, why not drop back to a less powerful and cheaper CPU? Now we are trying to consolidate to justify the cost of systems which we only bought to aid consolidation. Spreading out the workload also drops the cost of networking, if you can get back down to 1G network ports rather than 10G/40G switching. If you need 10G, all well and good, but that's not usually a single-app requirement.

                Then of course you have the issue that a virtual CPU which is idling doesn't translate to the real CPU idling. Virtualising workloads works best when you are actually doing useful work.

                Some applications can't just be given more CPU. I saw a Dell paper on running 4 VMs with Lync instances on a single host, because Lync doesn't scale linearly. I believe Asterisk is similar. In that case, why bother adding virtualisation costs - why not run on individual hosts?

                Then there are tasks which are memory- or network-bound rather than CPU-bound, or perhaps there's an instance where latency is more important than CPU throughput.

                It may not be just the customers who want ARM. Companies like HP would far rather take a larger slice of the profit than give it to Intel. Stack-em-high plays like Moonshot bring lots of price-advantage to one box. You probably won't run a database on it, but DC-based (or branch-office equipment room-based) desktops are a different matter.

                It is also early days for 64-bit ARM. We've seen a lot of Intel laptops go from quad-core to dual-core in the last year or so. CPUs are now a bit excessive, and SSD and RAM have proved more useful. Perhaps compute per watt is no longer a useful measure in this scenario.

              2. Charlie Clark

                re. Specint2006

                ARM has done well despite miserable Specint performance, partly because the chips are small, cheap and have tiny power requirements (not least because of their poorer performance), and partly because Specint performance doesn't reflect typical workloads.

                The problem with the x86 architecture is that it excels in certain general-purpose computing areas, for which Specint provides a good proxy, but is much less good in others (encryption, parallel processing, etc.). This is why, while x86 is better at parsing and manipulating the DOM of a website, there is a move to displaying it using dedicated (non-x86) hardware. ARM can come with hardware acceleration for encryption, etc., and now AMD is offering GPUs for parallel processing. With the right compilers and schedulers this may make some workloads orders of magnitude more efficient on such chips. If it doesn't, it may still succeed by making the market competitive again.

          2. Random Coolzip

            Re: Not exactly

            Which leaves "cheapness" as its main selling point.

            Good luck with that! :-)

            Seemed to work well for Intel! The reason we got stuck with x86 originally was that they were selling for tens of dollars versus hundreds of dollars for M68K / NS32K / etc.

  4. phil dude

    network latency...?

    So what is the zero-byte latency going to be? This, surely, is going to define which activities the architecture will work best for. The larger the zero-byte latency, the higher the computational density needs to be, surely?

    OK, I've pitched my tent on the patch of grass that is molecular simulation... But it is all getting very exciting that COTS may make it possible to build a ~Anton...


  5. Christian Berger

    We need a common platform first

    We need something where it doesn't matter whether you've got an ARM system from Vendor A or B. You just slap on the ARM version of your favourite operating system, just like you do on x86.

    Of course there is a big niche for ARM in hosting. Most people and organisations don't need a full x86 server. However, they do not want a virtual server because of all the security problems involved.

    ARM servers might fill the gap, giving you a 10 Euro/month dedicated server.

  6. Glen Turner 666

    Common platforms, and where you might use this chip

    Christian, ARM Ltd are aware of the need for a common platform for operating systems software, and have proposed "Server Base System Architecture" (SBSA) as a standard systems architecture in this space.

    Where an AArch64 server chip fits is an interesting question, especially in the sort of quantities needed to make money on a chip. I rather see this as AMD putting their toe in the water, and I imagine that's how their initial customers will also be approaching the chip.

    Where AMD with ARM could competitively take on Intel is in offering a system-on-chip for servers: that would have to be funded by a Google or Amazon, but they might see sufficient mainboard simplification to make that worthwhile.

    Also, there's a considerable market for 64-bit ARM in network middleboxes and appliances. These are constrained by heat, so Intel has always been problematic. But ARM hasn't been an option due to the lack of 64-bit parts (i.e., your middlebox can't have more than 2GB of RAM, which is pushing it if you need a full routing table with multiple routing instances).
