How much?
$2999? Did they get the decimal point in the wrong place?
AMD today sheds more light on its "Seattle" 64-bit ARM architecture processor at the Hot Chips conference in Cupertino, California. Take one glance at this new Opteron A1100-series system-on-chip, and you'll realize it's aimed squarely at servers rather than the traditional ARM scene of handheld gadgets and embedded computing …
>to be a lot cheaper than Xeon in quantity
Yep, Intel has much to fear here, as it's basically them against everyone else in the IC-making industry. Intel has never done well with low-margin, high-volume products, and unfortunately being a generation ahead in fab technology, which has always been Intel's saving grace, is no longer as valuable as it once was. Moore's Law is no longer opening up whole new apps and ecosystems. It's just making your phone battery last a little longer.
What good is a Xeon CPU when it sits idle most of the day? What good is an Atom when you can't feed it enough RAM to even make hay?
ARM promises to be what we seem to want: gobs and gobs and gobs of RAM with fair-to-middling (but not stellar) compute.
Wake me when Intel is shifting 20 W Atoms that can handle 1 TB of RAM, or when they "uncap" the desktop/1P CPUs so that I don't need 400 W of idle silicon in order to spin up enough VMs to make testing useful.
In the meantime and between time, these ARM beauties look to fill a very important niche that Intel has chosen not to fill. If you don't cannibalize your own products, apparently ARM will...
UEFI is an excellent set of specifications. Too bad so many manufacturers are just garbage and don't care about quality, so their UEFI implementations suck. They don't pay their programmers well and don't hire the best programmers to do the job properly, so the end product suffers, with plenty of motherboards plagued by the worst UEFI BIOS bugs ever, all due to bad programming.
UEFI is fine, in itself. The problem is implementation. Many manufacturers do a quick job of it: so long as it'll boot Windows, they consider the job done. This results in all manner of nasty hacks and bugs to work around.
With the old BIOS system, there were lots of really ugly bodges involved in adapting an 8086-era boot process to modern hardware, but they were familiar bodges: everyone knew how to handle them, and every system handled them in the same way.
I disagree; from all I've heard, the specification itself is already far too complex to ever be implemented correctly. I mean, the reference implementations are already larger than the Linux kernel... and those implementations don't include any drivers.
It just seems a heck of a lot of overhead for booting and hardware support. OpenFirmware did the same job much more cleanly, with much less code.
Maybe we should stop comparing EFI with the IBM-BIOS and instead compare it to something that actually was "state of the art" at one point.
A fair comparison would have been with the Atom C2758: 8 cores @ 2.4 GHz, 20 W, and competing in the same space (low-power server chips).
And if space is a concern, SuperMicro managed to cram 112 of the bastards into a 6U box: (http://www.supermicro.com/products/MicroBlade/)
I agree the Atom is a fairer comparison and I'm sure we'll start seeing how the two match up as people start running them head-to-head. The SCP and hardware acceleration stuff will certainly come in useful, especially as you can basically have whatever acceleration you want – Netflix for example might want some video stuff in hardware. 112 is pretty good but I think we might be looking at twice that density for final systems.
I've heard that companies are sticking with 28nm and waiting for 14nm rather than going with 20nm because 20nm has more problems than it solves.
Honestly, I will admit that what Intel has done with Atom is very impressive. Only somebody with Intel's money and expertise could get an x86-compatible (nearly native) chip anywhere near ARM's ballpark. It would actually be hard to design an instruction set more ill-suited to very low power consumption with decent performance than x86. That instruction set has traditionally been profitable, but it is still a curse around Intel's neck. It's why they have tried, unsuccessfully, several times to move away from it.
Well, not just Intel. The entire computing industry has been struggling under x86 since its inception: a horrible compromise of making a simple yet powerful machine that a home user can work with. I predict that we'll be suffering under it for many years to come because of the extreme momentum it has.
ARM is doing a good job of replacing x86, but it'll be a long, brutal battle.
Intel has stripped the Atom down and ARM has beefed up its chips. On the same manufacturing process they're now very close: x86 is still better for single-threaded work, but ARM is cheaper and makes it easier to throw in hardware optimisations.
Server boxes will have to be at a significant price discount and offer density / power benefits to be attractive. Intel has lots of cash and fat margins with which to respond but there is a difference between squeezing AMD out of the picture and taking on all the ARM licensees.
"Honestly I will admit what Intel has done with Atom is very impressive."
Certainly true. Staying competitive while dragging around the load of x86 has required amazing ability.
Imagine what we'd see if Intel got off their x86 hobby horse and used that same ability to build some ARM parts. That would be something to see.
Unfortunately Intel are structured as a high margin company. They would probably not cope if they were only making a dollar or two per chip.
Imagine what we'd see if Intel got off their x86 hobby horse and used that same ability to build some ARM parts.
That would require a massive volte face and would contain an implicit admission that the decision to flog off XScale was indeed the massive cockup it appears to be to everyone else.
Presumably the shareholders would then ask some difficult questions about the cost of reinventing that particular wheel and expect anyone still around who was involved in that decision to fall on their swords.
"That would require a massive volte face and would contain an implicit admission that the decision to flog off XScale was indeed the massive cockup it appears to be to everyone else....." Why? XScale was not making enough money to justify the investment, because ARM simply had not made enough of a market breakthrough by then, and x86 offered the option of producing low-power chips if required (as shown by Atom). Seeing as ARM licenses are cheap, a return to making a competitive Intel ARM design would not be a horrifically expensive exercise, and, by selling, Intel did not have to bear the losses of shouldering XScale for the period up to now, when ARM is finally making a breakthrough.
".....Presumably the shareholders would then ask some difficult questions about the cost of reinventing that particular wheel....." As mentioned above, ARM licenses are pretty cheap, so no great expense is required. ARM is a much simpler design than x86, so it would be a relatively trivial exercise, especially with so many competitors' designs to reference.
".....and expect anyone still around who was involved in that decision to fall on their swords." Shareholders expect profits; they have no allegiance to any technology. If Intel returns those profits (as they currently do, in spades) with x86 alone or with a mix of x86 and ARM, Wall Street couldn't care less.
".....112 is pretty good but I think we might be looking at twice that density for final systems....." Unless you are running software with a per-core charge (such as Oracle). At which point you will ditch the ARMs for traditional, hefty cores in CPUs like Xeon, as the savings in license costs will be much more than the ARM savings on datacenter cooling and power.
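To put toy numbers on the licensing point: a back-of-envelope sketch (all figures below are made-up assumptions for illustration, not real Oracle or electricity pricing) shows how a per-core licence fee can dwarf any power savings from many small cores:

```python
# Back-of-envelope: per-core licence cost vs power cost per year.
# ALL figures are illustrative assumptions, not real vendor pricing.

def annual_cost(cores, licence_per_core, watts, kwh_price=0.10, hours=8760):
    """Rough yearly cost: per-core licence fees plus electricity."""
    licence = cores * licence_per_core            # licence scales with cores
    power = watts / 1000 * hours * kwh_price      # kWh for a year of uptime
    return licence + power

# Hypothetical comparison: 8 beefy x86 cores at 95 W vs 32 small ARM
# cores at 40 W, both running software licensed at $1,000/core/year.
big_cores = annual_cost(cores=8, licence_per_core=1000, watts=95)
many_cores = annual_cost(cores=32, licence_per_core=1000, watts=40)
print(big_cores, many_cores)
```

With these made-up figures the 32 small cores cost roughly four times as much per year as the 8 big ones, almost entirely in licence fees; the power term barely registers, which is the commenter's point.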
Unless you are running software with a per-core charge (such as Oracle)…
Huh? Where did that come from?
Data centres are about cramming as many cores as possible into as little space as possible, using as little power as possible. Apart from the fact that Oracle for ARM doesn't exist yet, it's bound to come with new licence models for any new architecture, just as there are for the new Xeons that can be configured to run with different numbers of cores.
".....Data centres are about cramming as many cores into as little space and using as little power as possible....." Webhosting datacenters maybe, but enterprise datacenters are about running business apps which tend to be a far sight more demanding.
Webhosting datacenters maybe, but enterprise datacenters are about running business apps which tend to be a far sight more demanding.
In which case ARM might not be suitable for them at all… At least that's what I read from the article.
Oracle, of course, has the choice between x86 and its own Sparcs. ARM + custom hardware might become more interesting. It's already working with Intel on getting some hardware acceleration that would benefit its software.
And the Sun server team, before Larry started p*ssing around, had AMD products that were easy to use, work with, and reasonably cost-effective (bundling capabilities Dell or HP charge extra for, like being able to actually manage the box). Cue Larry: scrap the cost-effective stuff, get out the check book.
What would be interesting would be to take some ARM blades, throw them into a chassis with some fast connectivity, and see just what the old server team could come up with combined with the original ZFS team: low power, screaming fast NAS?
Intel tried non-x86 instruction sets before. Have you ever heard of the i860 or i960? (Both died last millennium.) The Itanium was an unusual type of success: its announcement caused delays and reduced funding for existing competitive 64-bit RISC architectures. Intel won that battle before the Itanium was even delayed - let alone released as an over-priced, low-performance power hog. Although specialist applications were created for these CPUs, they never got economies of scale, because the vast majority of customers had bought x86 binaries with no source code and did not want to buy them again - even if they could.
When AMD created AMD64, Intel copied it promptly but shipped the implementation disabled. I think they did not want to encourage people to code for the architecture, but wanted to be ready in case it was successful. The first AMD64 CPU was released in April 2003. Linux support was ready in 2001, while x64 Windows wasn't sold until March 2005. Sometimes new hardware even had 64-bit Windows drivers. Occasionally developers would release 64-bit Windows software, but there was no sense of urgency.
Even if the RISC core inside Intel x86 CPUs were binary compatible from one generation to the next, Windows developers release software for new architectures slower than continental drift. In the free software world, Debian officially supports 11 different CPU architectures and has unofficial support for 9 more. (You can have confidence in AMD64, ARMEL and ARMHF. Expect anything from speed bumps to road blocks if you try to do anything useful with the other 17.)
New architectures only get Windows support five years after free software on them is so successful that Microsoft decides it needs to compete. Free software targets architectures with a good price/performance ratio, especially at the low end of the market, where hobbyists can pick up some cheap hardware and re-purpose it.
Intel are the world leaders in exorbitantly priced CPUs. From Intel's point of view, every cheap CPU sold means an expensive CPU isn't. They are the wrong people to introduce a new architecture, and decades of experience have hammered that lesson into their skulls.
Building it using ARM-designed A57 cores means it will not have very impressive performance. ARM designs competent cores, but they are not targeted at servers, since that's not ARM's market; nor does ARM have the level of engineers that AMD does.
Using A57 let them get this out the door quickly, but this will be more of a developer preview and the real thing will follow in a year or so when they have an internally designed core ready. I'd expect a very significant performance boost over a compromise design like the A57.
It won't beat Xeon, but neither do AMD's x86 cores beat Xeon. AMD should, however, be able to handily beat the performance of Opteron at a similar power budget - because it'll be designed for that lower power budget, unlike Opteron, and there is no legacy cruft to drag along. (I'm assuming AMD's own design core will support 64-bit ARMv8 only: no 32-bit code or Thumb.)
The Cortex-A57 is actually pretty quick for a 3-way OoO design - about 30-50% faster than the Cortex-A15 (a similar gain versus Jaguar, and significantly faster than Silvermont). So no, it won't beat Xeon on single-threaded performance, but that was never its goal. You could replace a low-power quad Xeon with the 8-core Seattle and get similar performance at lower power and cost. As Calxeda showed, you should be able to cram many sockets onto a server board and get significant gains in server density.
I agree the custom ARMs coming soon from AMD, QC, Broadcom, Cavium, NVidia, Samsung and AMCC are very exciting. Broadcom, for example, is claiming 90% of the single-threaded performance of Haswell and something like 150% of Xeon performance per socket. If they pull it off, that changes the landscape a bit - Intel beware, ARM is coming!
Intel drove practically everybody out of the server business, and former great names like HP, IBM and Sun now have hardly any visible market relevance with their CPUs, so this is good news. At least for the next few years Intel knows competition is looming around the corner, and they cannot drive prices up, especially after IBM sold off its fabs.
In the worst case its fate would be comparable to Linux on the desktop: less than 5%. Yet even that meagre number prevented Microsoft from jacking up licence fees without limit. So the world should be happy with this; for now, the rise of a new monopoly seems to have been prevented.
It is easy to see that Intel's market policies are sick: try, for instance, upgrading a 2010 laptop CPU to a better-spec version. Consumer pricing for Intel CPUs is such that buying a whole new machine is cheaper than buying just the faster CPU.
Disturbing to see that none of the professionals here sees fit to comment on what this commentard sees as the most innovative feature: on-chip hardware-accelerated cryptographic functions, which should empower widespread adoption of IPsec, DNSSEC and HTTPS. Wheeee... a hardware-accelerated encrypted TCP/IP stack, anybody??? But then again, IT engineers really don't give a ratz azz that government and corpratz criminals demand access to all of your bases and DPI all your metadata now, do they?
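For what it's worth, the ARMv8 crypto extensions behind this sort of acceleration are exposed as CPU feature flags on Linux. A minimal sketch (assuming the kernel's usual /proc/cpuinfo "Features" line and its flag names aes, pmull, sha1, sha2) of checking whether a chip advertises them:

```python
# Minimal sketch: detect ARMv8 crypto extensions (aes, pmull, sha1, sha2)
# from the "Features" line of a Linux /proc/cpuinfo. Flag names follow the
# kernel's arm64 hwcap reporting; treat this as illustrative, not exhaustive.

def has_crypto_extensions(cpuinfo_text, wanted=("aes", "pmull", "sha1", "sha2")):
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "features":
            flags = set(value.split())            # e.g. {"fp", "asimd", "aes", ...}
            return all(f in flags for f in wanted)
    return False  # no Features line found

# On a real box you would feed it open("/proc/cpuinfo").read();
# here is a canned example line for illustration.
sample = "processor\t: 0\nFeatures\t: fp asimd evtstrm aes pmull sha1 sha2 crc32\n"
print(has_crypto_extensions(sample))  # True for this sample line
```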