Also...
The ARM instruction set is rather less pathological than the x86 instruction set, which helps somewhat with maintaining both efficiency and sanity levels.
Nvidia’s Analyst Day at its Santa Clara, CA headquarters on March 8 gave me new insight into the company and how it sees the markets. Jen-Hsun Huang, co-founder, president and CEO, kicked off the conference by baldly stating: “Creativity matters. Productivity matters.” This shouldn’t provoke …
Yes, x86 is pretty bad. Especially the archaic boot-up methods from 8086 real mode, the frozen-in-stone DOS INT 13, 20, etc. support, the SMM functions invisible to the OS that make reliable real-time impossible, and all the rest.
And yet, almost everyone, including The Register, makes fun of Intel when they try to change their instruction set to something fresh and better like Itanium.
> And yet, almost everyone including
> The Register make fun of Intel when
> they try to change their instruction set
> to something fresh and better like Itanium.
You are confusing "new" or "different" or "not x86" with "better".
They are by no means the same thing. If anything proves that, it's Itanium.
Well OK, it depends on your point of view. GPU+ARM = something quite exciting, provided that all your heavy-duty sums are GPU-able. If you have a set of sums that are a mixture of GPU-friendly and GPU-unfriendly operations, then this gets you nowhere; you'd still need some separate, meaty CPU.
ARM's not bad, but it is first and foremost a design that is just about quick enough to support small-ish compute jobs (OSX, Android). Any kind of compute performance in ARM tends to be hardware acceleration of standard things (codec work?).
With my high-speed sums hat on, I would like to have seen a PowerPC core in there instead of an ARM. It's a bigger CPU so it can do more, it isn't an x86, and it can support more workloads in its own right (for example, AltiVec is still pretty good no matter what Intel might have you think).
But alas there's not enough of a market to support that. Everything is mobile these days, so ARM it is.
Bazza wrote: "ARM's not bad, but it is first and foremost a design that is just about quick enough to support small-ish compute jobs (OSX, Android). Any kind of compute performance in ARM tends to be hardware acceleration of standard things (codec work?)."
There is nothing in the ARM instruction set (apart from not having 64-bit registers) that prevents a high-performance implementation. In fact, with equal implementation effort a 32-bit ARM would outperform a 32-bit x86: it has more registers, it has fewer cases where you need to use a specific register for, say, multiplication, it passes return addresses in a register instead of memory, etc.
ARM started out outperforming x86 by a good margin; it was only when the focus at ARM shifted to embedded and mobile devices that performance took second place. At the same time, Intel had an arms race (no pun intended) with AMD and others to produce ever-faster x86 CPUs with no effort to reduce power consumption. Hence, x86 evolved into power-hungry speed demons, while ARM evolved into lean processors with better performance than most other embedded processors, but unable to compete against server-class processors in performance.
Now Intel focuses more on power and ARM more on performance, so the gap is narrowing. And with multicore, the leanness of ARM is a clear advantage: you can put more cores on the same chip.
But I agree that ARM needs to move to 64 bits, and soon. And I'm pretty sure they are working on it, but anyone who knows any details is under a serious NDA, so nothing will be announced until the design is completed.
It could be anything from using register pairs for 64-bit values and adding instructions that operate on these pairs to an extra processor mode with a whole new instruction set (like Thumb was). But it will come, and I expect announcements this year (though real products will lag by a year or more).
“It could be anything from using register pairs for 64-bit values”
That would certainly be the cheap and cheerful method, though you do in effect lose registers. It is also backwards compatible. However, it obviously precludes a 64-bit PC, LR, etc., and to support code (as opposed to data) addressed from 64-bit locations, the registers themselves would have to get wider. A true 64-bit mode need not look much different from an instruction-set POV.
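The register-pair idea being discussed can be sketched roughly in Python: hold each 64-bit value as a (lo, hi) pair of 32-bit words and propagate the carry by hand, much as ARM's ADDS/ADC instruction sequence does. This is purely illustrative — the function name and representation are made up, not any real ISA extension.

```python
# Sketch of 64-bit addition on a 32-bit machine using register pairs.
# A 64-bit value lives in two 32-bit "registers" (lo, hi); the add is
# a 32-bit add that captures the carry (like ARM ADDS), followed by an
# add-with-carry on the high words (like ARM ADC).

MASK32 = 0xFFFFFFFF

def add64(a_lo, a_hi, b_lo, b_hi):
    """Add two 64-bit values held as (lo, hi) 32-bit register pairs."""
    lo_sum = a_lo + b_lo
    lo = lo_sum & MASK32                  # low 32 bits of the sum
    carry = lo_sum >> 32                  # carry out of the low word
    hi = (a_hi + b_hi + carry) & MASK32   # high words plus carry
    return lo, hi

# Example: 0x00000001_FFFFFFFF + 1 = 0x00000002_00000000
lo, hi = add64(0xFFFFFFFF, 0x00000001, 0x00000001, 0x00000000)
assert (hi << 32) | lo == 0x2_00000000
```

The cost the comment mentions is visible here: every 64-bit quantity ties up two of the sixteen 32-bit registers, and every 64-bit operation takes at least two instructions.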
ARM was born from Acorn jumping from 8 bits straight to 32 bits, ignoring the 16-bit generation entirely. I wonder if 64 bits is enough in the long term. If you're going to allow registers to be wider, why not make them 128 bits and be done with it?
is meaningless. I mean, really: they are comparing largely desktop/laptop x86 parts to ARM parts that are in just about every mobile now made.
Put it this way: every year I could get a new phone with ARM inside; pretty much everyone has a mobile now, and desktops may be shared between two or more people.
Anyhow, I'm not complaining as such, but I thought I'd point out that ARM unit sales would naturally be higher; I'm fairly sure that if we all binned our desktops and laptops and got a new one each year, x86 parts would have a higher sales volume. Also, the estimated sales data going forward on the graph is laughable, but who knows, maybe it will skyrocket! I assume that was a slide from Nvidia?
A CPU doesn't need to be "64-bit" (whatever that actually means) to address more than 4 GB, but the address bus does need to be wider than 32 bits. It might be easier for the programmer if the CPU's general-purpose registers are also 64-bit so you don't have to use segments and offsets, but it's not a requirement.
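A minimal sketch of that point: a 32-bit offset register plus a small "bank" (or segment) register can reach a physical address space far larger than 4 GB, which is the same basic idea behind 8086 segment:offset addressing and x86 PAE. The names and the bank-shift scheme here are invented for illustration, not a real ISA.

```python
# Hypothetical banked addressing: the CPU's general-purpose registers
# are 32-bit, but a separate bank register supplies the upper address
# bits, so the combined physical address can exceed the 4 GB line.

OFFSET_BITS = 32

def physical_address(bank, offset):
    """Combine a bank number with a 32-bit offset into a wide address."""
    assert 0 <= offset < (1 << OFFSET_BITS), "offset must fit in 32 bits"
    return (bank << OFFSET_BITS) | offset

# Bank 3, offset 0x10: the resulting address is well above 4 GB,
# even though the offset itself fits in a 32-bit register.
addr = physical_address(3, 0x10)
assert addr == 0x3_00000010
assert addr > 0xFFFFFFFF  # beyond the 32-bit / 4 GB limit
```

The programmer-visible pain the comment alludes to is exactly this: any single pointer arithmetic step that crosses a bank boundary needs explicit handling, which is what flat 64-bit registers make unnecessary.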