Interesting thought...
It would be interesting to see an alliance/merger of ARM and AMD.
There are two reasons why Intel is switching to a new process architecture: it can, and it must. The most striking aspect of Intel's announcement of its new Tri-Gate process isn't the architecture itself, nor is it the eye-popping promises of pumped-up performance and dialed-down power. And it certainly isn't the Chipzillian …
My reading of the performance improvements is that 22nm tri-gate will be faster than 32nm planar (just as in previous process shrinks) but, more importantly, will allow Intel to use a lower voltage without sacrificing performance, which will be crucial for competing with ARM.
The real question is what the performance differences are, if any, between 22nm planar and 22nm tri-gate, as this is where we will see the competition between AMD/GlobalFoundries and Intel.
Exactly my point :-) It really seems to me that they've merely found a way to make the die-shrink work out once again, i.e. once again somewhat in proportion to the basic geometry, before they finally have to give up any hope of dragging Moore's law any further using just silicon and lithography. 32nm -> 22nm gives (22/32)^2 = 0.6875^2 ≈ 0.47. By boasting just a 50% cut in power consumption, they're in fact admitting that they've *almost* made the die-shrink work out up to the theoretical expectations :-D
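(For anyone who wants to check that back-of-the-envelope figure, here's a minimal sketch of the same arithmetic in Python - purely illustrative, using only the node sizes quoted above and Intel's claimed ~50% power cut.)

```python
# Die-shrink arithmetic: area (and, to a first approximation, power for the
# same design) scales with the square of the linear feature-size ratio.
old_nm, new_nm = 32.0, 22.0
linear_ratio = new_nm / old_nm          # 0.6875
area_ratio = linear_ratio ** 2          # ~0.47

print(f"linear ratio: {linear_ratio:.4f}")
print(f"area ratio:   {area_ratio:.2f} (vs the ~0.50 implied by a 50% power cut)")
```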
Thanks for the _deep_ explanation BTW - if it wasn't for El Reg, I'd wade through Intel's PR fog blindfolded till the end of my days :-)
"before they finally have to give up any hope of dragging Moore's law any further using just silicon and lithography. 32nm->22nm = 0.6875^2 =~ 0.47 ."
I think you've hit the nail on the head. The question is how much *better* is this tech *above* what you would expect from a device shrink.
Not much seems to be the answer.
...I'm sure the problem of leakage increases with reduced-nm parts due to the old uncertainty principle or something, and the tri-gate design is mentioned as reducing transistor leakage thanks to a lower drain current in the off state.
I'd love to see actual figures for planar 22nm, but regardless, I certainly expect them to be considerably worse than the 50% saving Intel has achieved.
@"The move to a 22nm Tri-Gate process architecture is an important step for Intel's entire microprocessor line"
It is important because once the ARM A15 design is here, Intel will start to lose the future server market on processing power per watt. (The ARM A15 will allow the design of servers with more processing power for less electrical power than an Intel CPU based design. That's a win-win for ARM and checkmate for Intel's bloated x86 design.)
Intel's market lead and dominance up until now have largely depended on Intel's ability to define what each new generation of x86 design should be, so they were always first to market with each new x86 generation. That meant each time the x86 design changed, AMD had to spend time playing catch-up to add each new addition to the ever more bloated and increasingly complex x86 design. This constantly changing x86 design gave Intel the marketing lead over AMD, because Intel were always going to be first to market with each new generation.
Unfortunately for Intel, ARM are not playing that same game. ARM processor designs are more power efficient than x86 designs. So whatever chip making process Intel uses, they still can't win, as an ARM design based on that same chip making process will win over the x86 design. Ironically for Intel, their bloated x86 design that was so useful for holding back AMD, is now holding them back from competing with ARM.
Which leaves software (not hardware) as Intel's remaining x86 strength, and even that is just in its legacy support. It's a pain to recompile programs, and some old programs won't be recompiled (the companies who created the code may not even be around any more). But for all new code, it's really not an issue. Plus ARM has a lot of software support. For example, Linux has been on ARM for years. Android and iPhone already support ARM. Even Microsoft are looking to support ARM. Also, ARM software development has had 28 years to evolve to a very high level of industry support with good free tools. So ARM is strong in software support as well as beating the x86 design in processing power per watt.
So Intel are in trouble. AMD isn't Intel's biggest competitor; it's the more than 200 companies that all license the ARM designs that are Intel's biggest threat, because together those companies could seriously harm Intel's market dominance. So Intel really are in trouble.
"ARM design based on that same chip making process will win over the x86 design"
The question is whether Intel on this new process is faster/cheaper than ARM on the processes available to the rest of the world, not whether ARM is better than Intel designs on the same process.
Since Intel is not going to teach the rest of the world how to cheaply implement FinFETs in production (there has to be a reason why nobody else did, in spite of lab samples being available for about 8 years, per El Reg), ARM chips will continue to be made using existing processes, which puts them at a disadvantage compared to Intel chips.
Not to mention almost every piece of midrange and high-end consumer electronics shipped in the last few years, from broadband routers to NAS boxes to TVs and much, much more. Not as trendy as Android or the iPhone, but far, far more ubiquitous.
Anyone remind me what recent successes Intel have had outside the Windows/x86 world? Anyone?
Great article. I would have thought this reflects the challenge that ARM and devices like the iPad are posing to the computer market, which is massive.
Intel and Microsoft have had an effective duopoly on the business, timing the increase in Windows bloat and performance demands with processor releases so we all keep upgrading and buying more powerful machines. That cash cow is dying. The other great market was servers, and with the evolution of the cloud fewer standard server designs will emerge, and the constant cyclic refresh of servers can be reduced as grid computing and virtualisation allow mixed-capacity systems to happily share workloads; there is probably less spare capacity too, as workloads are more tightly managed. The economics of energy may push an initial refresh cycle and sales if the new parts are a lot more energy efficient, but you have to wonder about the size of the investment and the price competitiveness of efficient chips against ROI, particularly where high CPU counts are up against low-cost ARM cores and you need partners to switch platform to use your CPU/chipsets.
Intel is therefore faced with reducing demand in its core markets and a new, rapidly growing market in which it is weak. No surprise it's throwing cash at the problem; I can't wait to see what they do about graphics to partner the faster processors. ARM's advantage at the moment is in some ways tied to its better overall package as much as to the faster CPU, and the only strong driver for more powerful mobile devices is to push more engaging games or augmented reality at us; I suspect Google is best positioned in that marketplace, if it manages its platform well.
On the enterprise server front, with the release of more powerful CPUs with lower energy footprints, I think the sales volumes will be smaller than in the past as the overall shrinkage of data centre numbers continues.
Fastest way for Intel to grab market share of fab plant production and increase its stranglehold on the market would be to fab ARM/ATI packages with its 22nm processes ahead of the pack.
"Or, for that matter, Intel could license the ARM architecture and start buiding its own ARM variants in its own fabs, using its 22nm Tri-Gate process"
I'd have thought it more like a dead cert than unlikely.
If Intel has the best process technology for low-power devices, ARM without question has a better CPU architecture for low-power devices like smartphones. Put them together and what do you get? The best possible smartphone CPU, one that can either double battery runtime or allow a large cut in the weight of the phone without any loss of runtime.
If Intel suffers from "not invented here" syndrome, smartphone manufacturers will have to choose between the x86 architecture running on the best silicon, or ARM running on less good silicon. It won't be so long before TSMC or some other chip foundry catches up with Intel enough to put ARM back at the front of the pack. Best for Intel if it's Intel that makes the best mobile device chips.
A TERRIFIC review. And it is interesting to read the responses involving the David & Goliath issues. In my humble experience, the semiconductor Goliath likely has a big edge. In a few years it'll be interesting to see if Intel can bottle the innovation rabbit: Apple. Thanks much!!!
Only justifiable if it gives benefit. Will ARM save 50% on power?
WRT the comments about x86 bloat - remember it was AMD who put the 64-bit extensions into x86 whilst Intel was preparing to bet the farm on Itanium.
I can understand why AMD did it, but it's arguable that the net result has been a five-year delay in PC architecture moving to more efficient CPUs.
Can't wait. New socket type, so time to redesign motherboards, so new RAM, new cards, new PSU with different power requirements, and so on for ever.
Changing the blasted socket every 9 months isn't progress. It's a royal pain in the bum. It's getting to the point where you can't get parts for a 2 or 3 year old machine.
Intel has already released the info on their Ivy Bridge sockets. Yep. Pin compatible with 1155. You could drop Ivy Bridge in a P67 mobo or a Sandy Bridge in the new 7-series chipset. (BIOS updates apply, as usual).
I guess they actually learned something about compatibility from AMD's AM socket. Although, I'd suggest doing your research before looking the fool by spouting based on your speculation. Perhaps you got suckered into buying a 1366 socket?
What most miss in these times of economic consciousness is that, over time, money is saved through energy efficiency and productivity per watt, all of which contributes to the bottom line. AMD, and future ARM/x86 designs created by a design partnership between AMD & ARM, have (or will have) a cost-savings structure built into their chip lines that adds to the bottom line, as well as an increase in overall productivity - the chips pay for themselves immediately..... Intel cannot compete against the above benefits of AMD's Fusion, or the threat of low-power ARM designs..... 22nm at what cost and yields?
asH
Actually this is as much a disadvantage as an advantage.
For planar transistors you can (and people do) vary the width to get different drive currents, but for a FinFET the width (of the fin) is fixed. So the only way to get the equivalent of a variable width is to use multiple fins in parallel, which gives you integer multiples of the fin width (1, 2, 3, ...). Even then you lose the option of non-integer widths such as 2.4.
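(A tiny sketch of that quantisation effect is below - the fin width and target widths are made-up numbers for illustration, not anyone's real design rules.)

```python
# Hypothetical illustration of drive-strength quantisation with FinFETs:
# each fin contributes a fixed effective width, so a designer can only
# choose a whole number of fins, not an arbitrary transistor width.

FIN_EFFECTIVE_WIDTH = 1.0   # assumed unit width per fin (illustrative)

def nearest_fin_count(target_width: float) -> int:
    """Round a desired effective width to a whole number of fins."""
    return max(1, round(target_width / FIN_EFFECTIVE_WIDTH))

for target in (1.0, 2.4, 3.7):   # a planar design could hit these widths exactly
    fins = nearest_fin_count(target)
    achieved = fins * FIN_EFFECTIVE_WIDTH
    print(f"want width {target}: get {fins} fin(s) = width {achieved}")
```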
ARM would *happily* license their latest architecture to Intel (at the right price). They are a fabless design house. Licensing IP is what they do.
Intel don't *want* that.
Customers would not *pay* Intel x86 prices for an ARM. Why should they?
They want you to buy their x86 *architecture* (as in proprietary, single source with *their* extensions) at their *proprietary* prices.
This is just a way of encouraging operators to do so.
They know that if the workload does not *force* people to run Intel hardware (i.e. Intel being the only processor some core apps, including Windows, run on), and people are not afraid that AMD will release a competitor with more bugs in it than theirs, then why pay the Intel *premium* price?
The piece missing from this announcement will be how they will "encourage" MS to eliminate support for ARM in Windows versions. Not that they are likely to put that in a press release.
MS and Intel have benefited from having *near* monopolies on the desktop. Intel could *never* have funded this spending spree without the fat profits from controlling the architecture and a monopolistic core software supplier.
No one would pay Intel prices for an ARM architecture without *massive* additions to the core.
As for the tech itself.
This seems to be more about *leakage* current than switching. That's important because while you can slow clocks (or use clockless methods in extreme cases) and power down sections of a chip, you can't *stop* the leakage current (and IIRC it rises with temperature, although at a guess aircon costs more than the electricity you lose to leakage unless you're Google). This is a way of getting SOI levels of leakage (or near them) *without* the SOI issues of lattice strain or creating a high-resistance substrate layer *inside* a wafer.
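(A back-of-the-envelope sketch of why leakage is the awkward term, using the standard first-order model of dynamic power ≈ activity x C x V^2 x f plus static power ≈ V x I_leak - every number below is invented purely for illustration.)

```python
# Rough first-order CPU power model (illustrative values only):
# dynamic power scales with activity, capacitance, V^2 and clock frequency;
# leakage power is drawn whenever the supply is on, however slow the clock.

def chip_power(v, f_hz, c_farads=1e-9, activity=0.2, i_leak_amps=0.5):
    dynamic = activity * c_farads * v**2 * f_hz   # switching power
    static = v * i_leak_amps                      # leakage, clock-independent
    return dynamic, static

for freq in (2.0e9, 1.0e9, 0.1e9):                # slowing the clock...
    dyn, leak = chip_power(v=1.0, f_hz=freq)
    print(f"{freq / 1e9:>4.1f} GHz: dynamic {dyn:.2f} W, leakage {leak:.2f} W")
# ...cuts dynamic power proportionally, but the leakage term stays put,
# which is why attacking leakage at the transistor (tri-gate, SOI) matters.
```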
The first rule of monopoly is "We have *no* monopoly. It's a free market".
The second is "do *whatever* it takes to protect the monopoly."