"I see this in the same category as Microsoft embracing Linux and the Bash shell, and Microsoft embracing iOS and Android, too."
Hmmm, i.e. we've lost, and if you can't beat them, join them.
Intel's chip fabs will roll out 10nm ARM-compatible processors starting next year. The state-of-the-art factories will produce the mobile system-on-chips for LG Electronics using a 10nm process, with the door left wide open for other ARM licensees to jump in and use the assembly lines. At the heart of the collaboration deal …
Except that M$ still didn't understand it, and continues the extortion (forcing down your throat an OS you don't want when you buy a new PC). And having failed to behave nicely, they are becoming more and more irrelevant every day.
Figures talk: 1 billion mobiles/year (growing, though less steeply of late) versus 350 million PCs/year (declining for at least five years now).
Intel did well to make this move. Had they stubbornly persisted in doing the same things (as their ancient partner does!), they were facing short-term irrelevance.
Now they have the power to be one of the best on the mobile market (nobody else has mastered 10nm so far); they only need to be decent on pricing!
This is excellent news, and it was very brave of Intel to do it.
Congratulations to them.
We don't need another chipmaker on mobile; the smartphone market is crashing because it's already saturated.
Why aren't we focusing on ARM for the data center? Smartphones are almost useless without access to cloud services. People may not be buying new phones, but their data still lives in the cloud, and electricity isn't getting any cheaper.
Intel produces 99% of data-center compute processors, all x86. I don't think it's in any hurry to build ARM SoCs for servers. It lost with smartphones and tablets, so now it's going a level lower to ruin Samsung's day.
I'm planning a piece or two on the state of ARM server chips v soon, after IDF ends, in fact.
C.
> Why aren't we focusing on ARM for the data center?
Because whilst ARM is good at low power, when you crank up the computation rate it can't compete with x86 in the power stakes.
Which is surprising, as x86 used to be spectacularly inefficient compared to Alpha, SPARC, MIPS, etc.
Anybody else remember when Intel got StrongARM as part of the death throes of DEC? And promptly sold it off to Marvell (or was it Galileo)? How does that decision look now? Also, IIRC, that same deal got them a fab in Scotland that made Alphas, and also some third-party chips (PowerPC? M88K? AMD?), with contracts still in force, so Intel fabbed competing chips for a while.
If the goal was simply to kill Alpha, paving the way for Itanium dominance, we can give them some points, but somehow ARM did not die as easily.
I do - Intel used the StrongARM blueprints for its XScale line of processors for phones, networking and storage. They didn't go anywhere either.
From what I can tell, within Intel, if it's not x86, it's not welcome. That also exists within AMD. It's going to push Zen for servers, not ARM.
C.
"That also exists within AMD"
Considering where AMD came from that's surprising.
Remember, to step forward from making licensed 486s they wrapped an x86 interpreter around their Am29000 CPU to make the K5 (which was faster than equivalent Intel chips in everything except FP, and at the time FP didn't matter much).
I've wondered for a long time what kind of performance you'd get if you exposed the raw RISC chip inside AMD and Intel's products.
"I've wondered for a long time what kind of performance you'd get if you exposed the raw RISC chip inside AMD and Intel's products."
I think you'd get exactly the same performance, but only after persuading everyone to recompile their binaries. Remember that only about 1% of the die area actually does instruction decode these days; the decoder is massively parallel, and its output stream of micro-ops is then executed out of order. OoO execution is what (in the Pentium Pro and successors) delivered the death-blow to the RISC architectures of the 1980s. x86 as an ISA hasn't been significant for performance for over 20 years now.
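The decode-then-reorder idea is easy to sketch. Here's a toy dataflow scheduler in Python (made-up micro-ops and registers, nothing like real silicon, which uses reservation stations and a reorder buffer): each micro-op issues as soon as its inputs are ready rather than in program order, so the independent load overlaps the dependent add.

    # Toy out-of-order scheduler: micro-ops issue as soon as their
    # inputs are ready, not in program order. Invented micro-op
    # stream; real hardware tracks readiness in silicon, not dicts.
    uops = [
        ("r1", []),            # load r1
        ("r2", ["r1"]),        # add  r2 <- r1 (depends on the load)
        ("r3", []),            # load r3 (independent)
        ("r4", ["r2", "r3"]),  # add  r4 <- r2, r3
    ]

    ready = set()
    pending = list(enumerate(uops))
    cycle = 0
    while pending:
        # issue everything whose sources are already available
        issued = [(i, u) for i, u in pending if all(s in ready for s in u[1])]
        pending = [p for p in pending if p not in issued]
        for i, (dest, _) in issued:
            print(f"cycle {cycle}: issue uop {i} -> {dest}")
            ready.add(dest)
        cycle += 1
    # uop 2 issues in cycle 0 alongside uop 0, even though it sits
    # after the dependent add in program order.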
"OoO execution is what (in the Pentium Pro and successors) delivered the death-blow to the RISC architectures of the 1980s. x86 as an ISA hasn't been significant for performance for over 20 years now."
Oh look! Wheelchairs exist! Smash up my legs with a baseball bat!
You are confusing palliative measures that Intel have used to engineer themselves out of a corner with substantive benefits. Out of order execution is not a benefit but a cost that has to be paid to get performance out of the x86 architecture. It's the same across the board - for example modern x86 chips have dozens of hidden registers that can't be accessed by the instruction set. Ask yourself which can do a better job of register allocation - a compiler that can take its time and do the job once considering the code as a whole, or a few transistors that have to do the job each and every time, in a matter of nanoseconds, and considering only a handful of instructions on either side? The answer is obvious.
Similarly ever longer pipelines are not something to brag about - they are themselves evidence of a real problem. As the pipeline gets longer the number of problem cases increases exponentially, problems which consume silicon and time to address. That's silicon and time that can't be used elsewhere.
Those and similar features are not benefits in and of themselves; they are the price that has had to be paid to wring an acceptable level of performance out of x86. That price is not just financial: it consumes design effort and surface area that could easily be used more profitably elsewhere. Why have only 4-8 cores on a chip? Why not fifty or sixty? It's perfectly possible if you don't piss away area on things which, from an engineering view, are unnecessary given a smarter design at the outset.
Case in point: Sun's Niagara ten years ago. Designed with a fraction of Intel's resources, fabbed on a more primitive process, the result was the fastest processor bar none at the time. It offered a level of throughput and parallelism x86 could only dream of. The opening premise was to throw away a lot of that complexity you cite as a good thing and see what could be put in its place. Intel have the deepest pockets in the industry and can invest to get themselves out of sticky situations if need be: that does not mean it is a good idea to get into those situations in the first instance.
Another x86 trick is instruction pre-fetching. Intel's caches are now big enough that they just pre-fetch a ton of data and hope for the best. The length of the pipelines and the cancelled fetches make pre-fetching very inefficient, but if they gain 1 or 2%, then they go with it. Not what I would call great design. I couldn't find the article again, but it was an x86 engineer lamenting that his job came down to statistical analysis of pre-fetch failure-to-success rates.
"Ask yourself which can do a better job of register allocation - a compiler that can take its time and do the job once considering the code as a whole, or a few transistors that have to do the job each and every time, in a matter of nanoseconds, and considering only a handful of instructions on either side? The answer is obvious."
The answer is obvious because the experiment has been done: OoO wins hands down, because *it* has information about the actual data being used, whereas the compiler can only guess. Intel bet the farm on your hypothesis with EPIC and Itanic. They spent *billions* trying to beat OoO and gave AMD their best years ever.
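The renaming half of that argument is easy to illustrate. A minimal Python sketch, assuming an invented three-register ISA (nothing here is real x86): the compiler has run out of architectural names and reuses r1, creating a false dependency, and renaming each write to a fresh hidden physical register dissolves it at runtime.

    # Toy register renamer: every architectural write gets a fresh
    # hidden physical register, so reuse of an architectural name no
    # longer serialises independent work. Invented ISA, not x86.
    from itertools import count

    phys = count()      # endless supply of hidden physical registers
    rename_map = {}     # architectural name -> current physical name

    def rename(dest, srcs):
        srcs = [rename_map[s] for s in srcs]   # read current mappings
        rename_map[dest] = f"p{next(phys)}"    # fresh register per write
        return rename_map[dest], srcs

    # The compiler ran out of names and reused r1 for two unrelated values:
    program = [("r1", []), ("r2", ["r1"]), ("r1", []), ("r3", ["r1"])]
    for dest, srcs in program:
        p_dest, p_srcs = rename(dest, srcs)
        print(f"{dest} <- {srcs}  becomes  {p_dest} <- {p_srcs}")
    # The two writes to r1 land in p0 and p2, so the second pair of
    # instructions can run in parallel with the first.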
Back in the days when StrongARM was around, I remember talking to someone else in the processor business, at a company which had some links with Intel, who said they'd made some enquiries to see if Intel would be prepared to fab their next design ... this was around the time of Intel's peak dominance of the x86 world, and the response was "we get $30-40k revenue per wafer of x86s ... so how much are you going to pay us not to make more x86s?" StrongARM just couldn't match x86 in revenue terms then, and as Intel could sell all the x86s they could make, they were losing money doing anything else.
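That answer is plain opportunity cost; a back-of-the-envelope sketch in Python (the $30-40k wafer figure is from the anecdote above, the foundry customer's offer is invented for illustration):

    # Opportunity cost of fabbing someone else's chips in a full fab.
    # $35k is the midpoint of the "$30-40k per wafer" quoted above;
    # the foundry offer is a made-up number.
    x86_revenue_per_wafer = 35_000
    foundry_offer_per_wafer = 20_000

    loss = x86_revenue_per_wafer - foundry_offer_per_wafer
    print(f"Each foundry wafer displaces ${loss:,} of x86 revenue")
    # While the fab is full, any offer below x86 revenue loses money;
    # the calculus only flips once there is idle capacity.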
"$30-40K revenue per wafer"
I think that (with whatever numbers are now appropriate) is the key observation. Intel make most of their cash selling at the expensive end of the market. Anything that boosts ARM, which currently looks like an attractive alternative at the cheap end, damages AMD more than Intel.
That sounds quite paranoid.
Now.
But then Microsoft licensing its OS based on the number of processors bought, not on how many ran it, seemed unbelievable at the time as well.
Although that turned out to be exactly what was happening.
MS per-core licensing will be the holy grail for them, then.
Think about a Snapdragon SoC. How many cores does it have? Four or six or what? Even if some are reserved for graphics.
Then apply a license per core at the same rate as they charge for an x86 per-core license. Suddenly that cheap-as-chips (doh) ARM SoC becomes very, very expensive (see the back-of-the-envelope sketch below).
Could this be the huge pot of gold at the end of the rainbow for MS?
or
Will it be the final stone that makes businesses stop using MS software for good?
This will get interesting. More popcorn please.
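For the curious, here's that back-of-the-envelope in Python. Every number is invented for illustration; MS's actual per-core rates and any given SoC's core count would vary:

    # Hypothetical per-core licensing maths -- every number invented.
    x86_per_core_fee = 100   # suppose MS charged this per x86 core
    arm_soc_cores = 8        # a typical eight-core phone SoC
    arm_soc_price = 25       # rough cost of the SoC itself

    license_fee = x86_per_core_fee * arm_soc_cores
    print(f"SoC costs ${arm_soc_price}, license costs ${license_fee}")
    # The license dwarfs the silicon: no longer cheap as chips.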
No, it's not, per the normal rules. That said, in these days of FOUPs and mini-environments it makes less of an impact than it used to with open cassettes all around the shop. Still not exactly good for the particle level, though, especially if tools are open for PM (and doubly so for the people with their heads in the tools doing said PMs).
Last time I was in an Intel cleanroom it was surprisingly common to see exposed noses, especially given their normal American-style strictness of following rules and having everyone enforce them on everyone else.
We built some Pi-based prototypes at work, for "need it now, whatever the cost" problems; the prototypes worked just fine. Then we went to look for production-ready parts ...
The Pi2 and Pi3 failed some compliance testing, but there are ways to handle that.
The one thing - the biggest thing - that stopped anything going forward was Broadcom: to use anything based on those SoCs, we need guaranteed availability of supply for at least five years, but preferably ten. Broadcom wouldn't even commit to two. If Intel do that ... and they do it for some x86 parts ...
Intel is being forced to fab non-x86 stuff for the same reason that IBM had to fab game-console processor chips for Microsoft, Sony and Nintendo: if your fab utilization falls below what's required to cover your fixed costs, you start losing money big time. In IBM's case it bought them a few years, but once Microsoft, Sony and Nintendo went elsewhere, IBM getting out of the business (by selling to GlobalFoundries) was inevitable, since it did not have enough volume of its own to cover the escalating expense. Intel is facing the same prospect. You've got to make a lot of chips to justify a $10B investment in a new fab. This was a pragmatic decision, nothing more.
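The utilization point is easy to see with numbers. A rough sketch in Python, where the $10B fab cost comes from the comment above and the capacity, lifetime and utilization figures are invented:

    # Fixed-cost amortization vs fab utilization -- illustrative only.
    # The $10B fab cost is from the comment above; capacity, lifetime
    # and utilization figures are invented.
    fab_cost = 10_000_000_000
    years, wafers_per_month = 5, 50_000

    full_capacity = wafers_per_month * 12 * years   # wafers over 5 years
    for utilization in (1.0, 0.7, 0.4):
        per_wafer = fab_cost / (full_capacity * utilization)
        print(f"{utilization:.0%} utilization -> ${per_wafer:,.0f} fixed cost per wafer")
    # At 100% each wafer carries ~$3,333 of the mortgage; at 40% it is
    # ~$8,333 -- hence the pressure to fill idle lines with outside work.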