Intel fabs to churn out 10nm ARM chips for LG smartphones next year

Intel's chip fabs will roll out 10nm ARM-compatible processors starting next year. The state-of-the-art factories will produce the mobile system-on-chips for LG Electronics using a 10nm process, with the door left wide open for other ARM licensees to jump in and use the assembly lines. At the heart of the collaboration deal …

  1. bazza Silver badge

    "I see this in the same category as Microsoft embracing Linux and the Bash shell, and Microsoft embracing iOS and Android, too."

    Hmmm, i.e. we've lost, and if you can't beat them, join them.

    1. Zakhar

      Nailing the coffin on Wintel!

      Except that M$ still doesn't understand it, and continues the extortion (forcing an OS you don't want down your throat when you buy a new PC). And having failed to behave nicely, they are becoming more and more irrelevant every day.

      Figures talk: 1 billion mobiles/year (growing, although less steeply recently) versus 350 million PCs/year (declining for at least 5 years now).

      Intel did well to make this move. Had they chosen to persist stubbornly doing the same things (as their ancient partner does!), they would have been facing short-term irrelevancy.

      Now they have the power to be one of the best in the mobile market (nobody else has mastered 10nm so far); they only need to be decent on pricing!

      This is excellent news, and a brave move by Intel.

      Congratulations to them.

      1. Anonymous Coward
        Big Brother

        Re: Nailing the coffin on Wintel!

        Or have they simply been instructed to suck it up and start dropping the NSA opcodes into some ARM chips?

        ...just askin...

        1. Anonymous Coward
          Anonymous Coward

          Re: Nailing the coffin on Wintel!

          I love a good conspiracy theory

  2. Anonymous Coward
    Anonymous Coward

    No love for servers?

    We don't need another chipmaker on mobile; the smartphone market is crashing because it's already saturated.

    Why aren't we focusing on ARM for the data center? The smartphones are almost useless without access to cloud services. People may not be buying new phones, but their data still lives in the cloud, and electricity isn't getting any cheaper.

    1. diodesign (Written by Reg staff) Silver badge

      Re: No love for servers?

      Intel produces 99% of data center compute processors, which are x86. I don't think it's in any hurry to build ARM SoCs for servers. With smartphones and tablets, it lost, so now it's going a level lower to ruin Samsung's day.

      I'm planning a piece or two on the state of ARM server chips v soon, after IDF ends, in fact.

      C.

      1. Anonymous Coward
        Anonymous Coward

        Re: No love for servers?

        "I'm planning a piece or two on the state of ARM server chips v soon, after IDF ends, in fact."

        Putting one of Gigabyte's 48-core Cavium 1Us through its paces, C?

    2. Alan Brown Silver badge

      Re: No love for servers?

      > Why aren't we focusing on ARM for the data center?

      Because whilst ARM is good at low power, when you crank up the computation rate it can't compete with x86 in the power stakes.

      Which is surprising, as x86 used to be spectacularly inefficient compared to Alpha, SPARC, MIPS, etc.

      1. HmmmYes

        Re: No love for servers?

        x86 probably still is.

        x86 is an example of how to make a pig fly by throwing money at it.

        Imagine a MIPS64 built on the shiniest Intel silicon.

        Imagine 256 MIPS cores on a single chip!

      2. devangm

        Re: No love for servers?

        I think this is still true; maybe it wouldn't be if Intel had really stuck with Itanic. Google is experimenting with POWER, which is also ironic considering they named POWER mainframes T-Rex. Not so extinct.

      3. Dinsdale247

        Re: No love for servers?

        I never in a million years thought I would be calling x86 the "high end" server architecture. But there it is, the pig HAS reached escape velocity...

  3. Mike 16

    StrongARM?

    Anybody else remember when Intel got StrongARM as part of the death-throes of DEC? And promptly sold it off to Marvell (or was it Galileo)? How does that decision look now? Also, IIRC, that same deal got them a fab in Scotland that made Alphas, and also some third-party chips (PowerPC? M88K? AMD?), with contracts still in force, so Intel fabbed competing chips for a while.

    If the goal was simply to kill Alpha, paving the way for Itanium dominance, we can give them some points, but somehow ARM did not die as easily.

    1. diodesign (Written by Reg staff) Silver badge

      Re: StrongARM?

      I do - Intel used the StrongARM blueprints for its XScale line of chips for phones, networking and storage. They didn't go anywhere either.

      From what I can tell, within Intel, if it's not x86, it's not welcome. That attitude also exists within AMD. It's going to push Zen for servers, not ARM.

      C.

      1. Alan Brown Silver badge

        Re: StrongARM?

        "That also exists within AMD"

        Considering where AMD came from that's surprising.

        Remember, to step forward from making licensed 486s they wrapped an x86 interpreter around their 29000 CPU to make the K5 (which was faster than equivalent Intel chips in everything except FP, and at the time FP didn't matter much).

        I've wondered for a long time what kind of performance you'd get if you exposed the raw RISC chip inside AMD and Intel's products.

        1. Ken Hagan Gold badge

          Re: StrongARM?

          "I've wondered for a long time what kind of performance you'd get if you exposed the raw RISC chip inside AMD and Intel's products."

          I think you'd get exactly the same performance, but only after persuading everyone to recompile their binaries. Remember that only 1% of the die area actually does instruction decode these days; the decoder is massively parallel, and its output stream of micro-ops is then executed out-of-order. OoO execution is what (in the Pentium Pro and successors) delivered the death-blow to the RISC architectures of the 1980s. x86 as an ISA hasn't been significant for performance for over 20 years now.

          1. the spectacularly refined chap Silver badge

            Re: StrongARM?

            "OoO execution is what (in the Pentium Pro and successors) delivered the death-blow to the RISC architectures of the 1980s. x86 as an ISA hasn't been significant for performance for over 20 years now."

            Oh look! Wheelchairs exist! Smash up my legs with a baseball bat!

            You are confusing palliative measures that Intel have used to engineer themselves out of a corner with substantive benefits. Out of order execution is not a benefit but a cost that has to be paid to get performance out of the x86 architecture. It's the same across the board - for example modern x86 chips have dozens of hidden registers that can't be accessed by the instruction set. Ask yourself which can do a better job of register allocation - a compiler that can take its time and do the job once considering the code as a whole, or a few transistors that have to do the job each and every time, in a matter of nanoseconds, and considering only a handful of instructions on either side? The answer is obvious.

            Similarly ever longer pipelines are not something to brag about - they are themselves evidence of a real problem. As the pipeline gets longer the number of problem cases increases exponentially, problems which consume silicon and time to address. That's silicon and time that can't be used elsewhere.

            Those and similar features are not benefits in and of themselves; they are the price that has had to be paid to wring an acceptable level of performance out of x86. That price is not just financial: it consumes design effort and surface area that could easily be used more profitably elsewhere. Why have only 4-8 cores on a chip? Why not fifty or sixty? It's perfectly possible if you don't piss away area on things which, from an engineering view, would be unnecessary with a smarter design at the outset.

            Case in point: Sun's Niagara ten years ago. Designed with a fraction of Intel's resources, fabbed on a more primitive process, the result was the fastest processor bar none at the time. It offered a level of throughput and parallelism x86 could only dream of. The opening premise was to throw away a lot of that complexity you cite as a good thing and see what could be put in its place. Intel have the deepest pockets in the industry and can invest to get themselves out of sticky situations if need be: that does not mean it is a good idea to get into those situations in the first instance.

            1. Dinsdale247

              Re: StrongARM?

              Another x86 trick is their instruction pre-fetching. Intel's caches are now big enough that they just pre-fetch a ton of data and hope for the best. The length of the pipelines and the cancelled fetches make pre-fetching very inefficient, but if they gain 1 or 2%, then they go with it. Not what I would call great design. I couldn't find the article again, but it was an x86 engineer lamenting that his job came down to statistical analysis of pre-fetch failure-to-success rates.

            2. Ken Hagan Gold badge

              Re: StrongARM?

              "Ask yourself which can do a better job of register allocation - a compiler that can take its time and do the job once considering the code as a whole, or a few transistors that have to do the job each and every time, in a matter of nanoseconds, and considering only a handful of instructions on either side? The answer is obvious."

              The answer is obvious because the experiment has been done and OoO wins hands down because *it* has information about the actual data being used whereas the compiler can only guess. Intel bet the farm on your hypothesis with EPIC and Itanic. They spent *billions* trying to beat OoO and gave AMD their best years ever.
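              A minimal sketch of the point (my own illustration, with assumed behaviour, not anything from the article): in a gather loop the addresses, and therefore the cache hits and misses, are only known at run time, so a compile-time (EPIC-style) schedule has to guess a fixed load latency, while an OoO core sees the actual misses and keeps issuing the later, independent loads in the meantime.

                /* Illustrative only: each iteration's load is independent of the
                 * previous one, but its latency depends on cache behaviour known
                 * only at run time - information the hardware has and a static
                 * compiler schedule does not. */
                #include <stddef.h>

                long gather_sum(const long *table, const int *idx, size_t n)
                {
                    long sum = 0;
                    for (size_t i = 0; i < n; i++) {
                        /* An out-of-order core can overlap several of these cache
                         * misses at once; a fixed schedule must assume one latency. */
                        sum += table[idx[i]];
                    }
                    return sum;
                }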

      2. Anonymous Coward
        Anonymous Coward

        Re: StrongARM?

        Back in the days when StrongARM was around I remember talking to someone else in the processor business who was at a company which had some links with Intel, and he said they'd made some enquiries to see if Intel would be prepared to fab their next design .... this was around the time of Intel's peak dominance of the x86 world, and the response was "we get $30-40k revenue per wafer of x86s... so how much are you going to pay us not to make more x86s?". StrongARM just couldn't match x86 in revenue terms then, and as Intel could sell all the x86s they could make, they were losing money doing anything else.

        1. Ken Hagan Gold badge

          Re: StrongARM?

          "$30-40K revenue per wafer"

          I think that (with whatever numbers are now appropriate) is the key observation. Intel make most of their cash selling at the expensive end of the market. Anything that boosts ARM, which currently looks like an attractive alternative at the cheap end, damages AMD more than Intel.

  4. Alistair

    This effectively puts AMD on notice.

    - ARM SoCs as servers are a space they've been aiming at, are they not?

    1. Wade Burchette

      Re: This effectively puts AMD on notice.

      Yes. They already have an ARM Opteron. Not a good one, but it is a start. And with FinFET, the next generation might actually be respectable.

  5. John Smith 19 Gold badge
    Unhappy

    So will Intel try to leverage this and get ARM to back down on servers?

    That sounds quite paranoid.

    Now.

    But then Microsoft licensing its OS based on the number of processors bought, not on how many ran it, seemed unbelievable at the time as well.

    Although that turned out to be exactly what was happening.

    1. Steve Davies 3 Silver badge

      Re: So will Intel try to leverage this and get ARM to back down on servers?

      MS per-core licensing would be the holy grail to them then.

      Think about a Snapdragon SoC. How many cores does it have? 4 or 6 or what? Even though some are reserved for graphics.

      Then apply a license per core at the same rate as they charge for an x86 per-core license. Suddenly that cheap as chips (doh) ARM SoC becomes very, very expensive.
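      A rough back-of-envelope, with purely invented numbers (nothing Microsoft has announced), just to show the scale:

        /* Hypothetical per-core licensing arithmetic - all figures invented. */
        #include <stdio.h>

        int main(void)
        {
            int    cpu_cores    = 8;      /* assumed CPU core count for a phone SoC */
            double fee_per_core = 100.0;  /* invented per-core license fee, USD */
            double soc_price    = 25.0;   /* invented "cheap as chips" SoC price, USD */

            printf("SoC: $%.0f, license bill: $%.0f (%.0fx the cost of the silicon)\n",
                   soc_price, cpu_cores * fee_per_core,
                   (cpu_cores * fee_per_core) / soc_price);
            return 0;
        }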

      Could this be the huge pot of gold at the end of the rainbow for MS,

      or

      will it be the final stone that makes businesses stop using MS software for good?

      This will get interesting. More popcorn please.

  6. Tom 64
    Pint

    wow

    This is going to give LG a huge advantage in smartphone processing power, and LG is not exactly competition for Intel, is it? Smart move.

    1. Anonymous Coward
      Anonymous Coward

      Re: wow

      I'm not so sure about that. While Intel's 10nm process is (at least based on specs) better than TSMC's 10nm, it will probably not measure up to TSMC's 7nm. TSMC will start volume production of 10nm in early 2017 and of 7nm in H1 2018. Intel won't move to 7nm until 2020.

  7. Sebastian Brosig
    FAIL

    Nose out

    > Bunny suit ... An Intel Custom Foundry worker in the fab

    Is the Intel worker complying with the clean room procedure? He has his nose out. When I worked in a clean room (before our UK fab was shut) that was a no-no, though certain individuals did it anyway of course.

    1. Measurer

      Re: Nose out

      So he can smell the PECVD plasma!

      1. Anonymous Custard Silver badge
        Headmaster

        Re: Nose out

        No, it's not, per the normal rules. That said, in these days of FOUPs and mini-environments it makes less of an impact than it used to with open cassettes all around the shop. Still not exactly good for the particle level though, especially if tools are open for PM (and doubly so for the people with their heads in the tools doing said PMs).

        Last time I was in an Intel cleanroom it was surprisingly common to see exposed noses, especially given their normal American-style strictness of following rules and having everyone enforce them on everyone else.

  8. Tom 7

    Makes you think

    The Pi3 has a 40nm SoC (as far as I can tell); stick a 10nm one in and that's some grunt.

    1. Anonymous Coward
      Anonymous Coward

      Re: Makes you think

      We built some Pi-based prototypes at work, for "need it now, whatever the cost" problems; the prototypes worked just fine. Then we went to look for production-ready parts ....

      The Pi2 and Pi3 failed some compliance testing, but there are ways to handle that.

      The one thing - the biggest thing - that stopped anything going forward was Broadcom: to use anything based on those SoCs, we need guaranteed availability of supply for at least five years, but preferably ten. Broadcom wouldn't even commit to two. If Intel do that .. and they do it for some x86 parts ...

  9. William K Kelley

    It's all about fab utilization

    Intel is being forced to fab non-x86 stuff for the same reason that IBM had to fab game processor chips for Microsoft, Sony and Nintendo -- if your fab utilization falls below the level required to cover your fixed costs, you start losing money big time. In the case of IBM, it bought them a few years, but once Microsoft, Sony and Nintendo went elsewhere, IBM getting out of the business (by selling to GlobalFoundries) was inevitable, since they did not have enough volume of their own to cover the escalating expense. Intel is facing the same prospect. You've got to make a lot of chips to justify a $10B investment in a new fab. This was a pragmatic decision, nothing more.
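    A rough break-even sketch of that point. The only inputs are the $10B figure above, the $30-40k revenue per wafer quoted earlier in the thread, and an assumed useful life before the next node:

      /* Back-of-envelope fab economics - illustrative assumptions only. */
      #include <stdio.h>

      int main(void)
      {
          double fab_cost          = 10e9;  /* new leading-edge fab, USD (figure from the comment above) */
          double useful_life_years = 4.0;   /* assumed life before the next node needs new tooling */
          double revenue_per_wafer = 35e3;  /* midpoint of the $30-40k/wafer figure quoted earlier */

          double wafers_per_year = fab_cost / (useful_life_years * revenue_per_wafer);
          printf("~%.0f wafers/year (~%.0f a week) just to recover the build cost,\n"
                 "before running costs or any profit at all\n",
                 wafers_per_year, wafers_per_year / 52.0);
          return 0;
      }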
