Having swallowed its pride and started again with 10nm chips, Intel teases features in these 2019-ish processors

"We have humble pie to eat right now, and we're eating it," Murthy Renduchintala, Intel's chief engineering officer, said yesterday. "My view on [Intel's] 10nm is that brilliant engineers took a risk, and now they're retracing their steps and getting it right." Record scratch. Freeze frame. You're probably wondering how …

  1. hammarbtyp

    Two thoughts

    Arm-like big.LITTLE architecture

    Interesting, because from where I stand it appears that Intel have pretty well left the embedded space. If you want to run a fanless x86 processor of any reasonable power you have AMD and that's it. Intel just don't care. Whether such a SoC would allow them to get back into the game is questionable.
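    The big.LITTLE idea under discussion - steering demanding work to fast cores and background work to efficient ones - can be sketched roughly like this. A minimal illustration in Python; the pool names and the 0.5 threshold are hypothetical, not Arm's or Intel's actual scheduler logic:

    ```python
    # Hypothetical sketch of big.LITTLE-style task placement: heavy tasks go
    # to fast "big" cores, background tasks to efficient "little" cores.
    # Pool names and the 0.5 threshold are made up for illustration.
    def place_task(load_estimate, big_free, little_free, threshold=0.5):
        """Pick a core pool for a task given its predicted load in [0, 1]."""
        if load_estimate >= threshold and big_free > 0:
            return "big"      # latency-sensitive work on a performance core
        if little_free > 0:
            return "little"   # background work on an efficiency core
        return "big" if big_free > 0 else "little"  # fall back to any idle core
    ```

    The real win is that the OS scheduler, not the programmer, makes this call per task, which is why the article frames it as a processor feature rather than an API.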

    Integrated GPUs no longer second-class citizens

    I may be wrong here, but I'm pretty sure it is largely Intel integrated GPUs that have sucked. AMD, with their in-house Radeon expertise, have always been better.

    1. Anonymous Coward
      Anonymous Coward

      Re: Two thoughts

      Speaking of which... "Intel expects to launch a multi-core processor that has large and small x86 CPUs [..] Arm calls this big.LITTLE"

      Perhaps Intel can call theirs "Little and Large". Then again, Arm has all but got that already, so I suspect that Intel will be forced to settle for "cannon+BALL" instead.

    2. diodesign (Written by Reg staff) Silver badge

      Re: Two thoughts

      "it is largely Intel integrated GPUs that have sucked"

      Yeah, fair point - I've made that distinction now. I pretty much meant that but didn't make it clear enough.

      C.

  2. Thomas Wolf

    TSMC not at 7nm until 2019? Really?

    "Meanwhile, TSMC and Samsung promise to ship 7nm components in 2019 and 2020" - could have sworn that the A12 in the current crop of iPhones (2018) already uses TSMC's 7nm FinFET. Must be a dream - The Register says it won't happen until 2019.

    1. big_D Silver badge

      Re: TSMC not at 7nm until 2019? Really?

      The Kirin 980 in the Huawei Mate 20 Pro is also 7nm. And the Samsung Galaxy S10 next year should get the Exynos 9820 Mongoose 7nm chip.

    2. theblackhand

      Re: TSMC not at 7nm until 2019? Really?

      There will be different layout options targeting different designs at a given process node. Mobile SoC parts tend to use lower clock speed designs with larger gaps between components/interconnects and less aggressive design rules, which allow faster time-to-market on a new process (called CLN7FF by TSMC) but are less power efficient for complex designs. More complex CPUs require higher-density designs to achieve higher clock speeds, and the associated design rules are much tighter and take longer to develop/troubleshoot. For 7nm, this is what TSMC calls CLN7FF+.

      So you're right, there are 7nm parts out there in mass production, but not following the high performance design rules typically found in CPUs. The high performance TSMC designs are likely to land in Q1 2019 based on trial parts already being in the channel, and TSMC is advising that production ramps during 2019 with full 7nm capacity expected to be reached by 2020.

      1. Anonymous Coward
        Anonymous Coward

        Re: TSMC not at 7nm until 2019? Really?

        Given that Intel's first 10nm parts will be mobile and thus equivalent to Apple et al's phone SoCs in not being high clock / high power chips, the relevant point is still that TSMC is a full year ahead of Intel. The first AMD CPUs made in TSMC's 7nm high performance process will ship in products many months before Intel gets equivalent 10nm parts out the door.

        Not that it is impossible for Intel to catch up, especially if TSMC stumbles, but the fact is still that Intel went from having a 2-3 year lead to trailing by a year in 3-4 years!

      2. Sonic531

        Re: TSMC not at 7nm until 2019? Really?

        I learnt something new today. Thank you

        1. Anonymous Coward
          Anonymous Coward

          Re: TSMC not at 7nm until 2019? Really?

          You're assuming that "7nm" actually means anything. Why have TSMC and Samsung called their processes 7nm? Because it looks better than 10nm. 7nm isn't an industry standard and says almost nothing about how advanced the actual process is.

          1. Anonymous Coward
            Anonymous Coward

            Re: TSMC not at 7nm until 2019? Really?

            It is also important to note that TSMC chose to do the low power process first. Not because they couldn't have done it the other way around, but TSMC knows where they are making the large majority of their profit these days.

            The volumes of big GPUs and AMD CPUs are small potatoes compared to the 200+ million SoCs they make for Apple each year, and the hundreds of millions of SoCs they make for Qualcomm and Huawei.

            Similarly, Intel usually puts out mobile CPUs first because they are higher margin than desktop and benefit more from the reduced power draw of a smaller process. Server CPUs are always last because of the lengthier validation time compared to client CPUs. They also tend to have really big dies, so having a more mature process with better yields is much more important.

    3. diodesign (Written by Reg staff) Silver badge

      Re: TSMC not at 7nm until 2019? Really?

      See the comment by theblackhand and DougS. There's production, and then there's production.

      We didn't say TSMC wasn't shipping 7nm in 2018 - the point was that 2019 and 2020 are when it really kicks off for desktop and server-grade stuff, the things Intel makes, which is the context of the piece.

      I've tweaked the sentence to make it clearer, cheers.

      C.

  3. Terje
    Mushroom

    For my home desktop system I care about one metric when it comes to the CPU, and that is performance and price. As long as the heat dissipation is not significantly higher than my current 5930K it's a non-issue; exactly what the architecture looks like, what lithography process is used in manufacturing, or whether the manufacturing requires a virgin or unicorn sacrifice is of no concern to me as long as it works and delivers performance. To me integrated graphics is just extra cores that's not on the chip, as there's no conceivable way they will outpace a discrete card in the foreseeable future anyway. But I guess Intel is desperately trying to convince anyone that will listen that it's not important.

    Of course the priorities are different when we talk about high density servers etc., but the conclusion is the same: it's not important how it's done, just that it is. Unfortunately for Intel, they have mostly been treading water for some years now.

    Mushroom cloud because it's about the correct temperature for a decently overclocked cpu!

    1. Roj Blake Silver badge

      Re: one metric when it comes to the cpu and that is performance and price

      Isn't that two metrics?

      1. theblackhand

        Re: one metric when it comes to the cpu and that is performance and price

        Our chief weapon is performance...performance and price...performance and price.... Our two weapons are performance and price...and compatibility.... Our *three* weapons are performance, price and compatibility...and an almost fanatical devotion to Intel.... Our *four*...no... *Amongst* our weapons.... Amongst our weaponry...are such elements as performance, price... I'll come in again.

      2. Anonymous Coward
        Anonymous Coward

        No-one expected *that*

        Edit- @theblackhand; You *git*! ;-(

        "My *two* metrics are performance, price and ruthless efficiency... my *three* metrics are performance, price, ruthless efficiency and an almost fanatical devotion to the Pope... my *four* metrics are... no... I'll come in again."

    2. MachDiamond Silver badge

      "To me integrated graphics is just extra cores that's not on the chip, as there's no conceivable way they will outpace a discrete card in the foreseeable future anyway."

      For conventional desktops, I have to agree that I'd rather have a separate GPU/card to maintain at least some sort of upgrade path but there are many other applications where putting everything under one lid is an easier way to implement a product. Think of the infotainment displays in cars now. It's highly unlikely that CPUs and GPUs are going to be upgradable as separate units. You will be replacing the whole module for 20x the price of a computer with comparable processing power if an upgrade is ever available.

  4. Anonymous Coward
    Anonymous Coward

    Intel is still committed to producing a discrete graphics processor by 2020, but also wants to make Intel integrated GPUs not suck.

  5. Mage Silver badge
    Boffin

    Geometry shrinks soon a dead end?

    Also, continued geometry shrinks give progressively less increase in speed (capacitance), worse leakage (tunnelling and traditional leakage), less power saving and lower yield. Maybe lower life (drift). Perhaps 20nm to 35nm real geometry is the limit. Ten years ago Samsung started stacking low-power CPU, RAM and Flash chips in one package, allowing fewer I/O pins and little change to package height (SC6400, actually in the first iPhone). This is one route. Another is 4x larger chips with lower I/O pin count and better yield, expanding the SoC idea. The current route is doomed to hit a dead end shortly due to physics.

    The "system on a wafer" idea dates from the 1970s. The idea was redundancy to allow for defects. Current wafer sizes would allow 19 giant hexagonal macro-SoCs of far greater complexity than envisaged by Ivor Catt. A complete laptop, server, tablet, phone, router, set-top box etc on a single chip. SMD legs on six edges, no poor-reliability BGA packages. Fewer I/O connections needed. Old-school mask ROM plus laser-cut fuses and RAM-table-based FPGA-type tech to route around most chip defects. Concentrate on reducing defects rather than reducing geometry.

    1. cornetman Silver badge

      Re: Geometry shrinks soon a dead end?

      The only real answer is building out sideways for greater parallelism.

      > Concentrate on reducing defects rather than reducing geometry.

      AMD is sorta trying this out with their chiplet design. It might be less efficient, but it does solve the yield problems that Intel is probably experiencing, and this way they can design intelligently around the defect issue. Some have been making major prognostications in this area for some time:

      https://www.youtube.com/watch?v=qgvVXGWJSiE
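      The yield argument behind chiplets can be sketched with the simple Poisson defect model, yield = exp(-area × defect density). The defect density and die areas below are illustrative assumptions, not actual TSMC or AMD figures:

      ```python
      import math

      # Poisson defect model: probability a die has zero killer defects.
      # Defect density and die areas are illustrative assumptions only.
      def die_yield(area_cm2, defects_per_cm2):
          return math.exp(-area_cm2 * defects_per_cm2)

      D = 0.5                        # assumed defects per cm^2 on an immature node
      mono = die_yield(6.0, D)       # one big 600 mm^2 monolithic die: ~5% yield
      chiplet = die_yield(0.75, D)   # one 75 mm^2 chiplet: ~69% yield
      # Eight chiplets replace the monolithic die, but each small die passes
      # or fails independently, so far more usable silicon comes off a wafer.
      ```

      The same defect density that ruins a big monolithic die barely dents a small chiplet, which is the whole bet.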

    2. Anonymous Coward
      Anonymous Coward

      Re: Geometry shrinks soon a dead end?

      They are only a dead end for current methods of making CPUs. There are various efforts like using an air channel instead of semiconductor, or magnetic spins like the spintronics Intel is investigating that would bring further shrinks and power reductions.

      I remember in the 80s hearing that the limits were fast approaching. There's so much money in continuing to improve performance that if there's a way to do it, someone will find it.

    3. MachDiamond Silver badge

      Re: Geometry shrinks soon a dead end?

      At some point, maybe now, shrinking the geometry is just a brute force way of increasing performance, the same way as ever-increasing clock speeds, which also have a practical limit.

      The way forward may be with more optimized layouts of the guts and stretching into the 3rd dimension, so instead of a flat package it would be more like a cube. I've always been miffed at OSes that always need more powerful processors to slog through slapdash implementations and stacks of useless features. Nobody takes a well-matured OS and really starts to optimize it for performance. Half or more of the features in an OS sit unused, since there is no documentation on much of anything these days, and it just idles away sucking life out of the CPU.

      1. Anonymous Coward
        Anonymous Coward

        Re: Geometry shrinks soon a dead end?

        The problem with going to three dimensions is that it would take FAR longer to make the chips, because the process is bound in linear time. Making a chip with two layers would take roughly twice as long as a chip with one layer. Making a chip with 100 layers... well, a wafer you started might have to wait a year or two before it was finished! This isn't the same thing as 3D NAND, which uses a comparatively limited set of steps compared to logic.
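        The linear-time argument above can be put in rough numbers. Assuming each logic layer adds a fixed batch of process steps on top of a fixed base - all figures here are illustrative, not real fab data:

        ```python
        # Rough sketch of the "bound in linear time" point: if each extra logic
        # layer adds a fixed block of process steps, wafer turnaround grows
        # linearly with layer count. All numbers are illustrative assumptions.
        def fab_days(layers, days_per_layer=3.0, fixed_days=30.0):
            return fixed_days + layers * days_per_layer

        one_layer = fab_days(1)    # ~33 days in the fab
        hundred = fab_days(100)    # ~330 days - getting on for a year per wafer
        ```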

  6. Big Al 23

    Not what Intel has been saying for the past several years

    Intel has crippled their 10nm process to ship a few chips to keep Wall Street from pulling the plug on them for being years late on their prized 10nm CPUs. When it was reported that Intel had stopped work on their defective and unsalvageable 10nm process, Intel resorted to social media to claim that "yields were improving and all was well", which according to this story is complete poppycock. Intel is trying to buy time, but it's too little too late, and they are not going to be able to deceive investors forever. The other shoe is about to drop as AMD continues to steal Intel's lunch.

    1. diodesign (Written by Reg staff) Silver badge

      Re: Not what Intel has been saying for the past several years

      FWIW... 10nm v1 (Cannon Lake) is dead and buried. It was impossible to see it through to mass volume. The integrated GPU in the CL Core i3 was disabled because it didn't work. The metallization was not viable.

      Sunny Cove is v2 of 10nm, after going back to the drawing board.

      C.

      1. theblackhand

        Re: Not what Intel has been saying for the past several years

        "FWIW... 10nm v1 (Cannon Lake) is dead and buried. It was impossible to see it through to mass volume. The integrated GPU in the CL Core i3 was disabled because it didn't work. The metallization was not viable."

        Yields for a ~70mm2 chip were in the region of 30%-40% when they should have been at least double that for a new process, ignoring the 4+ years spent getting to that state. And it needed the GPU disabled to hit those yields.

        The root cause appears to be the EUV process Intel are using - it has significantly increased the number of process steps during etching, which has led to significant slowdowns in producing chips AND significant drops in quality (and hence yield). Not a good place to be... While there were other materials issues (i.e. cobalt), fixing those would not have addressed the production speed/yield issues, which would have meant Intel needed more fabs for the same chip volumes.

        In hindsight, Intel took the wrong path to 10nm - the question is why they refused to acknowledge that for so long when it looked like their competitors were going to beat them to market (i.e. late 2017 based on availability of etching equipment). And if they have actually learnt from their mistakes...
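        As a back-of-envelope check on the quoted yields, the simple Poisson model, yield = exp(-area × defect density), gives the defect density implied by a ~70mm² die at 30-40% yield (assuming defects dominate the yield loss, which is itself an assumption):

        ```python
        import math

        # Back-of-envelope check, assuming defects dominate yield loss
        # (Poisson model: yield = exp(-area * defect_density)). The area and
        # yields are the figures quoted above; the rest follows from the model.
        area_cm2 = 0.70  # ~70 mm^2 die
        implied = {y: -math.log(y) / area_cm2 for y in (0.30, 0.40)}
        # Implied density comes out around 1.3-1.7 defects/cm^2. A process this
        # late in development is usually nearer ~0.1, which is why "at least
        # double" the yield was the reasonable expectation.
        ```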

        1. Flakk

          Re: Not what Intel has been saying for the past several years

          In hindsight, Intel took the wrong path to 10nm - the question is why they refused to acknowledge that for so long when it looked like their competitors were going to beat them to market

          When you discover you've made a big mistake, you can either own up to it and correct the error, or CYA and double-down. The latter seems to be a fairly popular option these days.

        2. Greg 38

          Re: Not what Intel has been saying for the past several years

          Have you met the PTD development arm at Intel? Arrogance is equally matched by stubbornness.

  7. Anonymous Coward
    Anonymous Coward

    Think that's cool?

    Seen some stuff inside TSMC fab 8 that would really blow your mind. Absolutely can't talk about it, even as an AC..

  8. pavel.petrman

    "Open a Start menu"

    "Open a Start menu on Windows, and one of the performance cores will fire up in anticipation of starting an application, like Photoshop."

    Well, with Windows 10, the reality will be more like "Open a Start menu and all of the performance cores will fire up on the insane computational demand of the Start menu itself."

  9. Flywheel
    Trollface

    How about...

    ... instead of the faster/smarter/sexier/smaller chips they concentrate on getting rid of the back/side/front doors first? Oh and the as-yet undiscovered one that's mimicking a pantomime trap-door. You know where that one is, don't you?

    "BEHIND YOU!!"

    1. Roj Blake Silver badge

      Re: How about...

      They're doing that as well.

      The Cascade Lake Xeons due out soonish are meant to be Spectre-free.

  10. Spazturtle Silver badge

    "Chipzilla wants to introduce an abstraction layer called OneAPI, which means software developers can craft code that makes the best use of available hardware acceleration in the host machine's CPUs, GPUs, FPGAs, and AI accelerators."

    Is this complementary to HSA or a competitor?

    1. Richard 12 Silver badge

      There's an XKCD about it

      Something about standards...

      Frankly, we have way too many multithreading and multiprocessing libraries already; adding a new one is not going to help the fundamental problem that thinking up parallel solutions is hard.
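      For what it's worth, the abstraction-layer idea Spazturtle quotes boils down to a dispatch table over whatever devices are present. A hypothetical Python sketch - the registry, device names and SAXPY kernel are all made up for illustration; real OneAPI code would use DPC++/SYCL, not Python:

      ```python
      # Hypothetical sketch of an abstraction layer in the OneAPI spirit:
      # kernels register per-device implementations, and one entry point
      # dispatches to the best device actually present. All names invented.
      BACKENDS = {}  # device name -> kernel implementation

      def register(device):
          def wrap(fn):
              BACKENDS[device] = fn
              return fn
          return wrap

      @register("cpu")
      def saxpy_cpu(a, x, y):
          # a*x + y elementwise - the classic SAXPY kernel, CPU fallback
          return [a * xi + yi for xi, yi in zip(x, y)]

      def run(kernel_args, available=("cpu",), preference=("gpu", "fpga", "cpu")):
          # Pick the most preferred device that is both implemented and present.
          for dev in preference:
              if dev in BACKENDS and dev in available:
                  return BACKENDS[dev](*kernel_args)
          raise RuntimeError("no suitable device for this kernel")
      ```

      Whether that complements HSA or competes with it comes down to who controls the dispatch layer, which is presumably the point of the question.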

  11. IJD

    TSMC 7FF/7FF+ and Intel EUV

    Saying that TSMC 7FF is "low-power" and 7FF+ is "high-speed" isn't correct. 7FF has both compact low-power libraries (and metal options) for mobile (which Apple/HiSilicon use) and bigger faster higher-power libraries (and metal options) for CPU/HPC (which AMD use) -- actually, they can be mixed on the same chip. 7FF+ uses 5 EUV layers and new libraries to get 15%~20% area shrink, a small power reduction, and an even smaller (a few percent) speed increase -- and it also has low-power and high-speed libraries, just like 7FF. The main reason for 7FF+ is to pipeclean EUV before TSMC use it in anger for 5nm (due next year), and to get some reduction in die size/cost/TAT.

    Intel's 10nm problems are not due to EUV because they don't use it -- they used quad-patterned metal instead to push the metal pitch down (problem #1), cobalt interconnect instead of copper for the same reason (problem #2), and contact-over-active-gate to save more area (problem #3). All these together screwed the yield, and some or all are being removed from their "new 10nm" process due out next year.

    More to the point, they're now more than a year behind TSMC 7FF with a similar process instead of the 3 years ahead that they originally promised...

  12. msroadkill

    Just doing what Intel and others have long done with perceived threats - wax lyrical about fanciful things on the drawing board, as if they are just around the corner, in the hope of spoiling sales.

    He is really talking about where they might be heading now if they had 10nm - BUT THEY DON'T.

    It is compounded by the need to reconstruct much of what they have done in the past.
