Having swallowed its pride and started again with 10nm chips, Intel teases features in these 2019-ish processors

"We have humble pie to eat right now, and we're eating it," Murthy Renduchintala, Intel's chief engineering officer, said yesterday. "My view on [Intel's] 10nm is that brilliant engineers took a risk, and now they're retracing their steps and getting it right." Record scratch. Freeze frame. You're probably wondering how …

  1. hammarbtyp

    Two thoughts

    Arm-like big.LITTLE architecture

    Interesting, because from where I stand it appears that Intel have pretty well left the embedded space. If you want to run a fanless x86 processor of any reasonable power, you have AMD and that's it. Intel just don't care. Whether such an SoC would allow them to get back into the game is questionable.
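    As an aside, the scheduling idea behind a big.LITTLE-style design can be sketched in a few lines. This is an illustrative toy, not Arm's (or Intel's) actual scheduler; the core capacities below are invented for the example (on Linux the real figures are exposed via /sys/devices/system/cpu/cpu*/cpu_capacity):

```python
# Toy sketch of big.LITTLE-style task placement. Capacities are invented:
# two "big" cores (1024) and two "little" cores (512).
CORE_CAPACITY = {0: 1024, 1: 1024, 2: 512, 3: 512}

def pick_core(task_load: int) -> int:
    """Place light tasks on the smallest core that fits, heavy tasks on a big core."""
    # Walk cores from smallest to largest capacity.
    for core, cap in sorted(CORE_CAPACITY.items(), key=lambda kv: kv[1]):
        if task_load <= cap // 2:  # keep headroom on the little cores
            return core
    # Nothing small enough fits: fall back to the biggest core.
    return max(CORE_CAPACITY, key=CORE_CAPACITY.get)

print(pick_core(100))  # light background task lands on a little core
print(pick_core(900))  # heavy burst lands on a big core
```

    The real win of such designs is power, not peak speed: background work parks on the little cores so the big ones can sleep.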

    Integrated GPUs no longer second-class citizens

    I may be wrong here, but I'm pretty sure it is largely Intel's integrated GPUs that have sucked. AMD, with their in-house Radeon expertise, have always been better.

    1. Anonymous Coward
      Anonymous Coward

      Re: Two thoughts

      Speaking of which... "Intel expects to launch a multi-core processor that has large and small x86 CPUs [..] Arm calls this big.LITTLE"

      Perhaps Intel can call theirs "Little and Large". Then again, Arm all but got that already, so I suspect that Intel will be forced to settle for "cannon+BALL" instead.

    2. diodesign (Written by Reg staff) Silver badge

      Re: Two thoughts

      "it is largely Intel's integrated GPUs that have sucked"

      Yeah, fair point - I've made that distinction now. I pretty much meant that but didn't make it clear enough.


  2. Thomas Wolf

    TSMC not at 7nm until 2019? Really?

    "Meanwhile, TSMC and Samsung promise to ship 7nm components in 2019 and 2020" - could have sworn that the A12 in the current crop of iPhones (2018) already uses TSMC's 7nm FinFET. Must be a dream - The Register says it won't happen until 2019.

    1. big_D Silver badge

      Re: TSMC not at 7nm until 2019? Really?

      The Kirin 980 in the Huawei Mate 20 Pro is also 7nm. And the Samsung Galaxy S10 next year should get the 7nm Exynos 9820 with its Mongoose cores.

    2. theblackhand

      Re: TSMC not at 7nm until 2019? Really?

      There will be different layout options targeting different designs at a given process node. Mobile SoC parts tend to use lower clock-speed designs with larger gaps between components/interconnects and less aggressive design rules, which allow a faster time-to-market on a new process (TSMC calls this CLN7FF) but are less power-efficient for complex designs. More complex CPUs need denser designs to hit higher clock speeds, and the associated design rules are much tighter and take longer to develop/troubleshoot. For 7nm, this is what TSMC calls CLN7FF+.

      So you're right, there are 7nm parts out there in mass production, but not following the high-performance design rules typically found in CPUs. The high-performance TSMC designs are likely to land in Q1 2019, based on trial parts already being in the channel, and TSMC advises production will ramp during 2019, with full 7nm capacity expected by 2020.

      1. Anonymous Coward
        Anonymous Coward

        Re: TSMC not at 7nm until 2019? Really?

        Given that Intel's first 10nm parts will be mobile and thus equivalent to Apple et al's phone SoCs in not being high clock / high power chips, the relevant point is still that TSMC is a full year ahead of Intel. The first AMD CPUs made in TSMC's 7nm high performance process will ship in products many months before Intel gets equivalent 10nm parts out the door.

        Not that it is impossible for Intel to catch up, especially if TSMC stumbles, but the fact is still that Intel went from having a 2-3 year lead to trailing by a year in 3-4 years!

      2. Sonic531

        Re: TSMC not at 7nm until 2019? Really?

        I learnt something new today. Thank you

        1. Anonymous Coward
          Anonymous Coward

          Re: TSMC not at 7nm until 2019? Really?

          You're assuming that "7nm" actually means anything. Why have TSMC and Samsung called their processes 7nm? Because it looks better than 10nm. 7nm isn't an industry standard and says almost nothing about how advanced the actual process is.

          1. Anonymous Coward
            Anonymous Coward

            Re: TSMC not at 7nm until 2019? Really?

            It is also important to note that TSMC chose to do the low power process first. Not because they couldn't have done it the other way around, but TSMC knows where they are making the large majority of their profit these days.

            The volumes of big GPUs and AMD CPUs are small potatoes compared to the 200+ million SoCs they make for Apple each year, and the hundreds of millions of SoCs they make for Qualcomm and Huawei.

            Similarly Intel usually puts out mobile CPUs first because they are higher margin than desktop, and benefit more from the reduced power draw of a smaller process. Server CPUs are always last because of the more lengthy validation time compared to client CPUs. They also tend to have really big dies, so having a more mature process with better yields is much more important.

    3. diodesign (Written by Reg staff) Silver badge

      Re: TSMC not at 7nm until 2019? Really?

      See the comment by theblackhand and DougS. There's production, and then there's production.

      We didn't say TSMC wasn't shipping 7nm in 2018 - the point was that 2019 and 2020 are when it really kicks off for desktop and server-grade stuff, the things Intel makes, which is the context of the piece.

      I've tweaked the sentence to make it clearer, cheers.


  3. Terje

    For my home desktop system I care about one metric when it comes to the cpu and that is performance and price. As long as the heat dissipation is not significantly higher than my current 5930K CPU, it's a non-issue; exactly what the architecture looks like, what lithography process is used in manufacturing, or whether the manufacturing requires a virgin or unicorn sacrifice is of no concern to me, as long as it works and delivers performance. To me integrated graphics is just extra cores that aren't on the chip, as there's no conceivable way it will outpace a discrete card in the foreseeable future anyway. But I guess Intel is desperately trying to convince anyone that will listen that it's not important.

    Of course the priorities are different when we talk about high-density servers etc, but the conclusion is the same: it's not important how it's done, just that it is. Unfortunately for Intel, they have mostly been standing still, treading water, for some years now.

    Mushroom cloud because it's about the correct temperature for a decently overclocked cpu!

    1. Roj Blake

      Re: one metric when it comes to the cpu and that is performance and price

      Isn't that two metrics?

      1. theblackhand

        Re: one metric when it comes to the cpu and that is performance and price

        Our chief weapon is performance...performance and price...performance and price.... Our two weapons are performance and price...and compatibility.... Our *three* weapons are performance, price and compatibility...and an almost fanatical devotion to Intel.... Our *four* *Amongst* our weapons.... Amongst our weaponry...are such elements as performance, price... I'll come in again.

      2. Anonymous Coward
        Anonymous Coward

        No-one expected *that*

        Edit- @theblackhand; You *git*! ;-(

        "My *two* metrics are performance, price and ruthless efficiency... my *three* metrics are performance, price, ruthless efficiency and an almost fanatical devotion to the Pope... my *four* metrics are... no... I'll come in again."

    2. MachDiamond Silver badge

      "To me integrated graphics is just extra cores that aren't on the chip, as there's no conceivable way it will outpace a discrete card in the foreseeable future anyway."

      For conventional desktops, I have to agree that I'd rather have a separate GPU/card to maintain at least some sort of upgrade path but there are many other applications where putting everything under one lid is an easier way to implement a product. Think of the infotainment displays in cars now. It's highly unlikely that CPUs and GPUs are going to be upgradable as separate units. You will be replacing the whole module for 20x the price of a computer with comparable processing power if an upgrade is ever available.

  4. Anonymous Coward
    Anonymous Coward

    Intel is still committed to producing a discrete graphics processor by 2020, but also wants to make Intel integrated GPUs not suck.

  5. Mage Silver badge

    Geometry shrinks soon a dead end?

    Also, continued geometry shrinks give progressively less increase in speed (capacitance), worse leakage (tunnelling and traditional leakage), less power saving, and lower yield. Maybe a lower lifespan (drift). Perhaps 20nm to 35nm real geometry is the limit. Ten years ago Samsung started stacking low-power CPU, RAM and flash chips in one package, allowing fewer I/O pins and little change to package height (SC6400, actually in the first iPhone). This is one route. Another is 4x larger chips with a lower I/O pin count and better yield, expanding the SoC idea. The current route is doomed to hit a dead end shortly due to physics.
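    The diminishing power returns can be seen in the first-order CMOS dynamic power relation, P = C * V^2 * f. A quick sketch with invented round numbers (not real process data) shows why stalled voltage scaling hurts so much:

```python
# First-order CMOS dynamic power: P = C * V^2 * f
# (switched capacitance, supply voltage, clock frequency).
# All numbers below are invented round figures for illustration.

def dynamic_power(cap_farads: float, volts: float, freq_hz: float) -> float:
    return cap_farads * volts ** 2 * freq_hz

old = dynamic_power(1e-9, 1.2, 3e9)    # hypothetical older node
new = dynamic_power(0.7e-9, 1.0, 3e9)  # shrink cuts C, but V barely drops now
print(f"power ratio: {new / old:.2f}")  # well under 1, but far from the old halving
```

    In the Dennard-scaling era, voltage dropped with each node and power fell quadratically with it; once V stops falling, a shrink only buys the linear capacitance term.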

    The "system on a wafer" idea dates from the 1970s. The idea was redundancy to allow for defects. Current wafer sizes would allow 19 giant hexagonal macro-SoCs of far greater complexity than envisaged by Ivor Catt. A complete laptop, server, tablet, phone, router, set-top box etc on a single chip. SMD legs on six edges, no poor-reliability BGA packages. Fewer I/O connections needed. Old-school mask ROM plus laser-cut fuses and RAM-table-based FPGA-type tech to route around most chip defects. Concentrate on reducing defects rather than reducing geometry.

    1. cornetman Silver badge

      Re: Geometry shrinks soon a dead end?

      The only real answer is building out sideways for greater parallelism.

      > Concentrate on reducing defects rather than reducing geometry.

      AMD is sorta trying this out with their chiplet design. It might be less efficient, but it does solve the yield problems that Intel is probably experiencing. This way they can design intelligently around the defect issue. Some have been making major prognostications in this area for some time.

    2. Anonymous Coward
      Anonymous Coward

      Re: Geometry shrinks soon a dead end?

      They are only a dead end for current methods of making CPUs. There are various efforts, like using an air channel instead of a semiconductor, or magnetic spins (the spintronics Intel is investigating), that would bring further shrinks and power reductions.

      I remember in the 80s hearing that the limits were fast approaching. There's so much money in continuing to improve performance that if there's a way to do it, someone will find it.

    3. MachDiamond Silver badge

      Re: Geometry shrinks soon a dead end?

      At some point, maybe now, shrinking the geometry is just a brute-force way of increasing performance, much like the ever-increasing clock speeds that also hit a practical limit.

      The way forward may be with more optimized layouts of the guts and stretching into the third dimension, so instead of a flat package it would be more like a cube. I've always been miffed at OSes that always need more powerful processors to slog through the slapdash implementation and stack of useless features. Nobody takes a well-matured OS and really starts to optimize it for performance. Half or more of the features in an OS sit unused, since there is no documentation on much of anything these days, and it just idles away sucking life out of the CPU.

      1. Anonymous Coward
        Anonymous Coward

        Re: Geometry shrinks soon a dead end?

        The problem with going to three dimensions is that it would take FAR longer to make the chips, because the process is bound in linear time. Making a chip with two layers would take roughly twice as long as a chip with one layer. Making a chip with 100 layers... well, a wafer you started might have to wait a year or two before it was finished! This isn't the same thing as 3D NAND, which uses a comparatively limited set of steps compared to logic.
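        The back-of-envelope arithmetic behind that point: if fab time scales roughly linearly with the number of logic layers, stacking quickly becomes impractical. The per-layer time here is an invented round number, purely for illustration:

```python
# If fab time scales linearly with logic layers, stacking gets slow fast.
# DAYS_PER_LAYER is a hypothetical round number, not real fab data.
DAYS_PER_LAYER = 7

def fab_days(layers: int) -> int:
    """Total wafer turnaround time under a naive linear-scaling assumption."""
    return layers * DAYS_PER_LAYER

print(fab_days(1))    # single-layer baseline
print(fab_days(100))  # roughly the "year or two" in the comment above
```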

  6. Big Al 23

    Not what Intel has been saying for the past several years

    Intel has crippled their 10nm process to ship a few chips to keep Wall Street from pulling the plug on them for being years late on their prized 10nm CPUs. When it was reported that Intel had stopped work on their defective and unsalvageable 10nm process Intel resorted to social media to claim that "yields were improving and all was well" which according to this story is complete poppycock. Intel is trying to buy time but it's too little too late and they are not going to be able to deceive investors forever. The other shoe is about to drop as AMD continues to steal Intel's lunch.

    1. diodesign (Written by Reg staff) Silver badge

      Re: Not what Intel has been saying for the past several years

      FWIW... 10nm v1 (Cannon Lake) is dead and buried. It was impossible to see it through to mass volume. The integrated GPU in the CL Core i3 was disabled because it didn't work. The metallization was not viable.

      Sunny Cove is v2 of 10nm, after going back to the drawing board.


      1. theblackhand

        Re: Not what Intel has been saying for the past several years

        "FWIW... 10nm v1 (Cannon Lake) is dead and buried. It was impossible to see it through to mass volume. The integrated GPU in the CL Core i3 was disabled because it didn't work. The metallization was not viable."

        Yields for a ~70mm² chip were in the region of 30-40% when they should have been at least double that for a new process, ignoring the 4+ years spent getting to that state. And it needed the GPU disabled to hit even those yields.
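        Those yield figures can be related to defect density with the textbook Poisson yield model, Y = exp(-A * D0). The numbers below are back-calculated from the comment's rough figures, so treat them as illustrative rather than Intel's actual data:

```python
import math

# Textbook Poisson yield model: Y = exp(-A * D0), with die area A in cm^2
# and defect density D0 in defects/cm^2. Illustrative numbers only.

def poisson_yield(area_mm2: float, defects_per_cm2: float) -> float:
    return math.exp(-(area_mm2 / 100.0) * defects_per_cm2)

# What defect density would a ~70 mm^2 die at ~35% yield imply?
area = 70.0
d0 = -math.log(0.35) / (area / 100.0)
print(f"implied D0 ~ {d0:.1f} defects/cm^2")
print(f"yield at half that D0: {poisson_yield(area, d0 / 2):.0%}")
```

        The model also shows why disabling the GPU helps: a smaller working area A raises the exponential directly.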

        The root cause appears to be the EUV process Intel are using - it has significantly increased the number of process steps during etching, which has led to significant slowdowns in producing chips AND significant drops in quality (and hence yield). Not a good place to be... While there were other materials issues (i.e. cobalt), fixing those would not have addressed the production speed/yield issues, which would have meant Intel needed more fabs for the same chip volumes.

        In hindsight, Intel took the wrong path to 10nm - the question is why they refused to acknowledge that for so long when it looked like their competitors were going to beat them to market (i.e. late 2017 based on availability of etching equipment). And if they have actually learnt from their mistakes...

        1. Flakk

          Re: Not what Intel has been saying for the past several years

          In hindsight, Intel took the wrong path to 10nm - the question is why they refused to acknowledge that for so long when it looked like their competitors were going to beat them to market

          When you discover you've made a big mistake, you can either own up to it and correct the error, or CYA and double-down. The latter seems to be a fairly popular option these days.

        2. Greg 38

          Re: Not what Intel has been saying for the past several years

          Have you met the PTD development arm at Intel? Arrogance is equally matched by stubbornness.

  7. Anonymous Coward
    Anonymous Coward

    Think that's cool?

    Seen some stuff inside TSMC fab 8 that would really blow your mind. Absolutely can't talk about it, even as an AC...

  8. pavel.petrman Silver badge

    "Open a Start menu"

    "Open a Start menu on Windows, and one of the performance cores will fire up in anticipation of starting an application, like Photoshop."

    Well, with Windows 10, the reality will be more like "Open a Start menu and all of the performance cores will fire up on the insane computational demand of the Start menu itself."

  9. Flywheel Silver badge

    How about...

    ... instead of the faster/smarter/sexier/smaller chips they concentrate on getting rid of the back/side/front doors first? Oh and the as-yet undiscovered one that's mimicking a pantomime trap-door. You know where that one is, don't you?

    "BEHIND YOU!!"

    1. Roj Blake

      Re: How about...

      They're doing that as well.

      The Cascade Lake Xeons due out soonish are meant to be Spectre-free.

  10. Spazturtle

    "Chipzilla wants to introduce an abstraction layer called OneAPI, which means software developers can craft code that makes the best use of available hardware acceleration in the host machine's CPUs, GPUs, FPGAs, and AI accelerators."

    Is this complementary to HSA or a competitor?
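    For what it's worth, the "one API, many backends" idea such an abstraction layer promises can be sketched as a dispatcher that picks the best available device. This is a toy in Python with entirely hypothetical names; the real OneAPI is C++/SYCL-based, and nothing below reflects its actual interface:

```python
# Toy "one API, many backends" dispatcher. All names are hypothetical;
# real OneAPI/SYCL code looks nothing like this.

AVAILABLE = ["cpu", "gpu"]  # pretend hardware discovery found these devices

BACKENDS = {
    # Each backend runs the same "kernel" (double every element).
    "cpu": lambda xs: [x * 2 for x in xs],
    "gpu": lambda xs: [x * 2 for x in xs],
    "fpga": lambda xs: [x * 2 for x in xs],
}

def dispatch(xs, prefer=("fpga", "gpu", "cpu")):
    """Run the kernel on the most preferred backend that is actually present."""
    for dev in prefer:
        if dev in AVAILABLE:
            return dev, BACKENDS[dev](xs)
    raise RuntimeError("no usable device")

dev, out = dispatch([1, 2, 3])
print(dev, out)  # falls back past the absent FPGA to the GPU backend
```

    The application asks for "the best accelerator" once; the layer decides what that means per machine, which is the pitch of both OneAPI and HSA.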

    1. Richard 12 Silver badge

      There's an XKCD about it

      Something about standards...

      Frankly, we have way too many multithreading and multiprocessing libraries already; adding a new one is not going to help the fundamental problem that thinking up parallel solutions is hard.

  11. IJD

    TSMC 7FF/7FF+ and Intel EUV

    Saying that TSMC 7FF is "low-power" and 7FF+ is "high-speed" isn't correct. 7FF has both compact low-power libraries (and metal options) for mobile (which Apple/HiSilicon use) and bigger faster higher-power libraries (and metal options) for CPU/HPC (which AMD use) -- actually, they can be mixed on the same chip. 7FF+ uses 5 EUV layers and new libraries to get 15%~20% area shrink, a small power reduction, and an even smaller (a few percent) speed increase -- and it also has low-power and high-speed libraries, just like 7FF. The main reason for 7FF+ is to pipeclean EUV before TSMC use it in anger for 5nm (due next year), and to get some reduction in die size/cost/TAT.

    Intel's 10nm problems are not due to EUV, because they don't use it -- they used quad-patterned metal instead to push the metal pitch down (problem #1), cobalt interconnect instead of copper for the same reason (problem #2), and contact-over-active-gate to save more area (problem #3). All of these together screwed the yield, and some or all are being removed from their "new 10nm" process due out next year.

    More to the point, they're now more than a year behind TSMC 7FF with a similar process instead of the 3 years ahead that they originally promised...

  12. msroadkill

    Just doing what Intel and others have long done with perceived threats - waxing lyrical about fanciful things on the drawing board, as if they are just around the corner, in the hope of spoiling sales.

    He is really talking about where they may be heading now if they had 10nm - BUT THEY DON'T.

    It is compounded by the need to reconstruct much of what they have done in the past.
