Intel couldn't shrink to 7nm on time – but it was able to reduce one thing: Its chief engineer's employment

Intel on Monday shook up its engineering management ranks after not only admitting its 7nm manufacturing pipeline had stalled due to defects but also that it is considering asking rival factories for help. Chipzilla said, effective Monday, August 3, chief engineering officer Venkata "Murthy" Renduchintala will exit the …

  1. CujoDeSoque

    SSDD and I don't mean disks

    Titanic. Deck chairs. Iceberg.

    Needs better Feng Shui!

    What could go wrong?

    1. Anonymous Coward
      Facepalm

      Re: SSDD and I don't mean disks

      Then again, what has gone right so far?

      I only wish they'd brought in an outsider who's done this already - like someone from TSMC.

      1. Anonymous Coward
        Anonymous Coward

        Re: SSDD and I don't mean disks

        Well, they thought they had brought in an outsider who'd done this before, but Murthy was probably just a bit too gung-ho about what he brought to the party.

        He was the initial force behind Qualcomm's purchase of CSR, but during the extended due-diligence process he suddenly disappeared from the map after he'd lost a pissing contest with another Qualcomm big-wig.

        Colour me surprised at his sudden departure from Intel.

      2. martinusher Silver badge

        Re: SSDD and I don't mean disks

        The problem stems from Intel being one of the first in the business. They had developed their own proprietary process, something that served them well for years but was bound to lead to trouble eventually. There were literally two sorts of chip for a decade or more -- "Intel" and "Everyone Else" (with TSMC being the "Everyone Else" front runner) -- and as you can imagine, as geometries got smaller and the overall market larger, that left Intel as a bit of an outlier. A hugely successful outlier, but still a company that literally has to do everything itself to remain competitive.

        The smart move would have been to become more integrated with the industry as a whole years ago, but institutional inertia is difficult to overcome, especially if what you're currently doing is raking in the profits.

  2. aregross

    "...now the wings have fallen off that phoenix."

    Well put!

    1. SAdams

      I suspect the phoenix was a victim of a round of cost cutting where the business was cunningly trimmed of all the capabilities needed to actually do stuff, whilst at the same time the senior executives found new ways to pay themselves more.

      You don’t need to be left wing to be cynical about the mess that is western capitalism.

      1. Anonymous Coward
        Anonymous Coward

        By "mess", do you mean "execs pushing short-term profits over long-term prospects, and driving companies into the ground, allowing better-run companies to eat their cake"? Sounds all good to me.

        1. J. Cook Silver badge
          Joke

          mmm... Caaaaaaaake..... *drools*

        2. CharlieG
          Pint

          What could possibly go wrong with prioritising shareholder returns over investment in R&D and product development? That $25,000,000,000+ they had lying around to spend on stock buybacks in the last two years alone definitely couldn't have been used any better ($4B in Q1 2020, $13B in 2019, $10B in 2018).

  3. NetBlackOps

    Ummm...

    "Kelleher previously oversaw Intel's manufacturing work, including the ramp up of its disastrous 10nm node." So, the person responsible for the 10nm disaster is in charge of 7nm and 5nm. Right.

    1. Korev Silver badge
      Holmes

      Re: Ummm...

      Yippee for the Old Boys Girls club, it must be great to be a member...

    2. Anonymous Coward
      Anonymous Coward

      Re: Ummm...

      Not necessarily. If the design was flawed from the start, no amount of production optimisation could fix it and that might have been noticed.

      1. Anonymous Coward
        Anonymous Coward

        Re: Ummm...

        Maybe. On the other hand, in my experience, the workplace attitude that results in quality engineering and motivated staff comes from the top.

    3. eldakka

      Re: Ummm...

      "Kelleher previously oversaw Intel's manufacturing work, including the ramp up of its disastrous 10nm node." So, the person responsible for the 10nm disaster is in charge of 7nm and 5nm. Right.

      I don't think it means what you think it means.

      She was in charge of operations, not RnD:

      She is responsible for corporate quality assurance, corporate services, customer fulfillment and supply chain management. She is also responsible for strategic planning for the company’s worldwide manufacturing operations.

      She was in charge of the manufacturing in terms of building new fabs, scheduling and organising conversions of existing fabs (e.g. migrating a 28nm fab to 10nm), maintaining fabs, and keeping them running (getting in the consumable chemicals etc.) as required to meet manufacturing demands. The physical infrastructure of fabbing.

      RnD say "these machines can do 20k/month with a defect rate of x", but when she puts them into a fab, they only do 5k/month with a defect rate of 10x... She can't build fabs to meet demand if the technology given to her (the litho machines developed by RnD) is complete shite and can't hit its specifications.

      1. imanidiot Silver badge

        Re: Ummm...

        Intel doesn't build litho systems. They buy them. In Intel's case, afaik, all from ASML.

        1. Yet Another Anonymous coward Silver badge

          Re: Ummm...

          The wafer steppers are ASML, but there is a lot more to developing the whole process than buying a machine from ASML and plugging it in.

          1. Cynic_999

            Re: Ummm...

            Yes, and it was the tying together of the various machines to form a viable process that she was in charge of doing.

  4. Anonymous Coward
    Anonymous Coward

    For now, it doesn't matter yet

    Yes, it's terrible optics that Intel's 7nm fab process is so b0rked, but:

    - Apple doesn't make server chips.

    - core-i9 is still a good proposition for desktops/laptops.

    - NVIDIA doesn't make server chips.

    - Marvell - good luck to them with ARM64. So far, no-one's ARM64 is beating Xeon on SPEC performance.

    - AMD - doesn't beat Xeon on SPEC performance either.

    The relevant benchmarketing standard here being SPEC.

    Intel routinely posts peak ratios of over 12 on SPECint 2017 speed, while AMD maxes out at 10.5. Link.

    Some people dislike SPEC because they say it's irrelevant. However: SPEC is not a collection of purely artificial programs written specifically for benchmarketing purposes with no connection to real-life software. All the benchmarks in SPEC are real, mostly open source, programs that were originally written for clearly defined practical purposes. When in Rome ...

    Maybe nm bragging rights don't always translate into performance.

    1. Anonymous Coward
      Anonymous Coward

      Re: For now, it doesn't matter yet

      From this article in March, looks like the AWS Graviton 2 ARM instance beats Intel on SPEC 2017 (and comes in ~40% cheaper for equivalent performance):

      https://www.anandtech.com/show/15578/cloud-clash-amazon-graviton2-arm-against-intel-and-amd/7

      1. Anonymous Coward
        Anonymous Coward

        Re: For now, it doesn't matter yet

        > From this article in March, looks like the AWS Graviton 2 [ ... ]

        Only results submitted to, and published/verified by, SPEC are valid.

        Claims made anywhere else about SPEC performance results that aren't submitted to SPEC are just marketing bullshit. Certain criteria must be met in order for a SPEC benchmark result to be considered valid. One of the criteria is repeatability. There are several other criteria.

        If the claimed results weren't submitted to SPEC - and they weren't, because I searched for submissions on ARM64/AArch64, and there aren't any - that tells me everything I need to know about their validity.

        1. Qumefox

          Re: For now, it doesn't matter yet

          It's always amusing watching people try to move goalposts whenever presented with evidence contrary to their beliefs.

          Keep on clutching at those straws, Intel fanboy. They get shorter with every passing day.

    2. GrumpenKraut

      Re: For now, it doesn't matter yet

      But, but, peak performance! Isn't this seriously getting long in the tooth?

      Intel's latest "can deliver peak performance for one whole minute" edition very much seems to indicate just that to me.

    3. Fading
      Stop

      Re: For now, it doesn't matter yet

      Erm isn't that a per-thread figure? AMD offers more threads for less money and less power than the equivalent Intel chips so I'm not sure what the win is supposed to be for Intel (and you also don't need to turn off SMT/Hyperthreading because of security concerns with AMD).

      1. Anonymous Coward
        Stop

        Re: For now, it doesn't matter yet

        > Erm isn't that a per-thread figure?

        Nope.

        The SPEC ratio is a number that is generated by SPEC software. It represents the relative performance index of the submitted result. For SPEC speed, the ratio takes into account the number of threads, if the benchmark contains OpenMP parallel blocks of code (i.e. threads). The higher the ratio, the better the relative performance. Not all SPEC speed benchmarks use OpenMP. Many do, but some don't.

        Security concerns, or cost of hardware or software aren't part of the SPEC benchmark parameters.
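
        For anyone unfamiliar with how those ratios are derived: per the published run rules, each benchmark's ratio is the reference machine's run time divided by the measured run time, and the composite score is the geometric mean of those ratios across the suite. A minimal sketch in Python (illustrative only, not SPEC's actual tooling; the numbers are made up):

            import math

            def spec_score(ref_times, run_times):
                # Per-benchmark ratio: reference time / measured time (higher = faster)
                ratios = [ref / run for ref, run in zip(ref_times, run_times)]
                # Composite score: geometric mean of the per-benchmark ratios
                return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

            # e.g. three benchmarks, each run twice as fast as the reference machine
            print(spec_score([100.0, 200.0, 400.0], [50.0, 100.0, 200.0]))  # -> 2.0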

        1. Cuddles

          Re: For now, it doesn't matter yet

          "Security concerns, or cost of hardware or software aren't part of the SPEC benchmark parameters."

          Which is why you're getting all the downvotes for pushing SPEC as the only benchmark that matters. Even if Intel is still ahead on this one particular benchmark, no-one actually cares because a performance benchmark without taking other factors into account is completely meaningless. If some other part gets a slightly lower score, but is cheaper, smaller, more power efficient, offers more features, and so on, then that part is going to be the one people actually want. There are very few places that just want raw power at any cost.

          You're also missing the bigger picture. The question isn't whether Intel can still win on a benchmark right now, but what the trend is and what might be the case a few years down the line. Until the last couple of years, Intel had a massive, unquestionable lead in desktop and server parts. AMD fell way behind a decade or more ago, and ARM was just an upstart mobile maker that occasionally tried to dip a toe into real computing. Now, both AMD and ARM are fighting Intel for the top spot, and you can pretty much pick which one you want to claim wins depending on which benchmarks you prefer. And now Intel say that they're falling even further behind while their competitors carry on pushing ahead. So what do you think is going to be the case in two or three years' time? Argue all you like that Intel is just about clinging to the top spot for certain workloads for now; it's not important to pinpoint the exact moment someone else nosed ahead. It's the fact that they're very clearly in the process of being overtaken that matters.

          1. Anonymous Coward
            Facepalm

            Re: For now, it doesn't matter yet

            > Which is why you're getting all the downvotes for pushing SPEC as the only benchmark that matters.

            Or maybe it's because the vast majority of commentards here don't understand the difference between an industry benchmark based on some objective parameters, and a personal opinion based on hormones.

            That's like downvoting a blood test because you don't like the results.

            If SPEC was as irrelevant as you claim it is, why are there so many official SPEC submissions from the industry?

            > [ ... ] no-one actually cares because a performance benchmark without taking other factors into account is completely meaningless.

            What other factors? Care to enumerate them?

            If anyone has a better performance benchmark in mind, propose it, and have it accepted by the industry. Until that happens, SPEC is the only one we've got.

            1. seven of five

              Re: For now, it doesn't matter yet

              > Or maybe it's because the vast majority of commentards here don't understand [...]

              Yes. The alternative would be you being wrong, which obviously cannot be.

            2. Cuddles

              Re: For now, it doesn't matter yet

              "If SPEC was as irrelevant as you claim it is, why are there so many official SPEC submissions from the industry?"

              I didn't say it was irrelevant, I said no-one cares about a single benchmark in isolation without considering context.

              "What other factors? Care to enumerate them?"

              I already did, as have numerous other people here. Cost, efficiency, size, features, and no doubt plenty of other things depending on the use case. You seem to be obsessed with the fact that Intel can get the same score as AMD while using fewer cores. But as has been repeatedly pointed out, those AMD cores cost significantly less, need fewer sockets, and use less power. So sure, Intel win a benchmark on a performance-per-core basis. So what? Why do you think this is such a big deal that we should all care about it to the exclusion of all else?

              1. Anonymous Coward
                Anonymous Coward

                Re: For now, it doesn't matter yet

                > Cost, efficiency, size, features, and no doubt plenty of other things depending on the use case.

                Cost is not a SPEC parameter. And it's not quantifiable anyway. Both Intel and AMD charge what the market will bear. And the price paid has nothing to do with the price advertised anyway. So, we don't know what cost even means here.

                Size is not a SPEC parameter. I don't even understand what size means in this context. Size of the die? Size of the chip itself? Size of the chip socket?

                Define efficiency. Did you mean power consumption? It's not a SPEC parameter. SPEC actually has defined an output for power consumption, but no-one ever reports it.

                Features? What features? What does features mean here? Is there a list of clearly defined features?

                no doubt plenty of other things depending on the use case: What other things? Can you list them? I use AMD chips as coasters. Is this a valid use case?

                You do not appear to have even a minimal theoretical grasp of what a benchmark is. You keep mixing in nebulous and undefined terms -- features -- that appear to suit your confirmation biases of the moment. And when faced with the actual results of the benchmark -- namely numbers produced in a controlled environment that followed the evaluation specs -- you conveniently ignore them if they happen to contradict your expectations bias. Case in point: So what if Xeon objectively produces better benchmark results?

                Or you counter them with undefined terms for which no information is available. Case in point: features.

                All of this tells me three things:

                - you've never run a benchmark of any kind.

                - you've never been tasked to run a benchmark of any kind.

                - you can't be trusted to run a benchmark of any kind because you are incapable of isolating your expectations bias / confirmation feedback loop from the benchmark results.

                Meaning: if you are faced with an outcome that does not match your expectations bias, you will intentionally skew the benchmark results just to confirm your bias.

                1. An ominous cow heard

                  Re: For now, it doesn't matter yet

                  Cost may not be a factor in the SPEC benchmark suites, but cost does appear in other widely recognised benchmark ratings - for example the TPC benchmark family, including for example the TpmC benchmark and the associated Price/TpmC number.

                  Readers who already know everything won't need this link, but others might find it (and others related to it) interesting:

                  http://www.tpc.org/tpcc/results/tpcc_advanced_sort5.asp

                  Readers who look *really* carefully may find there's even a specification for "energy" too, in terms of (e.g.) Watts per thousand TpmC

                  edit: the CoreMark benchmark family also has "energy efficiency" as part of the options to be measured.

                  Sometimes there's more to life than SPEC.

        2. Fading

          Re: For now, it doesn't matter yet

          So why only focus on the CPU 2017 integer speed? Epyc seems to do pretty well in FP rates, FP Speed and integer rates? Or was that the only metric that supports your view?

          1. Anonymous Coward
            Anonymous Coward

            Re: For now, it doesn't matter yet

            > Or was that the only metric that supports your view?

            No, that's not why I only mentioned SPECint speed. I only mentioned SPECint speed because mentioning all four benchmark sets would have taken waay too much space with links and all. And because SPECrate performance is much more dependent on the performance characteristics of the system as a whole, as opposed to SPECspeed.

            > Epyc seems to do pretty well in FP rates [ ... ]

            pretty well is beside the point. Does it do better, the same, or worse than Xeon? From what I can see, it still can't beat Xeon.

            1. Fading

              Re: For now, it doesn't matter yet

              Then you are not looking hard enough - the latest Epyc are regularly scoring in the 500s for FP Rate and near 200 on FP Speed. Now add in cost per core and Xeon is not looking good.

              1. Anonymous Coward
                Anonymous Coward

                Re: For now, it doesn't matter yet

                > [ ... ] the latest Epyc are regularly scoring in the 500 for FP Rate and near 200 on FP Speed.

                AMD EPYC 7662 SPECspeed 2017 FP

                Intel Xeon Platinum 8268 SPECspeed 2017 FP

                The AMD benchmark was run with 128 threads.

                The Xeon benchmark was run with 96 threads.

                Both benchmarks scored a peak ratio of 212.

                What follows from these results is that Xeon vastly out-performs Epyc on SPECspeed 2017 FP.

                The higher the number of threads, the higher the score in the SPEC ratio computation. So: if Xeon manages to score the same ratio with a lower number of threads compared to Epyc, it follows that running the Xeon benchmark with the same number of threads as Epyc would necessarily score higher.

                I didn't have the time to search through the hundreds of submitted SPEC results and find the absolute perfect optimal submission for either manufacturer.

                1. Steve Todd

                  Re: For now, it doesn't matter yet

                  Here’s the problem for Intel:

                  They don’t even come top of the table for SPEC performance. (that honour goes to a Fujitsu SPARC machine tested back in 2017, see https://www.spec.org/cpu2017/results/res2017q4/cpu2017-20171211-01435.html )

                  AMD are currently making CPUs that in 2 socket configuration need an 8 socket Xeon machine to beat them (see https://www.spec.org/cpu2017/results/res2020q2/cpu2017-20200525-22554.html ).

                  The Intel boxes are vastly more expensive to buy and to run (all those sockets need lots of power).

                  There is a limited market for “absolutely the fastest machine you can buy”, mostly companies want the best performance they can afford within their budget, or target a given performance level and then see how cheap they can buy it. Intel have a certain amount of inertia that they can rely on here, as it takes companies a while to test and qualify new hardware, but they are starting to come under fire as the AMD alternatives are looking increasingly attractive. They need to be cheaper and consume less power to compete, but to do so needs a smaller, more advanced process than 14nm, which they haven’t really got (even 10nm isn’t ready for server grade chips yet).

            2. EnviableOne

              Re: For now, it doesn't matter yet

              But you can buy two Epycs with twice the cores for the price of one Xeon, so if you compare like for like, the Epyc is better bang for buck.

              That's with Rome. Milan, on Zen 3, is on its way by the end of the year, and there is plenty of fab space in the pipeline.

              With Intel not only faltering at 10nm and 7nm but also having limited capacity on their 14nm+++++++++++++ node, OEMs will not only want to go for Epyc for speed and cost savings, but might have to, given the lack of Intel chips available.

    4. Dave K

      Re: For now, it doesn't matter yet

      You're right that a reduced process size doesn't mean higher performance by itself. But it does mean lower power usage, a smaller die and hence lower manufacturing costs per chip. Or of course you can use that extra transistor budget you now have to add extra cores, more cache, or for a tweaked die design that uses additional transistors to increase the performance of the chip.

      Not saying it isn't possible to compete with an older process. AMD themselves managed it back in the early 2000s with the Athlon, Athlon XP and Athlon 64 that competed well whilst usually running a step or two behind Intel from a process size perspective.

      Still, a smaller process size gives you more options and flexibility, and AMD is currently reaping the rewards of this.

      1. Cynic_999

        Re: For now, it doesn't matter yet

        "

        You're right that a reduced process size doesn't mean higher performance by itself. But it does mean lower power usage, a smaller die and hence lower manufacturing costs per chip.

        "

        Correct on the first two points, but your third point (lower manufacturing cost) does not follow. The process may be inherently more expensive (making anything with tighter tolerances usually is), and/or there may be a lower yield per wafer. From my own (different but related) experience, shrinking a PCB by using smaller track & gap widths, smaller BGA parts and smaller vias may result in a smaller PCB, but rarely in a lower price per board.

    5. confused and dazed

      Re: For now, it doesn't matter yet

      I agree, it's mm² of silicon versus performance that matters, not the marketing fluff that is now "node bragging rights".

      The real danger here is that the West appears to have lost leadership in process technology, and it's a hard thing to regain... It's going to be grim when TSMC becomes a monopoly for CPU fabbing.

      1. Anonymous Coward
        Anonymous Coward

        Re: For now, it doesn't matter yet

        Especially if WW3 starts over a fight between the US and China for control of Taiwan and thus TSMC. The South China Sea is a scary place at the moment. Do not hold a firework display in the Formosa Strait.

        1. Anonymous Coward
          Anonymous Coward

          Re: For now, it doesn't matter yet

          If China invades Taiwan, you would assume that lots of experienced TSMC staff will flee, taking knowledge with them.

    6. Boothy

      Re: For now, it doesn't matter yet

      Quote: "core-i9 is still a good proposition for desktops/laptops."

      Not really: too costly, too hot, uses too much power. One of the few remaining technical benefits Intel has over AMD is single-core performance, and only by a small margin now with Zen 2. But this is also irrelevant for most people and for most software, where more cores is usually better. If you're a hard-core gamer with an unlimited budget, then maybe go for Intel, but otherwise AMD all the way.

      Also, Intel's single-core lead over AMD, which is basically achieved through raw clock speed, is quite likely to be lost with Zen 3, due before the end of the year (and Intel currently have nothing to compete with it, unless they pull something unexpected from their hat). Zen 3 has IPC, clock speed and internal optimisation gains over Zen 2 (reducing known bottlenecks in the Zen 2 architecture), and most analysis I've seen expects these combined to make Zen 3 at least on par with, if not faster than, Intel for the majority, if not all, single-threaded workloads. AMD already have the core-count advantage, so they are very likely to pull ahead of Intel on their last bench-marking advantage, namely gaming.

      Quote: "NVIDIA doesn't make server chips."

      Erm, yes they do. Their data centre revenues were around $3b last year, which is over a quarter of their business and growing.

      Granted, they don't make CPUs (yet), but these are still 'server chips', and they've been sniffing around ARM, which for them would likely be a good purchase, as they'd then be able to build complete server solutions with an ARM-based CPU plus nVidia GPU. I could easily imagine nVidia bringing out an ARM CPU, at 7nm or even 5nm, made by TSMC or Samsung, in 12 to 18 months' time, the main issue likely being getting space at a fab to produce them.

      1. Anonymous Coward
        Facepalm

        Re: For now, it doesn't matter yet

        > Erm, yes they do.

        [ In response to NVIDIA doesn't make server chips ].

        Followed by:

        Granted they don't make CPUs (yet), but these are still 'server chips' [ ... ]

        First, you're contradicting yourself.

        Secondly, I said nothing about revenue. I don't care about revenue. The article is about Intel's 7nm fab process, not about revenue.

        Thirdly, these aren't 'server chips' any more than they are 'laptop chips' or 'desktop chips'. These are GPUs. Do you know the difference?

        Lastly: do you work in marketing somewhere by any chance?

        1. Boothy
          FAIL

          Re: For now, it doesn't matter yet

          If a chip is designed and built to go into a server, then it is by definition, a server chip.

          A GPU is still a processing unit, it's even in the name, and these chips are specifically designed for high end number crunching in data centres, including super computers, so more specialised than a CPU, but again, still a processing unit that goes into a server, ergo server chip. It's not like you can plug a monitor into these things.

          And no, I don't work in marketing, I'm in IT, specifically a solutions architect. I help design and implement large scale enterprise solutions. Where do you work, the Daily Mail? Shelf stacker somewhere?

          1. Anonymous Coward
            Devil

            Re: For now, it doesn't matter yet

            > It's not like you can plug a monitor into these things.

            Really? You can't plug a monitor into an NVIDIA card? Or a mobo with an on-board NVIDIA GPU? Not even a little bit?

            > If a chip is designed and built to go into a server, then it is by definition, a server chip.

            Awesome! That clears it all up. Keep'em coming, mate.

            1. Steve Todd
              FAIL

              Re: For now, it doesn't matter yet

              The server targeted parts indeed lack video ports as they are intended for data center racks and will never see a monitor. What they are being used for is as a massively parallel vector co-processor, not as graphics engines like the consumer parts.

              1. Anonymous Coward
                FAIL

                Re: For now, it doesn't matter yet

                > The server targeted parts indeed lack video ports [ ... ]

                Nope. They don't lack video ports at all. They have at least two video ports.

                And thusly, you've just announced to the world that you have no clue what you're talking about. Evidently you've never seen one of those NVIDIA boards that's targeted for the purposes you describe. But you're an expert.

                Too late now, but why don't you go take a peek at NVIDIA's site - or Amazon. They have pictures of those video boards that are used as GPU co-processors. Yes, all the models have HDMI out ports. You can attach a monitor.

                As for myself, I installed one of those super-expensive NVIDIA boards just last week in one of our boxes. Because I'm playing with CUDA at work.

                Ta-ta.

                1. Steve Todd
                  FAIL

                  Re: For now, it doesn't matter yet

                  Really? Here's a link to a picture and specs of the nVidia V100 card designed for the data centre, could you point out the (minimum of) two video ports please?

                  https://www.techpowerup.com/gpu-specs/tesla-v100-sxm2-32-gb.c3185

            2. Boothy

              Re: For now, it doesn't matter yet

              My <insert deity here>, just how stupid are you!?

              You specifically brought up server chips - that was YOU. I am not talking at all about consumer GFX cards like the RTX 2080 etc. These are a completely different product.

              As Steve Todd has mentioned, the server GPUs are not GFX cards; they are specifically built for server use, rack mounted, and use CUDA etc. to run tasks.

              Here's a vid showing someone fitting some of them into a rack system...

              https://youtu.be/ipQXdjjAPGg?t=57

    7. Anonymous Coward
      Anonymous Coward

      Re: For now, it doesn't matter yet

      As a customer, the important thing is to know which benchmarks serve as an effective proxy for your application. For example, when I worked in computational fluid dynamics, there was a particular SPEC subtest that showed the same sort of variation between systems as the performance of our own code.

      Whatever methodology you use, the goal is to characterise real-world performance at a system level, and use that information to seek out price-performance sweet spots.

      And there are lots of variables here - not just the processor choice, but also the acceptability of different compilers in your organization and the level of willingness to use aggressive & compiler-specific optimization options.

      For example the relative performance of Intel and AMD systems will depend on whether the code is compiled with Intel's compiler or gcc, whether the application can make good use of AVX512 (not supported by AMD), etc.

      AC because now I'm tuning benchmarks for a hardware vendor. Interestingly we are allergic to making comparisons with the competition, rather we aim to "put our best foot forward" and show that our new kit outperforms previous generations.

  5. YetAnotherJoeBlow

    Meanwhile...

    In other news, Venkata "Murthy" Renduchintala has accepted a position with TSMC as lead architect for 3nm. A smiling Murphy quipped "you just can't make this shite up."

  6. Anonymous Coward
    Anonymous Coward

    failed strategy

    Intel just totally failed to realize they're not meant to do silicon writing anymore and need to leave this to other companies.

    CPU architecture/design and production are 2 different businesses. Have been for years.

  7. Maelstorm Bronze badge
    Joke

    While Intel has been having a lot of problems shrinking their kit, their petard is impressively small as of late.

    1. J. Cook Silver badge
      Go

      Maelstorm wrote:

      While Intel has been having a lot of problems shrinking their kit, their petard is impressively small as of late.

      That's because they keep getting hoisted by it. :D

  8. Maelstorm Bronze badge

    As these chips keep getting smaller and smaller, we are pushing the bounds of physics. That's probably the problem they are experiencing. I have to wonder about the reliability of their competitors' parts.

    1. david 136

      FUD. The competitors are fine.

      1. Anonymous Coward
        Anonymous Coward

        Maybe so, but the laws of physics are a harsh mistress and they don't give a rat's ass how good your competition is or how well they're doing.

        It's been a long time since I worked in the semiconductor industry, but even then people were chewing over the practicalities and problems associated with sub-10nm nodes - once you start getting down to atomic scale there are some very real problems to overcome.

        Memory is fuzzy, but 'state of the art' back then (early 2000s) was 65nm and the transition to 45nm was starting to gather pace, as was the use of 300mm wafers to increase yield - 10nm was still a way off, but it was still weighing on the minds of people far smarter than me.

        1. SAdams

          It's ironic - half of those developing new CPUs are trying desperately to harness and retain quantum effects, the other half trying desperately to avoid them...

          1. Julz
            Joke

            Schrodinger's CPU...

            1. Anonymous Coward
              Anonymous Coward

              You'll only know it worked after you destroyed it.

              1. Anonymous Coward
                Anonymous Coward

                Ultimately, it all comes down to Quantum[tm]

                That is/was closer to the truth than you can possibly imagine ...

                Back then, the principal question seemed to be "how can we mitigate quantum effects?" whereas now, as noted upthread, there seems to be a split between "how can we mitigate quantum effects?" and "how can we make quantum effects work in our favour?" - as node sizes shrink, these questions, and their answers, become rather more important.

                Of course, we all know that the real answer is "you never know until you look" ;-)

                1. Qumefox

                  Re: Ultimately, it all comes down to Quantum[tm]

                  But looking will change the results!

                  1. NetBlackOps

                    Re: Ultimately, it all comes down to Quantum[tm]

                    And as recently shown, two different observers can observe different results. I love quantum physics. It's as weird as I am.

    2. 9Rune5
      FAIL

      Reliability, security and all that tat

      I have to wonder about the reliability of their competitors' parts.

      There is little reason to speculate. Instead you can read up on Meltdown and Spectre and simply extrapolate from there.

      TL;DR: Intel's shortcuts caught up with them.

      1. Anonymous Coward
        Anonymous Coward

        Re: Reliability, security and all that tat

        https://www.cnbc.com/2018/01/04/intel-ceo-reportedly-sold-shares-after-the-company-already-knew-about-massive-security-flaws.html

        https://www.fool.com/investing/2018/06/15/revisiting-intel-ceo-brian-krzanichs-huge-stock-sa.aspx

        Has there been any more recent news on Krzanich's alleged insider dealing?

        1. Anonymous Coward
          Anonymous Coward

          Re: Reliability, security and all that tat

          "Has there been any more recent news on Krzanich's alleged insider dealing?"

          Wondering, too. Maybe some US posters can tell us? I thought the SEC came down hard on people doing this? Unlike the French AMF, which is still sleeping peacefully.

          This kind of shit, which I was made aware of myself in the past, is the reason why I **never** buy any shares.

      2. S4qFBxkFFg
        Pint

        Re: Reliability, security and all that tat

        "There is little reason to speculate."

        I see what you did there.

    3. AMBxx Silver badge
      Windows

      I seem to recall reading about problems with smaller transistor sizes 20+ years ago, round about the time Pentium processors were being launched. Looks like we've managed so far...

  9. Anonymous Coward
    Anonymous Coward

    and the position will be eliminated from its corporate structure

    Bit harsh. Instead of one figurehead, you now have 5 heads who can all blame each other...

    1. jonathan keith

      Re: and the position will be eliminated from its corporate structure

      ... although there's little that's more effective than a circular firing squad.

  10. imanidiot Silver badge

    Node sizes are arbitrary

    There is no industry standard to denote node size and the Intel 10nm node is about the same overall feature size as the TSMC 7nm node afaik.

    Intel seems to have bet the boat on EUV litho, but it wasn't ready for 10nm, and Intel seems to be having problems getting it to work for its 7nm and 5nm processes. TSMC is now ahead of the curve by miles.

    1. _olli

      Re: Node sizes are arbitrary

      That is true: Intel has lost its prior technical advantage, but perhaps is not yet that far behind its industry peers.

      However, missing their committed business-critical timeline by years not once but twice indicates a severe problem in business execution. Given the nature of their trouble, it's concerning that the current CEO has a background in finance, not in engineering or operations.

  11. jonathan keith

    Just desserts

    Ain't karma a bitch, Intel?

  12. hoola Silver badge

    Eye off the ball

    Intel have been slipping for some time now and this is becoming a bit of a downward spiral. For the last few refreshes we have been using Intel on our HPC clusters. This is now looking highly unlikely for the next one, as AMD is now a real contender again. Lower cost and power equates to more cores/threads/cycles and capacity for the same budget. Arm is also starting to look interesting for some workflows as well.

    1. Brewster's Angle Grinder Silver badge

      Re: Eye off the ball

      In fairness, at 7nm, it's a fairly small ball. You need the eyes of an eagle.

  13. IJD

    The problem for Intel is that nobody now believes they can deliver on their roadmaps for any CPUs beyond 14+++++, whereas nobody doubts AMD any more, as TSMC keeps hitting (or beating) all its targets for rolling out new high-yielding process nodes.

    Intel stuck with inhouse fabs and lots of different big monolithic chips because they could afford it and this is what always worked for them in the past, and ignored the oncoming train wreck. AMD were forced to go to foundries because unlike Intel they had no choice, and they went to chiplets partly because they couldn't afford to do multiple foundry tapeouts for different monolithic-die SKUs.

    In hindsight, these look like two of the smartest decisions AMD had no choice but to make, and two of the dumbest decisions Intel made out of pig-headedness ;-)

    1. Boothy

      For the chiplets, one of the other reasons was apparently that they were expecting lower yields from the new 7nm process, and the smaller you can make your chips, the better the overall yield (i.e. less wafer wasted on bad silicon when using smaller die sizes) - see the sketch at the end of this comment.

      It also meant the IO die, which doesn't need to run at the same speed as the CPU itself, could be made on an older, mature and so cheaper node, and of course those didn't need to be made by TSMC.

      All of which helps keep the costs down of course!
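
      To put rough numbers on that yield argument, here's a toy Poisson-defect model in Python (illustrative defect density only, not AMD's or TSMC's actual figures):

          import math

          def die_yield(area_mm2, defects_per_mm2=0.001):
              # Poisson model: probability a die of this area has zero defects
              return math.exp(-defects_per_mm2 * area_mm2)

          print(die_yield(600.0))  # one 600 mm^2 monolithic die: ~55% good
          print(die_yield(75.0))   # one 75 mm^2 chiplet: ~93% good
          # Chiplets are tested individually and only known-good dies get
          # packaged, so far less wafer area is wasted per shipped CPU.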

    2. Brewster's Angle Grinder Silver badge

      "AMD were forced to go to foundries because unlike Intel they had no choice,"

      Wasn't it a conscious decision to divest themselves of what would become GloFo? It's really worked for them. But if TSMC was struggling and Intel was marching ahead, it would look like a very different decision.

  14. Anonymous Coward
    Anonymous Coward

    Don't understand the rush to smaller-nanometre architectures.

    Current silicon is patterned with light/lasers and still hits problems with defects. Smaller sizes just increase that error rate.

    They could keep it at the same size and just increase the die size?

    1. david bates

      Smaller processes, as I understand it, bring the potential for faster chips and lower power usage.

    2. Cynic_999

      Increase die size and you increase power. Increase power in the rack and you have to increase power to the cooling as well. In portable equipment, increased power = decreased battery life between charges.
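
      To put first-order numbers on that: dynamic power in CMOS goes roughly as activity x capacitance x voltage squared x frequency, and a bigger die means more switching capacitance. A back-of-envelope sketch (the figures are made up, not any real chip's):

          def dynamic_power_watts(activity, cap_farads, volts, freq_hz):
              # Classic first-order CMOS model: P ~ a * C * V^2 * f
              return activity * cap_farads * volts ** 2 * freq_hz

          # e.g. 10% switching activity, 100 nF switched capacitance, 1.2 V, 3 GHz
          print(dynamic_power_watts(0.1, 100e-9, 1.2, 3e9))  # ~43 W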

  15. ColonelClaw

    This right here is why I will never understand the markets; they fire the person responsible for years of mismanagement and incompetence, and their share price goes DOWN?

    1. Anonymous Coward
      Anonymous Coward

      Quite. Maybe investors know the big bosses who originally hired the person responsible are still at Intel.

      ... or it's just the usual churn and burn of the stock market flailing about. Buying opportunity if you're long NASDAQ:INTC, another reason to stay away otherwise.

    2. Qumefox

      The firing wasn't what caused the price to drop, it was the announcement of 7nm getting pushed back. I work in financial services, and the firing has barely gotten a mention outside of tech circles.

  16. Binraider Silver badge

    As a physics student in the late 90s... Uni lecturers were all adamant that quantum tunnelling represented a hard limit on CPU die shrinking, with the limit somewhere on the order of 5 to 12nm. 20 years on, those limits are being poked and prodded by CPU manufacturers. It's remarkable that AMD's supply chain has managed to pull anything working off at this range at all, given the esoteric and incomplete knowledge we have of operation in that space. It is equally remarkable that Intel has missed out. One assumes the IP is being carefully corralled by the scientists and lawyers that "can", while Intel missed the boat.
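
    For a feel of the numbers those lecturers were worrying about, here's the textbook WKB estimate for an electron tunnelling through a rectangular barrier (toy parameters, not any real gate stack):

        import math

        HBAR = 1.0546e-34   # reduced Planck constant, J*s
        M_E  = 9.109e-31    # electron mass, kg
        EV   = 1.602e-19    # joules per electron-volt

        def tunnel_probability(width_nm, barrier_ev=3.0):
            # WKB: T ~ exp(-2 * kappa * d), with kappa = sqrt(2 * m * phi) / hbar
            kappa = math.sqrt(2.0 * M_E * barrier_ev * EV) / HBAR
            return math.exp(-2.0 * kappa * width_nm * 1e-9)

        print(tunnel_probability(1.0))  # ~2e-8
        print(tunnel_probability(0.5))  # ~1e-4: halve the barrier and leakage
                                        # jumps by about four orders of magnitude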

    Rather less esoteric: Intel's recent decisions to lock out low-end motherboard chipsets from basic options that have been available since the very first Core CPUs are pure profiteering, and serve to drive budget customers further away. Performance-driven customers almost universally look to AMD now. With ARM on the desktop becoming reality, I do wonder whether they are starting to retarget their efforts away from x86. Wintel isn't what it used to be, and they may want to dis-associate.

  17. Anonymous Coward
    Anonymous Coward

    Management Smell

    I detect management severely interfering with the engineers here, deciding to fire the person working at the coalface but refusing to look amongst themselves for the true cause of the problems at Intel...
