Please, no Moore: 'Law' that defined how chips have been made for decades has run itself into a cul-de-sac

In 1965, Gordon Moore published a short informal paper, Cramming more components onto integrated circuits. In it, he noted [PDF] that in three years, the optimal cost per component on a chip had dropped by a factor of 10, while the optimal number had increased by the same factor, from 10 to 100. Based on not much more but …

  1. Lord Elpuss Silver badge

    Brilliant article, thanks. I learned a lot.

    1. fung0

      Hardware Isn't the Issue

      Intel has definitely been asleep at the switch. But that doesn’t mean that Moore’s law is entirely dead.

      AMD has continued to forge ahead. Also, Intel has recently (finally!) announced plans to move forward with new fab technologies, including increased use of techniques that could even lead to sub-nanometer fabrication.

      But, more importantly, development in fab technology has become largely irrelevant.

      The real bottleneck over the past two decades has been lack of innovation in software, not hardware. Windows and Office, in particular, continue on bloated, antiquated codebases that date back to the late 1990s. The monopolistic dominance of these two dysfunctional products has strongly discouraged more-aggressive hardware developments, since better silicon can’t make these most-used software applications more useful. (Far from exploiting newer chips, with Windows 8 Microsoft actually bragged that it had made the CPU do less work.)

      This stagnant situation in software has resulted in steadily declining demand for newer and faster hardware. That decline in turn has discouraged chipmakers from investing as much in new chip technologies as they might otherwise have done. It stands to reason that Intel has been the most lethargic, since it had the most comfortable market share to rest on. Competitors like AMD and ARM, predictably, have been somewhat more aggressive. GPU makers moved much more rapidly, largely because gaming is the one software category not dominated by mouldy Microsoft products.

      Apple, in particular, has recently proved that simply re-architecting today’s chip designs can lead to sizable gains in performance. But the PC world – thanks to Microsoft’s staggering lack of vision – has been stuck with an awkward hardware architecture that dates back to the early 1980s. MS should long ago have done what it did with Windows NT: introduce a parallel ‘advanced’ Windows track, portable to new silicon, and encourage gradual migration as compatibility issues are worked out. It also desperately needs to re-examine the functionality of its two core products, and start evolving them to exploit the full capability of today's processors, let alone tomorrow's.

      Bottom line, the toxic combination of failed vision and monopolistic dominance by Microsoft, post-Windows-2000, is largely responsible for the flattening of the Moore’s Law curve. Until the world finally demands something better than the stagnant MS codebase, evolution in chip technology cannot be properly exploited, and will thus remain largely irrelevant.

      TL;DR - we won't really know what Moore's Law can or cannot do until we eliminate the software bottleneck.

      1. doublelayer Silver badge

        Re: Hardware Isn't the Issue

        Well that's an easy answer, isn't it? It's very simplistic to blame Microsoft for all the problems getting more performance out of chips. Yet I think you'll find that most of your points either don't mean what you think they do or apply as well to other software.

        "(Far from exploiting newer chips, with Windows 8 Microsoft actually bragged that it had made the CPU do less work.)": Would you like me to rephrase that to being correct? With Windows 8, Microsoft bragged that they had improved the efficiency of their code such that newer chips weren't needed to run it. And by the way, they actually did do that. Windows 10 runs better on old hardware than Windows Vista or 7 most of the time. That doesn't mean you'd necessarily want to use it there, but they did make the OS more efficient in its use of resources.

        "This stagnant situation in software has resulted in steadily declining demand for newer and faster hardware.": No. Quite the opposite in fact. If software is stable in its requirement for resources, then people will demand better hardware until their hardware is good enough. If it increases its need for resources, then people will continually need updated hardware to make use of it. If you want demand for newer hardware, you should ask Microsoft to bloat Office further (and please don't, it's bloated enough).

        "Apple, in particular, has recently proved that simply re-architecting today’s chip designs can lead to sizable gains in performance. But the PC world – thanks to Microsoft’s staggering lack of vision – has been stuck with an awkward hardware architecture that dates back to the early 1980s. MS should long ago have done what it did with Windows NT: introduce a parallel ‘advanced’ Windows track, portable to new silicon, and encourage gradual migration as compatibility issues are worked out."

        Are you aware of the Windows on ARM stuff? They're collaborating with Qualcomm on their own chip designs. They have encouraged hardware manufacturers to build the devices. They have working Windows versions for it. They now have emulation for X86 and X64. How is this sticking with the old awkward architecture? It's not their fault that ARM chips other than Apple's aren't very fast--that's on the chip manufacturers.

      2. John Jennings

        Re: Hardware Isn't the Issue

        It might be argued that it's not that the software is outdated now - rather that computers do what people need them to do, for the most part.

        With the move to cloud tech at the users end, the improvements needed at the moment are in bandwidth and latency.

        It's not to blame any OS - and I would agree that these OSes (all of them) are based upon legacy '80s/'90s thinking - rather it is a failure of imagination to develop a demand for improvements at the rate that was possible when Moore's law was king.

        For desktop use, with Linux or Windows, I can't really see a difference between an i5 and an i7 in most normal use. There is a bigger difference in performance from the peripherals attached to it - fast SSD/lots of RAM/graphics card. Sure, there's a difference when rendering a SCAD file or running a SQL query on 1,000,000 rows - but that isn't in the realm of the ordinary user.

        Or to put it in another perspective: in 1986 I had an Atari ST. It was light years ahead of the Atari 400 I had before it in '79. However, it was blown away by my first PC in '92, in terms of what I could do with it. Nowadays, a general PC does pretty much all I can think of doing with a PC - and has done for at least 5 years.

        Sure there can always be improvements in back office/big iron applications - though they do likely need to be on more specialized software stacks.

      3. Anonymous Coward
        Anonymous Coward

        Re: Hardware Isn't the Issue

        Short version: Well, ehhhhh, not really. It's much more complicated than that re: software, and HW has absolutely continued to advance at a staggering pace, even *with* transistor limits, but it's still got its own issues.

        There are a lot of things being conflated and confused here, but I'll pick at one in particular.

        In many cases you have the stagnant hardware/software performance relationship backwards. E.g. software in some cases stalled because the HW no longer offered meaningful performance gains, so why bother? Consider the flatlining of CPU clock speeds. This means that many single-core processes simply cannot get much faster short of highly specialized instructions, and even those may be of limited utility. And many processes cannot be meaningfully parallelized for performance gains, and are not cost effective to port to specialized hardware.
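
        That ceiling is essentially Amdahl's law. A minimal back-of-the-envelope sketch, assuming (purely for illustration) that a quarter of the workload is inherently serial:

```python
# Amdahl's law: the speedup from N cores when a fraction 's' of the work is
# inherently serial. Illustrates why "more cores" stops helping many workloads.

def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

if __name__ == "__main__":
    s = 0.25  # assumed: 25% of the workload cannot be parallelized
    for cores in (1, 2, 4, 8, 16, 64, 1024):
        print(f"{cores:>5} cores -> {amdahl_speedup(s, cores):5.2f}x speedup")
    # Even with unlimited cores the speedup is capped at 1/s = 4x here.
```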

        On the other side, cheap memory and storage made software easier to develop, so we didn't have to be nearly as clever, which in turn disincentivized being particularly efficient...

      4. Anonymous Coward
        Anonymous Coward

        Re: Hardware Isn't the Issue

        Whilst I agree that Microsoft Office is a massive anchor in terms of progress, I don't think it's the main problem.

        The bottleneck has been heat. Intel has tried to squeeze every penny out of the same fabs for way too long and as a result has crapped out some of the hottest chips ever. Sure, this is because the largest market is "it just needs to run Office" enterprise types.

        I think where we need to go is slower but more. Loads of stuff has ARM in it now, like appliances and so on and for the most part that compute power goes unused. Why in 2021 can I not pool all the compute power I have in my house?

        The M1 chip is an important step forward (it pains me to say this given how much I detest Apple). I'm actually watching closely for a similarly specced device that will run Linux, which can't be far off because Linux has had ARM baked in for aeons.

        It's surprising that it took this long. When 5G is properly rolled out, I won't need huge power in my laptop, as I will be able to establish a low latency connection to my 64 core server should I need the grunt.

        The Onenetbook A1 is an amazing piece of kit but the Intel CPU in it stops me buying it, because battery, heat etc. Whack a solid ARM in that and I'll be all over it.

        1. doublelayer Silver badge

          Re: Hardware Isn't the Issue

          "I think where we need to go is slower but more. Loads of stuff has ARM in it now, like appliances and so on and for the most part that compute power goes unused. Why in 2021 can I not pool all the compute power I have in my house?"

          Because those chips are not as powerful as you think they are. A lot are microcontrollers which do not have enough resources to take on extra tasks even if they do run idle most of the time. Some have faster ones, but at best it's a single-core Cortex A processor similar in performance to the Raspberry Pi Zero. That's not going to speed up most of your tasks especially as those cores won't have any of your dependencies and would need to use remote disk and memory. A few more advanced devices have more cores, but that's only IoT stuff which is using that performance (probably for an overly bloated software stack). You would also need to start networking them, and I'm guessing you don't currently run network cables to your refrigerator or washing machine.

          "When 5G is properly rolled out, I won't need huge power in my laptop, as I will be able to establish a low latency connection to my 64 core server should I need the grunt."

          You can do that now. 5G is not critical for it. A home network is likely to have less latency already. 4G is good enough to run remote protocols on as long as you have a good signal. If you don't have a good 4G signal, it will be a long time before you get a good 5G one because you're probably on the low end of your mobile provider's area coverage plans and it takes more infrastructure to deliver 5G.

        2. rg287

          Re: Hardware Isn't the Issue

          > The M1 chip is an important step forward (it pains me to say this given how much I detest Apple). I'm actually watching closely for a similarly specced device that will run Linux, which can't be far off because Linux has had ARM baked in for aeons.

          > It's surprising that it took this long. When 5G is properly rolled out, I won't need huge power in my laptop, as I will be able to establish a low latency connection to my 64 core server should I need the grunt.

          It should be noted that the M1 isn't magic. Whilst it is undoubtedly quick, much of the initial speed improvement comes from the move to a SoC architecture rather than the architecture of the M1 chip itself. x86 would get a speed-up if you coupled the memory and storage as closely as Apple has with M1. Unfortunately outside the Apple ecosystem, general purpose computing demands some semblance of standardised buses so that hardware firms can mix and match components for different use cases and run reasonably standardised software without needing a new build for every hardware configuration.

          Undoubtedly software support also plays a role such as macOS properly utilising the big.LITTLE architecture and ensuring that things like the UI and mouse always remain responsive, even if other processes are churning away.

    2. Scott 26

      > Brilliant article, thanks. I learned a lot.

      agreed... must find a Youtube video that shows this in graphic form as well..... very interesting.

      (How can logic gates make these characters appear on the screen?!?!?!?!)

      1. Graham Dawson Silver badge

        Same way as getting a crowd of ten thousand people to hold up black or white cards depending on whether or not you electrocute their bottoms at any given moment.

        1. Kane
          Joke

          "Same way as getting a crowd of ten thousand people to hold up black or white cards depending on whether or not you electrocute their bottoms at any given moment."

          You sound like you speak from experience?

          1. Graham Dawson Silver badge

            It was certainly difficult to find that many masochists.

            1. Michael Wojcik Silver badge

              Masoch's Law got a boost thanks to Rule 34 but has plateaued since.

        2. Bruce Ordway

          That reminds me of the Three Body Problem

          https://gizmodo.com/if-you-love-computers-this-novel-should-be-next-on-you-1686656889

          Just got around to reading the series... and I did enjoy the "computer"

          "Using hundreds of thousands of soldiers who act as bits (they hold a flag up or down to indicate 1 or 0), they create logic gates, a CPU, a bus (people racing on horses between different parts of the human motherboard), and more. "

    3. Steve Todd

      Actually I was going to state the opposite

      It’s full of glaring holes. As examples:

      Current (non EUV) processes rely on a technique called multi patterning. They create features smaller than the wavelength of the light being used by using multiple exposures with different masks designed to produce interference patterns. Current EUV systems can do the same job faster as they need many fewer masks/exposures. They are however horribly expensive (which limits the number that any given manufacturer can buy) and in short supply, even for those with deep pockets (ASML, the manufacturer, can only build a couple of dozen per year)

      Second, no modern process exposes the whole wafer in one go. They expose one chip at a time, stepping from one to the next. This places one of the limits on the maximum size of a single chip, known as the reticle limit. The EUV (or, more correctly, soft X-ray) laser needs a lot of power to produce an exposure, but keeping an even level across the reticle is a solved problem.
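
      To put a rough number on that reticle limit, here is a quick sketch using the common first-order dies-per-wafer approximation for a 300 mm wafer; the ~26 mm x 33 mm field is the usual figure, and the smaller die areas are illustrative:

```python
import math

# Rough dies-per-wafer estimate for a 300 mm wafer, using the common
# first-order approximation that accounts for edge loss.
def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

if __name__ == "__main__":
    reticle_limit = 26 * 33  # ~858 mm^2: largest single exposure field
    for area in (100, 250, 600, reticle_limit):
        print(f"{area:4d} mm^2 die -> ~{dies_per_wafer(area)} candidates per 300 mm wafer")
    # Yield falls off sharply with die area too, which is why few designs
    # go anywhere near the reticle limit.
```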

  2. Cragganmore

    Yes but...

    Moore's Law was of course founded upon the physical nature of transistor sizes, with a direct correlation to 'processing power'. But today there is probably no agreed definition of 'processing power'. It means different things depending on your aims. And so, isn't it more useful to look at the chip architectures and how software is applied to deliver 'processing power'? Maybe Moore's Law is dead - but only because it is effectively broken, rather than because of any constraint around physical sizes?

    I remember reading Stanford University's 2019 AI Index Report, which claims the demands of AI are now outpacing Moore's Law's ability to keep up. Apparently the compute demanded by AI doubles every three months...

    1. confused and dazed

      Re: Yes but...

      Hence the massive growth in data centres and the likes of Graphcore to solve this by more elegant means than pure brute CPU numbers

      1. Steve Channell

        Moore core

        The original Moore's law ran out decades ago; since then, the computational power of CPUs has relied on adding more cores and tricks with speculative execution ('tricks' because of Spectre/Meltdown).

        Aside from the excellent insights in the article, the other ceiling is the ability to break a problem into small enough chunks to keep graph processors busy with work - most algorithms can't be broken down into small tasks. Graphcore works because it focuses on AI

        1. Dasreg

          Re: Moore core

          Moore's law is dead because the physical limits of metals for binary signalling have been reached.

        2. Steve Todd

          Re: Moore core

          Moore's law states that the number of transistors on a given-sized piece of silicon will double every 2 years. The speed of those transistors wasn't mentioned.

          Clock speed has, of late, gone up rather more slowly, and the number of transistors in a CPU core has hit diminishing returns, so the way manufacturers have chosen to use the available space is to replicate their cores a number of times (in addition to adding custom hardware blocks to offload common tasks from the cores to hardware better suited, like vector processing or encryption/decryption)

          Given the demands of a modern OS to run multiple concurrent processes (applications, drivers, services etc) then this makes perfect sense, even if developers are bad at using multiple cores for a single application.
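
          For reference, the doubling itself is easy to sketch; the 1971-era 2,300-transistor starting point is just an illustrative anchor:

```python
# Transistor count implied by "doubles every two years", from an illustrative
# anchor (Intel 4004, roughly 2,300 transistors in 1971).
def projected_transistors(year: int, base_year: int = 1971,
                          base_count: int = 2300,
                          doubling_years: float = 2.0) -> float:
    return base_count * 2 ** ((year - base_year) / doubling_years)

if __name__ == "__main__":
    for year in (1971, 1981, 1991, 2001, 2011, 2021):
        print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
    # By 2021 the naive curve sits in the tens of billions, which is roughly
    # where the largest shipping chips actually are.
```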

    2. Tom 7

      Re: Yes but...

      At the moment AI is going the same way as CPUs went - throw more devices at it. I think it will change radically over the next decade or so as people look to nature and pull apart animal brains of increasing complexity. Someone has modelled a nematode brain in AI and it used a few % of the nodes to achieve similar OCR results. Someone is doing the same with honey bee brains, and we will eventually grasp how evolution has achieved necessary and sufficient functions in the wild; this will enable us to make complicated AI devices that achieve what we want with significantly less silicon than we use now.

      1. Cragganmore

        Re: Yes but...

        Memristor technology could be applied to biological computing. As far as I understand it, these devices essentially combine logic and memory into a single device. And they can be very small.

        Biological computing does seem like a logical evolution for processing capabilities. However, it may not lend itself to applications that require a high degree of assurance and repeatability. Would you trust a self-driving car, for example, powered by a computing system that is inherently opaque in how it reaches decisions?

        1. YetAnotherLocksmith Silver badge

          Re: Yes but...

          They already are opaque.

          That isn't the biggest issue though. When driving a car along a road, you literally have skin in the game. The oncoming car driver does too. And so you know, and they know, that you're going to pass them, not ram them, and that works both ways.

          An AI car has no such self preservation imperative. If it crashes, it lives on. You don't, though.

          1. doublelayer Silver badge

            Re: Yes but...

            If you think that self-preservation matters to a device doing statistical calculations with no consciousness, you're already starting in the wrong place. Also, it would not survive; the chips which run it are likely to be damaged, and if they're not, they will be removed for log checks then scrapped or recycled. For the same reason, we don't expect your brake system to function correctly because it doesn't want to be scrapped. It follows the laws of hydraulics and does what it is going to do.

            Self-driving cars will operate on devices programmed not to crash. They will be tested to ensure they are programmed correctly. If they crash, that's due to a programming error, sensor error, or there not being an alternative, not due to a sentient AI which is suddenly apathetic about your survival. Given that any human driver could turn suicidal, your chances of that are lower with an AI at the wheel.

          2. MachDiamond Silver badge

            Re: Yes but...

            "An AI car has no such self preservation imperative. If it crashes, it lives on. You don't, though."

            More that the AI has no innate concept of death due to not being self aware. It also isn't thinking that a crash could lead to painful injuries and permanent disabilities. And there's me with a pain in all the diodes on my left side.

            1. badf

              Re: Yes but...

              And just to spin off to an ai tangent..

              This is why we need to program AI/robotics to feel and understand pain - as a control mechanism, as well as a method of immediate/physical feedback sensing. After all, what is physically triggered pain other than a highest-priority feedback message that can cause involuntary action (and emotional pain an evolved analogue of physical pain plus empathy)?

              Of course given we (generally) abhor inflicting unnecessary pain on others how well does that match up with creating a sentient device?

              I wonder if Google has done any research on the topic...

              1. doublelayer Silver badge

                Re: Yes but...

                Why do we need to do that? For self-damage, we can set the goal to not perform actions that cause the damage. For intentionally-caused pain as a signal that we don't like what the AI did, we don't have to implement the pain system to tell it that we're displeased. Implementing synthetic pain is basically useless because we have existing methods of obtaining the same goals, enforcing things we want the AI to accomplish and things it should avoid. Moreover, building a separate system is just adding another point of failure where something about the pain handling goes wrong and our reliance on it proves problematic.

                Consider how crude and almost useless pain is in biological systems. Yes, it can indicate things that are dangerous which helps people to know their limits, but other than that it has several downsides. It cannot be configured, so it continues to hijack signalling when there is no need to do so. Sometimes, it's turned off or dampened. It activates instincts which can be detrimental (automatically retreat from causing agent works great for fire, not so well for combat). It also weakens a lot of other conscious mental processes which could better solve the problem. Pain is a rudimentary signalling system that works on dumb devices, but there are lots of improvements we could make when building a signalling system for something else.

      2. YetAnotherLocksmith Silver badge

        Re: Yes but...

        The day we model a human brain...

        Can you imagine an intelligent human that has read the top 50% of the internet and most of the books, and hopefully wasn't completely insane?

        Sadly, it's likely to be trapped in a basement by someone like Steve Baboon and used to keep the status quo.

      3. DS999 Silver badge

        Re: Yes but...

        > Someone has modelled a nematode brain in AI and it used a few % of the nodes to achieve similar OCR results

        The problem is that's creating a fixed function neural network. That's fine if you want to do some type of image recognition, but if not it is of no use.

        What passes for "AI" today is laughable, it relies on brute force and is not intelligent by any reasonable measure (unless you define intelligence as being able to beat a top human player at Chess/Go by examining many billions more alternatives than the human does)

        Figuring out how a human brain (or animal that qualifies like a corvid or octopus) is flexibly intelligent will take a lot longer than any of our lifetimes, so brute force is our only option for now.

        1. Nick Ryan Silver badge

          Re: Yes but...

          I don't agree that it will take a lot longer than any of our lifetimes; it's quite possible it will happen in less.

          However, the current bullshit that is marketed as AI is nothing of the sort.

          1. DS999 Silver badge

            Re: Yes but...

            Well obviously that's my opinion. But since I've been hearing promises of "AI" since the 80s, and the main thing that's changed is how much computational power can be brought to bear, it is pretty clear there is little or no progress being made in understanding how the brain "thinks" at the large scale.

            We're still experimenting with brains many orders of magnitude less complex than a human's, like the nematode example above. While I guess you can never rule out a massive breakthrough, I'd place my money on no real progress in general AI within the next three or four decades. Just improvement based on brute force for as long as we can get more computational power per watt.

            We've done this AI hype cycle before, and if and when Moore's Law does stall out it'll be fun watching the AI industry freak out when they can no longer count on doubling the number of transistors every 2-3 years like they have for the past decade during this most recent "AI has finally arrived!" hype cycle.

            1. Nick Ryan Silver badge

              Re: Yes but...

              A human brain is not fast in serial throughput terms; however, it has about 100 billion cells. Emulating these using brute force, while good for prototyping purposes, isn't going to effectively reproduce 100 billion parallel processes.

              A fruit fly has about 200,000 brain cells. This isn't an impossible number to emulate but emulating them in parallel with anything approaching the efficiency of the fruit fly brain? That'll be a challenge. We are still experimenting on just how these cells link to each other, their trigger levels, the chemical side-signals and so much more. Nothing that can't be discovered but it's slow, hard work.

              Also, while a fruit fly has about 200,000 brain cells, there are many different types of cell, which have evolved for specific purposes - they are not just 200,000 generic neurons dumped into a mass and expected to work.

              The bullshit marketing that is current AI is nothing of the sort. While some "commercial" projects do use a few neurons, few use anywhere near the number required to do anything particularly insightful. A few neurons for edge detection on images isn't AI. Most of what is marketed as AI is nothing more than carefully curated machine learning at best. This doesn't mean it isn't clever or useful, but it's not AI.
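
              A very rough sketch of why that scale matters; every parameter here (synapses per cell, update rate, bytes of state) is a loose assumption purely for the arithmetic, not measured biology:

```python
# Back-of-envelope cost of brute-force neural simulation.
# All parameters are illustrative assumptions, not measured biology.
def simulation_cost(neurons: float, synapses_per_neuron: float = 1_000,
                    updates_per_second: float = 1_000,
                    bytes_per_synapse: int = 8):
    synapses = neurons * synapses_per_neuron
    memory_gb = synapses * bytes_per_synapse / 1e9
    ops_per_sec = synapses * updates_per_second  # one op per synapse per update
    return memory_gb, ops_per_sec

if __name__ == "__main__":
    for label, n in (("fruit fly", 2e5), ("human", 1e11)):
        mem, ops = simulation_cost(n)
        print(f"{label:>9}: ~{mem:,.1f} GB of state, ~{ops:.2e} ops/s")
    # The fruit fly case fits on a workstation; the human case needs hundreds
    # of terabytes of state and ~1e17 synapse updates per second.
```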

    3. NoneSuch Silver badge
      Devil

      Re: Yes but...

      Quantum computing would seem to be the obvious avenue for processing power. Yet, it cannot support Crysis, so a bit of a step back really.

      1. jonathan keith
        Joke

        Re: Yes but...

        Surely a quantum computer both can and can't run Crysis at the same time?

      2. Michael Wojcik Silver badge

        Re: Yes but...

        Quantum computing has nothing to do with general-purpose computing power.

        There is a family of algorithms in the BQP complexity class which experience a benefit in terms of computational complexity on a (general) quantum computer.[1] That's a complexity improvement – they might not actually be faster at identifying a solution than typical already-available classical computers until the problem size gets large enough that the required number of qubits becomes infeasible.

        There are other complexity classes which describe hypothetical QC algorithms, such as PDQP, but they're probably not possible to achieve in real machines. And there are classes which tweak BQP (such as PostBQP, which just means "if you get the wrong answer, try again"; it's most notable because it's been proven to equal PP, but then so has PQP so eventually it's all turtles), but they don't fundamentally change what sorts of problems general QC addresses.

        Quantum computing isn't "MOR SPEEDZ!". It's a very particular thing which addresses certain computations that, while applicable to many problems, by no means cover all computable functions. And QC in practice is vanishingly unlikely to ever be fast enough to be interesting for small problems. It will only apply when N is large enough to be intractable for classical machines.

        [1] Non-general QC approaches, such as the quantum-annealing approach that D-Wave's machines might be using (last I looked, there was still some debate about that), are even more limited. QA machines can't implement Shor's or Grover's algorithm (except in the sense that they could emulate them using classical computing, just like any other digital computer); they just solve spin-glass problems using annealing.
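
        To illustrate the complexity-versus-wall-clock point, here is a sketch comparing classical brute-force search with Grover-style search under deliberately made-up per-operation timings (the real constants are unknown, which is rather the point):

```python
import math

# Grover-style search needs ~(pi/4)*sqrt(N) oracle calls versus ~N/2 checks
# classically, but each quantum step is assumed to be vastly slower.
# Both per-operation timings below are made-up assumptions.
CLASSICAL_OP = 1e-9  # assumed: 1 ns per classical candidate check
QUANTUM_OP = 1e-3    # assumed: 1 ms per Grover iteration (error correction etc.)

def classical_seconds(n: float) -> float:
    return (n / 2) * CLASSICAL_OP

def grover_seconds(n: float) -> float:
    return (math.pi / 4) * math.sqrt(n) * QUANTUM_OP

if __name__ == "__main__":
    for exp in (9, 12, 15, 18):
        n = 10.0 ** exp
        print(f"N=1e{exp}: classical {classical_seconds(n):9.2e} s, "
              f"Grover {grover_seconds(n):9.2e} s")
    # The asymptotic win is real, but under these assumptions the crossover
    # only arrives once N (and the qubits needed to address it) is enormous.
```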

  3. Tom 7

    Moore's law expired in 1975

    so why do we still pretend it's valid?

    I have a feeling we will move to bio-computing after ten or fifteen years of plateauing, starting in 5 years or so. We will extract structural information from nature's neural networks and learn why evolution chooses these structures, then be able to apply them to our own silicon, then move on to biochemical brains that can self-organise like our brains do - and after that they will learn to teach us, and it's game on!

    1. Anonymous Coward
      Anonymous Coward

      Re: Moore's law expired in 1975

      Why? Random mutations that ended up making them "the fittest".

      Nothing more than a lot of trial and error!

    2. katrinab Silver badge
      Meh

      Re: Moore's law expired in 1975

      Compare a 10 year-old iPad to the Air 4. Definitely a huge improvement.

      Obviously if you are looking at desktop computers, a 10 year-old machine is still very usable, and the reasons you might want to upgrade it mostly relate to IO rather than the CPU.

      1. Doctor Syntax Silver badge

        Re: Moore's law expired in 1975

        the reasons you might want to upgrade it mostly relate to ~~IO rather than the CPU~~ S/W dropping support for older but viable H/W.

        FTFY

      2. rg287

        Re: Moore's law expired in 1975

        > Obviously if you are looking at desktop computers, a 10 year-old machine is still very usable, and the reasons you might want to upgrade it mostly relate to IO rather than the CPU.

        Indeed. My "gaming rig", which I built as an educational exercise in 2011 still runs fine. It's had a new GPU and I swapped the 64GB SSD for 256GB, but that's enough for KSP and the odd bits I muck about with (no, I am not a "serious" gamer and my laptop is my daily driver!).

        An i5-2500 is more than enough for most people. I've just set a sports club up with a couple of refurbed i5-4xxx boxes that cost £80 each. They run SUSE (to support a rather picky bit of software running some specialist measuring hardware). Quad-core and 4GB RAM is frankly more than they need.

        Also worth noting of course that even a lowly i3-6300 will match the CPUMark score of an i5-2500k, and outright flatten it for single-thread performance. All for half the TDP.

    3. vtcodger Silver badge

      Re: Moore's law expired in 1975

      "We will extract structural information from natures neural networks and learn why evolution chooses these structures and then be able to apply them to our own silicon ..."

      And then what? We'll end up with a complex device that we only vaguely understand, that exhibits the vast problem-solving skills of a human hairdresser or sports fan? That would be a remarkable achievement, actually. But it's difficult to see what it would be good for. I doubt even Elon Musk would let it drive a car or screen X-rays for abnormalities.

      Perhaps we should consider the possibility (which I think is actually rather likely) that AI is a dead end -- a way to sink vast resources and produce little or nothing useful.

      1. katrinab Silver badge
        Megaphone

        Re: Moore's law expired in 1975

        I said this 25 years ago, and nothing has changed in that time to change my mind:

        In terms of actual intelligence of computers, nothing has changed since Unix came out in the 1970s. Sure, late 1990s computers were a lot faster than early 1970s computers, and that is even more the case now. But not more intelligent.

        I don't think human intelligence can be expressed in boolean algebra. I rarely use the word "impossible" but I think it is appropriate here. Computers can do boolean algebra very quickly, and that is useful, but it is not intelligence.

        1. doublelayer Silver badge

          Re: Moore's law expired in 1975

          Of course the computers aren't more intelligent. If intelligence is possible, it's the software which will produce it. The computer is just a slate on which the complex stuff is written. Defining intelligence is another issue, but there are now programs which are capable of doing things which in a human require intelligence. That might not be it, but it's not very productive to declare that computers can't be intelligent without defining what they would have to do to be declared so.

          1. katrinab Silver badge
            Megaphone

            Re: Moore's law expired in 1975

            What I'm saying is that intelligence isn't possible, because ultimately any software you write compiles down to boolean algebra, and it isn't possible to express intelligence in boolean algebra.

            1. doublelayer Silver badge

              Re: Moore's law expired in 1975

              I don't know if I agree or not, and I certainly can't counter that argument. The problem is that you can't prove it either. Without a definition of intelligence, we don't know whether something can do it with only mathematics. So far, it sounds as if your belief is just that. You intuit that boolean logic is insufficient for the task, but without knowing what the task is, you don't know it.

              The problem of defining intelligence has long been tripping up computer science theorists and philosophers alike. I've found, however, that when people are specific about what they think computers won't be able to do, a program eventually accomplishes it. That's why defining intelligence, though it's tricky and subjective, is so important to discussions like this.

              1. katrinab Silver badge

                Re: Moore's law expired in 1975

                Yes it is a belief. The statement could only be proved wrong, not proved right; but so far, it hasn't been proven wrong.

                1. doublelayer Silver badge

                  Re: Moore's law expired in 1975

                  I don't think it can be proven wrong either. Whatever a computer does succeed in doing via boolean logic, you could decide afterwards that it doesn't count. Therefore, no matter how impressive a simulated human brain gets, you could always say that it's not real intelligence. It can only be proven wrong if we can decide on what wrongness would look like. More simply, if a computer can successfully complete a set of tasks S to a set of standards T, it would be intelligent. Without that, we don't know what failure is, and therefore it cannot fail.

                  1. katrinab Silver badge

                    Re: Moore's law expired in 1975

                    Well, a human can, with varying degrees of success, complete tasks that have never been defined before. They can, again with varying degrees of success, identify that such a task needs to be completed without being prompted or asked to do it.

                    Once you’ve identified the task, you can probably program a computer to do it.

                    Also, think for example how a human identifies whether a picture is a cat or a dog, and how a computer does it.

                    Does the human need a library of millions of photos of cats and dogs? Do they need to look through all of them to see which is the closest match? The computer is probably faster, and might even be just as accurate. But it is not doing it anything like how a human does it.
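
                    As a caricature of that "closest match in a library" mechanism (the embeddings and labels below are random placeholders, purely illustrative):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# A caricature of the "library of labelled photos" approach: classify a new
# picture by finding its closest matches in a big bank of labelled examples.
# The feature vectors here are random stand-ins for real image embeddings.
rng = np.random.default_rng(0)
library = rng.normal(size=(10_000, 64))           # pretend photo embeddings
labels = rng.choice(["cat", "dog"], size=10_000)  # pretend labels

model = KNeighborsClassifier(n_neighbors=5)
model.fit(library, labels)

new_photo = rng.normal(size=(1, 64))              # the picture to identify
print(model.predict(new_photo))                   # nearest-match verdict
```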

                  2. Michael Wojcik Silver badge

                    Re: Moore's law expired in 1975

                    > Whatever a computer does succeed in doing via boolean logic, you could decide afterwards that it doesn't count.

                    This is precisely why the question is one of philosophy, not of science or engineering. Any definition of "intelligence" is always going to be substantially subjective.

                    There's a substantial body of work in theory of mind that considers the question of non-human minds in general and mechanical minds in particular. Some of this is well-known, though often misunderstood – Turing's Imitation Game and Searle's Chinese Room are the two leading examples.

                    The Imitation Game is a proposal in the Pragmatist tradition which suggests that intelligence ought to be defined by evaluating its surface attributes. (The exact definition of the game is irrelevant; it's simply an illustration. The obsession in some parts of the computing community with actually conducting it is a bit embarrassing.)

                    The Chinese Room is a refutation of one particular approach to machine intelligence, the one Searle described as "symbolic manipulation". It's essentially in the Logical-Positivist tradition (though not in a strict sense – it's allied with ideas from some critics of LP), at least in being motivated by the critique of language. In the original paper Searle says, in effect, that while he's not sure what intelligence is, he's pretty sure that it's not symbolic manipulation. Some people take this to be a rejection of any possibility of artificial intelligence, but Searle himself wrote, in one of his responses to critiques of the Chinese Room paper, that he believes mind to be a phenomenon produced by physical processes, and thus some sort of machine mind is possible.

                    Neither of these positions considers the distinction between anthropic (human-like) machine intelligence and non-anthropic. It's entirely possible (and this has been considered by others working in philosophy of mind, in a wide variety of schools) that the first, or even only, machine minds we ever create will be fundamentally so alien that there will never be consensus on whether they ought to be called "mind" or "intelligence" at all. Nor do they deal with many other problems in philosophy of mind such as the p-zombie question.

                    I would suggest, though, that to draw a hard line between "intelligence" (even if some attempt is made to define it, which few commentators bother to do) and any given technology is to commit a category error. It is possible, if you're very careful and thorough, to draw a distinction between instantiation of a process and simulation of a process – Searle tries to do this elsewhere in arguing for his biological naturalism – but not everyone finds such arguments convincing, and they are not as sweeping as simply declaring that some technology (whether physical or formal) is incompatible with the requirements for a foundation for intelligence (or mind, or sapience, or whatever handwaving term you prefer).

            2. Juillen 1

              Re: Moore's law expired in 1975

              Technically, any system comes down to quantisation. That includes the brain.

              Philosophically speaking, there's no reason that intelligence couldn't arise in computation, especially as with adaptive systems, the complexity evolved is far, far past what a human programmer could implement.

              1. Michael Wojcik Silver badge

                Re: Moore's law expired in 1975

                > Philosophically speaking, there's no reason that intelligence couldn't arise in computation

                This is wrong as stated – philosophy is precisely the domain where such reasons have been proposed.

                Personally I am not convinced by them, but they exist.

      2. stiine Silver badge
        Happy

        Re: Moore's law expired in 1975

        Why did you leave out telephone sanitisers?

  4. Primus Secundus Tertius

    Transistor physics

    Solid state electronics depends on electronic features: forbidden bands, conduction bands, etc. These depend on the existence of a regular repeating structure, i.e. a crystal. You can't make a transistor from a single atom. So we are probably at or near the end of Moore's Law.

    1. Pascal Monett Silver badge

      Re: Transistor physics

      At the same time, it is quite logical that the returns of research diminish as the precision increases.

      Yes, Moore's Law is dead, but it served its purpose.

      Now we are going to branch out to 3D chips at 1nm and that will likely be the end of CPU research for a long time.

      If computers have continued to increase in execution speed, it is not only because the CPU has evolved, the entire concept of a computer has evolved.

      The IBM PC had an 8088 at 4.77 MHz, and everything worked at that speed.

      Today, we have computers with a frequency for the CPU, another for RAM, another for magnetic storage, etc. The computer itself is a vast multi-tasking environment, and that is where we've increased its efficiency.

      Now we're looking at stacking CPU layers to eke out more performance. We'll soon be doing that with RAM as well (if we haven't already).

      There will, however, come a time when we've explored all the combinations, and made all the enhancements.

      It's inevitable.

      1. ThatOne Silver badge
        Devil

        Re: Transistor physics

        > If computers have continued to increase in execution speed

        ...it was mostly to keep up with the ever-increasing OS bloat.

        1. katrinab Silver badge
          Meh

          Re: Transistor physics

          Fire up Windows 98 in a virtual machine, and see how fast it is.

          It isn't actually that fast due to everything running on a single thread. Any IO delays hold up the entire system.

          1. ThatOne Silver badge

            Re: Transistor physics

            Funny you should mention it - as it happens, I have a still fully functional Win98 SE computer from "back then". Despite sporting a weak original Pentium, it isn't slower to use than my fairly modern Core i7 system. It does take longer to boot, but I guess that is mostly down to the (also original) old IDE hard drive.

            I obviously wouldn't do any number crunching on it, but for the intended task (in this case writing serious, long stuff in Word 97) it's more than adequate. Horses for courses.

            1. katrinab Silver badge

              Re: Transistor physics

              I have Windows 98 running in VMWare on an i9-9980HK machine with 64GB RAM

              Other VMs(except MacOS) give me about 95% of native speed.

              Only 1 CPU thread is allocated to the VM (because Win98 doesn't do SMP; you would have needed NT4 for that), and 256 MB RAM. I believe from memory the maximum it could cope with without crashing was 384MB, but that was waaaaaaaaaay more than any computer of that vintage could dream of having. I had 96MB at the time, and typical machines in that era had either 8 or 16.

              My VM is fast, much faster than a computer of that vintage. But still overall slower than Windows 10 / MacOS / FreeBSD etc running on the same hardware. Linux boots fastest, with Windows in second place and MacOS a very distant last. In operation, FreeBSD is fastest, then MacOS (native), Windows 10, Windows 98, and MacOS (VM) a distant last place.

              1. ThatOne Silver badge

                Re: Transistor physics

                > boots fastest

                Well, old Windowses did take their time to boot; IIRC Microsoft tried to improve that starting with Win7, so obviously Win98 won't win any boot time trophies - not back then, and even less now...

                This being said, it only boots once a day (if at all). What I do care about is the user experience, meaning the time I spend waiting for it to execute the orders I give it, or the time it takes me to give those orders. It's an eminently subjective metric, I admit, but then again it's the only one important to me: I use my computers to do stuff, and they shouldn't waste my time.

                Note that that old computer was optimized to death; its Windows was streamlined, with no junk or TSRs weighing it down. It was a high-end racehorse back then, and IMHO it still works fine for Office work. YMMV and all that.

        2. bombastic bob Silver badge
          Unhappy

          Re: Transistor physics

          I would say the OS bloat is an unintended consequence of the faster processors...

          * refusal to build on existing (efficient) code, instead re-re-re-inventing the language and the run-time libraries it uses (i.e. .NET vs Win32, MFC becoming exceedingly bloated, C-pound using P-code rather than native code, Javascript being at the core of major applications, etc. - and don't even get me started on that UWP crap)

          * a "we have faster CPUs now" excuse for adding inefficiencies so that coders no longer have to think about efficiency - just go ahead and write crap-code like there's no tomorrow

          * a focus on monolithic "objects" that do every possible thing, WAY too often unnecessarily, in lieu of efficient "unix principle" thinking in the basic design (example listing files in a directory must take 10 times longer because EVERY! SINGLE! FILE! "needs" a full analysis while adding it to the list, making a 'file open' box take a MINUTE to load 100 file names, because, "object oriented" now)

          * a focus on "rapid development" so that bloaty crapware fills the entire userland and makes processors take twice as many cycles to do the same task

          * a perception of what is acceptable for performance that never seems to improve (in fact, gets slower with new releases of the OS in spite of hardware improvements)

          anyway that's a small list of gripes that has been building in my head since Windows Server 2003...

      2. Loyal Commenter Silver badge

        Re: Transistor physics

        Once you get to the smallest scale you can make a transistor at (which, in order to be reliable, is probably not going to be smaller than ten atoms or so), you will have reached a hard limit of how many you can etch onto a silicon wafer, end of.

        After that, the only improvements will be from either using something that's not a transistor, such as qubits, which will mean an entirely different programming paradigm, or moving into three dimensions, to produce something that is no longer a chip, because it isn't flat; let's call it a "block".

        Once you do that, conventional lithography is dead, and we'll need some sort of other nano-fabrication technology to build those. That means those really, really expensive fab plants that Intel and their ilk have will be redundant, so don't expect that to happen any time soon, at least not driven by those players.

        If you move into three dimensions, you also have to contend with cooling issues. At the moment, drawing heat from the top of a flat surface is a relatively simple job - a bit of thermal paste and a heatsink, and you're away. Thankfully, silicon is a reasonably good conductor of heat, but I reckon there's still a problem there in getting heat from the middle of a "block" to the edges, especially if those "blocks" are comprised of lots of layers of different materials with different thermal properties. Electrically insulating layers also have the tendency to be thermally insulating.
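
        The arithmetic behind that worry is just one-dimensional conduction. A quick sketch: the power level and layer thicknesses are illustrative assumptions, while the conductivities are textbook values for silicon and silicon dioxide:

```python
# 1-D Fourier conduction: temperature rise across a layer of thickness t
# carrying power P over area A, for a material with thermal conductivity k.
def delta_t(power_w: float, area_m2: float, thickness_m: float, k_w_mk: float) -> float:
    return power_w * thickness_m / (k_w_mk * area_m2)

if __name__ == "__main__":
    P = 100.0  # assumed: 100 W flowing out of a stacked die
    A = 1e-4   # assumed: 1 cm^2 footprint
    layers = [
        ("1 mm bulk silicon",     1e-3, 150.0),  # k of Si ~150 W/m.K
        ("10 um silicon dioxide", 1e-5,   1.4),  # k of SiO2 ~1.4 W/m.K
    ]
    for name, t, k in layers:
        print(f"{name:>21}: +{delta_t(P, A, t, k):4.1f} K")
    # A 10 um oxide layer costs about as much temperature headroom as a whole
    # millimetre of silicon, which is the stacking problem in a nutshell.
```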

        1. Anonymous Coward
          Anonymous Coward

          Re: Transistor physics

          "...something that is no longer a chip, because it isn't flat; let's call it a "block".

          Once you do that, conventional lithography is dead..."

          Chips are not flat. Even a mature 7 or 9 metal layer 90nm planar process of yore cannot be considered a simple flat structure.

          And this is why in the last decades many advances and techniques have had to be used to pattern them. From self-aligned structures, OPC and phase-shift masks to dual-patterning, end-of-line rules and forbidden spacings. Rule manuals went from a booklet of pages to a good solid volume.

          1. Loyal Commenter Silver badge

            Re: Transistor physics

            Indeed, modern chips are not flat, but as the article points out, modern lithography techniques mean that subsequent layers have to not interfere with the ones already laid. This puts a limit on how "high" a chip can be built, and you're still ending up with structures that look flat to the human eye, which are maybe a few hundred atoms tall. That's 2.5 dimensional at best, far from true 3D structures.

            1. Anonymous Coward
              Anonymous Coward

              Re: Transistor physics

              "structures that look flat to the human eye"

              You've never been able to make out LSI structures with the naked eye. Areas of a floorplan perhaps but not individual structures as such. ;-)

              They are most certainly "3D" structures. >12 layers of metal is not what you'd call "flat". Go check out the cross section of any modern FinFET process. And the steps required to fabricate it. Very cool.

        2. Alan Brown Silver badge

          Re: Transistor physics

          "Thankfully, silicon is a reasonably good conductor of heat"

          Not good enough though. This is already proving to be an impediment for stacking.

          Diamond is one of the best substrates for heat transfer and is surprisingly cheap as well as being an excellent electrical insulator

          1. bombastic bob Silver badge
            Boffin

            Re: Transistor physics

            I have to wonder if carbon could be electrostatically deposited onto wafers as a diamond layer...

            kinda like a TV picture tube that 'scans' the picture but using carbon atoms instead of electrons. Or an electron microscope that uses carbon atoms instead of electrons. Same basic idea (this is not a new tech, it's decades old).

            1. Nick Ryan Silver badge

              Re: Transistor physics

              > I have to wonder if carbon could be electrostatically deposited onto wafers as a diamond layer...
              From memory this has already been done - not sure about the method (electrostatically) but it was done. I think the challenge was getting the crystalline lattices to line up (diamond and silicon crystals are different sizes/shapes), and given that the silicon isn't pure - it's doped to improve efficiency, which essentially changes the atom alignment - this adds a further challenge to adding a cohesive layer of carbon, or any other element, on top.

            2. Loyal Commenter Silver badge

              Re: Transistor physics

              Back when I was at uni in the tail end of the previous millennium, they had a lab doing just that - thin film diamond vapour deposition.

              See those stylish chemists from the '90s

            3. Anonymous Coward
              Anonymous Coward

              Re: Transistor physics

              Not diamond. But sapphire has been used.

              And you deposit the silicon on the substrate, not vice versa. (Through the decomposition of silane iirc.)

          2. Loyal Commenter Silver badge

            Re: Transistor physics

            "Thankfully, silicon is a reasonably good conductor of heat"

            Not good enough though. This is already proving to be an impediment for stacking.

            Indeed, so the next problem becomes how to remove heat from stacked structures.

            "Microfluidics" I hear you say, perhaps? "Just pump coolant through the middle!"

            Sadly, on the nanometre scale, most liquids tend to not act very much like liquids at all, and the pressures required to pump them through tiny little channels are immense.

            What is probably needed is a network of highly thermally conductive elements to conduct heat away, whilst either having zero electrical conductivity or remaining electrically insulated from the things they are trying to cool, by something that is electrically insulating but thermally conductive - like boron nitride, perhaps.

    2. picturethis
      Childcatcher

      Re: Transistor physics

      I agree re: Moore's Law, but Moore's Law was framed in terms of the transistor physics known at the time. At the simplest level, the transistor is just a switch, 1 or 0. That's it. Yes, there are billions of them and they switch very fast, but it is still just a 1 or a 0.

      I think (hope) that in the future (hopefully years or decades, not centuries) a new technology will be discovered that can provide a switch that holds a 1 or 0 state. Maybe somewhere along the way the (fundamental?) building blocks of atoms (quarks?) will be able to be manipulated and their states/spins used. I suspect that at that point Unified Field Theory will be reality, or at least better understood.

      I don't think "quantum" computers are the answer right now either. Biologic computers - maybe, but I suspect that solving/understanding the Unified Field Theory will come first.

      Of course there is always the possibility of trinary (ternary) computers becoming mainstream, along with their theoretical efficiency improvement, but industry manufacturing inertia will likely prevent that.
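
      The 'theoretical efficiency improvement' usually quoted for ternary is radix economy - cost modelled as radix times digit count, which is minimised near e (about 2.72). A quick sketch of that comparison:

```python
import math

# Radix economy: cost of representing values up to N in base r, modelled as
# (radix) x (digits needed). The minimum of r/ln(r) sits at r = e ~ 2.718,
# which is the usual theoretical argument for ternary hardware.
def radix_cost(radix: int, n: float) -> float:
    return radix * math.log(n) / math.log(radix)

if __name__ == "__main__":
    n = 10 ** 9  # represent values up to one billion
    for r in (2, 3, 4, 10):
        print(f"base {r:2d}: cost ~{radix_cost(r, n):6.1f}")
    # Base 3 edges out base 2 by a few per cent; base 4 exactly matches base 2.
```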

      If I had to do it all over again, I would have entered the material sciences, room temp. superconductors, graphene, carbon nanotubes - cool stuff and much more yet to be "discovered", invented.

    3. elsergiovolador Silver badge

      Re: Transistor physics

      > You can't make a transistor from a single atom.

      Are you sure? Prof. Thomas Schimmel, who made one in 2004, would probably disagree.

      1. The First Dave

        Re: Transistor physics

        OK smart arse, how about making a transistor from half an atom then?

        1. elsergiovolador Silver badge

          Re: Transistor physics

          Researchers have demonstrated the first single-photon transistor using a semiconductor chip.

          A single-photon switch and transistor enabled by a solid-state quantum memory. Science, 2018; 361 (6397): 57 DOI: 10.1126/science.aat3581

          Photons are smaller than atoms.

        2. bombastic bob Silver badge
          Devil

          Re: Transistor physics

          quantum state transistors, multiple devices per atom, using large atoms with lots of electron orbital shells. Hmmm... sounds science-fictiony

        3. stiine Silver badge
          Mushroom

          Re: Transistor physics

          The only thing you can make from 1/2 atom is radiation.

          1. snowpages

            Re: Transistor physics

            ..and two smaller atoms perhaps?

      2. katrinab Silver badge
        Paris Hilton

        Re: Transistor physics

        I seem to remember from my school days back in the dim-distant past when transistor sizes were measured in mm, that transistors were made from three blocks of silicon, each mixed with stuff like germanium and arsenic. Which would suggest an absolute minimum of 6 atoms; and I doubt it would work with that few.

        1. Dave314159ggggdffsdds Silver badge

          Re: Transistor physics

          And yet, it has been done.

  5. BlokeInTejas

    About time too

    Moore's Law has been the ugly luxury that has led to the death of computer architecture. What was the point of producing something which was 10x better than an x86 when simply waiting three to four years gave you the same performance without having to rebuild toolchains/recompile/redesign/rewrite?

    But now we live in interesting times. With a bit of luck, two things will happen:

    - mainstream computing will realize that even a 1-time massive performance/power gain through improved architecture is a big win

    - folk who think that the way you should classify things seen by humans is by using tera-operations per second to build unreliable "AI" systems will realize that's nuts, and find a much better way of doing it.

    This may have an effect on how software's written, too

    1. J.G.Harston Silver badge

      Re: About time too

      The final point is a good one. For too long the answer to "shall I improve my code?" has been "nah! next month's hardware will do the improvements".

      I grew up writing transient utilities that absolutely had to fit into 512 bytes.

      1. Electronics'R'Us
        Windows

        Re: About time too

        This.

        Many in the software community have relied upon the ever increasing capability of processors to achieve better performance.

        My view is that as soon as a new, more capable architecture is released, all the increased capability is eaten up by (unnecessary) bloat. After all, memory and computing cycles are cheap, right?

        Tightly written code (without enormous frameworks) would probably do far more to boost performance than fresh advances in computing itself.

        There is a certain beauty in code that is efficient, including the fact that it will now use less power to achieve the desired result.

        I am not advocating going back to assembler (well, that depends on what is being done) but the frameworks and libraries we have are simply far too bloated in many cases.

        I fondly remember adding a test to a smartphone to detect the proper operation of a flash memory (it was an option that could be inserted into a socket - they were expensive at the time). It added a grand total of 33 bytes. Given that the total code memory available was 32K and already very tight, that was what was necessary.

        1. Timbo

          Re: About time too

          The problem over the last (say) 40 years of the PC is that Intel/AMD and Microsoft were in cahoots, such that any performance gain in the hardware (i.e. 80286 > 80386 > i486 > Pentium etc) was soaked up by further versions of the OS (DOS 3.3 > DOS 5 > DOS+Windows 1/2/3 and so on).

          So, we all got on the "upgrade" bandwagon when last year's hardware didn't perform as well once the new software was loaded. We upgraded the mobo/CPU/RAM, and then the following year the software absorbed the extra hardware performance... and we were back to square one.

          I can even remember, a while back, being forced to upgrade my email client (Outlook) due to its 2GB PST file limit - only to find that the new version of MS Office (bought so I could upgrade Outlook) made my PC display run at a snail's pace because it was crippling the GPU (which was running another maths-intensive task) - even though this PC had a 3GHz multi-core CPU and huge amounts of RAM.

          I have thought for years that the way forward was parallel computing, or at least an OS in which the user could pre-define how many cores each application is allowed to use.

          So, with a 16-core CPU, 2 could be used for the OS, 1 for an email client, maybe 4 for a spreadsheet or database, and so on.

          But we cannot do that with Windows (yet), as *IT* dictates how many cores can be used (for each task) and it just switches between them when *IT* determines which program has priority - and we have to sit and wait while Windows locks the screen and figures out what is going on. :-(

          1. katrinab Silver badge

            Re: About time too

            With Windows Server, you kind-of can, because the recommended approach is to run each workload in a separate virtual machine, and you can allocate resources to the virtual machines.

            I think Windows Desktop will go the same way. We've seen the first signs of that with WSL-2 running a virtual machine and getting improved graphics performance.

          2. Loyal Commenter Silver badge

            Re: About time too

            But we cannot do that with Windows (yet), as *IT* dictates how many cores can be used (for each task) and it just switches between them when *IT* determines which program has priority - and we have to sit and wait while Windows locks the screen and figures out what is going on. :-(

            Open task manager. Switch to the Details tab, find the process you are interested in, right-click, set affinity.

            You can explicitly set which cores are used, at the process level - so the exact opposite of what you said. In fact, if I remember rightly, processor affinity is one of the configuration settings in SQL Server, so you can tell it explicitly NOT to use certain cores if you want them free for something else.
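            (For anyone who would rather do it programmatically than through Task Manager, here is a minimal C sketch using the documented Win32 call SetProcessAffinityMask; the mask value and error handling are illustrative assumptions, not anything from the post above.)

            #include <windows.h>
            #include <stdio.h>

            int main(void)
            {
                /* Bits 0 and 1 set: restrict this process to cores 0 and 1 only. */
                DWORD_PTR mask = 0x3;

                if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
                    fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
                    return 1;
                }
                printf("Process pinned to cores 0 and 1\n");
                return 0;
            }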

            Of course, if you wanted to set the affinity of all OS processes to one core, the affinity of an email client to another, and so on, apart from taking you a while to work through all those processes (the OS itself is going to have quite a few), all you're going to achieve is the underutilisation of some cores. You'd be better off setting the process priority down for things that you don't want hogging your resources (you can do that in the same place, too).

            Complaining that a multiprocessing OS is splitting processor time to various tasks according to its own rules is basically blaming it for what it is designed to do. If you haven't set those rules to your liking, then it's not the fault of the OS for not being psychic. I don't know of any other OS that would handle processor sharing any differently, and we are a long way past the point where you would assign a single task to a single core and set it running. A multi-purpose OS with a UI is not an embedded device.

            Don't get me wrong, there are plenty of valid criticisms of Microsoft, and of Windows, but making up bullshit doesn't help.

        2. NXM Silver badge

          Re: About time too

          I totally agree about the code bloat problem. I write real-time stuff in assembler for cheap processors because it's fast and small. Also, if I save £1 per chip, that's extra profit in my back pocket, where it belongs.

          I wouldn't get away with it in C because it just doesn't run fast enough. The downside is that it takes bloody ages to get it right.

          1. bombastic bob Silver badge
            Devil

            Re: About time too

            the semi-traditional way of dealing with this in the microcontroller and embedded world appears to be simple: write it in C (or even C++) anyway, then hand-tweak the things that make the biggest difference, like inner loops and ISRs.

            it's what _I_ do. Just check the assembly output and then embed tweaked assembler into your C code (as an example)

            then you can go ahead and use the cheaper 8-bit CPU instead of an ARM Cortex-M (let's say) for the simpler things, at any rate. Last I checked these 8-bit CPUs are super-cheap in bulk, and come in packages (like TQFP) that are hand-solderable [VERY good for prototypes].

            Sometimes, a careful application of hand-tweaked C code, one simple/basic operation per line, can get you almost the same results without any embedded assembly.

            So you do something like:

            var1 = var2 + 5;

            var1 &= 7;

            the_array[var1] = something;

            and so on. That kind of thing results in some very predictable assembly language sequences. Then you dump out the disassembly after building it to see if it can be tweaked even more, etc. (I have a script that does "objdump -D -t -z -x $@ | less" for that very purpose).
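            (A minimal sketch of that workflow, assuming GCC-style tooling; the function and file names below are made up for illustration. Write the hot path as plain, one-operation-per-line C, build it, then inspect the disassembly with the objdump invocation above before deciding whether any inline assembler is still needed.)

            #include <stdint.h>

            volatile uint8_t the_array[8];

            /* The hot path in plain C, kept to simple operations so the
               generated instructions are easy to predict and to check. */
            void store_wrapped(uint8_t var2, uint8_t something)
            {
                uint8_t var1 = (uint8_t)((var2 + 5u) & 7u);
                the_array[var1] = something;
            }

            /* Build with e.g. "gcc -O2 -c hot.c", then run the objdump command
               from the post above on hot.o; only reach for embedded asm on the
               spots the compiler still gets wrong. */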

            1. genghis_uk

              Re: About time too

              The art of disassembly seems to have been lost in the days.of.dotty.code.writing()

              It is not a case of C not running fast enough - you do not 'run' C code - it is that the compiler does not produce the most efficient machine code.

              As Bob said, that is fixable if you know what you are doing and means that you don't have to hand code everything. Alternatively the basics can be done in C and the time sensitive or space sensitive parts can be assembler and linked in.

        3. l8gravely

          Re: About time too

          I want to say that most software bloat is down to two reasons, exception handling and abstraction. Both of which are critical to making successful programs that anyone can use. Most people don't realize how hard exception handling is in programming. 90%+ of all the code I write is handling exceptions, error cases, etc.

          Go outside and pick up sticks from the lawn. Now tell me how you did it and how you decided what was a stick and what wasn't. Computers are super fast at *simple* things. But the world is complex.

          So abstractions help you manage the world, and reduce the load, but take up processing time. Watching my nephew write a Discord client in Node to return pictures of a dog doing various things was fun. Asking him if it handled other cases... we ran right into the exception handling problem.

          So all the speed and performance we have in computers lets us do amazing things which are simple, but repetitive. But driving a car isn't simple, even though it feels repetitive.

          1. Anonymous Coward
            Anonymous Coward

            Re: About time too

            While I'm sure exception handling and abstraction are causes of software bloat, an awful lot is just bad programming. Importing/copying an entire library to use one small function they could have written themselves. Extracting small pieces of data from a data structure in such a way that it forces the ENTIRE structure to stay in memory long after it's needed (http://thecodelesscode.com/case/209). Similarly, requesting and passing huge amounts of data around just to do one very small test on one very small piece (http://thecodelesscode.com/case/110). Using a linear search or bubble sort on large data.
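            (A hedged C sketch of that second point, with made-up structure and field names: copy out the few bytes you actually need so the megabyte-sized buffer can be freed, rather than keeping the whole structure alive to hold on to one field.)

            #include <stdlib.h>
            #include <string.h>

            struct huge_record {
                char header[16];
                char payload[1 << 20];   /* the part nobody needs later */
            };

            /* Return a small heap copy of the header; the caller frees it.
               The big record is released immediately instead of lingering. */
            char *take_header(struct huge_record *rec)
            {
                char *copy = malloc(sizeof rec->header);
                if (copy)
                    memcpy(copy, rec->header, sizeof rec->header);
                free(rec);
                return copy;
            }

            int main(void)
            {
                struct huge_record *rec = calloc(1, sizeof *rec);
                if (!rec)
                    return 1;
                char *hdr = take_header(rec);
                free(hdr);
                return 0;
            }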

            And then there's fancy graphics that take 10x the processing power of the software itself; well do I recall a single Flash ad that would bring my computer to its knees every time it appeared (which is when I installed an adblocker).

            1. katrinab Silver badge
              Megaphone

              Re: About time too

              I disagree.

              Most of the time, the third party library will be better written than anything you can write yourself.

              Also, why spend all your time re-inventing the wheel rather than doing whatever it is you want the program to do?

              1. Anonymous Coward
                Anonymous Coward

                Re: About time too

                I was thinking more of, for example, importing/loading an entire library of string manipulation tools just to change one string to capital letters, without using any other tools. Depending on the language, that can be very cheap (only the needed function gets loaded) or very expensive (the entire library has to be downloaded and stored in memory to use a 5-line function).
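                (The "5-line function" in question might look something like this in C - a hedged example using only the standard library's ctype.h rather than an external string framework.)

                #include <ctype.h>
                #include <stdio.h>

                /* Upper-case an ASCII string in place. */
                static void upcase(char *s)
                {
                    for (; *s; s++)
                        *s = (char)toupper((unsigned char)*s);
                }

                int main(void)
                {
                    char msg[] = "moore's law";
                    upcase(msg);
                    puts(msg);   /* prints MOORE'S LAW */
                    return 0;
                }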

              2. bombastic bob Silver badge
                Thumb Down

                Re: About time too

                Most of the time, the third party library will be better written than anything you can write yourself.

                think of the readership of El Reg. Then say that again, imagining who it is you are saying it to.

                why spend all your time re-inventing the wheel

                car makers "re-invent the wheel" every year. There's always a better way (though the cost may not be worth doing it - that's to be determined as a part of the process).

                you young whippersnappers, when I was your age computers used punch cards and either every bit of debug info was on a printout in binary, or you'd have to be really good at reading das blinkenlights.

                nothing builds code analysis skills better than having to repeatedly wait hours for the printout after submitting the stack of cards, and then having to deal with the aftermath.

          2. Warm Braw

            Re: About time too

            90%+ of all the code I write is handling exceptions, error cases

            If it were just your code, it wouldn't matter - if it's truly an exception or an error, that code will rarely be invoked and will not impact typical performance: it shouldn't even be necessary to load it until it needs to be used.

            One of the drawbacks of the multi-layered code we have is that the same potential errors are being checked for repeatedly and redundantly because one layer doesn't know what the next is doing. That's likely to have a genuine impact on performance: wider still and wider shall your bounds be checked.

            I think there is scope for a combination of more useful memory protection in hardware and more explicit declaration of contracts in software to simplify things - as well as reducing the opportunity for programming errors.

            Not that anything will change soon: in the time it's taken for Moore's law to come and go, software's major innovation has been the random incorporation of card games (Design Patterns, based on Top Trumps, CRC from role-playing games and scrum poker) into otherwise straightforward processes in order, seemingly, to test everyone's Patience.

            1. bombastic bob Silver badge
              Linux

              Re: About time too

              one layer doesn't know what the next is doing

              design top-down rather than in a scrum, and maybe the function's I/O description "contract" will handle this by itself...

              at the very least a library function should NEVER throw an exception. It should return an error code and let the caller decide whether to do something different. This is the basis of an important convention in the C language, where an error generally does not stop the program (nor throw some kind of exception), but returns an error code.

              My C code often has a catch-all at the end of a large function, where the cleanup is done. It often has a label 'error_exit'. You set an error code beforehand [based on where/what the error was], then go to 'error_exit', which cleans up the resources and returns whatever the error code was.

              (I actually started doing this after seeing Linux kernel code doing similar things)
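              (A minimal sketch of that pattern - not bob's actual code - showing the single cleanup label, with resources released safely whichever path jumped there.)

              #include <stdio.h>
              #include <stdlib.h>

              int process_file(const char *path)
              {
                  int rc = 0;
                  FILE *fp = NULL;
                  char *buf = NULL;

                  fp = fopen(path, "rb");
                  if (fp == NULL) { rc = -1; goto error_exit; }

                  buf = malloc(4096);
                  if (buf == NULL) { rc = -2; goto error_exit; }

                  if (fread(buf, 1, 4096, fp) == 0) { rc = -3; goto error_exit; }

                  /* ... do the real work here ... */

              error_exit:
                  free(buf);            /* free(NULL) is a no-op */
                  if (fp) fclose(fp);
                  return rc;            /* 0 on the success path */
              }

              int main(int argc, char **argv)
              {
                  return argc > 1 ? process_file(argv[1]) : 0;
              }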

              1. Charles 9

                Re: About time too

                "at the very least a library function should NEVER throw an exception."

                But there are some situations where you have no choice but to throw an exception, such as in a math function where a return value is expected and there is no spare range to carve out for error codes. How else do you signal an error from a function with an unbounded output range in a way that cannot possibly be misinterpreted?

                1. Michael Wojcik Silver badge

                  Re: About time too

                  As a rule of thumb, if a programming dictum is short enough to fit on a bumper sticker, it's probably naive and generally unhelpful.

                  (That rule doesn't apply to itself, because it's at a higher layer of abstraction.)

              2. doublelayer Silver badge

                Re: About time too

                There are good reasons for exceptions rather than error codes. The major reason is how easy it is to improperly handle the error. For most return types, there is only one possible error indicator: null; for some types, there is none at all. So you have to figure out a fragile system of different error codes and hope that nobody ever changes it - for example, a function which returns null and sets errno on an error, but later changes the value it sets errno to, or has a path you didn't find where it returns null without setting errno, leaving you with a stale value.

                If you're particularly starved of resources, then skip exceptions, as they bring some overhead along with them. In most cases, however, the benefits of exceptions as an error-handling technique outweigh the minor overhead. For the same reason, when you're operating in very little memory you can use hackish structures to cram data into small spaces, but in anything else, use normal types which don't need to be chopped up every time they're used. This isn't just to make the process of writing the software easier, but also because a program that embraces simplicity is easier to debug than one which strives always for efficiency. Programs with a straightforward structure are easier for someone new to edit than ones where you have to understand the original coder's spaghetti, even if the spaghetti runs faster. If the original coder has any flaws, that ease is important to fixing things.
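                (To illustrate the fragility being described, here is the classic C dance around strtol(): failure is signalled through errno plus a sentinel return value, and every step - clearing errno first, checking the end pointer - is easy to get wrong. The input string is just an example.)

                #include <errno.h>
                #include <stdio.h>
                #include <stdlib.h>

                int main(void)
                {
                    const char *input = "99999999999999999999";
                    char *end = NULL;

                    errno = 0;                          /* must be cleared first */
                    long value = strtol(input, &end, 10);

                    if (errno == ERANGE) {
                        fprintf(stderr, "out of range (clamped to %ld)\n", value);
                    } else if (end == input) {
                        fprintf(stderr, "no digits found\n");  /* errno is NOT set here */
                    } else {
                        printf("parsed %ld\n", value);
                    }
                    return 0;
                }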

          3. Alan Brown Silver badge

            Re: About time too

            "Now tell me how you did it and how you decided what was a stiick and what wasn't. "

            I picked up everything that was "brown and sticky"...... :/

            1. Charles 9

              Re: About time too

              But what made you realize it was brown? And precisely what shape is a stick versus, say, a rod or a twig?

              The catch is that a lot of our thinking is SUBconscious, so we can't fully understand how we ourselves think.

      2. Anonymous Coward
        Anonymous Coward

        Re: About time too

        So, so this.

        I grew up writing programs for a TI-82 graphing calculator. Incredibly small storage and memory footprint and exceptionally slow processor, when compared with even the computers we had at the time. It FORCED me to be super space- and processor-efficient. It was good training.

        Contrast with modern software. My desktop computer is definitely showing its age, but it has no trouble whatsoever running Half-Life 2, with graphics so good I can read the characters' lips when they talk. Yet browser games on Fakebook are extremely slow, with a noticeable delay just moving the mouse. Hmm. Maybe somebody needs to teach them how to write software properly.

        1. Loyal Commenter Silver badge
          Pirate

          Re: About time too

          That's because every time you move the mouse, those mouse movements are getting sent off to FB central to be analysed to see if there's an appropriate ad they can pop up.

    2. Bitsminer Silver badge

      Re: About time too

      when simply waiting three to four years gave you the same performance...

      Naw. In my experience, the developers _insisted_ that they must have the latest, fastest, smoothest, graphically superior machines "for their productivity". And to their eyes, the software was fast enough (only because it was tested on a fast machine).

      And the end users, stuck with cheap standard-issue office-level machines with half or a quarter of the RAM, two-year-old CPUs and small, slow magnetic disks, had to run the software built on those super-fast machines.

      And of course the software performance sucked.

      And that led to demands for "improved" and newer standard-issue office-level machines.

      Rinse. Repeat.

      1. stiine Silver badge

        Re: About time too

        I've been making this complaint for 25 years. And it's not just the CPU that matters. The monitor, network speed, etc. all affect an application's 'perceived' performance, and running on a top-of-the-line computer with a gigantic monitor on a 10Gig LAN will differ greatly from anything the users are going to be using.

    3. bombastic bob Silver badge
      Unhappy

      Re: About time too

      This may have an effect on how software's written, too

      not as long as Micros~1 has any say in it

  6. Anonymous Coward
    Anonymous Coward

    What's next? How about we reverse the fact that "every aspect of daily life now depends on computers that are ever faster, cheaper, and more widely spread", as it doesn't seem to be doing us any good. In a very real and practical sense, it's also not sustainable, unless there are plans somewhere to power all of these processes with solar, wind, and hydro.

    On the upside, it's very interesting reading about the current cutting edge of technology.

    1. DS999 Silver badge

      There's no need to reverse that "fact"

      Just because the author of the article claimed that doesn't make it so.

      Sure, it would be a hit to the consumer electronics industry if things stopped getting faster and/or lower power. The smartphone industry would slow down as it would mainly be about replacing broken devices. Ditto for PCs - that's actually already happened outside servers, since for the average consumer the PC of 10 years ago was already fast enough so many people are running theirs until it breaks. While PC sales slowed over the past decade (until the pandemic) it isn't as if the industry went away. It just was no longer the growth area it was before.

      It isn't going to make as much difference outside that sphere. Most other products use chips that are several if not many generations behind the leading edge. No one uses the latest and greatest CPU in their thermostat, or fridge, or water heater, or other consumer products that have gone "smart" on us, so if Moore's Law completely stopped it wouldn't hurt most products at all even if they have chips in them.

      If the consumer electronics industry as a whole slows down like the PC industry, that will be a hit to economic growth to be sure, but there are other places in need of investment, like green energy, that aren't going to be impacted by this. Solar cells don't depend on Moore's Law, nor do wind turbines. Nor do battery packs for grid storage or electric cars. While self-driving vehicles require a lot of processing power, current technology provides more than enough - the problem deploying them is software (and non-technological things like liability law).

  7. Will Godfrey Silver badge
    Linux

    That's a relief

    I don't suppose folks could now get off the merry go round, and start thinking of overall performance of complete systems - including power efficiency.

    1. Prst. V.Jeltz Silver badge

      Re: That's a relief

      Exactly.

      I don't want to come over all "Bill Gates" with his '640K is enough for anyone' gaffe,

      but by now surely we have enough!

      So much computing power that we invented Bitcoin for it to pointlessly slave over.

  8. Anonymous Coward
    Anonymous Coward

    No skynet worries

    Will this be what saves us?

    AI too thick to destroy us, thanks to the finite developments going forward.

    Obviously this doesn't account for the massive twats who will believe everything a flawed AI tells them needs doing.

    Maybe we are suffering that now, with UK governments chopping and changing rules etc.

  9. Man inna barrel

    What do we do with all this processing power?

    For most home and office computing, adequate performance was achieved years ago. Games might be an exception, where serious processing power is needed to improve graphic rendering quality. I use a bit of 3D modelling for engineering work, but I doubt my machine is under any strain rendering it.

    I think what has been happening for some time is that small increases in functionality, as seen by the user, are purchased at the expense of a great increase in resource usage, e.g. RAM and CPU cycles. I think basic economics will dictate that trend, because the customer does not incur the costs of increased computing usage. If all machines are more than adequate in raw computing power, software suppliers will use the surplus in whatever way suits their business, which often leads to inefficiency.

    Of course, this does mean that older machines fail to keep up with these trends, and their performance will be judged inadequate because they don't run the latest (less efficient) applications. This suits hardware suppliers, because it motivates new hardware purchases: a form of built-in obsolescence.

    1. elsergiovolador Silver badge

      Re: What do we do with all this processing power?

      There are plenty of areas where current computing power is severely lacking.

      Just because you are running your spreadsheets fine (I have had spreadsheets that brought my i9 to its knees...) does not mean other people don't need more power to run e.g. simulations.

      If you need low latency accurate simulations, even the fastest PC you can buy now, will not be up to scratch in many cases.

      1. Man inna barrel

        Re: What do we do with all this processing power?

        I run SPICE simulations of electronic circuits with no problems. If I see suspect results, I do the usual thing of doubling the number of points and running the simulation again. So what if a 100k point simulation takes a few seconds to run? The same happens with antenna analysis, using NEC2. Bear in mind that these applications date from the days of FORTRAN, with data entered on punched cards. My home computing environment is luxury compared to when these applications were originally developed, and a single run would have to be overnight on a mainframe.

        I dare say I could be a bit more smart about how I use my number-crunching applications, and save on CPU cycles. The point is, I don't have to be that smart to get the job done. In fact, trying to save CPU cycles smacks of premature optimisation, i.e. a real waste of time and money.

      2. DS999 Silver badge

        Re: What do we do with all this processing power?

        Sure, but you're increasingly an edge case. No one is saying that further improvements are worthless in general, just that for the vast and ever-increasing majority of people they no longer matter.

        That's why PC sales had been falling for a decade (until the pandemic). Most people are keeping their PC until it breaks, not replacing it after 3-4 years because the new one is twice as fast, like they used to. New machines may still be "twice as fast" in some sense, but only by having twice as many cores, which only benefits tasks that can be parallelised. For the average person browsing the web, reading email, watching YouTube, etc., more performance is pointless.
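        (As an editorial aside, not DS999's words: the usual way to quantify "only benefits tasks that can be parallelised" is Amdahl's law. If a fraction p of a workload can be spread over N cores, the best-case speed-up is

        \[
          S(N) = \frac{1}{(1 - p) + p/N}
        \]

        so doubling the core count buys very little when p is small - which is the browsing/email/YouTube case described above.)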

        Servers and "workstations" (which I'd define as a PC for someone like you, who can always make use of additional performance) are the only sectors of the PC market where Moore's Law still matters. Everything else is replace-as-necessary these days.

        1. Nick Ryan Silver badge

          Re: What do we do with all this processing power?

          Unfortunately, many of the common applications, while they have some minor productivity enhancements compared to versions written 20 years ago, run like an absolute dog on anything but the latest hardware. Something has gone very wrong, and while some of this can be excused by additional security checks, most of those should not be necessary in the first place.

          Step through code down to the assembly level and it's immediately obvious that common practices such as treating every variable as if it were a variant add hundreds of extra processor instructions for the simplest of tasks. Add in the x86 instruction set, where it often feels like most instructions revolve around juggling registers rather than doing anything overly useful, and it's pretty evident why all the incredibly clever CPU hardware optimisations are necessary (and, unfortunately, why corners were often cut on the security side).

          1. Steve Todd

            Re: What do we do with all this processing power?

            Treating every variable as if it were a variant? You’re using the wrong languages. Some tasks are not speed critical, so ease of programming wins out (JavaScript or Python being classic examples). Where reliability and speed are vital then strongly typed languages rule.

        2. Dave314159ggggdffsdds Silver badge

          Re: What do we do with all this processing power?

          'PC sales' doesn't include laptops. It's more amazing the sector hasn't all-but vanished.

          1. DS999 Silver badge

            Re: What do we do with all this processing power?

            'PC sales' doesn't include laptops.

            Yes it does. Whether you look at Gartner, or Canalys, or Statista, they all include both desktop and laptop.

            1. Dave314159ggggdffsdds Silver badge

              Re: What do we do with all this processing power?

              It can do, to be fair. When it does, it isn't a number in decline, by any means. When it doesn't, it's a hook for hacks to hang fake news on.

    2. JoeCool Bronze badge

      Re: What do we do with all this processing power?

      At the individual / personal level, computing power is largely irrelevant.

      But as a civilization, it is becoming a foundational utility. We build a data center, load it up with apps, feed in electrical power, and get out banking, commerce, transportation management, communications, and functioning power grids.

      To be clear, the resource being consumed isn't CPU cycles or memory silicon, it is electrical power. It is SPECmarks per watt.

      1. fung0

        Re: What do we do with all this processing power?

        "At the individual / personal level, computing power is largely irrelevant."

        True... but, I would contend, mainly because we've become accepting of software that doesn't do all it could do to be helpful. Computers have essentially added no new end-user capabilities since the early 2000s.

        My own ancient 4-core CPU is 90% idle over 90% of the time. I try to use that capacity, running things in the background. But I continually bump into things the OS could have already anticipated. I continually find myself adding bits of software to provide capability that should long ago have been integrated into the OS.

        More specifically, I recently switched to Scrivener from MS Word, and was astonished to discover just how much better a word processor can be - if it's been properly re-designed any time in the past couple of decades. And I still see endless ways Scrivener could be better, more helpful with the ubiquitous task of creating meaningful, coherent text documents (not simply "processing words"!).

        We've become conditioned to expect nothing more than new PCs that run the same old brain-dead software a little quicker, when there's a universe of other possibilities that isn't even being considered by the monopolistic corporations that have come to dominate the digital realm.

        Once upon a time, it was those possibilities that drove chip innovation. Today, we're happy with a marginal increase in speed, or a few more (woefully under-utilized) processing cores. Because we know that a new PC will not make our lives fundamentally better.

  10. Bartholomew

    magnetocaloric effect ?

    Since ~2005 clock rates have stalled because of cooling. In theory, if there were a solution, we would be in the era of 10GHz to 100GHz CPUs by now!

    https://www.researchgate.net/figure/Historical-growth-of-processor-performance-in-terms-of-clock-rate-in-comparison-to_fig9_281003089

    But we have yet to invent a cheap technology that can remove thermal energy at a power density comparable to the surface of the sun. As you make things smaller the power density gets higher, which in theory means you could switch them faster - if you could solve the cooling problem. With silicon, you cannot reduce the voltage much below 0.6 volts and still have the transistors function as switches. There is a lower limit to the voltage; all you can do is slow down the clock so that less energy is used for switching, energy which ultimately ends up as heat that needs to be removed. Or (for better benchmarks) you can overclock the hell out of them, bring them near the melting point, and then under-clock them for a long time until they cool down enough to run at a rate that matches the available cooling.
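    (For reference - an editorial aside rather than part of the original comment - the first-order relation usually quoted for CMOS switching power is

    \[
      P_{\mathrm{dyn}} \approx \alpha \, C \, V^{2} f
    \]

    where \(\alpha\) is the activity factor, \(C\) the switched capacitance, \(V\) the supply voltage and \(f\) the clock frequency; once \(V\) can drop no further, slowing \(f\) is the main lever left.)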

    Maybe it is time to start running plumbing pipes through the middle of the silicon for cooling, or to put the silicon in the middle of a cooling pipe. Moore may be dead, but we could have THz CPUs if only there was enough cooling. Maybe gadolinium could be used for direct magnetocaloric-effect cooling on internal layers of 3D structures within the chips: cool the cores in one thermal cycle, and overclock them once cold enough in the next.

    1. Anonymous Coward
      Anonymous Coward

      Re: magnetocaloric effect ?

      "Or (for better benchmarks) you can overclock the hell out of them and bring them to near the melting point and then under-clock them for a long-time until they cool down enough to run at a rate that is matches the available cooling."

      Hmm. What about having multiple processors (not cores) that trade off - one runs overclocked until it gets too hot, then it shuts down to cool while the other runs overclocked, then when that one gets too hot it switches back, etc. Takes twice as much hardware, but would allow for faster speeds.

      1. Hurn

        Re: magnetocaloric effect ?

        Why stop at 2 sockets?

        Go Gatling Gun style - use 6 sockets, arranged in a hex ring, so that each socket gets 5x the time to cool down, before being used again.

    2. Anonymous Coward
      Anonymous Coward

      Re: magnetocaloric effect ?

      "Maybe it is time to start running plumbing pipes through the middle of the silicon for cooling or put the silicon in the middle of a cooling pipe."

      I think at some point the gates themselves start to cook, meaning no cooling you can provide will help as their own heat will melt them faster than you can shunt that heat away.

      1. Bartholomew

        Re: magnetocaloric effect ?

        > no cooling you can provide will help as their own heat will melt them faster than you can shunt that heat away

        But that is my point of using the magnetocaloric effect inside the layers of the actual silicon chip: you pre-cool the chip before pulsing it for a short duration at an insane clock rate.

        e.g. It is difficult to boil a cup of water in under 45 seconds if the water starts at -40 °C/°F.

        Lowering the starting temperature means that the heat does not need to be shunted away as fast. Basically you would build additional refrigeration inside the layers of the silicon. You would still need all the external cooling that we use today.

    3. DS999 Silver badge

      Re: magnetocaloric effect ?

      Moore may be dead, but we could have THz CPU's if only there was enough cooling

      That's absolutely not true. If that were the case people using LN2 for overclocking would be running at hundreds of GHz instead of reaching maybe 40% faster than stock.

      There's a limit to how much power a transistor can handle, and smaller transistors can handle less power. If you could pump a few kilowatts into a CPU that had sufficient cooling, it would probably not last long enough to complete bootup.

      1. Bartholomew

        Re: magnetocaloric effect ?

        > people using LN2 for overclocking would be running at hundreds of GHz

        That is because all the cooling is currently very, very far away from the junction (TJ could be as high as 175°C), and glass (silicon dioxide) is a very good insulator. I'm talking about pre-cooling inside the silicon while that part of the chip is powered off, or in a low-power state, so that the internal temperature inside the silicon is lowered below room temperature. I'm not talking about continuous operation at hundreds of GHz. The power (heat) that needs to be dissipated is roughly related to clock rate squared. So what I mean is to cool for what would be a relatively long time, then pulse the clock at an insane rate for a minuscule length of time. You would still have to stop the silicon from becoming lava. And you would need a lot of cores to make it useful...

        LN2 is not practical for everyday use, but cooling based on the magnetocaloric effect may be.

        1. DS999 Silver badge

          Re: magnetocaloric effect ?

          Still can't do it because no matter how cold the transistors are, modern transistors are severely limited in the amount of power they can handle. There's a reason you have to use very different (and much larger) structures for RF.

    4. Steve Todd

      Re: magnetocaloric effect ?

      The important factor you’re ignoring is the speed of light (or, more correctly, the speed of electric fields in a wire). The current generation of CPUs are already running so quickly that a signal doesn’t have time to make its way from one side of a chip to the other in a single clock cycle, never mind switch a gate at the far end of that wire.

      CMOS transistor pairs draw power when they switch state as there is an instant when they are both, at least partially, on when the logic level changes. The current per gate is small, but multiplied by billions of gates it becomes significant. There are other semiconductors that switch faster (gallium nitride for example) and thus run cooler, but you can’t make them as dense so silicon is faster for large scale processes.

      The second problem is disposing of the heat in a home/office environment. Data centre CPUs are already past 200W using industrial cooling solutions. Standard desktop systems run closer to 135W. You CAN run them faster with the right cooling (anyone remember Intel showing off a high-core-count CPU at over 5GHz, only for it to come out that they were using a high-power industrial chiller?). They don't get that much faster, though - not because of temperature, but because the signals don't move fast enough across the chip.

      1. jtaylor

        Re: magnetocaloric effect ?

        The Joe Patroni approach to computing.

        Any problem can be solved with a square jaw, a cigar, and MORE POWER!

  11. Mage Silver badge
    Alert

    It was never a law

    It was an observation and in real terms probably dead nearly 20 years ago.

    A 15nm process is not really a ninth of a 45nm process (by area for the same number of transistors, i.e. 9x the devices on the same size of chip), just as a 45nm process is not really a quarter of a 90nm one (4x the devices). The node names are now fudged to mean the smallest feature rather than the typical size. Also, in usability terms - spreadsheet, word processor, email and a web page - how much better is an i9 with Win10 today versus a 2.2 GHz P4 portable with a 1600x1200 screen and XP in summer 2002?

    But 1981 to 2001 saw massive increases every 6 months to a year. Even 1995 vs 2002 was a huge difference. I had computers since 1979 but no laptop till 1998. My next was in 2000 and massively outclassed by the 2002 model which I used till November 2016, though it had DVD drive, WiFi card, HDD, RAM and GPU card all updated over the 14 years.

    Clock rates haven't stalled due to cooling alone. Obviously, computing is now largely portable: not just laptops, but phones, ereaders and tablets. And now lightbulbs, washing machines, routers, TV sets, BD players, consoles, microwaves, toys, streaming sticks etc. all have CPUs too.

    There is no point in faster clocks and more chip power consumption; that is the x86 dead end. Even in the 1970s and 1980s it was clear that eventually there would be no point to faster clocks, and that there were physical limits to clock speed and per-transistor shrinking. Multiprocessing, wafer-scale integration and transputers were discussed. By 2007 Samsung was layering an ARM CPU, flash and RAM in one package to save board area, reduce I/O pads and simplify PCB design. Not possible with x86. Intel tried scaling back to a PIII-style design with the Atom for netbooks and tablets; the performance compared to regular power-hungry Intel CPUs or ARM was abysmal.

    There are also issues with yield as you increase the number of devices per chip, and with reliability as you shrink the geometry and increase the clock.

    1. Yet Another Anonymous coward Silver badge

      Re: It was never a law

      >There are also issues with yield as you increase the number of devices per chip and reliability decreasing geometry and increasing clock.

      Yes, that was the point of Moore's 'law' - the optimal number of transistors per chip (or 1/size) increases geometrically. In spite of the increased cost of the fab to make smaller components, the increased processing time and steps, and the increased failure rate with smaller components, it was still cheaper over time to make smaller transistors.

      This is what has actually run out. It's now more expensive per transistor to make them on a 5 marketing-nm fab than on 7 or 10, but if you want to put skynet on your write you have no choice.

      1. Yet Another Anonymous coward Silver badge

        Re: It was never a law

        on your "wrist" - damn phone autocorrect

  12. Yet Another Anonymous coward Silver badge

    Optimal cost

    ", the optimal cost per component on a chip had dropped by a factor of 10"

    Surely the optimal cost per transistor is zero ?

    1. Jim Mitchell

      Re: Optimal cost

      Not for the seller.

  13. Anonymous Coward
    Anonymous Coward

    Oh FFS enough already!

    "Intel putting itself at a particular disadvantage by accurately but unhelpfully labelling successive iterations that didn't involve a step change as 10+, 10++ etc. while performance-wise the designs were equivalent or better to the 7nm of its competitors."

    STOP IT! Just STOP!

    This is a lie, it's known to be a lie, and there's no excuse for repeating it. If this article was sponsored by Intel, please disclose that. Otherwise, please stop repeating their marketing rhetoric as fact.

    Intel's best 10nm process is inferior to TSMC's 7nm process in price/performance and power/performance. All you have to do to know this is to look at delivered performance and power dissipation at any price point you like, look at core counts and IPC and cache sizes (omg the cache sizes), and see that AMD is presently mopping the floor with Intel and has been for at least the last 3 years. In many cases, Intel's latest parts aren't even competitive with the previous generation from AMD (who fab on TSMC 7nm). Ampere also fab on TSMC 7nm and likewise beat Intel on every imaginable metric, which is to say that this is not just the result of some AMD-specific x86 magic, either. This isn't to say that feature sizes are necessarily the most meaningful measure of a process's worth in a 3D world - that part's true. But what you're ignoring is that Intel's and TSMC's processes are both 3D. TSMC's is both smaller and better on a competitive basis, whether or not the nominal feature size is meaningful. And the resulting products prove that out. You could call these two processes Fred and Barney and it wouldn't change anything: TSMC's is better because it yields better products at the same or lower cost.

    So enough already. Intel have lost the last 3 generations because they couldn't get their 10nm (+/++/etc) and then their 7nm processes to work. They are losing the current generation because although their 10nm process now works, it doesn't produce parts that perform as well as TSMC's 7nm process. They are all but certain to lose the next generation as well, because they've already announced that the next generation will also be built on a 10nm process while everyone else either continues with 7nm or moves to the next node. If Intel could build processors that outperformed AMD's on this "superior" process, why don't they? Answer: because they can't. This doesn't necessarily mean Moore's Law is dead -- though I agree with Dr. Fuchs that it is -- but it probably does mean Intel is dead. Perhaps they'll come back to life sometime after Sapphire Rapids hits the market late, slow, hot, and expensive -- they've done so before -- but for now and the next couple of years the only game in x86 town is AMD's, because Intel's process is inferior.

    1. Anonymous Coward
      Anonymous Coward

      Re: Oh FFS enough already!

      > Intel is dead

      Intel are rich, they will build a fab (which incentives will pay for) in the USA and buy a couple of politicians to ensure that the government can only buy Freedom chips made in their Freedom fab

      They will buy some interconnect or enterprise network company so that any server/data-center maker who considers AMD will be cut off from a vital component.

      They will go into an HP/IBM spiral. Fire everyone expensive/competent, lose market share, become a patent troll

    2. Geez Money

      Re: Oh FFS enough already!

      It 100% is not a lie. The most useful single metric for comparison is probably transistor density, and it's easy to look it up and see that Intel 10nm is actually more dense than TSMC 7nm. The TSMC 7nm+ processes may be comparable.

    3. Anonymous Coward
      Anonymous Coward

      Re: Oh FFS enough already!

      I'm no Intel fanboy, but you're clearly an AMD one.

      "Perhaps they'll come back to life sometime after Sapphire Rapids hits the market late, slow, hot, and expensive"

      Let's just ignore that the early benchmarks already show 12th gen Intel cores are actually quite dominant and none of the above descriptors - except probably expensive - actually apply. This rant sounds more hopeful than anything, and I suspect it may have been written with one hand.

  14. FIA Silver badge

    Are we sure?

    Don't want to cause a fuss, but has someone double checked??

    I mean, it wasn't looking well in 2009, but it appeared to have perked up again by the end of 2013. Mind you, that may have been a brief respite, as it apparently died in 2018.

    Are we 100% sure someone isn't just trying to fiddle the social??

    "Moore's Law?? Nah... you've just missed it, it's popped out for some more black light bulbs..."

    1. Geez Money

      Re: Are we sure?

      You can go into the last century and keep finding these things, too.

      Its death is old enough to drink in the States, according to this article: https://www.technologyreview.com/2000/05/01/236362/the-end-of-moores-law/

  15. aerogems Silver badge

    How about a focus on efficiency?

    Instead of just trying to cram more features onto a chip, most of which only a handful of people will ever use, how about focusing on making the instructions per clock cycle ratio higher and performance per watt as well? The days of relying on brute force to improve performance are gone and now we have to do the hard work of improving designs.

    With some states in the US effectively banning certain gaming rigs because they're too power hungry, and battery tech moving along at a rather slow pace, being able to do more with less (clock cycles and/or electricity) seems like the next logical low hanging fruit to tackle. If you can increase the effective performance of a system by 25% without having to boost the clock speed, just by improving the efficiency of internal systems, that is a pretty worthwhile effort. Same as if you can manage to get the same performance out of a system that uses 25% less power. If you can combine those two, then you're really cooking with butter! ARM's big.LITTLE design, where you have some low power cores that can be used for routine background system tasks that don't need a lot of performance, and then beefier cores for all the userland stuff seems like a good place to start.

    1. Yet Another Anonymous coward Silver badge

      Re: How about a focus on efficiency?

      >making the instructions per clock cycle ratio higher and performance per watt as well

      Don't these tend to be opposite?

      Super-scalar Itanium-style processors do a lot in each instruction, but the extra transistors this needs use a lot more power than a RISC chip like an ARM

      1. Steve Todd

        Re: How about a focus on efficiency?

        The two are not necessarily opposed. The Apple M1 gets high IPC at low power, for example. x86 has a disadvantage in superscalar design because of the variable-length instructions it uses (up to 15 bytes IIRC), which make the fetch/decode steps complicated.

        Modern x86 designs translate native instructions into RISC-like micro-ops and then execute these on a RISC-like core. Getting this core up to 8-wide to match the M1 is doable, but then feeding the micro-op pipeline becomes a problem. ARM definitely has the upper hand here.

    2. Anonymous Coward
      Anonymous Coward

      Re: How about a focus on efficiency?

      "how about focusing on making the instructions per clock cycle ratio higher and performance per watt as well? "

      As someone who works in this field, they do aim for both higher IPC and higher performance per watt. You only have so much chip area, system power, time, money, people, and brains, etc, all of which limit your solution while trying to meet your performance targets.

  16. Anonymous Coward
    Anonymous Coward

    Static Power

    "This means they only use power when switching, not when they're holding a state..."

    If you've a transistor that doesn't leak, you're in the money. Patent it now!

  17. Geez Money

    Like Clockwork

    Every few years you can expect another iteration of 'Moore's Law is Dead' articles, and yet the semiconductor breakthroughs keep coming. I first ran into one of these clickbait pieces in the 90s and asked some of the more experienced engineers about it, to a chorus of laughter - they'd been hearing about it since the 70s. I suspect the first 'Moore's Law is Dead' article was probably published within days of Moore first saying it out loud.

    Yes, R&D costs more than it did in the 70s; so does pasta, but that doesn't mean spaghetti sauce is dead.

    Yes, EUV processes have proven very problematic, so do a lot of engineering challenges until they're fully commercialized. Something being hard isn't a reason to not do it for most people.

    Yes, we do already have many successor candidate processes that will be able to shrink further and we have many candidates to replace silicon entirely which have been proven in lab settings.

    No, Moore's law - to the small extent that it ever deserved to be called a law rather than an idle observation - is not dead.

    This is your second extremely purple old-man-yells-at-clouds article in a week, Rupert; after yelling yourself out that kids are evil for not using ZX Spectrums, you went to this? Are you ok?

  18. Doctor Syntax Silver badge

    Industry (not necessarily Moore) treated the observation as indicating an exponential curve. The early stages of a sigmoidal growth curve look very much like they're exponential. It's usually sigmoidal growth that best describes what can be achieved in the real world.

  19. Danny 2

    Are we approaching peak computing? What are the alternatives?

    Adult colouring in books.

    Gardening.

    Mutual masturbation.

    Football.

    Discotheques.

    Jigsaws.

    That's how I remember the seventies at least. Until the Atari.

    You know how you've just made the latest computers mine crypto-currency? When they get sentient we'll all be working for them.

  20. DerekCurrie
    Headmaster

    "Light"

    Technically, the term "light" refers only to the visible portion of the electromagnetic spectrum. Therefore, it is erroneous to refer to ultraviolet or infrared radiation as "light".

    "Electromagnetic radiation that is visible, perceivable by the normal human eye as colors between red and violet, having frequencies between 400 terahertz and 790 terahertz and wavelengths between 750 nanometers and 380 nanometers. Also called visible light."

    https://www.thefreedictionary.com/light

    1. Anonymous Coward
      Anonymous Coward

      Re: "Light"

      "In physics, the term 'light' sometimes refers to electromagnetic radiation of any wavelength, whether visible or not."

      https://en.wikipedia.org/wiki/Light

  21. Scene it all

    It is easier to expand sideways

    Maybe if everyone learned Erlang as their first programming language we would be looking at this differently. When you design in terms of having multiple, as in tens of thousands, of independent processes working on a problem, the need for any single piece of kit to be blindingly fast becomes irrelevant. Then the focus could instead shift to inter-core and inter-chip communications.

    We need to think of problems holistically rather than algorithmically. It is all Donald Knuth's fault. :)

  22. Richard Pennington 1
    Boffin

    There's another limiting factor

    Cranking up the cycle speed introduces another limiting factor.

    At 1GHz, the cycle time is 1 nanosecond; 1 light-nanosecond is about 30 centimetres.

    At 10GHz, the cycle time is 100 picoseconds; 100 light-picoseconds is about 3 centimetres.

    Eventually you get to the point where the light travel time across your machine is longer than your cycle time. Beyond this point, you cannot keep the cycle synchronised across your machine.

  23. Binraider Silver badge

    One obvious alternative - allow transistors to function in three states (positive, negative and neutral) so that a negative number can be natively stored, as opposed to binary, which needs a logical overlay to flag negatives. Arithmetic can be simplified by going ternary.

    The objective is to simplify the instruction set further to crank out more calculations per second.

    Ternary systems were the subject of a few academic studies but never really went anywhere. Some laboratory implementations were done ages ago by the Soviets.
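    (For a flavour of how sign becomes "native", here is a hedged C sketch of balanced ternary - the digit set {-1, 0, +1} reportedly used in those Soviet machines; the conversion routine is purely illustrative.)

    #include <stdio.h>

    /* Print n in balanced ternary, writing the -1 digit as 'T'. */
    static void print_balanced_ternary(int n)
    {
        int digits[40];
        int count = 0;

        if (n == 0)
            digits[count++] = 0;
        while (n != 0) {
            int r = ((n % 3) + 3) % 3;   /* remainder in {0,1,2} */
            int d = (r == 2) ? -1 : r;   /* balanced digit in {-1,0,+1} */
            digits[count++] = d;
            n = (n - d) / 3;             /* exact division */
        }
        while (count--)                  /* most significant digit first */
            putchar(digits[count] == -1 ? 'T' : '0' + digits[count]);
        putchar('\n');
    }

    int main(void)
    {
        print_balanced_ternary(5);    /* prints 1TT: 9 - 3 - 1 = 5 */
        print_balanced_ternary(-4);   /* prints TT: -3 - 1 = -4 */
        return 0;
    }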

  24. frankyunderwood123

    Is there an inverse law for software?

    The inverse law being that, as the power of processors increases, the optimisation of software decreases?

    Anecdotally, it would certainly seem so.

    I recall the absolute wonders creative coders would work with so little processing power and memory on a ZX Spectrum, back in the '80s.

    Entire games with fairly complex graphics for that era, squeezed into 8k of RAM, optimised to an incredibly efficient point, using machine code.

    That dedication is incredible, as machine code is ... very very difficult for humans.

    Now we are seeing games of a similar complexity - little indie platform type games, that are consuming 100x the storage and processing power.

    Sure, things are far more complicated than this comparison. A lot of those games are created on the back of game engines and I'd argue it is probably an order of magnitude easier now, to create the same level of graphical detail and logic complexity, than it was 40 years ago.

    But the end result of that, is software that requires more raw processing power.

    This is a simplistic completely anecdotal point I'm making, but I'm confident the argument holds.

    When software developers are spoiled with more raw processing power, they often optimise less - and sometimes the tools in the toolchain that developers rely on aren't optimising either.

    This is why I love projects like the "1k programming challenge", as they stretch the imagination of coders to find the most optimal way to run code.

    Sure, the size of an application doesn't equate to how much processing power it uses - you can create a program of just a few bytes that'll melt a CPU.

    But it does encourage optimisation - and ultimately, optimisation can result in using less processing power.

    I'll get my coat...

  25. Anonymous Coward
    Anonymous Coward

    Rather than trying to make the things smaller

    Given the power of current computers, as a consumer I don't need faster CPUs. I'd very much like greater reliability and security in my computing experience, though, and that starts with processor design. Less power consumption would be great too, for the future of the planet, which is a tad more important than the size of anyone's electricity bills.

  26. karlnapf

    An alternative?

    Less f*ck*ng bloated Software.

  27. Big_Boomer Silver badge

    Moores "Law" was NEVER a Law, merely an observation.

    Over the last 10 years I have been involved in 2 road traffic accidents. Therefore, over the next 10 years I will be involved in 2 more RTAs? There are so many variables at play that at best that is a wild guess and more likely is just plain bovine excreta.

    Moore made a prediction that just happened to come to pass, so everyone decided it must be a "Law", and then extended it way beyond what he originally stated, both in subject and timespan. The shrinking of transistors was always going to hit boundaries of physical size due to atomic physics, which is why his prediction had a limited timespan.

    There has been, and will be, no motivation to develop alternative means of computing until we have exhausted the capabilities of semiconductor-based switching. Once that comes to its inevitable limit, sufficient investment will finally materialise and we will get the next great leap - or it may be 50 years before a big breakthrough in physics leads to better switches. Most likely it'll be something we just haven't considered before.
