Death notice: Moore's Law. 19 April 1965 – 2 January 2018

DEATH NOTICE Long beloved by both engineers and computer scientists because of ongoing performance benefits ceaselessly and seemingly effortlessly achieved. From the age of 50, Moore's Law began to age rapidly, passing into senescence and then, at the beginning of this month, into oblivion. Moore's Law leaves a thriving …

  1. Anonymous Coward
    Anonymous Coward

    The slowdown happened years ago: when Java and outsourcing to cheap code shops both became popular.

    1. HmmmYes

      Nope.

      Nothing cheap about Java and Indian outsourcers, trust me.

      A couple worthy of each other.

    2. Destroy All Monsters Silver badge
      Windows

      "Because Java is slow"

      "Ring Ring!"

      "Hello, Anonymous Retard here."

      "This is 2001. I want my marketing memes from Microsoft back!"

      Or you can use OCaml. It generates C directly.

      (Btw, probably one of the worst attempts at prediction in IT ever: "The Java Virtual Machine: a passing fad?", IEEE Software, Volume 15, Issue 6, Nov/Dec 1998. Sadly paywalled.)

      1. Christian Berger

        Seriously, outside of Android, smart cards and the mentally insane, the JVM is kinda dead.

        1. HmmmYes

          Android doesn't run the JVM.

          Last time an ATM crashed, it appeared to be running OS/2.

          1. Anonymous Coward
            Anonymous Coward

            I think you're misunderstanding something here - it's the smartcards themselves that run Java, not the ATM.

          2. PastyFace

            or Windows NT

        2. Anonymous Coward
          Anonymous Coward

          Server-side Java is hugely popular and a massively in-demand skill - what are you talking about? Silly boy.

        3. Name3

          > Seriously, outside of Android, smart cards and the mentally insane, the JVM is kinda dead.

          You forgot servers. Lots of enterprise and open source server software is written in Java. Think of all the Hadoop and Lucene tech. And Oracle, IBM, and SAP have lots of Java code bases.

          But dotNet is in the same boat. While Java has an open source community with millions of projects, dotNet has a wasteland and a dead zoo (the former CodePlex, a failed GitHub competitor by MS). And dotNet Framework 4.x is labelled as "legacy", dotNetCore 1 is already unsupported, and only the unfinished dotNetCore 2 with its missing APIs and lack of open source projects is "the future". Beyond enterprise servers hardly anyone cares about them anymore. If Java is dead, what is dotNet then?

          Can someone Archive Codeplex: https://www.codeplex.com ... they will turn it off any minute :(

          1. Craigie

            There is (currently) no plan to turn off the read-only archive at Codeplex. Source view doesn't seem to work in Chrome already mind you.

        4. sisk

          Seriously, outside of Android, smart cards and the mentally insane, the JVM is kinda dead.

          The most popular branch of the most popular game in the world still uses it. So long as the Java edition remains the main focus of the Minecraft team I think the JVM is going nowhere.

          All the more reason for them to change it over to C++ IMO.

        5. collinsl Silver badge

          And Puppet and Cognos and Jira and loads of other web tools and HP iLOs and Dell iDracs and IBM iDross (whatever) and most implementations of IPMI etc etc etc

      2. Citizen99
        Coat

        "... Or you can use OCaml. It generates C directly. ..."

        Doesn't that belong in this thread ? https://forums.theregister.co.uk/forum/1/2018/01/24/saudi_camels_disqualified_from_beauty_context_for_using_botox/

    3. FIA Silver badge

      The slowdown happened years ago: when Java and outsourcing to cheap code shops both became popular.

      The slowdown happened because the IT industry has grown far faster than the ability to train competent software engineers.

      Not being able to 'see' the Heath Robinson machines that comprise most software applications helps with this too.

      If you asked someone to build a bridge, and the result was constructed out of twine and empty kitchen roll holders, even if it was demonstrably able to cope with the weight you'd still be wary of using it. Software doesn't have a physical manifestation that you can inspect so it's much harder to tell.

      In 'traditional' engineering you wouldn't hire an enthusiastic DIYer, you'd hire a trained engineer, no matter how well the shelves were put up in their house. In IT we tend to hire the DIYers as there's not enough engineers to go around.

      I've on more than one occasion seen programming jobs advertised with 'Programming experience would be nice but not essential'. I've also worked with the results of this policy.

      1. Anonymous Coward
        Anonymous Coward

        "If you asked someone to build a bridge, and the result was constructed out of twine and empty kitchen roll holders, even if it was demonstrably able to cope with the weight you'd still be wary of using it."

        Unless, of course, you were in a hurry, which unfortunately is the standard state of most businesses these days: get there before the competition does. Doing it fast is more important than doing it right because missing a deadline is obvious; you can fast talk your way out of something wrong most of the time.

      2. Doctor Syntax Silver badge

        "The slowdown happened because the IT industry has grown far faster than the ability to train competent software engineers."

        To say nothing of the ability to find candidates capable of becoming software engineers. However, there's the equivalent of Moore's Law at work here: you just add more, cheaper, part-trained, not necessarily talented engineers.

        1. AlbertH

          Educational Issues....

          To say nothing of the ability to find candidates capable of becoming software engineers.

          This is very true. Since education in much of the world these days has been devalued to its current nadir, we're unlikely ever to see a truly competent, educated, able software engineering workforce. Schools today seem to believe that indoctrinating children with the latest PC "values" and deluded left-wing nonsense is an education. It isn't.

          I've had the recent misfortune to want to employ a couple of school-leavers in trainee positions that would give them further education (at a local college) and a reasonable rate of pay. I was only able to find one lad who was sufficiently able to fill one of the posts, and he'd been home-educated. The other 80 applicants were all equally ill-equipped for life outside the lower reaches of the civil service! None were sufficiently numerate, and most had the literacy abilities of an 8-year-old. Many had never read a book, and all were simply interested in getting paid for menial work, rather than receiving any kind of further education.

          Unless education is actually reinstated in UK schools, we're going to end up with the most ignorant, intellectually crippled populace in the western world. We already lag much of the world in basic engineering skills, and this will only worsen with the current crop of "teachers".

      3. veti Silver badge

        The slowdown happened because the IT industry has grown far faster than the ability to train competent software engineers.

        Or to put it another way: "Competent software engineers have repeatedly failed to deliver on their promises to create tools that would put the great majority of computing tasks within reach of any reasonably educated layperson."

      4. Dagg Silver badge
        Flame

        It's called Agile

        'Programming experience would be nice but not essential'

        Even with enough time and enough monkeys you can't keep up with the changes, so you just get whatever they could throw together, as long as it looks good.

      5. meadowlark

        PERMANENT SLOW MOTION REPLAY ?

        At the age of 74, I was born slightly too early for the digital age. I can still remember seeing those giant spools of computer tape spinning in massive cabinets in the background on the TV show: "The Man From Uncle." And apparently, when the Americans first went to the moon (I was 25 at the time), the memory of the on-board computer systems was a minuscule fraction of what is now in a smartphone.

        However, I'm fascinated by all things IT wise even if I'm not up to speed with all the subjects discussed by the very highly qualified experts on this site. But the analogy you've given is spot on and I can't believe that this problem wasn't anticipated years ago. If I've got it right, then even if hundreds of thousands of software engineers, programmers etc were trained up to the degree required, it would take years before any difference was noted. And because of this deficit, all things IT are going to be relatively sluggish from now on.

        Trevor.

        P.S. Does this also mean that 'A.I.' will come grinding to a halt, and so we can all sleep sounder now that super advanced androids won't take over mankind after all?

        1. GrumpyOldBloke

          Re: PERMANENT SLOW MOTION REPLAY ?

          No Meadowlark, the super advanced androids will still take over mankind. The difference now is that they won't know what to do once the takeover is complete. Mankind will start referring to the androids as politicians and we will lament how nothing ever seems to change for the better. Expect the androids to one day start serving adverts and legislating for donors in the absence of any real AI.

          1. peterjames

            Re: PERMANENT SLOW MOTION REPLAY ?

            The fundamental issue being the increasing anti-intellectualism in the west - for some reason believing you can be super dumb and successful at the same time.

            Lacking human intelligence, what chance is there of artificial ever appearing?

            We'll just take a swipe at politicians instead - because we are the angry mob, we read the papers every day...

            Politicians, btw, with the lack of general brain capacity they represent, are doing a spectacularly good job.

            1. sisk

              Re: PERMANENT SLOW MOTION REPLAY ?

              Politicians, btw, with the lack of general brain capacity they represent,

              Ignoring popular opinion in favor of facts, you'll find that the average POTUS candidate is slightly more intelligent than the average PhD candidate, even after the 2016 campaign.

              Actually, we don't know what Trump's IQ is. He's never opened his records AFAIK. Given his previous success in the business world, however, I'd guess his lack of effectiveness as POTUS is less down to intelligence and more down to the arrogant (and blatantly wrong) assumptions that running a nation is like running a business and that The Donald is perfect. Even now, after over a year in office, he seems to be running on those assumptions. At any rate his IQ would have to be well into the mentally retarded (using medical term here) and non-functional range to drop the average below the borderline genius range, so the above statement most likely still stands.

              But previous to him the "dumbest" President we had, based purely on IQ, was Dubbya. His IQ was "only" on par with an average PhD candidate. Which put him somewhere in the 80th or 90th percentile overall.

              In other words, even the dumbest successful politicians are probably smarter than most of us. Truly stupid politicians don't last long.

    4. smackbean

      Don't understand people who keep on about Java being 'slow' (whatever that means...). It's nonsense and anyone with half a brain knows that.

      https://stackoverflow.com/questions/2163411/is-java-really-slow

      1. sisk

        Don't understand people who keep on about Java being 'slow' (whatever that means...). It's nonsense and anyone with half a brain knows that.

        It actually was comparatively slow 15 or 20 years ago. The issues were solved quite some time ago, but the reputation remains. It also probably doesn't help that there are a lot of inexperienced programmers using Java to write poorly optimized applications because it's the language used in most intro to programming courses. You wouldn't think many of those programs would be in the wild, but they are.

        1. TonyJ
          Joke

          "...It actually was comparatively slow 15 or 20 years ago. .."

          Not like El Reg commentards to hold onto views they formed 20 years ago...

    5. Anonymous Coward
      Anonymous Coward

      Java

      I used to make those Java / speed cracks too until I got a job at a place that built financial trading systems in pure Java. On commodity HP tin, we told clients the latency (time between an order hitting our border router to the result leaving our border router) was guaranteed to be under 10ms (1/100th of a sec), but by the time I left - several years ago - it was already 20x faster than that, and they were only just beginning to experiment with expensive high-performance NICs and suchlike. Oh yes, and tens of thousands of orders/sec.

      Yes, some of the commodity tin was tricked out with more RAM than I had disk space not long before, but still Java. I'm never knocking it for performance again.

      Syntax, on the other hand,...

    6. John Smith 19 Gold badge
      WTF?

      "..make chip designers much more conservative..as they pause to wonder..those innovations could..,

      Shouldn't a security impact assessment be part of every commercial product?

      Oh no, sorry. Chip designers are special.

      Why? Because of the scale of their f**kups?

      Not everyone used the same processor for remote management

      But everyone did manage to f**k the implementation up.

    7. Oh Homer
      Holmes

      "make messy code performant"

      Here's a radical idea...

      Don't make messy code.

      To all those who for years sneered at low-level code optimisation over that myth called "portability", for your delectation may I present to you a big old plate of Just Deserts.

      Bon Appetit.

  2. Mark 85

    It would seem that code will need to be tighter and more thought out. I suspect the days of bloatware and those who can only write bloatware are doomed. It's also quite possible that many of the tools will need to be re-written in order to output optimum code. OTOH, those who can produce tight code with no bloat, etc. will do very well.

    1. Anonymous Coward
      Anonymous Coward

      Re: so Desperation

      RIP Adobe! :)

      1. dbayly

        Re: so Desperation

        RIP Microsoft!

        1. yoganmahew

          Re: so Desperation

          My 27 years of assembler finally will bear fruit?

          1. Christian Berger

            Re: so Desperation

            Well, if you're good at assembler, you're likely one of the people who will write decent code in any language.

            1. John Styles

              Re: so Desperation

              I am actually dubious of this. I think 'inability to write code well in high-level languages because of over-fixation on low-level details that don't matter' is about as common a failure mode of developers as 'writing tremendously inefficient code because of too limited an understanding of what is actually going on'.

            2. herman Silver badge

              Re: so Desperation

              It is all ALGOL to me.

          2. Anonymous Coward
            Anonymous Coward

            Re: so Desperation

            So looking forward to going back to

            __asm
            {
                mov eax, num    ; Get first argument ...
            }

            Have to dig out my old Microsoft Programmer's Reference with all the interrupts in it.

    2. Anonymous Coward
      Anonymous Coward

      RIP Windoze ;-)

      1. HmmmYes

        I, for one, welcome our new Erlang/OTP overlords.

      2. This post has been deleted by its author

    3. itzman

      It would seem that code will need to be tighter

      Yes.

      Instead of bloatware and throw big tin at it, how about actually learning how to write good code?

      1. Jamie Jones Silver badge

        Re: It would seem that code will need to be tighter

        Instead of bloatware and throw big tin at it, how about actually learning how to write good code?

        But we've been saying that ever since MS brought us the "Too slow? buy a new machine. Uses too much memory? Buy more!" mantra in the mid 90's.

        And nothing ever happened, until 'almost-smart' phones became available, and suddenly everyone was concerned with lean, efficient programming. Then phones became more powerful, and once again, that philosophy died.

        Same will happen this time around. Lowest-common-denominator programmers and techniques will still be employed - "You just need to buy a more expensive machine! More cores, more MHz!"

    4. Charles 9

      "It would seem that code will need to be tighter and more thought out."

      Nope, because there are still deadlines to consider. You know the saying: you can either do it fast or do it right, unless you can find that person who can break the rules and do things RightFast.

    5. cosmogoblin

      Back in the 8-bit days of 64kB of RAM and no HDD, programmers learned to make neat and optimised code to work within the constraints. As memory increased, these skills withered and atrophied.

      I had high hopes for smartphone apps - with so many cheap Android phones having less than 100MB, would developers relearn efficient design? The answer is yes and no: I have a lot of fairly complex apps by small studios that take up a few hundred kB. But the big developers seem incapable of doing the same. How is Stellarium 43MB and CoPilot 57MB (without maps), but Kindle is a whopping 339MB? Why is the Google search app larger than any graphical game I have installed?

      1. ChrisC Silver badge

        "As memory increased, these skills withered and atrophied."

        In the world of desktop/mobile/web coding, perhaps. The world of low-cost embedded coding still heavily relies on people understanding how to wring every last drop of performance out of the processor - in my nearly two decades in the business the most powerful processor I've ever used had 256KB of flash and 64KB of SRAM, and at the other end of the spectrum I've also written firmware for a processor with 1KB of flash and no SRAM, just a handful of general purpose registers. Bloody good fun, and I get paid quite nicely for doing it too :-)

        1. Gel

          My first was 256 bytes of firmware in a CDP1802. It sold quite well. Written in hex without an assembler.

          A lot of fudges were required to fit it into that space. The 256 bytes required two 256*4 eproms.

          Modern programmers have lost the art of squeezing every last drop out of an MCU. Most are frightened of assembler. It's surprising how competent you become with hex after a while. Assembler makes life a lot easier. Though I think we should look to VHDL for the way forward. This is what bitcoin miners use.

        2. rajivdx

          Chris, the times are a-changing for embedded devices too - they now sport 1GB of RAM and 8GB of flash, run Linux, take 2 minutes to boot, are poorly programmed by amateurs, and do things as mundane as turning a light ON or OFF.

      2. emullinsabq

        programmers learned to make neat and optimised code to work within the constraints. As memory increased, these skills withered and atrophied.

        That isn't quite true. I don't deny that there are plenty of people lacking skills today, but there were good reasons for reducing that optimization. It was routine in those days to pack nybbles together for both inputs/outputs because you could fit two into a byte. This saved space, but it had consequences when it came to writing clear code, and debugging it.

        One of the horrible habits that arose out of that era was mixing an error condition with the return value, so you'd check for the error (often -1) and otherwise treat the return normally. It's so routine you still see it today, and it's awful compared to something like:

        if (!getch(&ch)) { handle_error(); }

        in which you separate the error from the return value, and are more likely to realize that there IS an error possibility.
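        As a rough sketch of the difference (C used purely for illustration; read_byte_old and read_byte_checked are hypothetical names, not from any real library):

        #include <stdbool.h>
        #include <stdio.h>

        /* Old style: the error value shares the channel with the real data. */
        int read_byte_old(FILE *f) {
            int c = fgetc(f);
            return c;                /* caller must remember that EOF (-1) means "error" */
        }

        /* Separated style: success/failure is the return value, data goes via a pointer. */
        bool read_byte_checked(FILE *f, unsigned char *out) {
            int c = fgetc(f);
            if (c == EOF)
                return false;        /* the error path is hard to miss at the call site */
            *out = (unsigned char)c;
            return true;
        }

        int main(void) {
            unsigned char ch;
            if (!read_byte_checked(stdin, &ch)) {
                fprintf(stderr, "read failed\n");
                return 1;
            }
            printf("got 0x%02x\n", ch);
            return 0;
        }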

      3. TonyJ

        "...Back in the 8-bit days of 64kB of RAM and no HDD, programmers learned to make neat and optimised code to work within the constraints. As memory increased, these skills withered and atrophied..."

        I remember reading an article back in the day about the guy who was tasked with converting Sim City from the Amiga to the humble BBC.

        Apparently, there was one routine in the Amiga version that used considerably more memory than was wholly available on the Beeb.

        And yet (apart from graphically of course), he pretty much nailed it with a like-for-like reproduction.

        At college I had to learn to program in assembly - in hex - on "development boards" with <8KB RAM so it was imperative that anything you tried to do was neat.

        Mind you I was never very good at it myself but some of the others there had a natural gift for it that was staggering.

      4. Anonymous Coward
        Anonymous Coward

        My guess is the compiled code is still relatively small, but that the 339MB is assets (high-res images, "Welcome to Kindle" videos, etc.)

    6. onefang

      "OTOH, those who can produce tight code with no bloat, etc. will do very well."

      My skills will be back in fashion?

  3. Steve Todd
    FAIL

    You do know that Moore’s law says nothing about speed?

    It says that the number of transistors that can be fitted on a silicon chip of a given size will double every 18 months.

    Speed improvements slowed or stopped a while back, replaced by improved parallelism. We now have 16 core, 32 thread desktop CPUs. Design changes can fix most of the weaknesses that allow Spectre and Meltdown, but it will take them a while to filter through to live systems. In the mean time the reduction in speed does not mean Moore’s law has ended.

    1. Mark Honman

      Re: You do know that Moore’s law says nothing about speed?

      Yup, it's more correct to say that the "golden era" of single-threaded computing is gone - a time when moving to the next process node would enable higher operating frequencies _and_ the doubled transistor count could be used for new architectural features - such as speculative execution - and improved performance through integration of functions that were previously off-chip.

      Many of the architectural features that boost single-threaded performance are costly in area, and now that applications _must_ exploit parallelism to get improved performance there is a tipping point. If the applications scale well on highly parallel systems, for a given chip size more system performance can be had from many simple cores than a smaller number of more sophisticated cores.

      That is, provided the interconnect and programming model are up to scratch!

      1. Anonymous Coward
        Anonymous Coward

        Re: You do know that Moore’s law says nothing about speed?

        Just not everything can be run in parallel.

        1. MrXavia

          Re: You do know that Moore’s law says nothing about speed?

          "Just not everything can be run in parallel."

          But I find most things that are computationally intensive can be,

          Even your web browser should benefit from multi-processing.

          1. Charles 9

            Re: You do know that Moore’s law says nothing about speed?

            Then how come most video encoders aren't really MT-friendly and usually have to resort to tricks such as partitioning? It's telling that the x encoder suites (x264 and x265) still have a marked preference for CPU-based encoding. As I recall, certain e-coin systems use algorithms specifically meant to be easier for the CPU to do versus say a GPU, on similar principles.

            1. Anonymous Coward
              Anonymous Coward

              Re: You do know that Moore’s law says nothing about speed?

              You would assume that video encoding and decoding would be quite feasible to parallelise, with a thread for each GOP (group of pictures).
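              A minimal sketch of that idea, assuming a hypothetical encode_gop() routine (not from any real codec API) and one POSIX thread per group of pictures:

              #include <pthread.h>
              #include <stdio.h>

              #define NUM_GOPS 8

              /* Hypothetical stand-in for the real per-GOP encoding work. */
              static void encode_gop(int gop_index) {
                  printf("encoding GOP %d\n", gop_index);
              }

              static void *worker(void *arg) {
                  int gop = *(int *)arg;
                  encode_gop(gop);     /* each GOP is treated as independent, so no locking */
                  return NULL;
              }

              int main(void) {
                  pthread_t threads[NUM_GOPS];
                  int ids[NUM_GOPS];

                  for (int i = 0; i < NUM_GOPS; i++) {
                      ids[i] = i;
                      pthread_create(&threads[i], NULL, worker, &ids[i]);
                  }
                  for (int i = 0; i < NUM_GOPS; i++)
                      pthread_join(threads[i], NULL);
                  return 0;
              }

              In practice rate control, latency and inter-GOP dependencies make it messier than this, which is presumably part of what the post above is getting at.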

          2. JeffyPoooh
            Pint

            Re: You do know that Moore’s law says nothing about speed?

            MrXavia quoting someone else opined, " '...not everything can be run in parallel.' But I find most things that are computationally intensive can be..."

            Actually, if you look at your PC and imagine how many things it's doing at once (background processes and services), it could be paralleled (or multiple cores) up the ying-yang. Parallelism (or multiple cores) doesn't have to be within one particular algorithm.

            Imagine if you will, a quad-core processor where two cores were 100% dedicated to servicing the human's needs, and all the Windows' rubbish background services were offloaded entirely to the other two cores, so that they'd never bother you again. Might need separate RAM and HDD too.
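            A rough Linux-flavoured sketch of that split (sched_setaffinity is Linux-specific, the machine is assumed to have at least four cores, and the 0-1/2-3 split is arbitrary): pin a noisy background process onto cores 2 and 3 so cores 0 and 1 stay free for the human.

            #define _GNU_SOURCE
            #include <sched.h>
            #include <stdio.h>

            /* Confine the calling process to a chosen pair of cores. */
            static int pin_to_cores(int first, int second) {
                cpu_set_t set;
                CPU_ZERO(&set);
                CPU_SET(first, &set);
                CPU_SET(second, &set);
                return sched_setaffinity(0, sizeof(set), &set);   /* pid 0 = this process */
            }

            int main(void) {
                /* Imagine this is one of the "rubbish background services":
                   keep it off cores 0 and 1 so they stay responsive. */
                if (pin_to_cores(2, 3) != 0) {
                    perror("sched_setaffinity");
                    return 1;
                }
                printf("background work now confined to cores 2-3\n");
                return 0;
            }

            You can do much the same from a shell with taskset(1), or on Windows with SetProcessAffinityMask.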

            1. Anonymous Coward
              Anonymous Coward

              Re: You do know that Moore’s law says nothing about speed?

              and all the Windows' rubbish background services were offloaded entirely to the other two cores,

              Woah! All of that, on just 2 cores?? If only I could live long enough to see that sort of technology.

      2. Anonymous Coward
        Anonymous Coward

        "golden era of single-threaded computing"

        Ended over a decade ago. Intel hasn't managed even to double single-threaded performance in that time, when that used to happen roughly every couple of years. Which, yeah, Moore's Law doesn't say anything about, but back in the day doubling your transistor budget and shrinking their area by half had tangible performance benefits beyond "hey, more cores!"

      3. ibmalone

        Re: You do know that Moore’s law says nothing about speed?

        Branch prediction is really an attempt to use on-chip parallelisation to pretend to be a faster single-threaded system. Maybe it's finished, maybe a redesign at the architecture level can avoid the current exploits, but you're right, actual parallel cores have been with us for quite a while now, we should start using them. (Also, a clever compiler could use parallel cores to do a form of branch prediction anyway, something that could be fixed in software if a new exploit ever came up. Yes, there's more overhead to that approach.)

      4. Doctor Syntax Silver badge

        Re: You do know that Moore’s law says nothing about speed?

        "If the applications scale well on highly parallel systems, for a given chip size more system performance can be had from many simple cores than a smaller number of more sophisticated cores."

        That's a big if. OTOH a modern desktop system seems to have several things running at the same time, and a server, where Meltdown will really be hurting, will have lots, so the load can be highly parallel even if the application isn't. That implies context switching needs to be made cheaper. Rather than throwing all the transistors at more cores, throw some of them at that, and at more cache, so there's less chance of all the processes waiting on cache misses at the same time.

    2. jmch Silver badge

      Re: You do know that Moore’s law says nothing about speed?

      "the number of transistors that can be fitted on a silicon chip of a given size will double every 18 months"

      Correct, but that is limited by the physical size of atoms; there just isn't any more space for smaller circuits. On the other hand, I'm not sure if, in the original intention, the doubling was of transistors per square cm rather than per cubic cm (back then the circuits were rather flat), and so the number of transistors per area maybe can still be improved upon with 3d layering of components.

      1. DavCrav
        Joke

        Re: You do know that Moore’s law says nothing about speed?

        "number of transistors per area maybe can still be improved upon with 3d layering of components."

        And if string theory is right, there's a few more dimensions to play with after the third dimension, so we have yet more room.

        1. BebopWeBop

          Re: You do know that Moore’s law says nothing about speed?

          From what I remember, many of those dimensions are a wee bit small.

          1. Sgt_Oddball

            Re: You do know that Moore’s law says nothing about speed?

            And somewhat folded in on themselves too

          2. Anonymous Coward
            Anonymous Coward

            Re: You do know that Moore’s law says nothing about speed?

            "From what I remember, many of those dimensions are a wee bit small."

            Something like 7 x 10^34 linguine.

          3. Bill Stewart

            Re: You do know that Moore’s law says nothing about speed?

            Re: String theory and small dimensions.

            "In OUR theory, the number of dimensions goes up to 11."

            "Why don't you use 10 dimensions and make them larger?"

            "Ours goes to 11. It's one bigger."

        2. Anonymous Coward
          Anonymous Coward

          Re: You do know that Moore’s law says nothing about speed?

          "And if string theory is right, there's a few more dimensions to play with after the third dimension, so we have yet more room."

          If string theory as currently formulated is correct, those extra dimensions are inherently unusable for any purpose save allowing the contents of the universe to exist.

          Physics is another of those fields where the exact details are important.

      2. Anonymous Coward
        Anonymous Coward

        Re: You do know that Moore’s law says nothing about speed?

        > that is limited by the physical size of atoms

        who says an atom is the smallest thing? Everyone knows by now, that's not true.

        1. Geoff May (no relation)

          Re: You do know that Moore’s law says nothing about speed?

          who says an atom is the smallest thing

          Correct, for example, my pay increase for the last 5 years is much smaller than anything else in the universe ...

    3. PyLETS

      Re: You do know that Moore’s law says nothing about speed?

      "Design changes can fix most of the weaknesses that allow Spectre and Meltdown, but it will take them a while to filter through to live systems."

      It's always been reasonable for processes running with the same userid to share information from an access control point of view - you can always have more userids or introduce the appropriate mandatory access controls. If you want to create better boundaries between processes to restrict information sharing, operating systems already have plenty of discretionary and mandatory access controls which are supposed to give software designers the ability to achieve this. It is appropriate to close off these side channel vulnerabilities where processes are already running in different security contexts. It probably isn't appropriate to hit performance where the software design already runs things within the same security context and available access controls which could be used aren't being used.

      Should I worry that a text editor I run can filch information from my word processor with the same user login or vice-versa ? Probably not and in this use case no performance hit needs to be imposed. Should I worry that some Javascript running in a supposed web-browser sandbox downloaded as part of a web page can filch information from my word processor ? Absolutely I should, and if fixing the sandbox means it has to run slower then that's a price which has to be paid.

      We expect hypervisors and sandboxed applications to be contained against side channel information leaks, so the performance hit of containment needs to be accepted as part of the processor and operating system access control design.

    4. Destroy All Monsters Silver badge

      Re: You do know that Moore’s law says nothing about speed?

      It says that the number of transistors that can be fitted on a silicon chip of a given size will double every 18 months.

      Yes, and it is a heuristic about *economics* not about physics.

      I also hear EUV lithography is coming online now, so it's going to continue a bit.

      1. Bill Stewart

        Re: You do know that Moore’s law says nothing about speed?

        Moore's Law was originally about the specific technical details and specific time units, but we keep it around because it tells us things we like to hear, and because the economic principle is still sound - there's enough market demand to keep manufacturing improvement and research going so that computers keep getting exponentially better performance, or at least price/performance.

        The VAX I used 30 years ago had 50x the RAM of the PDP I used a few years before it, and about every 2 years we could afford double the RAM, so by now the $50K that got us 4MB of RAM will get you a million times as much (~$50 for 4GB, and it's >100x faster). The 1GB of disk was four washing machines for maybe $150K, vs 1TB for $50 now, or 128GB of flash that's generally faster than the RAM on the VAX was.

        And the Cray-1 Supercomputer back then? Cell phones have been faster for a long time.

    5. Lotaresco

      Re: You do know that Moore’s law says nothing about speed?

      Indeed. I think Mark Pesce has his knickers in an unnecessary twist. Gordon Moore himself stated that Moore's Law would fail around 2025. There's no sign that it has failed yet, but as Moore observed, exponential functions do tend to collapse at some point. Getting confused about the difference between transistor count doubling every two years and speed doubling every two years is a schoolboy howler.

      There's also no reason at all why performance would go backwards.

      1. jmch Silver badge

        Re: You do know that Moore’s law says nothing about speed?

        "There's also no reason at all why performance would go backwards"

        Not generally, but from 2017 into the next 2-3 years, probably yes, since speculative execution will have to be removed or refined, with performance losses, to be secure. It's of course probable that new chips designed from the ground up to avoid Spectre/Meltdown could eventually be faster than 2017 chips, but keep in mind that 2017 chips are based on 15-20 years of architecture built around speculative execution. It's not a given that the first post-S/M chips will have superior performance to 2017 chips, although they should of course be more secure.

      2. Daniel von Asmuth

        Re: You do know that Moore’s law says nothing about speed?

        Yes, but speeds keep increasing. If a box contains 4 * 32 cores these days, supercomputers can grow from 10,000 to 100,000 boxes, and their electricity consumption just keeps growing.

      3. John Smith 19 Gold badge

        Gordon Moore..stated..Law would fail around 2025. There's no sign that it has failed yet,

        Are you f**king kidding me?

        How long has "EUV*" been coming "real soon now" for?

      Last time I looked, transistor widths were around 140 atoms, with oxide layers a tenth of that.

        *Because Extreme UV sounds so much easier than "Soft X-Ray Lithography" which is exactly what it is.

    6. Anonymous Coward
      Anonymous Coward

      Re: You do know that Moore’s law says nothing about speed?

      @Steve Todd; Came here to say this. The conflation of Moore's Law with raw speed/performance improvement no doubt happened because, for most of its life, the two went hand-in-hand.

      As you note, the speed improvements had *already* started slowing down to beneath what Moore's Law (didn't) "predict" around the time the industry started moving towards increased parallelism, and that's well over a decade ago now.

      (Still increases the number of transistors, but doesn't give you the "free lunch" that increased speed does - it requires a lot more work to explicitly take advantage of it, and not all tasks are amenable to parallelism.)

      But... even *that* isn't relevant to the issue the article raises. The slowdown that the article is up in arms about is still essentially a software overhead issue - albeit one prompted by a hardware-based problem - and not related to the raw speed or number of transistors.

      And I mentioned "raw" speed above because, as computers' speed and complexity has grown, so have the overheads - increasing layers of abstraction and other guff - guzzling up that speed. Computers are still faster in use, but nowhere near as fast as the raw speed increase.

      This is just another case of overhead guff cutting down the actual usage we can get from the CPU compared to the "raw" speed. No more than that.

    7. Arthur King

      Re: You do know that Moore’s law says nothing about speed?

      Then it would be Amdahl's Law we're after!

      1. John Smith 19 Gold badge

        Re: You do know that Moore’s law says nothing about speed? Then it would be Amdahl's Law we're after!

        Correct.

    8. Daniel von Asmuth
      WTF?

      Re: You do know that Moore’s law says nothing about speed?

      Of course Moore's law says nothing about speed in itself, even though increased parallelism nearly always means increased speed. This assumes that applications are rewritten to use all those cores and that nobody uses easy but inefficient programming languages anymore. (Down here companies still believe that Java is the holy grail.)

      The speed increase for a single core stopped around 2001 when the Pentium IV approached 4 GHz. The question is now if AMD and Intel can design more secure processors - nothing to do with Moore's law. If they drop speculative execution, then you can fit more cores on a die and reverse the loss.

      The real slowdown happens when you switch from a decent desktop to a mobile phone.

      Moore's law is related to the question of whether mankind keeps spending 20% more on processors with each passing year, which drives the investment into EUV and similar process technologies - or else you have to increase die size to get more transistors.

    9. John Smith 19 Gold badge
      Unhappy

      Re: You do know that Moore’s law says nothing about speed?

      "We now have 16 core, 32 thread desktop CPUs."

      Amdahl's Law suggests (kind of like a critical path analysis) that you can only speed up an algorithm until you get to the part that cannot be run in parallel. That is the minimum (or critical) path.

      And that's as far as your speed up goes.

      Amdahl's law dates from around the same time.

      Except it shows no signs of being broken any time soon.
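      For reference, a minimal sketch of the bound Amdahl's Law puts on that speed-up (the 90% parallel fraction is just an illustrative number):

      #include <stdio.h>

      /* Amdahl's Law: with a fraction p of the work parallelisable and n cores,
         the overall speed-up is 1 / ((1 - p) + p / n). */
      static double amdahl_speedup(double p, int n) {
          return 1.0 / ((1.0 - p) + p / (double)n);
      }

      int main(void) {
          const double p = 0.90;            /* assume 90% of the work can run in parallel */
          int cores[] = {1, 2, 4, 16, 32, 1024};

          for (unsigned i = 0; i < sizeof(cores) / sizeof(cores[0]); i++)
              printf("%5d cores -> %.2fx speed-up\n", cores[i], amdahl_speedup(p, cores[i]));

          /* Even with unlimited cores the speed-up is capped at 1/(1-p) = 10x here. */
          return 0;
      }

      Even the 32-thread desktop part mentioned above tops out at roughly 7.8x on a 90%-parallel workload, and no number of cores gets it past 10x.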

    10. Cuddles

      Re: You do know that Moore’s law says nothing about speed?

      "It says that the number of transistors that can be fitted on a silicon chip of a given size will double every 18 months."

      Indeed, I'm rather confused about the point of the article. Moore's law originally described a trend in transistor count on chips. It has since been extended to describe a more general trend in the computing power available on chips. At no point has it ever had anything to do with what people program the chips to do or how secure they might be while running said programs.

      Everyone has long acknowledged that Moore's law can't last forever; in fact no-one ever claimed it could. But while there's plenty of debate to be had about whether we've already passed it or are approaching the limits soon, Meltdown and Spectre don't enter into the discussion at all.

  4. Anonymous Coward
    Anonymous Coward

    Woe betide us! Moore's Law is dead! Leaving...

    ...Universal Turing Machines in every home, of unbelievable power and speed, all connected to each other, and to vast data stores as well.

    I think we'll be okay.

    1. Bob Ajob

      Re: Woe betide us! Moore's Law is dead! Leaving...

      Not just in homes - there are also billions of battery-powered mobile personal computing devices. These drive a desire for ever lower power usage as well as die shrinkage. Performance optimization seems a lower priority, and I think it should remain the job of software, not hardware - but just think about what was achieved decades ago with so little computing power. Phones are now running multiple concurrent threads at multi-gigahertz frequencies, with gigabytes of RAM and hundreds of times more solid state storage. Around fifty years ago NASA went to the moon with the help of a guidance computer that had around a thousand times fewer resources - imagine how tight the code was running on that!

      1. Paul Kinsler
        Stop

        Re: went to the moon ... imagine how tight the code was running on that!

        Maybe some of it doesn't need to be imagined...

        https://github.com/chrislgarry/Apollo-11

        1. Bob Ajob

          Re: went to the moon ... imagine how tight the code was running on that!

          Thanks for sharing - it provides some fascinating insights into the history. I found an interesting description of the code section named PINBALL GAME here -

          http://bit.ly/2Dw7svs

          and here is a link on a NASA site about a real pinball machine -

          https://er.jsc.nasa.gov/seh/pinspace.html

  5. Edwin

    end of x86 & x64?

    With advances in parallel processing, pipelining, branch prediction and whatnot, are we to see RISC on the desktop?

    1. Warm Braw

      Re: end of x86 & x64?

      are we to see RISC on the desktop?

      RISC simply refers to the instruction set, not the way the instruction set is implemented - there's increasingly less correlation between the two.

      The other thing is that compiled programs tend to be larger for RISC machines, as you need more "reduced" instructions than you would "complex" instructions. This didn't matter when processors ran at or close to the memory access speed, but now that they run much faster you're in greater danger of stalling the pipeline, because you have to drag so much more stuff out of memory to execute your code, so you typically need more in the way of cache and other optimisations.

      Also, an instruction decoder can often turn a complex instruction (such as an ADD instruction that takes two memory based source operands and a memory-based destination operand) into a series of simpler processor operations (such as two loads to temporary registers, a register addition and a register store operation), achieving a RISC effect with potentially more compact instructions.
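      As a loose illustration of that decomposition (plain C standing in for the micro-ops; nothing here is tied to a real instruction set):

      #include <stdio.h>

      /* A memory-to-memory add - the kind of thing a single "complex" instruction
         can express - written out as the simpler register-level steps a decoder
         would typically emit. */
      static void add_mem_to_mem(const int *src1, const int *src2, int *dst) {
          int t1 = *src1;      /* load first operand into a temporary register  */
          int t2 = *src2;      /* load second operand into a temporary register */
          int sum = t1 + t2;   /* register-to-register addition                 */
          *dst = sum;          /* store the result back to memory               */
      }

      int main(void) {
          int a = 2, b = 3, c = 0;
          add_mem_to_mem(&a, &b, &c);
          printf("%d\n", c);   /* prints 5 */
          return 0;
      }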

      What I hope we are to see on the desktop is something that takes us forward - not revisits where we have already been - and that will have more to do with the less-visible but vital improvements in the protection of memory (and cryptographic secrets in particular) and better segregation of trusted and untrusted code. And in that latter respect, I think we're going to have to get used to there being a little more than just "kernel" and "user" space to worry about.

      1. Charles 9

        Re: end of x86 & x64?

        "What I hope we are to see on the desktop is something that takes us forward - not revisits where we have already been - and that will have more to do with the less-visible but vital improvements in the protection of memory (and cryptographic secrets in particular) and better segregation of trusted and untrusted code. And in that latter respect, I think we're going to have to get used to there being a little more than just "kernel" and "user" space to worry about."

        If you want to see a real solution to this, you need to solve the problem of the performance penalty inherent to context switching. This is one key reason most OS's only use two contexts even when more are available (x86/x64 CPUs, for example, have four available, but because of the context switching penalty usually don't use more than two).
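        To get a feel for that penalty, here's a crude POSIX sketch (Linux/Unix assumed): two processes bounce a byte over a pair of pipes, so every round trip forces the scheduler to switch between them. The figure it prints includes pipe and syscall overhead as well as the switches themselves, so treat it only as a rough indication.

        #include <stdio.h>
        #include <sys/types.h>
        #include <time.h>
        #include <unistd.h>

        #define ROUNDS 100000

        int main(void) {
            int to_child[2], to_parent[2];
            char byte = 'x';

            if (pipe(to_child) != 0 || pipe(to_parent) != 0) {
                perror("pipe");
                return 1;
            }

            pid_t pid = fork();
            if (pid == 0) {                      /* child: echo every byte straight back */
                for (int i = 0; i < ROUNDS; i++) {
                    read(to_child[0], &byte, 1);
                    write(to_parent[1], &byte, 1);
                }
                _exit(0);
            }

            struct timespec start, end;
            clock_gettime(CLOCK_MONOTONIC, &start);
            for (int i = 0; i < ROUNDS; i++) {   /* parent: send a byte, wait for the echo */
                write(to_child[1], &byte, 1);
                read(to_parent[0], &byte, 1);
            }
            clock_gettime(CLOCK_MONOTONIC, &end);

            double ns = (end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec);
            printf("~%.0f ns per round trip\n", ns / ROUNDS);
            return 0;
        }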

        1. Warm Braw

          Re: end of x86 & x64?

          the performance penalty inherent to context switching

          One thing we've learned over the last week is that speeding things up at the cost of security is not a great idea. And it's a bit of a circular argument in any case - the faster mode switch, SYSCALL, only provides a kernel mode switch as the other modes weren't actually being used. We really need to look at what we need in terms of security, not constantly optimise and compromise for benchmarks.

          And we need to do it right. As I've been delving into this a bit deeper, I notice that AMD's encrypted memory does not seem to extend past the memory bus - information in the cache (and CPU) is in the clear. That means similar side-channel techniques could potentially be used to bypass memory encryption. It's going to take some time to get our collective heads around this.

        2. grumpy-old-person

          Re: end of x86 & x64?

          In the 1970s I worked on ICL System4 mainframes, which were IBM360 instruction set compatible - the big difference between the two was that the System4 had 4 sets of registers (one for each processor "state"), which avoided the save/restore overhead when switching context. Quite clever.

          I think the real problem is that while many architectures and implementations were explored around 50 years ago most were too slow / expensive given the hardware technologies of the time - they seem to have been forgotten.

          Perhaps the old stuff should be dusted off and considered again.

    2. Anonymous Coward
      Anonymous Coward

      Re: end of x86 & x64?

      Only in the 2090s, oh wait maybe the 1990s?

      http://www.wrocc.org.uk/riscos/history.shtml

  6. Dodgy Geezer Silver badge

    Two words...

    Quantum computing...

    1. Anonymous Coward
      Anonymous Coward

      Re: Two words...

      That future is uncertain.

      1. Chands

        Re: Two words...

        It's sometimes uncertain :)

        1. Anonymous Coward
          Anonymous Coward

          Re: Two words...

          Depends if you look at it.

          1. Paul Herber Silver badge

            Re: Two words...

            Cats don't like being stared at.

            1. Zippy's Sausage Factory
              Happy

              Re: Two words...

              Cats don't like being stared at.

              One of our cats has a thousand-yard stare and will try to outstare you given half a chance. The only thing that can distract her from that is food. (The two are probably related.)

              Another of our cats, if you stare at her, she'll jump on you, walk up your chest and come and lick your nose. Or bite it, if she's in that sort of mood. (The presence or absence of purring isn't actually a clue, either, it's just if she feels like biting you...)

      2. Anonymous Coward
        Anonymous Coward

        Re: Two words...

        "That future is uncertain."

        Well, that's quantum physics in a nutshell!

        Pass me that cat, Schrödinger!

      3. AndrueC Silver badge
        Joke

        Re: Two words...

        Last time I looked at it it was fine. Do you want me to take another look?

        1. Another User

          Re: Two words...

          It is fine as long as you and the cat are together in the steel box. But you'd better not tell anyone the result.

    2. Sgt_Oddball
      Headmaster

      Re: Two words...

      Now you're taking the branch prediction problem to a whole new level by just predicting all of the branches, all of the time, all at once.

      It also doesn't mean that they're immune to being forced to run arbitrary code. More that it will run it but not execute it, because it's not the correct answer. Though I'm probably wrong about that.

      On another note, there's nothing to say the chips can't be hacked. Just that they haven't been hacked...yet.

    3. GreenBit

      Re: Two words...

      I'm quite ambivalent about that.

    4. GreenBit

      Re: Two words...

      Well isn't that just a super position to take!

  7. Flocke Kroes Silver badge

    Moore's law lives ...

    ... for flash ... for a little longer. Although the size of transistors has gone up (and so has bits/cell), the number of layers has gone up faster.

    CPUs cannot use the same trick (yet). 99.9% of a flash chip is idle, with only a few sectors active, so it does not use much power. Large chunks of a CPU transition every cycle. Getting the heat out of one layer of CPU transistors is bad enough. Trying that with 100 layers will cause a loud bang and an instantly vaporised CPU.

    IBM have been trying to drill thousands of holes in a CPU so they can pump a cooling liquid through them. Might be cool for a data centre, but it will burn your phone battery in minutes.

    1. Anonymous Coward
      Anonymous Coward

      Re: Moore's law lives ...

      Sounds like they need another dimension.

    2. Charles 9

      Re: Moore's law lives ...

      Even 3D Flash has its limits. You can only make the holes so big, and last I heard things get dicey after about 128 layers, and the talk is switching to "stacks of stacks", but that raises performance issues.

      1. Anonymous Coward
        Anonymous Coward

        Re: Moore's law lives ...

          If David Braben and Ian Bell can write a game like "Elite" on a BBC microcomputer in 32k with a 2MHz processor, perhaps we can all learn something from that ....

        1. 8Ace

          Re: Moore's law lives ...

          "If David Braben and Ian Bell can write a game like "Elite" on a BBC microcomputer in 32k "

          Even better than that, most of that 32K was video memory.

        2. Charles 9

          Re: Moore's law lives ...

          Yes, that nobody's perfect and that everything has its price. In this case, there were still some bugs in the code like dead-ends and a lack of consideration of non-standard hardware.

  8. teknopaul

    hmmm

    Starting to think El Reg is drinking too much of its own Kool-Aid?

    Google say that with retpoline they rolled out fixes to their cloud before El Reg broke the news, and no one noticed the changes.

    Everyone else took a massive hit. But they had to react now.

    Google had months and fixed it. Implies others will do similar given time.

    Project zero is paying off for its investors.

    1. Adrian 4

      Re: hmmm

      Security has nothing to do with processor speed or density. It's a social problem. Finding technical fixes for security problems may be useful, but don't blame the resulting inefficiency on the process improvements.

      Computers don't care about security. Applications do.

  9. Terry 6 Silver badge

    Just wondering

    What proportion of machines are likely to need more raw power than we have now?

    How much software is there in use in ways that needs all the power that is available now, or in the reasonably near future?

    How much development of new stuff is based on "because we can" rather than "because we need to"?

    I am completely ignorant in these matters. So it really is just wondering, backed by cynicism.

    1. Anonymous Coward
      Anonymous Coward

      Re: Just wondering

      I agree, current tech can handle all the cat videos we can throw at it.

    2. DeeCee

      Re: Just wondering

      To run Crysis @ 16K

      1. Anonymous Coward
        Anonymous Coward

        Re: Just wondering

        Or Euro Truck Simulator at above 30 fps in cities...

    3. itzman

      Re: Just wondering

      There is no doubt that computer applications would benefit massively from doing less, better.

      Sadly the unsophisticated users seem to want more features and care nothing about speed, stability or security.

      And Poettering is an object lesson in the concept that so to do the new generation of coders.

      1. m0rt

        Re: Just wondering

        "There is no doubt that computer applications would benefit massively from doing less, better."

        Just doing better.

        "Sadly the unsophisticated users seem to want more features and care nothing about speed, stability or security."

        Ok - in that case why can't the supposed designers and coders who *should* care about security provide speed, stability and security? They should. A lock that can be opened with any key - who is culpable? The person who requested it? The person who designed it? The person who built it? The person who bought it?

        I would argue that they all are, to degrees, but the person who knows better is even more so. So to keep blaming users is a little laughable. People are people. In any cross section of the IT industry, from users to coders to system designers and chip engineers, you will get people both at the top of their game and people who shouldn't be in the job. To treat the world as Us and Them just means we fail to own our own issues and sort them out, and go around in a wonderful cloud of arrogance.

        "And Poettering is an object lesson in the concept that so to do the new generation of coders."

        What? Not sure what you were trying to say here. I think Poettering is a force of evil, but I can't quite work out if you were being derisory or salutatory.

        1. Doctor Syntax Silver badge

          Re: Just wondering

          "Ok - in that case why can't supposed designers and coders who *should* care about security not provide speed, stability and security?"

          Because they're being paid to produce more features sooner. An iron triangle's at work here - features, speed, quality, pick any two.

        2. Baldrickk

          Re: Just wondering

          The TL;DR of the answer to the question you asked is that it's the users who drive the market - you can argue whether they should or not, but it's what they do.

          Some cheap phone shipped from a cheap warehouse in China with terrible device security in place? Urgh. "But it's 10% cheaper than the equivalent Apple/Samsung mobe" - cue a long line of shoppers wanting a good deal.

          Users don't "see" security - even if they are aware of it, it's something that they expect to be there, particularly when it isn't. They only really encounter it when it gets in their way.

          1. Anonymous Coward
            Anonymous Coward

            Re: Just wondering

            "Users don't "see" security "

            Things like that are part of the reason why FCC certification (and regional variants on the theme) exist. Not everybody understands RFI and EMC and so on, so there are regulations. Same goes for various other aspects of product design and manufacture, not just for electric stuff.

            If a product is defective by design or defective as sold, product suppliers can (at least in theory) be held accountable.

            For some reason this doesn't seem to apply where a computer (or a bank) is involved.

            Make the existing product liability law (in many parts of the world) work right and lots of things, not just "smartphone security", might improve as a result.

            If suppliers are prepared to ignore the letter and the spirit of the liability laws, what would happen if a few of their victims decided to temporarily follow the moral example the suppliers have already set, at least until the message gets through?

    4. Doctor Syntax Silver badge

      Re: Just wondering

      "What proportion of machines are likely to need more raw power than we have now?"

      It depends on what sort of machine you're talking about and what tasks it's running.

      Servers? Have you seen pictures of typical server halls? If you impose a 30% penalty on those so they need to add 30% capacity right now how are they going to cope?

      PCs? Sitting there writing comments on el Reg - very little of the available power being used. Watching video - quite a lot. Gaming - those guys never seem to have enough.

      1. Anonymous Coward
        Anonymous Coward

        Re: Just wondering

        "If you impose a 30% penalty on those so they need to add 30% capacity right now how are they going to cope?"

        Perhaps by serving 30% fewer privacy-invading adverts and trackers, or charging 30% more for the adverts and trackers that are served (which will achieve the same result)?

        What's not to like?

    5. JohnMartin

      Moores Law is dead .. long live Moores law

      Someone asked if anyone actually needs any more CPU, which is a fair question because I had the same question when I had a desktop with a 386 processor running at 25MHz with 2MB of RAM and a 40MB hard disk .. it was used mostly for word processing, terminal emulation and code editing, functions which it did perfectly well.

      People didn't make new CPUs because software was badly written; they did it because it drove hardware refresh sales. Inefficient software just kind of happened because the resources were cheap.

      In any case, the bottlenecks are rarely CPU; more often it's memory and storage. And some people have just devised ways of creating atomristors - memristors operating at 50GHz that are one atom thick and have a storage density of 1 terabit per square centimetre - which look like they'll be able to be stacked in 3D configurations like NAND .. which is kind of mind-blowing for a whole variety of reasons.

      Moore's Law probably started dying (or actually died) in 2006 when Dennard scaling stopped going as predicted https://en.wikipedia.org/wiki/Dennard_scaling , so the big impact from a CPU perspective is that it shows we are now at the beginning of the "Post CMOS Age", and Moore's Law was intimately tied to the falling costs of manufacturing CMOS-based transistors. If we go beyond CMOS, then we get a post-CMOS Moore's Law .. though if you make CPUs out of stuff that doesn't need power to maintain state, then there might be a big blurring effect between CPU and memory, resulting in radical shifts in compute towards something where "processing" is an emergent function of memory, leading to direct support for neural nets and deep learning.

      Interesting times ahead (still)

      1. Anonymous Coward
        Anonymous Coward

        Re: Moores Law is dead .. long live Moores law

        3D transistors for CPU cores? Isn't that what Intel were supposed to be doing with their 2nd generation FinFET?

        E.g. https://newsroom.intel.com/news-releases/intel-reinvents-transistors-using-new-3-d-structure/ (2011) and (some time later)

        https://newsroom.intel.com/newsroom/wp-content/uploads/sites/11/2017/03/22-nm-finfet-fact-sheet.pdf

        What did happen to that technology? Oh hang on, isn't it the underlying technology behind this little long-hidden hiccup which finally came to public view in 2017:

        https://www.theregister.co.uk/2017/02/03/cisco_clock_component_may_fail/

        (and as the weeks went by, it turned out that the core issue was with Intel's C2000 SoC family, used in various SoC-class applications where corporates had made the mistake of believing their Intel reps' promises rather than their own engineers' experience)

  10. Pete 2 Silver badge

    Bzzzzt!

    > Moore’s Law has hit the wall, bounced off - and reversed direction

    No. Moore's Law is only about gate density on integrated circuit chips. Extending it to imply anything about computing power is a misuse of the term.

    Though it must be said that, given a silicon atom is about 0.2nm across and we are now looking at 5nm architectures, the prospect of features just 25 atoms wide - and available in your local Tesco - is worthy of some contemplation. Even if that signifies that Moore's Law (the actual Law) is banging up against physical limits.

    As far as performance factors go, that is merely a limit on (current) human ingenuity. We will find ways to re-design chips, to squeeze more computation out of each square millimetre of silicon (or maybe each cubic millimetre). We will adopt more efficient architectures - maybe even secure ones - that will do more stuff, faster. And who knows, in the end we might even learn how to write efficient code.

  11. Lee D Silver badge

    Personally, I look at clock speeds now (as in real-world clock speeds, not the theoretical maximum if you plunged it in liquid nitrogen) and most desktop Intel chips look pretty sad. There are mainstream computers out there that dial back down to 1GHz or so.

    At one point 3GHz was the norm and 4GHz was possible, but we don't see improvements on those kinds of speeds any more. The top-of-the-line Intel chips are 4GHz. Hey, sure, lots of cores, but still 4GHz. We hit peak "speed" years ago. Then we took advantage of more "bandwidth" if you like (same speed but could do more at the same time). Now we're stuck because nothing really takes advantage of a 32-core processor, you can't make one work at 3-4GHz constantly without stupendous cooling, and we have nowhere to go. Compiler optimisations and branch prediction don't even figure; most of the Intel fixes have only a 5-10% impact, and it's only the worst-case loads that suffer more.

    The money for anything extra goes on GPU now if you want to actually do anything useful - whether that's gaming, mining or actual serious calculations. 1000s of tiny cores running at GHz.

    But we plateaued years ago, and nobody is really able to do much about it. Maybe it's time we started writing software that doesn't require some hundreds of megabytes of code to draw a couple of windows on the screen, especially now that we can just throw OpenGL data at the screen directly.

  12. wolfetone Silver badge

    Lovely obituary, but you haven't told us where the wake is being held and when?

  13. HmmmYes

    'The computer science behind microprocessor'

    Urghh.

    Computer science has at least one foot in discrete maths/logic.

    Microprocessors, and Spectre, are down to computer architecture, a branch of electronics.

  14. Conundrum1885

    Re. SPECTRE

    No wonder Bitcoin went into full MELTDOWN.. investors probably realized that the current generation of miners had indeed hit the wall and they would not get any smaller without yields dropping to unacceptable levels. Last time I checked Bitmain had run into significant problems making the chips any smaller than 9nm and even Nvidia has decided to try and limit supply of their cards to miners.

    https://forum.getpimp.org/topic/1196/nvidia-tries-to-limit-gpu-sales-to-cryptocurrency-miners

    I would be concentrating on using current GPUs for deep learning; in fact there are commercial products available targeted at inference (cough Movidius /cough) and other custom chips for certain types of data, but the more Tflops the better. A relevant piece of information is that the units in hospitals are often ancient 2GB cards, due to the licensing and certification involved, despite being very primitive by modern standards.

  15. Christian Berger

    Extrapolating short term trends...

    ... usually doesn't work. It's simply too short a time to make such predictions.

    The more relevant trend is probably that people now are much more content using 10-year-old computers, so the average speed of computers doesn't rise as quickly as it used to.

    Perhaps we will get another great increase in speed, not from hardware but from software. In the past reductions of complexity have brought great advances in computing. Typical examples were UNIX (much simpler than Multics) or the Internet (much simpler than the telephone network or X.25 networks).

  16. RobertLongshaft

    If I had a £ for every time some journo, academic or public figure predicted the death of Moore's Law I'd have about £150.

    1. Stoneshop
      Pint

      I'd have about £150

      And doubling every 18 months

  17. Tom 7

    Poor Moore's Law

    Every two years the number of articles about the death of Moore's Law doubles.

    1. Anonymous Coward
      Anonymous Coward

      Re: Poor Moore's Law

      You owe me a keyboard

  18. Torben Mogensen

    Speculative execution

    The root of Spectre and Meltdown is speculative execution -- the processor trying to guess which instructions you are going to execute in the future. While this can increase performance if you can guess sufficiently precisely, it will also (when you guess incorrectly) mean that you will have to discard or undo work that should not really have been done in the first place. On top of that, accurate guesses aren't cheap. Some processors use more silicon for branch prediction than they do for actual computation.

    This means that speculative execution is not only a security hazard (as evidenced by Meltdown and Spectre), but it also costs power. Power usage is increasingly becoming a barrier, not only for mobile computing powered by small batteries, but also for data centres, where a large part of the power is drawn by CPUs and cooling for these. Even if Moore's law continues to hold for a decade more, this won't help: Dennard scaling died a decade ago. Dennard scaling is the observation that, given the same voltage and frequency, power use in a CPU is pretty much proportional to the area of the active transistors, so halving the size of transistors would also halve the power use for similar performance.
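
    As a back-of-the-envelope reminder (a sketch, not a derivation), the dynamic power relation underlying Dennard scaling is roughly:

    P_{\mathrm{dyn}} \approx \alpha \, C \, V^2 f, \qquad \text{ideal scaling by } \kappa:\; C \to C/\kappa,\; V \to V/\kappa,\; f \to \kappa f \;\Rightarrow\; P_{\mathrm{dyn}} \to P_{\mathrm{dyn}}/\kappa^2

    Since transistor area also shrinks by \kappa^2, power density stays constant - until voltage stops scaling (because of leakage), at which point shrinking transistors no longer buys you free power.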

    This means that, to reduce power, you need to do something other than reduce transistor area. One possibility is to reduce voltage, but that will effectively also reduce speed. You can reduce both speed and voltage and gain the same overall performance for less power by using many low-frequency cores rather than a few very fast cores. Making cores simpler while keeping the same clock frequency is another option. Getting rid of speculative execution is an obvious possibility, and while this will slow processors down somewhat, the decrease in power use (and transistor count) is greater. As with reducing clock speed, you need more cores to get the same performance, but the power use for a given performance will fall. You can also use more fancy techniques such as charge-recovery logic, Bennett clocking, reversible gates, and so on, but for CMOS this will only gain you a little, as leakage is becoming more and more significant. In the future, superconductive materials or nano-magnets may be able to bring power down to where reversible gates make a significant difference, but that will take a while yet.
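
    To put illustrative numbers on the many-slow-cores point (assuming, only roughly, that achievable frequency scales linearly with voltage):

    P \propto C V^2 f \;\text{ and }\; f \propto V \;\Rightarrow\; P \propto f^3, \quad \text{so halving } f \text{ cuts per-core power to about } \tfrac{1}{8}

    Two such half-speed cores then deliver roughly the same aggregate throughput for about a quarter of the power - provided, of course, that the workload actually parallelises.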

    In the short term, the conclusion is that we need simpler cores running at lower frequencies, but many more of them, to get higher performance at lower power use. This requires moving away from the traditional programming model of sequential execution and a large uniform memory. Shared-memory parallelism doesn't scale very well, so we need to program with small local memories and explicit communication between processors to get performance. Using more specialised processors can also help somewhat.

    1. MacroRodent

      Re: Speculative execution

      so we need to program with small local memories and explicit communication between processors to get performance. Using more specialised processors can also help somewhat.

      Sounds like the Transputer from the 80's. https://en.wikipedia.org/wiki/Transputer

      1. bazza Silver badge

        Re: Speculative execution

        We're headed back towards the Transputer in more ways than you'd imagine.

        Firstly, today's SMP execution environment provided by Intel and AMD is implemented on an architecture that is becoming more and more NUMA (especially AMD; Intel have QPI between chips, not between cores). The SMP part is faked on top of an underlying serial interconnect (HyperTransport for AMD, QPI for Intel).

        So, the underlying architecture is becoming more and more like a network of Transputers, with the faked SMP layer existing only to be compatible with the vast amount of code we have (OSes and applications) that expects it.

        And then languages like Rust and Go are implementing Communicating Sequential Processes as a native part of the language, just like Occam on Transputers. Running CSP-style software in an SMP environment which is itself implemented on top of NUMA (which is where CSP shines) simply introduces a lot of unnecessary layers between application code and microelectronics.
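
        For anyone who hasn't met it, here's a minimal sketch of that CSP style in Go (the names and numbers are made up purely for illustration):

        package main

        import "fmt"

        // square is a sequential process: it reads work from 'in', sends results
        // to 'out', and signals completion by closing 'out'. No shared memory,
        // just message passing over channels.
        func square(in <-chan int, out chan<- int) {
            for n := range in {
                out <- n * n
            }
            close(out)
        }

        func main() {
            in := make(chan int)
            out := make(chan int)

            go square(in, out) // run the process concurrently

            go func() { // a producer process feeding the pipeline
                for i := 1; i <= 5; i++ {
                    in <- i
                }
                close(in)
            }()

            for result := range out { // the consumer just drains the channel
                fmt.Println(result) // prints 1 4 9 16 25
            }
        }

        Each goroutine is a sequential process and all the sharing happens over channels - which is more or less Occam's model with curly brackets.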

        Sigh. Stick around in this business long enough and you can say you've seen it all come and go once before. Possibly more.

        Having said all that, I'm not so sure that a pure NUMA architecture would actually solve the problem. The problem is speculative execution (Spectre) and Intel's failure to enforce memory access controls in speculatively executed branches (Meltdown), not whether or not the microelectronic architecture of the machine is SMP, nearly SMP, or pure NUMA. A NUMA architecture would limit the reach of an attack based on Spectre, but it would not eliminate it altogether.

  19. Anonymous Coward
    Anonymous Coward

    Processor speed != IT

    The author assumes that processor speed is the only speed that matters in IT.

    It is not, and that renders his predictions of slow Doom invalid.

  20. Danny 2

    #50

    "From the age of fifty, Moore’s Law began to age rapidly, passing into senescence and then, at the beginning of this month, into oblivion."

    It's not that everything starts to go one thing after the other; everything goes suddenly, and all at once.

    #meltdown #spectre

  21. steelpillow Silver badge

    self-fulfilling prophecy

    As the reputation of Moore's law climbed, people began to measure past growth more accurately and to base future investment plans on that past growth rate, spending (or saving) whatever it took to maintain the predicted straight line. It became a self-fulfilling prophecy.

    But I wonder now, if you strip all that pre-process pipeline hardware out of a CPU core, how many extra cores could you fit on a single chip? If you could up a given die size from say 8 cores to 10, then there need be no overall hit. Quick, the moon is rising. Open the coffin lid!

  22. Aladdin Sane

    Wirth's law

    "What Intel giveth, Microsoft taketh away."

    1. Lotaresco

      Re: Wirth's law

      Werther's Law: Those caramels don't taste as nice as they look.

  23. Fading
    Mushroom

    Moore's Law Dead?

    Much like the desktop, HDD, vinyl, magnetic tape, internal combustion engines and side burns. With all these dead things still very much a part of our lives (excluding the side burns) is this what the zombie apocalypse looks like?

  24. HmmmYes

    To be honest, Moores law died about 2005ish.

    Youve not really seen much in the way of clock speeds beyond 2-3G.

    What you have seen is massive lying about clock speed - sure, the cache/instruction clock might run at x, but the rest runs a lot sloooower.

    And the time taken to flush and load caches has increased.

    1. Torben Mogensen

      HmmmYes writes: "To be honest, Moores law died about 2005ish.

      Youve not really seen much in the way of clock speeds beyond 2-3G."

      What you observe here is not the end of Moore's Law, but the end of Dennard scaling.

  25. Stese

    With any luck, the push for greater performance on similar hardware will finally get developers really thinking about optimisation of code.

    The requirements for common programs have massively expanded as lazy developers have pushed more and more crap into programs, without thinking of the performance impact.

    A few years ago, I used to be able to stream videos happily without any stutter on my old 1.4GHz P4 with 2GB RAM. Try it today, the same stream, on the same hardware, and it stutters and is generally unwatchable. My internet bandwidth has increased, and the device is always used on Ethernet, @ 100Mbit. The only thing that has really changed is the software running on the machine....

    1. Anonymous Coward
      Anonymous Coward

      Couldn't agree more - 90% of the developers I have worked with haven't really got a clue about performance and just hack away till they get something working. It is perfectly possible to write fast, light, elegant code - but it takes much more time and effort.

      1. Charles 9

        So what if you have (as happens so often) a deadline?

        Writing fast, light, elegant code is one thing, but doing it with a time limit? That would require beating the iron triangle and getting ALL THREE legs at once.

  26. Buttons
    Meh

    Ooops!

    For some reason I confused Moore's law with Murphy's law. My bad.

    1. Anonymous Coward
      Anonymous Coward

      Re: Ooops!

      As long as you don't confuse it with Cole's Law.

  27. stephanh

    it's not that bad

    Fixing Spectre is conceptually pretty simple: once you detect that you mis-predicted a branch, you need to restore *all* your CPU state, including cache content and branch predictor state. Not fundamentally different from restoring your register state.

    Now admittedly, making this change in a highly complex existing CPU design is a different matter. But if existing CPU vendors cannot fix it, they will be replaced by new companies which can.

    1. Charles 9

      Re: it's not that bad

      Problem being, how do you restore the cache state when you're already using the cache as it is? Sounds like an "open the box with the crowbar inside" problem. Increasing the cache simply raises the question of not using it in general rather than in a segmented way. IOW, no matter what you try, it's bound to get ugly.

      1. stephanh

        Re: it's not that bad

        @Charles 9

        "Problem being, how do you restore the cache state when you're already using the cache as it is?"

        I imagine a scheme where you keep a bunch of "scratch" cache lines outside the normal cache. Once the instruction which triggered the main memory fetch retires, you write the "scratch" cache line to the real cache. Again, fundamentally no different from how you treat a register. Just think of your cache as a register file, each cache line being an individual register.

        You would only need a very limited number of such "scratch" cache lines. Just enough to support the amount of speculative execution you do. If you happen to run out of scratch cache lines, you just stall the pipeline. Number of scratch cache lines should be dimensioned so that this is rare.

        It might actually end up increasing performance since mispredicted instructions aren't going to pollute the cache.
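
        To make the idea concrete, here's a toy software model of it (purely illustrative - real cache hardware obviously doesn't look like a Go map, and the names are invented):

        package main

        import "fmt"

        // toyCache: speculative fills go into a small scratch area and only reach
        // the architecturally visible cache when the triggering instruction retires.
        type toyCache struct {
            committed map[uint64]string // visible cache lines
            scratch   map[uint64]string // lines fetched under speculation
        }

        func newToyCache() *toyCache {
            return &toyCache{
                committed: make(map[uint64]string),
                scratch:   make(map[uint64]string),
            }
        }

        // speculativeFill stages a line without touching the visible cache.
        func (c *toyCache) speculativeFill(addr uint64, data string) {
            c.scratch[addr] = data
        }

        // retire commits staged lines once the speculation turns out to be correct.
        func (c *toyCache) retire() {
            for addr, data := range c.scratch {
                c.committed[addr] = data
            }
            c.scratch = make(map[uint64]string)
        }

        // squash discards staged lines after a mispredicted branch, leaving no
        // timing footprint in the visible cache.
        func (c *toyCache) squash() {
            c.scratch = make(map[uint64]string)
        }

        func main() {
            c := newToyCache()
            c.speculativeFill(0x1000, "secret-dependent line")
            c.squash() // the branch was mispredicted
            _, visible := c.committed[0x1000]
            fmt.Println("line visible after squash?", visible) // false
        }

        The cost, as you say, is a handful of extra line buffers and a pipeline stall if they run out.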

  28. Prst. V.Jeltz Silver badge

    Thinking that Moore's Law would continue indefinitely is as stupid and short-sighted as thinking that our economy will grow year on year.

    It's a shame our whole society is based on that premise.

  29. MrChristoph

    Absolute tosh!

    This is sheer prophecy - i.e. total BS

    It's a bit like when scientists claim that we currently know everything worth knowing except maybe some constants to an even higher number of decimal places. It's happened many times through history and these prophecies have always been wrong.

    Who knows what innovations will come? Nobody does.

    1. ravenviz Silver badge

      Re: Absolute tosh!

      I agree with the sentiment but Moore's Law has a precise definition.

    2. King Jack
      WTF?

      Re: Absolute tosh!

      @ MrChristoph

      No scientists have ever made that claim. Science is the quest for knowledge. The more you know the more mysteries you reveal. It is never ending, you never 'know' something with 100% certainty. You need to speak to a real scientist and look up the scientific method. Please name these 'scientists' and give examples of the many times this has happened.

      1. stephanh

        Re: Absolute tosh!

        @ King Jack

        "Please name these 'scientists' and give examples of the many times this has happened."

        "While it is never safe to affirm that the future of Physical Science has no marvels in store even more astonishing than those of the past, it seems probable that most of the grand underlying principles have been firmly established and that further advances are to be sought chiefly in the rigorous application of these principles to all the phenomena which come under our notice. It is here that the science of measurement shows its importance — where quantitative work is more to be desired than qualitative work. An eminent physicist remarked that the future truths of physical science are to be looked for in the sixth place of decimals. "

        -- Albert A. Michelson, Nobel Prize in Physics laureate

        See: https://en.wikiquote.org/wiki/Albert_A._Michelson

        So at least 1.

    3. bazza Silver badge

      Re: Absolute tosh!

      This is sheer prophecy - i.e. total BS

      It's a bit like when scientists claim that we currently know everything worth knowing except maybe some constants to an even higher number of decimal places. It's happened many times through history and these prophecies have always been wrong.

      Who knows what innovations will come? Nobody does.

      So long as DRAM is slower than CPU cores, we'll need caches and speculative execution to keep things as fast as they currently are. Given that DRAM latency is effectively governed by the speed of a signal along the PCB trace between the CPU and the SIMM, I'd say we're pretty much stuffed.
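
      Rough numbers, purely as an illustration (the trace length and clock are assumptions, not measurements): a signal on FR-4 travels at very roughly half the speed of light, so

      v \approx 0.5c \approx 15\ \text{cm/ns}, \qquad t_{\text{round trip}} \approx \frac{2 \times 10\ \text{cm}}{15\ \text{cm/ns}} \approx 1.3\ \text{ns} \approx 5\ \text{cycles at } 4\ \text{GHz}

      and that's before the tens of nanoseconds the DRAM array itself needs to do a row activate and read.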

      Stop Executing Arbitrary Code

      One aspect overlooked in a lot of the discussion is that this is only, and really only, a problem if you are executing code on your machine that you don't trust. If you trust all the software that's running, then you have no need to patch or redesign to avoid Meltdown and Spectre.

      The real problem behind this is that these days pretty much everything we have in modern software involves running code we don't trust. This might be Javascript in a browser tab, or hosting VMs on a public cloud. It would be utterly crazy if we reversed a whole 22 years of CPU design progress simply because our modern approach to running software is, well, ludicrously risky.

      I say a better approach would be to retreat from arbitrary code execution, and start thinking about how we might have remote presentation protocols instead. There's no particular need to run the code client side, just so long as the code output is visible client side. So far so very X-server. However, we should recognise that it's impossible to exploit a properly implemented execution-less protocol; perhaps we should consider it as a way forward.

      1. Charles 9

        Re: Absolute tosh!

        "Given that DRAM latency is effectively governed by the speed of a signal along the PCB trace between the CPU and the SIMM, I'd say we're pretty much stuffed."

        IOW, caching is basically a case of "Ye cannae fight physics," hitting a hard limit with the Speed of Electricity.

        "I say a better approach would be to retreat from arbitrary code execution, and start thinking about how we might have remote presentation protocols instead. There's no particular need to run the code client side, just so long as the code output is visible client side."

        Unless, of course, latency comes into play. Why do you think network computing has such limited use outside the controlled environment of LANs? Because the Internet is itself an untrusted, unreliable environment. You're simply trading one set of disadvantages for another. And for many, the reason the code MUST run client-side is because you need the speed you cannot get other than from a locally-run machine. Ask any gamer.

        1. bazza Silver badge

          Re: Absolute tosh!

          @Charles,

          IOW, caching is basically a case of "Ye cannae fight physics," hitting a hard limit with the Speed of Electricity.

          I think we can do a little better than the DRAM that's currently used. HP's memristor is (apparently) faster than DRAM, as well as being non-volatile, with no wear-life problems and huge capacities. So a SIMM based on that would be quicker. But still not quick enough to eliminate the need for a cache.

          As things are today it's kinda nuts; the signalling rate down those PCB traces is so fast that they're RF transmission lines, and there's more than 1 bit on the trace at any one time! It was the Cell processor in the PS3 that first used that style of RAM connection. Sigh - I miss the Cell; 100GByte/sec main memory interface. It was one helluva chip.

          Unless, of course, latency comes into play. Why do you think network computing has such limited use outside the controlled environment of LANs? Because the Internet is itself an untrusted, unreliable environment. You're simply trading one set of disadvantages for another.

          Not really. We already have an elaborate certification system to establish that the website I'm getting data from is in fact the website it says it is. All I'm talking about is changing the data that's received. At present it's a blend of html, javascript, css, etc. That's not a problem if it comes from a website we trust, but the javascript is potentially disastrous if it comes from a malicious website. However, if what my "browser" received were simply a remote display protocol then I don't care what the website is showing me; it cannot (assuming the protocol implementation is good) run arbitrary code on my machine. There would be no such thing as a malicious site, because there would be no mechanism by which any site could launch arbitrary code on a client's machine.

          I suppose I have to trust the site to run the code they've said they will. But I do that anyway today; for example I trust Google to send me the correct Javascript for what is to be done.

          As for reliability - services like Google Docs are all about the Internet and Google's computers being reliable (or at least they're supposed to be).

          And for many, the reason the code MUST run client-side is because you need the speed you cannot get other than from a locally-run machine. Ask any gamer.

          That's true enough; a game that runs in a browser is better off running client side instead of server side. I suppose I'd counter that line of argument by asking what's wrong with a proper piece of installable software instead (I know, I know; web, write once run anywhere, etc etc).

          But for the majority of what most of us do with the web I dare say that we'd not notice the difference. Furthermore the monstrous size of the pages some websites dish up these days is ridiculous (www.telegraph.co.uk is appallingly bloaty). We really would be better off getting a remote display data stream instead; it'd be less data to download.

          As far as I can tell there is no real disadvantage for the client in having server side execution viewed with some sort of remote display protocol (unless it's a game), and only positive benefits. The server's worse off though; instead of just dishing out a megabyte or so of html/javascript/css/images, it'd have to actually run the darn stuff itself. That would take considerably more electrical power than the likes of Google, Amazon, consume today. The economic model of a lot of today's "free" services would be ruined.

          I think that it's unfortunate that the companies that would lose a lot by such a massive change (Google, Facebook, etc) are also those with a lot of influence over the web technologies themselves (especially Chrome from Google). Instead of getting web technologies that are better for clients, they're in a position to ensure that we keep using technologies that are better for themselves. That's not so good in my view.

          Interestingly I've been taking a close look at PCoIP a lot recently. One of the directions Teradici seem to be headed is that you use that protocol to view a desktop hosted on AWS. That's not so far away from the model I've outlined above...

        2. Charles 9

          Re: Absolute tosh!

          "However, we should recognise that it's impossible to exploit an properly implemented execution-less protocol; perhaps we should consider it as a way forward."

          But then how does the client interpret the stuff you send down the wire? Through a client, which no one can guarantee can't be exploited in some way. Remember, some clients (including browsers) have been directly pwned through strange code: not via things like JavaScript.

    4. terrythetech
      Facepalm

      Re: Absolute tosh!

      I used to work with scientists and the only one I knew of who thought "There are only a few loose ends to tie up and we're done" is now a lawyer. That is, the scientists who believe that science is done and dusted give up science for something more lucrative.

      Thing is there seem to be more scientists than ever - I wonder what they are all doing.

  30. Anonymous Coward
    Anonymous Coward

    There is ofc a way to keep the existing technology and have security

    Basically, run the secure functions on a separate security CPU and make the user CPU a client that can only access results once security has been validated.

    The security CPU acts as a host to the user CPU and has its own separate working memory to store all the different predicted branches, but can write or page returned results into user memory space after validation; the state and working data of the secure CPU are not directly accessible to the user CPU, and the mispredicted results are flushed.

    If the code run by secure CPU is openly published and you make hardware vendors responsible for their security failures by making them put up a bond that pays out in the event of a fail then the latest issues could at least be isolated.

    1. Charles 9

      Re: There is ofc a way to keep the existing technology and have security

      But can't even black boxes get subjected to something like a Confused Deputy ("Barney Fife") attack? As long as humans have to interact with it in some way, there's bound to be a way to make it go wrong, a la Murphy's Law.

      1. Anonymous Coward
        Anonymous Coward

        Re: There is ofc a way to keep the existing technology and have security

        "As long as humans have to interact with it in some way" true, but if you make the humans having control subject to punishement in the event that they fail then it will act to discourage the intentional sabotage of security and faith that has been intel/microsoft for the past 30 years.

        1. Charles 9

          Re: There is ofc a way to keep the existing technology and have security

          That'll never happen, though, because the companies simply pool enough pull to get the laws changed or declawed. Otherwise, something like you describe would already be on the books.

          All else fails, they and everyone else simply leave your jurisdiction as too risky.

    2. Anonymous Coward
      Anonymous Coward

      Re: There is ofc a way to keep the existing technology and have security

      You will never get the H/W Vendors to place their 'head on the block' to assure your confidence in the security of the hardware.

      There is always something that gets missed or forgotten and things are too complex to be able to 100% test all possible 'routes' through the hardware.

      As a small error could possibly lead to a huge 'Payout' it will never happen.

      Just imagine if the current 'Spectre & Meltdown' issues were covered by a bond how much it would cost Intel and others !!!

      Just like Software has bugs so does Hardware and this is something we have to (and already do) live with.

      Wholesale changes in the architecting of newer CPUs will not happen overnight but there will be changes as this will have frightened many.

      I imagine that as usually happens when the 'latest & greatest' is shown to be 'not as good as we thought' we will find a lot of old ideas being re-visited with a view to re-implementing them with newer techniques and technology or forging some new hybrid design between new & old ideas.

      Maybe this is the 'kick up the pants' that will spawn the next new 'Latest & greatest' :)

      Maybe Moore's Law is simply taking an alternative route as we speak :)

      1. Anonymous Coward
        Anonymous Coward

        Re: There is ofc a way to keep the existing technology and have security

        "Just imagine if the current 'Spectre & Meltdown' issues were covered by a bond how much it would cost Intel and others !!!" well with a bond being paid to a third party as a percentage of each CPU sale then intel would have already paid and so would loose nothing but their reputation.

        Intel fked up and, to be frank, I do not think it was an accident. For years they have been hiding their fails behind a wall of secrecy, and the result is the majority of their customers now being without a secure machine that runs at the speed advertised. Their customers have paid for their trust in Intel, but you think Intel should not now have to put their money where their mouth is to recover the confidence they have lost through mismanagement.

  31. John Styles

    I have a question. Suppose I am completely uninterested in security as I am essentially using a PC for computation and can apply any security outside the PC, which is unconnected to the big, bad, scary internet; to what extent can I just say 'oh bugger it, speculate all you want'?

    a) on Windows?

    b) on Linux (where presumably I can turn off retpolining in GCC)?

    1. Anonymous Coward
      Anonymous Coward

      'oh bugger it, speculate all you want'?

      As much as you trust your local environment not to put software on the machine that will compromise the security of said environment.

      If you have an isolated machine then security is limited by what you allow it to run; if nothing can get in or out then it will be as secure as the OS and other software allow.

  32. naive

    George Orwell was right: Some are more equal than others

    Maybe there are alternative techniques to mitigate Meltdown/Spectre. If kernels get a dedicated core, including a fixed set of caches and pipelines, maybe the dumpster diving by user-land apps into kernel memory can be prevented. I always wondered why CPU bakers did so little to isolate Supervisor mode processes from user-land.

    With the current state of CPU technology, it should be possible that, with moderate architectural CPU changes, operating system designers can modify their kernels such that they never allow user processes into CPU cores and memory areas which are reserved for Supervisor mode; the MMU could for instance maintain a page access rights table, restricting read access of free but reserved pages to Supervisor mode processes. The kernel itself should never schedule user processes on cores which are designated to Supervisor mode. In this way, there is a clean separation between kernel and user-land code, without the need to lose useful enhancements like branch prediction.

    1. Charles 9

      Re: George Orwell was right: Some are more equal than others

      But what happens when Userland has to talk to Kernelland A LOT (which tends to happen on things like high-speed networking)? There MUST be some interface between them, and as long as there's an interface, there's a way to exploit it.

  33. g00se
    WTF?

    Java is dead. Long live Java!

    I hate to break it to the buriers of Java, but not only is it not dead but it (or a proprietary version of it) is either running or runnable *inside* your Intel processor and/or its associated chips. See https://www.slideshare.net/codeblue_jp/igor-skochinsky-enpub (slide 33 onwards)

    1. Anonymous Coward
      Anonymous Coward

      Re: Java is dead. Long live Java!

      Wow, you are correct. It runs a full blown Java VM inside Intel ME. And signed Java applets can be invoked even with SMS.

      The Intel Management Engine (ME), also known as the Manageability Engine, is an autonomous subsystem that has been incorporated in virtually all of Intel's processor chipsets since 2008. Intel ME has full access to memory (without the Intel CPU having any knowledge); has full access to the TCP/IP stack (incl. modem stack) and can send and receive network packets independent of the operating system, thus bypassing its firewall. Intel ME is running on MINIX 3 operating system.

      https://en.wikipedia.org/wiki/Intel_Management_Engine#Claims_that_ME_is_a_backdoor

      Intel should rather start selling CPUs without "ME", Meltdown and Spectre.

      1. onefang

        Re: Java is dead. Long live Java!

        "Intel ME is running on MINIX 3 operating system."

        So the year of Minix on the desktop arrived long ago already.

      2. Charles 9

        Re: Java is dead. Long live Java!

        You forget that tech like ME was demanded by administrators who wanted to be able to administer and reset machines without the costs associated with going there physically (especially if long distances or oceans are involved).

  34. Richard Gray 1
    FAIL

    Problem years in the making

    The problem regarding the software (in the UK) is that for the past 20 years or so kids have been taught computing in terms of Word, Excel and PowerPoint.

    They were not and are not (as a rule) taught about how computers function, the components of a computer and how they interact.

    Teach Word in English, Excel in Maths and PowerPoint in Art; teach computers and how they work in Computer Studies!

    For the average kid today (the IT pro of the future!) computers are black boxes where the network pixie brings them their YouTube videos.

    The kids you didn't teach 20 years ago are the computing teachers you don't have now to teach programming to the kids of today.

    Until teachers with PROPER programming knowledge teach proper, efficient coding, it doesn't matter what language you use - it will be crap coding.

    Rant over now time to deal with my _ing users....

    1. HmmmYes

      Re: Problem years in the making

      No.

      Don't teach Excel or PowerPoint. Both are shit.

  35. Weeble9000
    Headmaster

    Performant is a word now?

    n/t

    1. Anonymous Coward
      Anonymous Coward

      Re: Performant is a word now?

      Performant is a perfectly cromulent word and its use embiggens us all.

    2. King Jack
      Holmes

      Re: Performant is a word now?

      performant

      adjective

      Computing

      Functioning well or as expected.

      ‘a highly performant database which is easy to use’

      Origin

      Early 19th century (as noun): from perform + -ant; the adjective dates from the 1980s.

      From the dictionary. Read it some time.

  36. Anonymous Coward
    Anonymous Coward

    Silicon is going to be obsolete

    Graphene will usher in a new era.

    Fellow carbon-based lifeforms, rejoice!

  37. GreenBit

    Argh!

    And we were like *this* close to the AI singularity!

  38. Jamie Jones Silver badge
    Boffin

    Here's a thought - idleness can mean speed increases!

    All this speculative/predictive stuff is done so that the CPU doesn't waste time idling...

    At the same time, we are rather limited in CPU speeds due to the temperatures reached when the chips are working.

    Without any speculative stuff, and using the philosophy that it's OK for a "busy" CPU to be idle, how much could the speed be ramped up due to the otherwise cooler core?

    1. Charles 9

      Re: Here's a thought - idleness can mean speed increases!

      We tried that already IIRC. See Pentium 4 and NetBurst.

      1. Jamie Jones Silver badge

        Re: Here's a thought - idleness can mean speed increases!

        We tried that already IIRC. See Pentium 4 and NetBurst.

        Ahhhh. OK. Thanks. Now you mention it, I vaguely recall some people complaining that the P4 was less complicated than the P3, and thus was less powerful when running at the same speed - the idea being it would allow them to crank up the speed as it ran cooler. I didn't realise that predictive branching was one of the things they sacrificed.

        From what I gather, they couldn't easily achieve the increased speeds needed to surpass the P3's power.

        Cheers, J

        1. Charles 9

          Re: Here's a thought - idleness can mean speed increases!

          I don't know about the branch prediction bit, but what NetBurst taught us was that raw speed wasn't a cure-all. Once you got into the 4GHz range, all you really did was make things get really hot. That's why Intel took a step back and used the P3 as the basis for an architectural fork that eventually became the Core line. The idea with Core is to dial back and instead focus on handling things more efficiently. Some top-end cores do creep into the 4GHz range again, and as a result they also generate plenty of heat again (thus why most need special cooling setups), but only after those efficiency improvements made them a whole lot better than the NetBurst CPUs could ever achieve.

          Thing is, like raw speed, architectural efficiency has its limits; you can only trim things so much before you hit the bone. It seems we're starting to see bone in places and seeing side effects as a result. We've hit the speed wall, and now we're hitting the efficiency wall. We can probably get some results if we can solve the problem of fast context switching, but after that it's going to take a serious rethink to find new ways of improving things, and each new idea runs the risk of side effects.

          1. Anonymous Coward
            Anonymous Coward

            Re: Here's a thought - idleness can mean speed increases!

            If you have a model based upon a centralised logic unit to perform the processing then getting the code/data to the logic is the bottleneck, you can reduce the impact via CPU cache but the bottleneck remains. See https://www.techopedia.com/definition/14630/von-neumann-bottleneck

            Basically, von Neumann knew that his model was limited, but it met the needs of cost versus productivity at the time and allowed computers to become a useful tool. Now we find that the tricks we have used to try to get around the inherent problems with the model have caused more problems than they have solved; essentially we need to use a different model.

            Your heat issue is due to putting all your logic in a small area; if it is spread out then it becomes less of a problem. Tricks like CPU cache, putting storage and logic together, have performance advantages, but you still hit the bottleneck when you try to get the results out. There are more, but the fact that the tricks and architecture have been patented results in the same problems you see with the internal combustion engine, i.e. replacing them would put the current market leaders out of business.

            Ultimately we are banging our heads against a wall because the companies controlling this field have a vested interest in not allowing it to change.

            As to your 4GHz and semiconductors, this is nothing new; it used to be said that if you wanted switching above 4GHz then you had to use valve technology, but valve tech that hasn't been incorporated into semiconductor design has mostly stood still in the past 40 years. I wonder why.

            1. Charles 9

              Re: Here's a thought - idleness can mean speed increases!

              No, I think the problem you're overlooking is that, although von Neumann knew his architecture has limits, you neglect to realize that ALL of them have limits. Chief among them is the unavoidable ceiling that is the Speed of Electricity. It's a hard-and-fast "Ya Cannae Fight Physics" limit. Everything else we do is to try to make the best of the situation. That's one reason for a central processing unit and why we are trying to move towards MORE rather than LESS centralization (Systems-on-Chips if you'll recall). Make things smaller and the electricity takes less time to get there, plain and simple. It's just that in a larger setup, we need things big enough for human hands to handle. In essence, there are competing needs here meshed up against hard-and-fast limits, leading to one hell of a juggling act.

  39. Stevie

    Bah!

    Never should have moved away from steam 'n' cogs.

  40. steve 124

    Pete and Repeat sitting in a tree...

    I've heard this all before... "PCs are dead... long live game consoles.... PCs are dead... long live Tablets... PCs are dead... clocks beyond 3.8GHz are impossible... etc. etc."

    This is just another speed bump and in 2 years you'll be touting how PCs are dead because of some TLS vulnerability that was found or some other "big threat" of the day.

    This will slow them down for a few months, and of course after patching for MD and Spectre (Hail Hydra) everyone will "need"/want to upgrade to the new, secure, faster CPUs, and Intel and AMD will laugh all the way to the bank. Honestly, there hasn't been very much reason to upgrade for the past 4 years or so, and this will probably force some folks to do that now (once the new CPUs come out).

    Don't worry Henny Penny, it's just raining, you can go back inside.

    1. Charles 9

      Re: Pete and Repeat sitting in a tree...

      "clocks beyond 3.8GHz are impossible..."

      Well, NetBurst taught us that while clocking CPUs beyond 3.8GHz is possible, it's far from ideal. Recall that NetBurst P4s were notorious for their thermal profiles.

      1. Baldrickk

        Re: Pete and Repeat sitting in a tree...

        They made quite nice space-heaters though - and ones that do useful work to boot.

        I still have my P4 rig, though it doesn't see too much use these days.

  41. Trollslayer
    Joke

    Is it a bird? Is it a plane?

    No! It's Captain Cockup!

    I'm waiting for GPU malware.

  42. Danny 2

    #60

    https://www.youtube.com/watch?v=19VsGaAH_do

    Smile

  43. Anonymous Coward
    Anonymous Coward

    ?? Going backwards? Only for Intel/ARM/etc.

    AMD doesn't have the same 2 year chip redesign that Intel is looking at. Moore's law is not limited to Intel.

    Why is the Reg always in the tank for Intel? (OH YEAH, because Intel has posted the great fake performance numbers for security-compromised chips.)

    Let that one sink in for a bit. It's not unlike when we discovered Lance Armstrong cheating. It takes a little while for your retrospective view to reform. Then you realize that your system is running 30% slower and that, in reality, Intel has NOT led the performance numbers. They cheated, and no one discovered it for 20 years.

  44. Anonymous Coward
    Holmes

    Wirth's law, also known as Page's law, Gates' law and May's law, is a computing adage which states that software is getting slower more rapidly than hardware becomes faster.

    {Wikipedia https://en.wikipedia.org/wiki/Wirth%27s_law}

  45. RLWatkins

    There is no such thing as Moore's Law.

    It would be better called "Moore's observation", as the man himself said on more than one occasion.

    Operating as we are on the borderline between science and engineering, it's helpful to remember what a "law" is. Gordon Moore himself revised his observation a few times, and never touted it as an inviolable principle, but rather as a way of predicting progress.

  46. J.G.Harston Silver badge

    People will re-learn (or more likely, learn for the first time) what *software* optimisation is, and hopefully will finally kill off the attitude of "not fast enough? get a faster CPU".

  47. John Savard

    Greatly Exaggerated

    Ever since Dennard Scaling died, it's been made clear that what doubles every so often due to Moore's Law isn't performance, but just the number of transistors on a die.

    As they're building 10nm, 7nm, 5nm, and even 3nm fabs right now, Moore's Law, as currently defined, still has some life left in it. Even with EUV working, of course, the finite size of atoms does mean it can only go so far. (Yield improvements, of course, could continue Moore's Law even once we're stuck at 3nm by increasing the size of a die, until we have chips that fill a whole wafer.)

    1. steve 124

      Re: Greatly Exaggerated

      "(Yield improvements, of course, could continue Moore's Law even once we're stuck at 3nm by increasing the size of a die, until we have chips that fill a whole wafer.)"

      Lol, as I was reading your comment I thought the same thing... One day we'll have CPUs the size of frisbees (Intel i19 anyone?)

  48. Potemkine! Silver badge
  49. agatum

    From here on in, we’re going to have to work for it.

    No problem here: I started my programming career over 30 years ago when EVERYTHING was scarce, and I still did ok.

  50. Anonymous Coward
    Anonymous Coward

    will they ?

    "The huge costs of Meltdown and Spectre - which no one can even guess at today - will make chip designers much more conservative in their performance innovations,"

    Only if there is a serious business cost to Intel as the result of M&S, which at this point isn't obvious (at least to me).

    1. Ken Hagan Gold badge

      Re: will they ?

      Well "M" is fixed, as I understand it, by the current round of patches and didn't affect all chip designs anyway, so we're really only worrying about S. S, in turn, is only an issue for people running untrusted code at lower privilege levels, so folks like Google running a completely in-house software stack (at least on some of their iron) don't need to worry about it anyway and folks whose only instances of untrusted code are JavaScript in web browsers can dial down their timer resolutions a bit and push the feasibility of the attack into long grass (or possibly even into a different time zone).

      On the other hand, if you are selling VM guests, S makes your entire business model recklessly unsafe both for you and your customers, so that's annoying. Either you or your customers will just have to take the performance hit or buy more hardware, or a bit of both.

  51. This post has been deleted by its author

  52. Anonymous Coward
    Anonymous Coward

    Speculative execution is not inherently insecure

    It just needs to be appropriately (and efficiently) isolated.

    Perhaps we will enter the Golden Era of the Competent Programmer.

    Nah...

  53. Cuddles

    Exponentials

    "the capital requirements to develop those new devices climbed nearly exponentially"

    Almost as though it was following some kind of law regarding something doubling after a certain period of time passed...

  54. IGnatius T Foobar
    Megaphone

    Moore's Law

    Although the sentiment of this article is worthwhile, we must remember that Moore's Law speaks of transistor count/density, not the actual speed of a computer.

  55. l8gravely

    It's the memory being so much slower...

    The root cause of this is the mismatch between the access speeds of CPUs to registers, local cache, level 2 cache, memory, then NVMe (or PCIe buses), then SSDs, then hard drives, then modems, then teletypes, ad infinitum.

    The entire reason the CPU designers DID speculative execution was that memory wasn't fast enough to feed data as quickly as they wanted, which is why on-chip caches have been used for decades. Speculation is just one way of going down the *probable* path and getting work done as fast as possible. In the not-so-common case that you're wrong, you suffer a stall as you throw out all your (speculative) work and wait for cache/memory/disk/etc. to feed you the instructions and/or data you need.

    Remember how we used to add memory to systems so we wouldn't swap? Now we add SSDs to get better performance?

    Today's CPUs are really quite super duper fast, when you can feed them data/instructions fast enough. But once you can't, you have to wait, which sucks.
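
    A crude way to see this on any desktop (a sketch only - the buffer size is arbitrary and the exact results will vary wildly by machine): sum the same array once sequentially and once by chasing a random permutation of indices, and the wall-clock difference is almost entirely memory latency.

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func main() {
        const n = 1 << 24 // ~16M int64s, far bigger than any on-die cache
        data := make([]int64, n)
        order := rand.Perm(n) // a random walk over the same elements
        for i := range data {
            data[i] = int64(i)
        }

        start := time.Now()
        var seq int64
        for i := 0; i < n; i++ { // sequential: prefetcher and full cache lines help
            seq += data[i]
        }
        fmt.Println("sequential:", time.Since(start), seq)

        start = time.Now()
        var rnd int64
        for i := 0; i < n; i++ { // random: nearly every access is a cache miss
            rnd += data[order[i]]
        }
        fmt.Println("random:    ", time.Since(start), rnd)
    }

    Same additions, same data; on a typical box the random pass is several times slower, and that gap is the memory wall the speculation machinery is trying to paper over.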

    I suspect that in the next few generations, we're going to see increases in both on-die cache sizes and in memory bandwidth and memory access speeds as the way forward to speed up systems again. We already see this with the DDR->DDR2->DDR3->DDR4 progression and single/double-ranked DIMMs. But that's going to have to change, possibly to a more serial bus to get improved performance out of DIMMs, which are then read in a more parallel way internally.

    But it's going to cost. Is it smarter to reduce the number of cores, but use that transistor budget to drastically increase the local cache size? And the associated cache management machinery? Will we go back to off-chip level 3 caches to help right the imbalance?

    Lots of possibilities; we have just hit another inflection point in CPU and system design where there will be a bunch of experimentation to figure out the best way forward. I foresee a bunch of companies all taking the x86_64 instruction set and trying to re-architect the underlying engine that it runs on to investigate these various tradeoffs. Yes, process improvements will help, but I think the pendulum has swung to the design side more strongly now, and we'll see more interesting ideas (like the Crusoe processors from Transmeta, etc).

    Will Intel and AMD go away? No. They'll be here. But I strongly suspect they'll be doing some radical rethinking. And probably buying some smart startup with the next interesting way forward. Which will be hard, since the motherboard and memory guys will need to plan out the next steps in the off-chip connections and interfaces to get data moved to/from memory faster and with less latency.

  56. docteurvez
    Happy

    It's kind of ironic in the end.

    The Mainframe was there decades ago, with multi-tasking, multi-user functionality and probably the best hypervisor ever coded.

    And even now, with the powerful mainframe CPUs IBM manufactures, if I try to code something that will try to sneak out of my area, MVS will "spit" me out with an abend code 0C4, which means: "Sorry buddy, you are not authorized to read this region".

    Intel should have learned from the Big Iron...

  57. Lord_Beavis
    Trollface

    A return to the Golden Age of Computing

    You mean programmers are now going to have to optimize their code? Development times just went up by a factor of three and costs by 12...

  58. faisalch

    Moore's Law ?

    Your article discusses branch prediction strategies used to predict a branch outcome before the actual outcome is known. Moore's Law, however, referred to the number of transistors per square inch on integrated circuits doubling every year since invention.

  59. DV Henkel-Wallace

    Umm, I don't think so

    Moore's law is about the *number of transistors* and says nothing about speed.

  60. GSTZ

    Moore's law becoming obsolete

    Moore's law is indeed about doubling the number of transistors, but in former times that had the effect of also doubling speed. This is no longer the case - rather we are now doubling theoretical throughput (useful only if we had software that could make use of ever-growing parallelism), and we are also doubling complexity, unreliability and vulnerability. We are still in electronics, and electrons fly around atoms - so the atom size is a hard limiting factor. Another one is heat - the more layers we put on top of each other, the harder it gets to dissipate it from the middle. We'd probably have to turn down clock speeds to avoid meltdown. Furthermore, making the chip structures even more tiny and brittle, and thus even more susceptible to faults, will force us to invest more transistors in error correction and fault repair circuitry. At some point adding even more transistors becomes ineffective, as we have to use them for unproductive purposes and we also get overwhelmed by complexity. All those effects will ultimately stop the trend we know as Moore's law.

    Is this a tragedy? Probably not. We can make things a lot simpler, and thus work more productively and more reliably. We can put more functions into hardware, eg. via ASICs. And maybe we could even educate software wizards to become more humble and to concentrate more on user needs rather than on the latest software fashions and their own ego.

    1. Charles 9

      Re: Moore's law becoming obsolete

      "Is this a tragedy? Probably not. We can make things a lot simpler, and thus work more productive and more reliable. We can put more functions into hardware, eg. via ASICs. And maybe we could even educate software whizzards to become more humble and to concentrate more on user needs rather than on the latest software fashions and their own ego."

      And if they clash with the needs of up top and/or hard-and-fast deadlines?

      As for improving memory access, also don't forget the Speed of Electricity, which is another hard-and-fast limit, and because of the need to make things human-accessible for repair and upgrade purposes, there will come a certain minimum latency for memory.
