Cheap as chips: There's no such thing as a free lunch any Moore

A year ago this column mourned the death of Moore's Law, the 1965 paper so beloved by both engineers and computer scientists because of ongoing performance benefits seemingly so effortlessly achieved. We suggested in our death notice that in lieu of flowers, donations should be lavished on Intel shares. Researchers now …

  1. Anonymous Coward
    Anonymous Coward

    "If only it ran macOS!"

    The reason it doesn't is exactly because it can achieve that kind of "performance" - running reduced applications (and far less multithreading/tasking) in a far less powerful OS which has no support for external devices. It's again a trade-off, versatility against performance.

    1. Dan 55 Silver badge

      Re: "If only it ran macOS!"

      I don't think device drivers are that taxing. Well, maybe they are for Apple.

      The threading is the same, the XNU Darwin kernel is the same or as near as makes no difference. The UI might not allow the user to access it, but that's something different.

      1. This post has been deleted by its author

    2. Anonymous Coward
      Anonymous Coward

      Re: "If only it ran macOS!"

      The rumors of an ARM based Mac are showing real evidence of being more than just rumors, so you may have to eat those words soon.

      iOS and macOS have the same kernel, and a phone certainly does have devices connected to it via bluetooth (or Lightning) and the internal bus connects microphones, multiple cameras, speakers, a display, storage, two networks etc. That's more than most laptops these days... If you've ever seen a 'ps' on an iPhone, you'll know it has just as many processes running as a Mac.

  2. disco_stu

    Is it really that much of a surprise that the Surface Go is slow? It's a dual-core Pentium with the same performance as a desktop from 10 years ago.

    1. cdegroot

      Nothing new...

      ...which is best described as "blazingly fast for its time".

      The real problem here is software developers that have stacked abstraction on top of abstraction and always got their butts saved by Moore's law. We had snappy GUIs on hardware that's so laughably primitive, people call it a "microcontroller" these days and refuse to even put it in your watch.

      This is purely a software problem.

      1. FrancisKing

        Re: Nothing new...

        I agree. The Archimedes RISC OS was written in assembler, and ran in 1MB. My faithful Atari STFM had a windowing system within 512KB. My Windows 10 desktop required 4GB to run a web browser; 2GB simply wasn't enough.

        Admittedly, there's more going on, with more colourful displays and more pixels, but it does feel like there's been a lack of coding discipline.

        1. Anonymous Coward
          Anonymous Coward

          Re: Nothing new...

          I agree. The Archimedes Risc OS was written in Assembler, and ran in 1MB.

          C64 / GEOS was written in assembly (well, everything was) and worked in the 64KB machine surprisingly well.

          C64 of course would take a week to render just the ElReg front page while swapping to a floppy...

          1. Dan 55 Silver badge

            Re: Nothing new...

            Obligatory SymbOS mention (WIMP on Amstrad CPC 6128).


        Re: Nothing new...

        Wirth's law - software is getting slower more rapidly than hardware becomes faster.

      3. Chris G

        Re: Nothing new...

        I have noticed one or two updates from MS in recent times that were bigger than whole OSs used to be. I can't remember which, but it actually slowed my laptop down until I dumped most of the package.

        1. Anonymous Coward

          Re: Win10

          IIRC that's because Win 10 "updates" are entire ISOs most of the time now. :/

          (I feel a little smug that my Linux machine's updates are KBs' worth of data, but then I remember I have to boot into 10 for some software :( )

        2. vtcodger Silver badge

          Re: Nothing new...

          Indeed. I've been retired for a decade, but I make a practice of asking clerks, professionals, and others I encounter in day to day life how they like their computer. Their major complaints? They can't understand the user interface and the bloody thing is SLOW. My physician has indeed hired an assistant to do much of the routine computer stuff -- updating prescriptions, taking medical notes, etc -- because his choice was fighting with the computer or attending to us patients.

          Boot up time is a particular complaint and seems to be being addressed. A few years ago, people in medical offices were complaining about 20 minute boot up for Windows computers.

      4. Anonymous Coward
        Anonymous Coward

        Re: Nothing new...

        Even our telephones are more powerful than supercomputers of the disco era. Slowness should not exist.

      5. vtcodger Silver badge

        Re: Nothing new...

        This is purely a software problem.

        If not actually caused by software, more disciplined software practices are almost certainly an answer to poor performance in current products. The underlying problem is likely dependence on Moore's Law -- capability grows exponentially over time -- while ignoring Malthus's Law -- capability grows exponentially ... until it doesn't.

        Techniques for speeding up software (profiling, a bit of refactoring, etc) are well known, and not too difficult to apply. Once, anyway. One problem is that no one -- especially not system decision makers -- much cares about performance. The second is that the software architecture we have inherited from mid-twentieth century mainframes assumes that performance is not, never will be, and indeed can not be, an issue. The third is that faster is not necessarily more secure. Making software and hardware faster often makes it less secure -- and vice versa. People didn't used to care very much about security, but thanks to the internet, security has become an overriding issue -- rather to the detriment of performance sometimes.
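        The profile-then-refactor cycle mentioned above can be sketched in Python; the hot function here is a hypothetical example, not anything from a real product:

        ```python
        import cProfile
        import io
        import pstats

        # Hypothetical hot spot: naive repeated string concatenation,
        # a classic target for profiling followed by a small refactor.
        def build_report(n):
            s = ""
            for i in range(n):
                s += f"row {i}\n"   # quadratic copying as s grows
            return s

        def build_report_fast(n):
            # Same output, linear time: accumulate once with join().
            return "".join(f"row {i}\n" for i in range(n))

        profiler = cProfile.Profile()
        profiler.enable()
        build_report(20000)
        profiler.disable()

        out = io.StringIO()
        pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(3)
        print(out.getvalue())   # shows where the time actually went
        ```

        The point isn't the string trick itself, but that the profiler tells you which one function is worth fixing before you touch anything.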

  3. Anonymous Coward
    Anonymous Coward

    Sorry Marc, there's a bigger issue in an article of yours from just over a year ago, that needs rather more attention right now, and is entirely independent of instruction set and memory architecture and...

    1. Mephistro


      Totally agreed. After re-reading that nice article and comparing it with the actual status quo - e.g. Facebook pushing its "research app" to minors - I got the impression of living in a dystopian remake of "Groundhog Day".

      Places like ElReg forums are full of people worried about this surveillance capitalism, but the rest of mankind doesn't give a flying fuck, as long as they get their mirrors and glass beads.

      My hope is that initiatives like the GDPR will help raise awareness of the issue among the masses, but given the amount of money and power that social media companies have amassed, it'll be a close race. 8^(

  4. macjules

    Steady on there

    Have I just read an El Reg article heaping praise upon an Apple product?

    You never know: Apple might start returning your telephone calls now.

    1. Michael Wojcik Silver badge

      Re: Steady on there

      Mark managed to fanboy Apple and Microsoft in one article. If only he'd been able to squeeze Google in for the trifecta...

  5. Mage Silver badge

    with more than twice the grunt of the model shipped

    No, because the problem is OS and GUI bloat.

    Not lack of CPU power, though I'd say Moore's law started running out of steam around 2002. The annual upgrades before then were stunning.

    Compare a 2002 1.8GHz mobile P4 (2.2GHz existed) Dell Inspiron 8200 with a 1600 x 1200 UltraSharp matte screen running XP against a Linx 1010 tablet with Windows 10, and a Lenovo E460 with Win7 and Linux Mint + MATE (1920 x 1080 screen, i5-6200U, 4x 2.3GHz cores).

    The Dell Inspiron 8200 with XP (optimised: SP3, all patches, and sensible GUI settings) beats all of the above for ease of use and performance, except Linux on the E460 for any application using a single core, or the desktop. (It does have an upgraded 120GB PATA IDE drive and 2GB RAM; 40GB and 1GB originally.)

    Linux on the E460 is about the same speed for desktop response and faster for everything else. But that's not the sort of difference there was between the 8200 and the earlier Dell Inspiron from 2000 with a 450MHz PIII Coppermine mobile and Win2000. That one has a 1400 x 1050 screen and is really hugely slower, but can still beat the 10" Atom Win10 tablet at times.

    Perhaps Intel is rubbish at low power + decent performance? Browser on my Alcatel A3 XL phone is faster than any Atom netbook or tablet I have.

    I have Linux on the Atom netbooks. I have an Eee PC, with a 32GB CF card instead of the original flash. Current Linux Mint on it (18.3, MATE desktop) is not as fast as the original Linux on it. We did try XP on it, but it's ill-suited to cheap flash and would periodically freeze for a while. Also, that was before the CF card socket was shoehorned in by cutting away part of the internals of the case.

    Netbooks and Tablets often seem spoiled by being only 600 or 720 pixels high in landscape mode. Also why don't the tablets dock to keyboard in Portrait mode?

    We reached peak PC on x86-64 more than a year ago.

  6. Warm Braw

    We can't separate them easily anymore

    It hasn't been that easy for some time.

    Access to I/O devices may require instructions that cause specific forms of bus transaction. Shared memory and multiprocessor configurations depend on certain instructions providing hardware interlocks. Virtualisation requires that the instruction set meet certain criteria. Operating-system-enforced security requires some sort of CPU support to back it up.

    What might perhaps be up for consideration is the hard boundary between hardware and software: it's possible to envisage a layered model in which the number of layers committed to silicon is implementation dependent - it is, after all, just a variation on microcode and virtualisation.

    1. Version 1.0 Silver badge

      Re: We can't separate them easily anymore

      Works for me, I just replaced my old HP laptop's spinning disk drive with a new SSD and Moore's law kinda kicked in - way faster and good for another few years.

      1. Mage Silver badge

        Re: HDD vs decent SSD

        Nothing to do with Moore's law on chip improvements. Totally different technology. I wonder too was it a 5400 RPM low power drive?

        Also I wonder what the SSD life is and how much advance warning of failure.

        Moore's law wasn't a law, but an observation about CPU performance and transistor count on the same size of chip. It's about density.

        In reality, 14nm isn't ten times the density of, say, 140nm (actually it would be 100 times for the same chip area), because 14nm isn't even the average or typical feature size, but the smallest. Geometry doesn't mean what it used to mean when 90nm was the norm.
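        The geometry arithmetic works out like this (a back-of-the-envelope sketch; density scales with the square of the linear shrink):

        ```python
        # Transistor density scales with the *square* of the linear
        # feature size, so a 10x linear shrink (140nm -> 14nm) would
        # yield roughly 100x the transistors in the same chip area --
        # assuming, as the comment notes, the quoted number really
        # were the typical feature size, which it isn't any more.
        old_nm, new_nm = 140, 14
        linear_shrink = old_nm / new_nm       # 10.0
        density_gain = linear_shrink ** 2     # 100.0
        print(f"{linear_shrink:.0f}x linear shrink -> ~{density_gain:.0f}x density")
        ```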

        1. Duncan Macdonald

          Re: HDD vs decent SSD

          SSD advance failure warning - normally NONE.

          If you do see any failure on an SSD - expect it to brick very soon.

          Keep backups!! If a disk (HDD or SSD) fails (or a data corrupting bit of malware strikes) then the only thing that will save your work is a recent external backup.

        2. Anonymous Coward
          Anonymous Coward

          Re: HDD vs decent SSD

          Technically, if SSDs are also on silicone, they could hit similar return on rate of improvement. So not that far off Moore's law! :)

          1. Fungus Bob

            Re: HDD vs decent SSD

            Silicone? Should have good impact resistance...

            1. Michael Wojcik Silver badge

              Re: HDD vs decent SSD

              Caulk it up to flexible engineering. Swells the chest with pride, it does.

        3. MOV r0,r0

          Re: HDD vs decent SSD

          There's an unspoken assumption with Moore's Law about per transistor cost. It's assumed as density rises for a given die size, the per transistor cost will drop. What's been happening is that the cost flipped and has been going the wrong way.

          So it might be perfectly possible that Moore's Law could hold but that the economies it supported are broken so there's not the financial imperative to make it happen. Not dead, just broken?

          1. Mage Silver badge

            Re: per transistor cost

            It's true that a new fab for state-of-the-art processes is scary expensive. However, leakage and tunnelling are now serious issues. An 8nm to 14nm minimum feature size needs very high quality materials and is close to the limits for sensible performance. Yes, you can make a rubbish transistor from a few atoms as a lab demo. You've also a problem with high-speed production of chips, as you'd have to switch from masks to vector writing using electron beams or something.

            Temperature, background radiation, and device life (metals drifting) become issues too. I've wondered if the best performance/reliability is around the 28nm to 40nm device geometry.

            1. MOV r0,r0

              Re: per transistor cost

              You're saying it's an engineering issue rather than a matter of economics - I don't agree. I don't doubt there are engineering issues but from the creation of the very first transistor there always were and people did not throw in the towel like they are doing now.

          2. Anonymous Coward
            Anonymous Coward

            Re: HDD vs decent SSD

            Cost per transistor is still falling, the problem is the startup costs for design and especially mask sets keep increasing so you need more and more units of the same chip for it to make economic sense to use a smaller process.

            If you have hundreds of millions of units of each design, like Apple does for each A* SoC, no worries, at least for the foreseeable future. If you have a chip that goes in a product that sells only a million units a year, your cost per transistor will go up significantly with each shrink. That's why such lower-volume products will often contain chips fabricated with processes at least two generations old.
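            The amortisation argument can be sketched with entirely made-up numbers (NRE and silicon costs below are illustrative, not real industry figures):

            ```python
            def cost_per_chip(nre, silicon_cost, units):
                """Total cost per chip: one-off NRE (design + mask set)
                amortised over the production run, plus per-chip silicon."""
                return nre / units + silicon_cost

            # Older node: modest mask costs. Newer node: much higher NRE
            # but cheaper silicon per chip thanks to the shrink.
            old_node = cost_per_chip(nre=5_000_000, silicon_cost=4.0, units=1_000_000)
            new_node = cost_per_chip(nre=50_000_000, silicon_cost=2.0, units=1_000_000)
            print(f"1M units:   old ${old_node:.2f}, new ${new_node:.2f}")  # new node loses

            old_hi = cost_per_chip(nre=5_000_000, silicon_cost=4.0, units=100_000_000)
            new_hi = cost_per_chip(nre=50_000_000, silicon_cost=2.0, units=100_000_000)
            print(f"100M units: old ${old_hi:.2f}, new ${new_hi:.2f}")      # new node wins
            ```

            At a million units the fixed costs dominate and the shrink is a net loss; at Apple-like volumes they wash out and the cheaper silicon wins.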

            1. MOV r0,r0

              Re: HDD vs decent SSD

              Historically fab costs always were hideous but the greater yield from increasing density per die allowed migration down to mid and entry level product - that's where the mass market actually is and where the investment was finally recovered: it's that bit that is broken, just ask Intel. It's their entire business model.

              You're going to have to amortise your R&D and production start-up costs somewhere so it doesn't really matter where your costs are occurring, it all feeds into per-transistor. Apple have a CPU design with an instruction set that can be implemented with reduced complexity so their reduced transistor count and the very large margins they achieve on their products are what is keeping them immune, not volume. Check the all time top ten sellers: they only have one phone in there.

  7. juice

    It's the classic tripod issue...

    You can have two out of the three when it comes to making a CPU fast, secure and low power.

    (Maybe swap power for cost)

    Moore's "law" was always an observation rather than a guarantee, and hardware manufacturers were partially able to mitigate the dwindling improvements by working smarter, not faster - virtualised cores, speculative branching and the like.

    But by their very nature, these techniques work by sharing resources which otherwise would be sitting idle. And when you share resources, it's very hard to keep things separated.

  8. Martin Summers

    I'm confused. Why on earth are you comparing a Surface Go to an iPad Pro? Trolling or ignorance of Surface Go's market?

    1. Anonymous Coward
      Anonymous Coward


      "I'm confused. Why on earth are you comparing a Surface Go to an iPad Pro? Trolling or ignorance of Surface Go's market?"

      Let's see. One costs twice as much as the other. Guess which? How about YOU compare the iPad Pro with a Surface Pro with an Intel Core i7 rather than some shiny object? What a foolish comparison.

      1. Martin Summers

        Re: Confused...

        Point missed methinks.

    2. Michael Wojcik Silver badge

      Well, I think they're comparable, inasmuch as I have zero interest in either.

  9. martinusher Silver badge

    There's really no need to panic over this.....

    These attacks all require the attacker to execute code on a system -- typically JavaScript -- and require access to accurate timing information. They also imply that sensitive information is held in memory.

    This is a typical setup for convenient computer use. It's also highly optional. We don't need to execute JavaScript in web pages. We don't need high resolution timers and we don't need to retain sensitive information in memory. If I had a system that held really important information, I'd first ask myself whether it needed to be connected to the Internet, secondly whether I should be running general-purpose programs on it, like web browsers, that expect unfettered access to the Internet, and whether I really needed accessible high resolution timers. I'd also be paying a lot of attention to network traffic going to unexpected places (probably managing my own static domain service cache that would flag and log any requests to unusual locations).

    We are continually made victims of our own architectural choices -- convenience trumps security. (....and if we weren't all trying to use adware-infested web sites then we probably wouldn't need multicore high performance processors for everyday tasks....again, convenience over security.)
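    The dependence on accurate timers can be illustrated with a toy sketch; the 5ms quantisation below is a hypothetical stand-in for the clamping browsers applied to performance.now() after Spectre, not any browser's actual value:

    ```python
    import time

    def coarse_now(resolution_ms=5.0):
        """Current time in milliseconds, quantised to resolution_ms --
        mimicking a browser timer deliberately coarsened against
        cache-timing attacks."""
        ticks = int(time.perf_counter() * 1000 / resolution_ms)
        return ticks * resolution_ms

    # A sub-microsecond, "secret-dependent" step is visible to a fine timer...
    t0 = time.perf_counter()
    _ = sum(range(1000))
    t1 = time.perf_counter()
    print(f"fine timer:   {(t1 - t0) * 1e6:.1f} us elapsed")

    # ...but almost always invisible once the timer is quantised,
    # which is exactly what these attacks can't tolerate.
    c0 = coarse_now()
    _ = sum(range(1000))
    c1 = coarse_now()
    print(f"coarse timer: {c1 - c0:.1f} ms elapsed")
    ```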

    1. doublelayer Silver badge

      Re: There's really no need to panic over this.....

      I agree that they haven't been a major problem yet. However, there is a place where you don't have that clear a boundary. Case one is on a VM host, where one VM can run its own code and take memory from another VM. You can't block the malicious VM from running code, and you will have sensitive data in memory at some point even if it is only authentication data. The drive-by access by javascript is possible too, but requires more knowledge of the system, so is unlikely as you say. Another possibility is that access to certain parts of memory that are rightly removed from your segment may allow privilege escalation. I don't know where they are or how hard it is to use them, and I don't think people are finding that yet, but if we left them unpatched, it might be worth criminals' time to find out.

    2. Mage Silver badge

      Re: There's really no need to panic over this.....

      Better sandboxed Web Browsers that also parsed JS more before execution?

      Millions of upvotes on the 3rd party javascript/adware. Using uMatrix, NoScript etc can be more effective than AV, especially for Zero Days. No storage is auto-executed on mount. I KNEW in 1995 that was a very stupid idea for Windows. Amiga had proved it.

  10. Snowy Silver badge

    It's alive

    Moore's Law is alive and well it just moved over to Quantum Computers.

  11. IGnatius T Foobar !

    I for one welcome the end of Moore's Law.

    Computers are fast enough. We can start learning to optimize software again. We can keep computers and other devices for more than six months without them becoming obsolete. We can be spared from OS peddlers adding more and more stupid display tricks like sliding and fading UI elements that don't add anything to the user experience but consume more and more computing power.

    I just turned off all of the stupid display tricks on my Windows 10 machine this morning. And I'm running it on a modern computer. OMFG. It's so snappy that I can run it from a remote desktop on a Raspberry Pi and it still behaves well. Imagine the extra nonsense they'd keep adding in if Moore's Law were still in effect.

    I welcome the flatlining of performance. Bring it on.

  12. Anonymous Coward

    NO. No they would not.

    "If Moore's Law had ground along, Surface Go would sport an Intel CPU fabricated on a 7nm process – with more than twice the grunt of the model Microsoft shipped."

    No, they would have put in an even worse CPU, as the 7nm cost of production would be higher, and they would want to claw back more profit (or the pure cost would make it impossible to build at that price point).

    When a device is slow, it is not because they cannot make it faster (at least most of the time), but making it faster is more *expensive*.

    See the current folding phones as an example, of tech being there, but cost and reliability of production being prohibitive, making them almost useless in their current form/price range (as damage/loss makes it rather unpalatable).

    You purchased the slowest model... The slowest model. Don't then say it could be faster... unless you are saying "if I bought the faster model, it would be faster". ;)

  13. Anonymous Coward
    Anonymous Coward

    Maw's Lore

    90% of slow-running PCs are slow due to one case of poorly optimised code, usually a third-party browser script. There's a reason iOS never supported Flash Player.

  14. Anonymous Coward
    Anonymous Coward

    TL;DR Hardware improvements can no longer hide shit software - design and coding.

    Maybe now we will start to see some real software engineering?
