Intel finally takes the hint on software optimization

In a quest to deliver better application performance and greater efficiency, many software and hardware vendors are turning to custom silicon to achieve their goals. Apple's A- and M-series processors are prime examples. Meanwhile in the cloud, Amazon has spent the last few years developing custom CPUs, AI accelerators, and …

  1. John Smith 19 Gold badge
    Coat

    Wake me up when they repeal Amdahl's law

    Because y'know, max speedup is down to how well you can parallelize the problem in the first place.

    Which ultimately depends on whether you can devise a truly parallel way to do arithmetic.

    Without that you're pretty much stuffed.

    And you always will be.
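
    (For concreteness, a minimal sketch of the ceiling Amdahl's law describes; the 95 per cent parallel fraction below is an assumed figure for illustration, not something from the comment above.)

        /* amdahl.c - illustrative only: speedup = 1 / ((1 - p) + p / n),
           where p is the parallelizable fraction and n is the core count. */
        #include <stdio.h>

        static double amdahl_speedup(double p, int n) {
            return 1.0 / ((1.0 - p) + p / n);
        }

        int main(void) {
            /* Assumed example: with 95% of the work parallelizable,
               64 cores give only ~15x, and the limit is 20x no matter
               how many cores you throw at it. */
            printf("64 cores:   %.1fx\n", amdahl_speedup(0.95, 64));
            printf("1024 cores: %.1fx\n", amdahl_speedup(0.95, 1024));
            return 0;
        }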

    1. Kevin McMurtrie Silver badge

      Re: Wake me up when they repeal Amdahl's law

      It doesn't have to reach the arithmetic level. Divide-and-conquer libraries are getting easier to use. Java's ForkJoin system is a complete clusterfck for I/O but it works well for maximizing multi-core efficiency with minimal coding.

      I'd rank bad code and bad configuration as still being the #1 limit to performance. Those giant staffing/contractor pools have their own special coding style that performs 10x to 10000x slower than it should while being completely obfuscated to normal humans. When somebody with more security checkboxes than security knowledge touches a system you can count on it losing another 20% to 90% of its performance on top of that.
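
      (ForkJoin itself is Java; purely as a hedged illustration of the same divide-and-conquer pattern with minimal coding, here is a sketch using OpenMP tasks in C, a stand-in rather than the library the comment refers to.)

        /* parallel_sum.c - hypothetical sketch: recursive divide-and-conquer
           sum using OpenMP tasks (an analogue of the fork/join idea, not
           Java's ForkJoin). Build with: cc -fopenmp parallel_sum.c */
        #include <stdio.h>

        static long sum_range(const int *a, long lo, long hi) {
            if (hi - lo < 10000) {              /* small chunk: just loop */
                long s = 0;
                for (long i = lo; i < hi; i++) s += a[i];
                return s;
            }
            long mid = lo + (hi - lo) / 2, left = 0, right;
            #pragma omp task shared(left)       /* "fork" the left half */
            left = sum_range(a, lo, mid);
            right = sum_range(a, mid, hi);      /* right half in this thread */
            #pragma omp taskwait                /* "join" */
            return left + right;
        }

        int main(void) {
            enum { N = 1000000 };
            static int a[N];
            for (long i = 0; i < N; i++) a[i] = 1;
            long total = 0;
            #pragma omp parallel
            #pragma omp single                  /* one thread kicks off the recursion */
            total = sum_range(a, 0, N);
            printf("%ld\n", total);             /* prints 1000000 */
            return 0;
        }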

  2. steamnut

    a cunning plan?

    I wonder if this is just a ruse to get software developers to use "Intel only" features and instructions so that it creates an advantage over AMD. Yes, software vendors can write code that works around the missing instructions on non-Intel processors, but the ploy may well tempt some cloud vendors to take a closer look...

    1. Fazal Majid

      Re: a cunning plan?

      And yet they disabled AVX512 in Alder Lake, the biggest differentiator they have over AMD, just because of those gimped E-cores that don’t support it.

  3. An_Old_Dog Silver badge

    Tradeoffs

    Writing code to use super-advanced, proprietary CPU features, or, more extremely, to target custom silicon, is the ultimate in vendor lock-in.

    For major players, the gains may be worth it, but for most companies, they probably aren't.

  4. Duncan Macdonald
    Flame

    Coding efficiently?

    Except for some supercomputer jobs, coding efficiently has long gone out of style. Compare the following :-

    A PDP-11/73 (1 MIPS, 8kB cache, 2MB RAM, 100MB disk) running RSX-11M PLUS could easily handle 6 concurrent users.

    A laptop PC with an Intel 4410Y (over 4,000 MIPS, 2MB cache, 1GB RAM) is horribly slow running Windows 10 for a single user.

    A software emulation of a PDP-11/73 could easily run faster on a 4410Y than a real PDP-11/73.

    Due to the horribly inefficient coding and design in Windows 10 and its apps, the 4410Y, despite being over 1,000 times faster than a PDP-11/73, is not good even when asked to support just 1 user.

    The current overriding requirements from management for coding are quick delivery and low cost - efficiency, proper testing and security are ignored.

    1. An_Old_Dog Silver badge

      Re: Coding efficiently - Apples and Oranges

      I agree with what you wrote about inefficient coding, but think your examples were off the mark.

      RSX-11M provides only a text interface, while Windows includes a GUI. MS Windows does tons of stuff which RSX-11M does not. (Part of the problem with MS Windows is that most of that extra stuff is not needed, or is not-wanted-to-outright-hated by its users.)

      A combination of incredibly-cheap modern CPU cycles, incredibly-cheap RAM, and rising programmers' wages, together with the mass-software market's demand for bright-and-shiny GUIs, more features (but each user wanting a different set of features than the other users want), and faster releases of new versions, makes code efficiency the lowest thing on a software manager's priority list.

      (And I think the PDP-11 has a pleasant, reasonably-orthogonal instruction set, whilst the modern x86 instruction set is a piss-soaked bag of rusted nails. Particularly egregious are write-only and machine-specific registers.)

    2. John Smith 19 Gold badge
      Unhappy

      horribly inefficient coding and design in Windows 10

      It helps to consider this from a slightly different perspective.

      MS has no friends in the software business, merely companies it has not destroyed yet.

      Its "friends" are all hardware vendors.

      They need a reason for people to replace their hardware, despite it doing the same things today as it was doing basically a decade ago, and still being reliable enough to keep doing so.

      Those same vendors know MS will cover for them if they f**k up writing their drivers, because the MS end of the interface will be patched to handle it. Of course they won't tell anyone, making it that much more difficult for FOSS suppliers to ensure that unit will run on their OS.

  5. MalIlluminated

    Talent at mass and scale

    Perhaps Intel can turn the developers responsible for the Arc drivers loose on the problem.

  6. A Non e-mouse Silver badge

    So Intel are now saying: "You've written the software wrong." There's a (strong) argument that says that if people are writing the software wrong for their hardware, maybe their hardware is wrong?

    1. Paul Crawford Silver badge

      Nope, they just don't have software engineers who can do it well (and understand even the basics of CPU operation), or they don't care, as the PHB has features to deliver and performance on customers' hardware is not his problem.

  7. aki009

    Lack of vision since 1999

    I'm not sure when Intel lost its vision, but it was sometime around when Itanium flopped. The last vestiges of techs at the VP level were replaced by marketing and bean counter types.

    Just like Boeing, it takes decades for the rot to set in, but it's here now, and the top has no clue as to what business they are in. Boy oh boy, will it be difficult (and expensive) to fix the damage they've caused to what's supposed to be a tech company.

    I'm giving them 50-50 odds of still being a market leader in the CPU business in 10 years. (And maybe I'm being too generous and should adjust that to 5.)

    1. martinusher Silver badge

      Re: Lack of vision since 1999

      You're probably right. They made so much money from the x86 processors that there was little to no incentive to get into or even remain in other segments.

      The problem is that x86-type processors are grossly inefficient. Intel doesn't say so out loud, but they're actually microcoded (there's no way you could design one of these parts using random logic). This means they'll always lose out to a RISC design running tailored and optimized software. Senior management won't really understand this -- they'll run the numbers and conclude that anything that deviates from the core business is just detracting from profits. The fact that they should have and could have been ARM -- they had the parts and the market reach -- is just an immense missed opportunity.

      1. Nate Amsden

        Re: Lack of vision since 1999

        losing out? x86 has killed almost every traditional RISC server/workstation processor out there, regardless of how optimized the software was. Alpha/MIPS/SPARC/PA-RISC are all dead or walking dead. Power (PowerPC?) is not far behind. Itanium is dead too, of course. MIPS still has customers in the embedded space, I'm sure, but its glory days back on SGI big iron etc. are of course long gone.

        Even modern ARM multi-core server CPUs show they can only get good performance if they too are consuming 100-200+ watts of power per socket.

        I think the argument of x86 being inefficient died about 15 years ago, about when we started seeing quad-core processors (and we were well into 64-bit x86 at that point). Also, I think even x86 has been mostly RISC with a translation layer since 686 days or something? At the end of the day, RISC or not RISC, it doesn't matter; it's an obsolete argument.

        Don't blame Intel for the current state of x86. If you hate x86 so much, you should hate AMD: if it weren't for the AMD64 instruction set, x86 would probably be buried by now, as Intel wanted to kill it, but they were "forced" to adopt AMD64 and go from there. We were fortunate that happened; Itanium seemed to be a far worse solution.

        (I don't have any issue with x86 myself)

        1. An_Old_Dog Silver badge

          Re: Lack of vision since 1999

          Intel was not "forced" to uglify the x86 instruction set. They chose to. Whether they could make other choices and still profit, I don't know.

        2. A Non e-mouse Silver badge

          Re: Lack of vision since 1999

          x86 has killed almost every traditional RISC server/workstation processor out there regardless of how optimized the software was

          Did x86 win out because it was the better processor, or did it win out due to little software being available for other processors?

          e.g. Windows NT was available for a variety of CPUs* yet little end user software was available for it.

          ARM is only now gaining ground as it started out in a niche (embedded) that x86 couldn't compete in.

          * I heard the other day that NT was originally written on something other than x86 then ported to x86 to prove how portable NT was.

      2. A Non e-mouse Silver badge
        Meh

        Re: Lack of vision since 1999

        It's been discussed on El Reg forums several times before, but modern RISC-like processors (e.g. ARM, MIPS, etc.) are also microcoded.

        There isn't much distinction between "CISC" & "RISC" processors nowadays. Both processor camps have looked over the fence and borrowed ideas from each other.

  8. OhForF' Silver badge
    Joke

    Intel breakthrough

    Intel should have assigned that talent in mass and scale to help OS vendors use Optane efficiently.

    Had they done that I'm sure they'd manage to run an endless loop in half the time.

  9. Fazal Majid

    Clear Linux

    Intel’s Clear Linux is a good illustration of how optimizations can get an easy 10-20% improvement in performance, and it’s not Intel-specific: AMD also uses Clear for its own benchmarks. But the project hardly gets any love internally at Intel.

  10. Henry Wertz 1 Gold badge

    PLEASE make your code platform-specific!

    To be honest, this sounds more like Intel saying "PLEASE, make your software platform-specific! Load it up with processor-specific instructions." Of course, if ffmpeg, x265, etc. did not have any of that code in them, they would be dead slow. There are specialized cases (video encoding being one) where you'll get MASSIVE speedups from using these types of instructions. But in most cases, I'd really prefer the compiler to take care of that and leave my code fully portable.

    Added:

    I may just be being cynical; this may just be a matter of Intel genuinely wanting to point out cases where spending a few minutes adding a CPU intrinsic, or replacing your math lib with Intel's, or the like, will get you your speedups, rather than Intel losing sales to ARM or POWER or RISC-V or whatever, where a possibly already-optimized copy of the code outruns (or beats on performance-per-watt) the Intel system.

    Either way, I do take it as a good sign insofar as Intel is seeing enough competition to find this necessary. Healthy competition really makes things more interesting (especially as a Linux user, since besides x86/x86-64, Linux supports MIPS, ARM, POWER, RISC-V, etc. etc., so there's a reasonable chance of having a distro already up and running on whatever interesting hardware comes out).
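
    (For illustration of the kind of processor-specific instructions being discussed: a minimal, assumed sketch of the same loop written portably and with x86 AVX2 intrinsics. The function names and sizes are made up for the example; the intrinsics are the standard ones from <immintrin.h>.)

        /* saxpy.c - hypothetical example only. The portable version relies on
           the compiler; the AVX2 version is x86-specific and processes 8 floats
           per iteration. Build the AVX2 path with e.g. cc -O2 -mavx2 -DUSE_AVX2 */
        #include <stddef.h>
        #ifdef USE_AVX2
        #include <immintrin.h>
        #endif

        /* Portable: works on any CPU, leaves vectorization to the compiler. */
        void saxpy_portable(float a, const float *x, float *y, size_t n) {
            for (size_t i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];
        }

        #ifdef USE_AVX2
        /* Platform-specific: hand-written AVX2, the sort of code ffmpeg/x265
           carry (though they mostly use assembly rather than intrinsics). */
        void saxpy_avx2(float a, const float *x, float *y, size_t n) {
            __m256 va = _mm256_set1_ps(a);
            size_t i = 0;
            for (; i + 8 <= n; i += 8) {
                __m256 vx = _mm256_loadu_ps(x + i);
                __m256 vy = _mm256_loadu_ps(y + i);
                vy = _mm256_add_ps(_mm256_mul_ps(va, vx), vy);
                _mm256_storeu_ps(y + i, vy);
            }
            for (; i < n; i++)              /* scalar tail */
                y[i] = a * x[i] + y[i];
        }
        #endif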

  11. Dippywood

    Been There, Seen That. Collect T-Shirt at Reception.

    Around the early 2000s, Intel had a professional services division that offered exactly this service, as well as 'we will help you to migrate to Intel' and assorted other things.

    Of course, this was not 'core business' and Intel Solution Services was wound up in 2008...

  12. guyr

    Won't help the vast majority of developers in a corporate environment

    I worked most of my career as a software developer in a corporate environment, i.e., software for internal use. In 35 years, the top priority was always "get it done". Perhaps Intel is only targeting its efforts at huge software vendors like NVidia or the game companies. But for corporate developers, these types of specialized routines will not be used unless they get baked into the common development tools: Java and C++ tool chains, etc. If we can get a 25% speedup (or a 25% reduction in resource use) by changing a compiler flag, that will help. If we *might* see an improvement by manually stitching an Intel-supplied module into our software build (which we then have to test extensively to make sure nothing else breaks), that will never happen.
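
    (To make the "change a compiler flag" route concrete: a hedged sketch using standard GCC/Clang flags. The 25% figure above is the commenter's hypothetical, not a claim about this snippet.)

        /* dot.c - the source stays portable and untouched; any speedup comes
           from the build flags alone.
             Baseline build:  cc -O2 dot.c
             Tuned build:     cc -O3 -march=native -ffast-math dot.c
           (-march=native lets the compiler emit the newest instructions the
            build machine supports; -ffast-math relaxes FP ordering so this
            reduction can be vectorized. Both are standard GCC/Clang flags.) */
        #include <stddef.h>

        float dot(const float *x, const float *y, size_t n) {
            float s = 0.0f;
            for (size_t i = 0; i < n; i++)
                s += x[i] * y[i];       /* the compiler may vectorize this for us */
            return s;
        }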
