Intel claims first Alder Lake chip is the fastest desktop gaming silicon in the world

Intel has had to significantly change its approach to chip design with Apple, Arm and AMD slowly undoing its decades-long dominance. The new 12th-generation Core processors, code-named Alder Lake and introduced today, are a further step in that direction. The first chip off the bat is gaming desktop silicon, including the …

  1. Anonymous Coward
    Anonymous Coward

    Ground to make up...

    They definitely have some ground to make up against AMD and especially Apple. Was pricing some laptops recently, and for equivalent price, the AMD versions seem to have twice the performance on CPUMark and GeekBench scores. I do think we'll start seeing more and more architectures with RAM on-die with the chip per Apple.

    1. katrinab Silver badge

      Re: Ground to make up...

      In maybe 3 or 4 years' time, we will likely start seeing chips from Qualcomm with similar performance to what Apple is offering today.

      1. Sorry that handle is already taken. Silver badge

        Re: Ground to make up...

        I hope so but I suspect a significant part of Apple's consistent ARM performance advantage is its software optimisation, which is simpler in a closed ecosystem.

        1. Anonymous Coward
          Anonymous Coward

          Re: Ground to make up...

          Except all of the software optimization is in LLVM, which isn’t a closed ecosystem at all.

    2. Anonymous Coward
      Anonymous Coward

      Re: Ground to make up...

      >>> do think we'll start seeing more and more architectures with RAM on-die with the chip per Apple.

      There is no RAM on die. (Not the RAM you are thinking of.)

      1. tip pc Silver badge

        Re: Ground to make up...

        You know what they meant though.

        1. Anonymous Coward
          Anonymous Coward

          Re: Ground to make up...

          I know it's 100% wrong. If I went to a customer and offered them LPDDR4 SDRAM on an ASIC die, they'd die of laughter.

          1. Anonymous Coward
            Anonymous Coward

            Re: Ground to make up...

            Quoting El Reg:

            “The SoC has access to 16GB of unified memory. This uses 4266 MT/s LPDDR4X SDRAM (synchronous DRAM) and is mounted with the SoC using a system-in-package (SiP) design.”

            1. Anonymous Coward
              Anonymous Coward

              Re: Ground to make up...

              SiP does not mean on die.

              The quote literally says, "mounted with" not "mounted on". And not "on die".

              1. Anonymous Coward
                Anonymous Coward

                Re: Ground to make up...

                OP here. I did mean within the SoC package as a whole; y'all seem to have gotten what I meant though. Pretty sure the average person doesn't know where the line is between "on die" and "within the SiP package"... and the difference is utterly irrelevant to the original statement I made.

                Perhaps I should have just said "bringing CPU and memory closer together" to avoid the mildly-amusing pedantic debate? :)

                Either way, pretty cool to see all the myriad ways designers are still finding to improve performance! Neat stuff to come for years.

                1. Anonymous Coward
                  Anonymous Coward

                  Re: Ground to make up...

                  Why bother with being accurate when talking technological whatsits, eh?

                  (Bet people would be more concerned over less than accurate comments about sports, music or celebrities. [sigh] )

  2. MrTuK

    Alder Lake certainly is a capable range of CPUs, and what makes the 12900K very interesting is that although it is supposed to be 125 W (what a joke), it seems it will be at least 225 W and up to 330 W if overclocked. So it's using 50% more power to get how much % faster than a 5950X? Intel has always been good at throwing high power at their CPUs to get more performance; I'm not sure whether this one can be air-cooled, whereas the 5950X can be!

    No, I am not an AMD fanboy, but I do remember Intel demoing a certain CPU without declaring that they were using an industrial chiller to cool it, just to win the performance crown, and I do remember Intel paying rebates to Dell so they wouldn't sell AMD64-based systems. I just don't trust Intel at the moment because of that history, which I haven't forgotten.

    I doubt the AMD V-Cache CPUs will beat a 12900K unless both are air-cooled, but I am sure some Intel fanboys would say it's unfair to force air cooling on both CPUs. Personally I hate water cooling, as at some point it will leak and then you are up a creek without a paddle, so to speak!

    1. Pascal Monett Silver badge

      Re: personally I hate water cooling as at some point it will leak

      I've been using watercooling for years. If done right, it does not leak.

      I would also point out that there is a high likelihood that your motherboard has its SATA IO chips watercooled.

      As for me, I have abandoned the idea of air cooling completely. Watercooling is more efficient, and quieter, even with top-of-the-range CPUs and GPUs.

      1. Snake Silver badge

        Re: water cooling will leak

        "I've been using watercooling for years. If done right, it does not leak."

        It must, sooner or later: a "when", not an "if". However, the "when" may not come within the usage lifetime of the system in question, before it is either decommissioned or torn down for upgrading.

        Components age and a closed-loop system is essentially an entropy trap waiting to be triggered.

        1. Gene Cash Silver badge

          Re: water cooling will leak

          I have had my system leak, and I discovered the hard way that Nvidia cards are not water resistant.

          However, since my machine's been in use for 4 years, it's not leaked again. I did have to replace a pump, but as that's outside the case, it wasn't too difficult to flip the machine on its back.

          It's worth it for the quiet. I can't stand fans and I've never found one that didn't whine/hum/vibrate/rattle/whirr after a couple of months.

        2. seven of five Silver badge

          Re: water cooling will leak

          Twelve years and counting. Not even an off-the-shelf set; back when I built it, you still had to swap the top of an aquarium pump and then add the tubing to that.

          Granted, I changed the water a few years ago.

    2. Wade Burchette

      Intel and AMD define TDP differently. Intel defines TDP as the minimum cooling needed, whereas AMD defines TDP as the typical amount of power drawn. This means you cannot compare the TDP of Intel and AMD processors directly. The result is that AMD processors use only about 50% more power than the stated TDP, whereas Intel processors can use over 200% more.

      To Intel's credit, they told us up-front the TDP of the boost speed of Alder Lake, which I think was around 240 watts.
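
      To put rough numbers on that, taking Intel's published PL1/PL2 figures for the 12900K and AMD's usual PPT = 1.35 × TDP convention:

      Intel 12900K: PL1 = 125 W, PL2 = 241 W, so the boost ceiling is 241 / 125 ≈ 1.93× the advertised TDP
      AMD 5950X: TDP = 105 W, PPT = 1.35 × 105 W ≈ 142 W, i.e. 1.35× the advertised figure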

    3. big_D Silver badge

      It is like an American gas guzzler at the beginning of the 70s, released just in time for the fuel crisis.

      Electricity prices have gone up over 30% here in recent months. I think prices jumped from ~24-28c/kWh to 34-36c/kWh. I'm seriously looking at replacing my Ryzen 1700 desktop with a new Mac mini, as I really don't need the full power of the Ryzen desktop any more (I bought it to teach myself Hyper-V) and the Mac mini would be fine for my general use and some photo editing - under load, it will use over 100W less (ignoring the graphics card's draw!).
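
      As a rough illustration of what that 100 W is worth (the 8 hours a day under load is an assumption purely for the sake of the sum, and those are taken to be euro cents):

      100 W × 8 h/day = 0.8 kWh/day
      0.8 kWh/day × €0.35/kWh ≈ €0.28/day, or roughly €100 a year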

  3. katrinab Silver badge
    Meh

    I've just taken delivery of a Threadripper Pro 3945WX machine. It is a *lot* faster than the Ivy Bridge 3770 machine it is replacing, somewhere in the order of 10 times faster, but for the majority of the tasks I do on it, I don't actually notice any difference. One task I do used to take about a minute and now takes about 5 seconds; I definitely notice the difference there. But most things used to take maybe 100ms, and while they may now take maybe 10ms, they were fast enough before. They felt instant before, and still feel instant.

    My main reason for upgrading was not actually speed, but being limited to a maximum of 32GB of RAM on the old machine.

    One thing I do definitely notice though is that the new machine is a lot quieter.

    1. Anonymous Coward
      Anonymous Coward

      You always need more power.

      Because software keeps getting slower.

      Kids today with their garbage collecting languages and interpreters and the like. Back in my day, we used C/C++. We wrote custom new/delete operators to get more speed. We optimised for memory use and speed. Now I have 64GB of RAM and I still need to reboot every 10 days because it's run out. And get off my lawn.

      1. Naselus

        C++? Luxury! Back in my day we'd wake up in t'shoe box, walk uphill in 32 feet of snow t'code factory, and then chisel Fortran operations into clay tablets in cuneiform for running on a rotating drum feed....

        Oh, are we not doing the Four Yorkshiremen bit?

      2. Anonymous Coward
        Anonymous Coward

        Surely they used assembly?!?

  4. Anonymous Coward
    Anonymous Coward

    For every improvement Intel makes...

    ...Microsoft is ready to soak it up with its latest Windows goodness.

    Windows is as productive now as it was on a 486.

  5. elsergiovolador Silver badge

    Point

    What is the point of the so-called "efficiency" cores, apart from looking good on brochures and misleading customers into thinking it has a somewhat similar architecture to the M1?

    Intel seems to be completely lost.

    1. doublelayer Silver badge

      Re: Point

      The point is to add performance for concurrent tasks. Consider AMD's chips: they're showing much higher benchmark scores than comparable Intel ones. Why is that? Individual AMD cores don't run a lot faster than Intel's, although in some cases there is a difference; the major difference is that AMD's chips have a lot more cores available than the comparable models. AMD laptop processors can have 8 cores/16 threads, whereas even the highest-end laptops using Intel usually have 6/12 or 4/8. The same is true with desktop chips.

      Having eight fast cores is expensive, and if the user doesn't run compute-intensive things all the time, they may go unused. Intel's thinking in this case is that many users will benefit from extra cores, but mostly so the compute-intensive stuff they do run has less competition. Instead of having to provide a lot of fast cores, they could provide some fast cores as they have done and add some slower ones to take background tasks. That would give people a similar level of hardware parallelism while keeping the manufacturing price lower. Whether that convinces manufacturers remains to be seen, but it has been demonstrated as useful in mobile devices and by Apple, so it's not so unusual an idea. If it works, AMD will likely do something similar.

    2. cb7

      Re: Point

      "What is the point of the so called "efficiency" cores"

      It makes the chip more energy efficient. Probably not a big deal for most desktop users but a massive deal for laptops.

      If your laptop can suddenly go 21 hours on a single charge compared to 9 hours before, that's a major selling point.

      1. elsergiovolador Silver badge

        Re: Point

        But are they? If they take more time to complete a task, the screen and other peripherals have to be powered as well during that longer time. It may be that "performance" cores actually take less energy, by the fact that they finish quicker. They could just make sure that "performance" cores don't consume as much power when idle.
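
        To make that race-to-idle argument concrete, a worked example (the wattages are invented purely for illustration): suppose a "performance" core draws 5 W and finishes a task in 1 s, while an "efficiency" core draws 1.5 W but needs 4 s.

        P-core: E = 5 W × 1 s = 5 J, then sleep
        E-core: E = 1.5 W × 4 s = 6 J

        Add a 3 W screen that has to stay on until the task completes and the gap widens further: 3 J of display energy versus 12 J.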

        Also you are referring to a laptop processor. There is no need for that in a desktop machine.

      2. confused and dazed

        Re: Point

        Don't want to get all Greenpeace on you, but pouring power down the drain isn't good for anyone, desktop or laptop

    3. David Webb

      Re: Point

      To me, it seems quite logical. Imagine you're doing a large render of something in Blender. At the moment on my rig, I'm using 1-2% CPU just having a browser open and typing this. With P and E cores, all the Blender stuff can be offloaded to the P cores, all the background stuff (OS, AV etc.) can be stuck onto the E cores, and this frees up the P cores to use 100% of their CPU instead of the 90+% that is left over after background tasks.

      I'm not sure how rendering works, it's just an example, but it could be that the renderer itself has tasks which do not require a full P core; these could be offloaded to an E core whilst the P core continues with the high-load stuff. Of course, this wouldn't be as powerful as having ALL the cores as P cores, but how often do you use your CPU at full whack anyhow? The E cores should be more than capable of handling all general daily usage, using the P cores only for bursts, which should allow a reduction in electricity usage - and as my gas and electric bill has just doubled, every little helps. (A sketch of pinning work to particular cores follows below.)
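
      For anyone who wants to experiment with the idea, here's a minimal sketch on Linux. The E-core CPU numbers are an assumption (a hypothetical 8P+8E part where the hyperthreaded P-cores occupy logical CPUs 0-15 and the E-cores 16-23); check your own box's topology with lscpu first.

      /* pin_to_ecores.c - a sketch: keep this process off the P-cores */
      #define _GNU_SOURCE
      #include <sched.h>
      #include <stdio.h>

      int main(void)
      {
          cpu_set_t mask;
          CPU_ZERO(&mask);

          /* Assumed E-core IDs on a hypothetical 8P+8E chip; real
             numbering varies, so verify before using. */
          for (int cpu = 16; cpu <= 23; cpu++)
              CPU_SET(cpu, &mask);

          /* pid 0 = the calling process; from here on the scheduler
             keeps it (and any threads it spawns) on the E-cores. */
          if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
              perror("sched_setaffinity");
              return 1;
          }
          puts("Confined to E-cores; P-cores left free for the render.");
          return 0;
      }

      From a shell, taskset -c 16-23 <command> achieves the same without writing any code.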

      1. eldakka

        Re: Point

        >>> With P and E cores, all the Blender stuff can be offloaded to the P cores, all the background stuff (OS, AV etc.) can be stuck onto the E cores, and this frees up the P cores to use 100% of their CPU instead of the 90+% that is left over after background tasks

        Or you could just have more P-cores, so that your background tasks are using 1 P-core while the rendering is using 15 P-cores, for example. That'd give better rendering performance than having 8 P-cores for rendering and 8 E-cores for the background tasks.

        No, I don't think that's the advantage. The advantage is that when you aren't using the computer for high-CPU tasks, you can have all or most of the inefficient (but fast) P-cores shut down/idle, while just using the efficient (slow but fast enough) E-cores. So if you are just sitting there reading El Reg and not much else, you can be doing all of that on the E-cores and let the P-cores sleep. Very few modern desktop tasks require a lot of CPU outside short, infrequent bursts - browsing, word processing, emailing, social-media'ing, etc.

        Tasks that do require CPU horsepower tend to be things like games, rendering and other media-'modification' tasks (e.g. editing, trans/encoding, etc.), and other 'specialist' tasks that the average person wouldn't be doing - which, across all desktops sold, is a tiny percentage of the entire population's computer hours. Sure, I might use a lot of CPU for long periods of time playing games or transcoding videos, but my parents don't, my sister and brother don't, most of my neighbours don't, and my 2 work desktop computers don't.

        1. elsergiovolador Silver badge

          Re: Point

          Tasks that don't require power will, for example, block I/O for longer, dragging everything down.

          They could focus on making sure the CPU does not use power when idle. Background tasks could run in short bursts, leaving I/O and other resources available for things that require them.

          I still think it's a gimmick. They probably waste much less silicon on "efficiency" cores, so they are much cheaper to make, and they can claim the machine has 16 cores instead of 8; that is the reason they are pushing it. You could probably replace all those 8 "efficiency" cores with one or two normal ones.

      2. katrinab Silver badge

        Re: Point

        I’m guessing with Blender there will be the threads that do the actual rendering which would go on the performance cores, plus a thread to schedule all the rendering tasks and keep track of what needs to be done, which would go on an efficiency core.

    4. ArrZarr Silver badge
      Holmes

      Re: Point

      Honestly? I think it's at least partially about the physical dimensions of the chip. The extra 8 efficiency cores take up considerably less space on the die than the performance cores. The new chips are the same width but longer; for Intel to provide 16 performance cores, the chip would be considerably larger.

      For ATX motherboards that isn't an issue, and probably not for mATX either, but if you're trying to build an ITX system, Threadripper-sized chips start becoming decidedly inconvenient.

    5. David Hicklin Silver badge

      Re: Point

      But does this not also need the software to be aware of the core types, so it can run its jobs accordingly?

      1. doublelayer Silver badge

        Re: Point

        It only requires the OS scheduler to be aware of the differences. Some software might implement it itself, but if it doesn't, the OS will handle process prioritization. Not all OSes have had to do this before, but if it's not good enough now, they will probably improve it.
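
        As an illustration of the kind of hint software can give: on Windows 11 (and, I believe, Windows 10 21H2 or later) a process can mark itself as efficiency-oriented through the power-throttling API, and on hybrid parts that steers it towards the E-cores. A minimal sketch, not production code:

        /* eco_hint.c - a sketch: ask Windows to schedule this process
           for efficiency ("EcoQoS") rather than speed */
        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            PROCESS_POWER_THROTTLING_STATE state = {0};
            state.Version = PROCESS_POWER_THROTTLING_CURRENT_VERSION;
            state.ControlMask = PROCESS_POWER_THROTTLING_EXECUTION_SPEED;
            state.StateMask = PROCESS_POWER_THROTTLING_EXECUTION_SPEED; /* enable */

            if (!SetProcessInformation(GetCurrentProcess(),
                                       ProcessPowerThrottling,
                                       &state, sizeof(state))) {
                fprintf(stderr, "SetProcessInformation failed: %lu\n",
                        GetLastError());
                return 1;
            }
            puts("Marked as efficiency-class; the scheduler will prefer E-cores.");
            return 0;
        }

        Setting ControlMask but leaving StateMask at zero does the opposite and explicitly opts the process out of throttling.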

  6. tip pc Silver badge

    They need a cluster of anti virus cores

    “while smaller cores take on lower-priority tasks like virus scans”

    A cluster of cores for AV and a cluster for Windows, then the rest for your actual workload, would likely be most efficient.

    1. elsergiovolador Silver badge

      Re: They need a cluster of anti virus cores

      That seems like a false economy though. You don't want AV scanning to block I/O for longer on slow cores.

  7. Anonymous Coward
    Anonymous Coward

    As far as I'm concerned, Intel has been playing catch-up to AMD for quite some time now, and AMD hasn't released their latest cycle of offerings yet, either.

    Besides, "first generation" chips are to be avoided in my experience. Wait until at least the third or fourth stepping/version for the bugs to shake out, because no matter how much simulation you do, the real world always seems to bite hardware inn the keester. :)

  8. Anonymous Coward
    Anonymous Coward

    Everybody seems to be so worried about appeasing Windows.

    Wintendo is my host OS because I want to run my games. Full stop.

    ALL of my work is done in Ubuntu VMs. ALL of it.

    It has been that way for years. When you don't use Windows for MOST of your compute intensive tasks, you'll find your CPU "bandwidth" goes a LOT further than it does with anything from Redmond.

    1. Anonymous Coward
      Anonymous Coward

      Proton and the upcoming release of the Steam Deck are sure to put a further dent in the "Windows is best for gaming" argument.

      I'll soon be looking for a new desktop, and I'm seeing less and less of an argument for continuing to use Windows.

  9. Al fazed
    Flame

    Blah Blah Blah

    Did they say anything about the new back doors for the CIA etc.?

    ALF

  10. knarf

    Register do your homework

    Did you look at the testing they did against the AMD chip? ... No ...

    They tested gaming on Windows 11, which I would bet my shirt was done before AMD released their new driver to fix the Windows 11 performance issues.

    So a very large pinch ... bucket of salt is required here.

    1. Andy Denton

      Re: Register do your homework

      I was just about to post the very same thing. Intel are especially disingenuous when it comes to benchmarks. I suspect they'll have caught up with AMD to an extent but they won't have the performance lead they're trumpeting.

  11. Sorry that handle is already taken. Silver badge

    So that's what they mean by fastest

    "The chip can run at a frequency of 5.2GHz"

    I guess we wait to see how the real-world performance compares.

    Most interesting Intel desktop announcement for a long time I think.

  12. vmistery

    Whilst I'm pleased Intel is making an effort to get back in the game - and they are certainly going to make a leap here, with both a node change (on desktop) and a new design - I'd hold out for the real-life power consumption and sustained performance reviews before I buy. I suspect that, yes, it will be faster even than the upcoming AMD chips with additional cache, but potentially not without serious cooling and a lot of extra juice. That would be fine for my gaming desktop, but not so great for a laptop or mini system.
