Intel's stock Raptor Lake chip will do 6GHz and overclock another 25%, if it keeps cool

Intel says its 13th-Gen Raptor Lake CPUs will do 6GHz at stock settings and top 8GHz when overclocked, according to slides shared during the company’s Tech Tour in Israel this week. This would give Intel a 300MHz clock-frequency lead over AMD’s Ryzen 7000-series CPUs, announced late last month, which top out at 5.7GHz. …

  1. Eric Olson

    Honest question

    Anyone else getting a lot of P4EE and PR-rating flashbacks suddenly?

    1. Piro Silver badge

      Re: Honest question

      PresHOT baby, let's go long pipeline crazy, forget the power budget, and crank those clocks!

      What's that, it won't stay cool? Invent a new standard for cases that have an inlet directly over the CPU! (Thermally Advantaged Chassis)

      1. Mongrel

        Re: Honest question

        Well, fans in the side of the case were quite common before clear side panels became a thing.

        I miss them

        1. Jedit Silver badge
          Thumb Up

          Re: Honest question

          Same, to be honest. My favourite case ever is the XClio Windtunnel, a full tower case with two 250mm fans on the side. Lovely piece of kit.

        2. Oglethorpe

          Re: Honest question

          They're great for getting air directly to the CPU but they cook everything on the motherboard or make the CPU inlet fan do a lot of work. I'm a fan (pardon the expression) of lots of slow fans.

          1. NoneSuch Silver badge

            Re: Honest question

            My new Threadripper rig has seven BeQuiet! Silent Wings Pro 4 fans. You cannot hear it, even under load.

            As for Intel, they are finally competing with what AMD put out 9 months ago, and still losing.

      2. Anonymous Coward
        Anonymous Coward

        Re: Power baby Power!

        I guess every one of these will come complete with its own power station?

        In an age when we are supposed to be conserving power, these behemoths are, IMHO, a relic from the past.

  2. manalive

    Yawn... Apple Silicon is the future. ARM is the future.

    x86/64 is dead, they just don't know it yet.

    1. Snake Silver badge

      RE: the "future"

      Yawn, yet another tech-head who thinks that architecture matters (ARM, Linux) on the desktop, rather than user application compatibility (read: user preferences and experience).

      Because 30+ years of proof of this concept from lack of significant desktop sales penetration (OS/2, again Linux, et al.) apparently... doesn't really prove anything at all.

      1. Anonymous Coward
        Anonymous Coward

        Re: RE: the "future"

        It proves that marketing budgets are more important than the tech if you want sales.

      2. Ken Hagan Gold badge

        Re: RE: the "future"

        I assumed the OP was being sarcastic. (The claim is clearly bollocks.) Someone deserves a "whoosh" here. Perhaps it is me.

    2. Kevin McMurtrie Silver badge

      Using the right tools

      Apple makes scalability compromises to accomplish their chips' performance. Apple Silicon will never be suitable for extremely large workloads. It's not what the architecture is trying to accomplish and it's not a market that Apple has the slightest interest in.

      The x86 chips also have compromises to accomplish their performance. They work best when they're bulky and running hot.

      ARM servers already exist and have good uses, but they're not yet replacing what x86 is good at.

      1. DS999 Silver badge

        Re: Using the right tools

        What scalability compromises? That's just a typical ill informed opinion repeated on PC message board echo chambers with no basis in reality. Please point to the flaws in Apple's designs that compromise its suitability for such workloads.

        The only reason Apple Silicon isn't posting huge numbers in "extremely large workloads" is because Apple hasn't and isn't going to design a CPU with 128 threads like the high end Intel and AMD stuff. There's nothing special about x86, or Intel/AMD's designs, that make it any more or less suitable for such workloads than ARM, or Apple's designs.

        If there is anything that would limit the performance of a hypothetical 128 core Mac it would actually be macOS, which (probably) hasn't had kernel work done to address pain points that cause performance issues which only start showing up with dozens to hundreds of cores, like Linux and to a lesser extent Windows have - because Linux and Windows servers with more cores than the upcoming Mac Pro have existed for well over a decade.

        It was only a few years ago people like you were claiming that Apple's chips were only suitable for phones, that they couldn't handle a "desktop workload" and that if Apple dropped x86 as rumored Macs would be permanently behind PCs performance-wise. That bar has been moved a couple of times since, because it was based on nothing other than wishful thinking from those who think x86 is somehow the ultimate expression of computing power.

        1. Richard 12 Silver badge

          Re: Using the right tools

          The fundamental design of the Apple Arm is that it's a monolithic system-on-chip.

          That's the scalability compromise. It's all in a single package - CPU, GPU and RAM.

          That means two and a half things:

          The TDP of the entire system is limited to that which can be dissipated within a single package. So it cannot ever be as fast as a system where these components are physically separated, because it cannot dissipate the heat.

          It cannot ever be upgraded. The RAM and GPU are fixed at SoC manufacture, and thus the only options possible are the ones the chip manufacturer chooses to supply. If your workload requires more RAM or a better GPU, tough. Can't buy one. (They might be able to reintroduce external GPU over USB-4, but never RAM.)

          No 32bit software support. At all.

          (The first two of these are specific choices by Apple. You could make a SoC with x86-64 cores or a discrete system with Arm cores)

          None of these really matter for a cheap (to build) commodity consumer grade laptop, but they do elsewhere.

          1. DS999 Silver badge

            Re: Using the right tools

            The fixed configurations scale just fine within those limits. Your complaint seems to be that Apple doesn't offer the endless variety of CPU SKUs that Intel and AMD do, but instead just offers a few fixed configurations.

            If your workload requires more CPU than you can buy from Intel or AMD, or more GPU than you can buy from Nvidia, what then? Everyone runs into a limit at some point, so your problem is that Apple has chosen lower limits?

            Apple is not designing for the balls-out, cook-an-egg-on-your-PC-case market. You're right that by lumping CPU and GPU on the same MCM they are limited, but they have chosen to limit themselves well beyond that, so that design decision has nothing to do with the products they are offering. The Mac Studio's M1 Ultra doesn't draw even 100 watts, while Intel and AMD are releasing CPUs able to draw 200 to over 400 watts in the case of Intel, and Nvidia is selling GPUs that draw 600, and those numbers are BEFORE any overclocking is added! As a result, power supplies that offer 1000 or even 1500 watts are the fastest growing segment of the DIY market! So yeah, the x86 PC world has a higher high end, but you have to pay for it with a system that sounds like a jet taking off from an aircraft carrier.

            If you define "scalable" as "I can buy systems that..." then yeah Apple will always be behind, but that's got nothing to do with their chip design. That's just what they have chosen to target. Using the same logic you could claim Apple can't scale downwards, because they don't sell any $300 Macbooks while you can find any number of $300 Intel and AMD laptops.

            1. low_resolution_foxxes

              Re: Using the right tools

              It's not clear if you are an Apple marketing droid, or a troll incorporating a most excellent impersonation.

              Apple make shiny nice things for people who don't want/can't understand customisation. They are OK for what they are, and by some metrics and applications they are interesting.

              At the top end of performance, you will need customization to get the best results; and so far the heat dissipation alone is going to seriously dent the Apple chip performance.

              Now if Apple release the new "iLiQuiD" nanocooling-crystals upgrade for the inevitable £329,000 pricetag (+£1999 to purchase the revolutionary set of iWheels to allow iMovement), I'm sure it would be wonderful.

            2. Gob Smacked

              Re: Using the right tools

              I'm no apple fanboy at all - more a hater...

              But as a techie, I can only admit they have made big strides in performance on ARM. And using that experience, it wouldn't be much of a problem to scale down to CPU-only designs and start expanding in the ARM performance market.

              I'd really hate to see that, but it could very well be coming...

            3. Richard 12 Silver badge

              Re: Using the right tools

              Wow, you're really engaging in full-on doublethink and redefining terms in the middle of sentences.

              Please learn the meanings of terms like "scalable" before you make a further fool of yourself.

              For example, upgrading a Mac Mini M1 to have 32GB of RAM is totally impossible. Compare that difficulty with any Intel or AMD server, desktop (or most laptops).

    3. aerogems Silver badge

      While Apple's M-series of chips are impressive, they do take a number of shortcuts, like having the RAM on-package and the GPU on-die, to get those impressive results. That's not likely going to fly for a lot of use cases.

      Personally, I'm keeping an eye on RISC-V. Virtually all the benefits of ARM, without the licensing fees and more freedom of design. As long as the instruction set doesn't fragment as a result of the openness, I expect it will be starting to eat into ARM's market by the close of the decade.

      1. cornetman Silver badge

        I expect to see increasingly large-scale RISC-V in phones pretty soon. Having a license-free core has to help the bottom line and the customisability on such a platform is probably a distinct advantage. Expect to see this in China first perhaps and then growing from there.

        ARM are currently king, but I think there is limited life there.

  3. Adair Silver badge

    On fire

    The world is burning and Intel trumpets increased clock speed and commensurate power demands. Oh dear, but then that's humanity for you - some of us really don't give a shit.

    Everything's fine until the ship has gone to the bottom and all the lifeboats have been found to be rotten.

    1. Anonymous Coward

      Re: On fire - not me

      So what are you running on your computer kit?

      I'm running an Intel Core m3 8100y with a TDP of 5 watts. It is slow, but so am I.

      If you're willing to put up with slow desktops, laptops, and servers, the solutions are out there.

      1. Adair Silver badge

        Re: On fire - not me

        A ten year old i5 T type that peaks at ~20W, but still does everything I need, alongside a Dell Wyse 'server' that is happy being fanless - prob about the same as your 5W.

        But what's that got to do with Intel's latest bit of wank?

    2. Oglethorpe

      Re: On fire

      I would say that FLOPS/watt matters more, especially when dynamic clock speeds and the ability to independently idle cores are involved. Consider that a slow processor taking more time to do a task means running all the other components in the computer, as well as the peripherals, for longer.
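      The "race to idle" point is easy to sketch with a back-of-envelope Python calculation; every wattage below is an invented round number, not a measured figure, but it shows how a faster chip can win on total energy once the fixed draw of the rest of the machine is counted.

```python
# Illustrative energy comparison: a faster chip at higher power can still
# use less total energy, because finishing sooner also stops the fixed
# system draw (screen, RAM, disks). All wattages are made-up round numbers.

def task_energy_wh(cpu_watts: float, system_watts: float, seconds: float) -> float:
    """Total energy in watt-hours to finish one task."""
    return (cpu_watts + system_watts) * seconds / 3600

# Slow CPU: 5 W for 60 s; fast CPU: 25 W for 10 s; rest of the box: 30 W.
slow = task_energy_wh(cpu_watts=5, system_watts=30, seconds=60)
fast = task_energy_wh(cpu_watts=25, system_watts=30, seconds=10)

print(f"slow: {slow:.3f} Wh, fast: {fast:.3f} Wh")
```

      With these invented numbers, the faster chip uses roughly a quarter of the energy per task despite drawing five times the CPU power.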

    3. Anonymous Coward
      Anonymous Coward

      Re: On fire

      I had a monster tower case setup at one point: 1,000W PSU, dual 8GB graphics cards and 64GB memory on a multi-core i7 CPU. You know what I got fed up with? Hoovering out the case every week, and the noise, despite using liquid cooling for most of the system. So I gave it to my Dad and bought a gaming laptop with 64GB, an 8GB gfx card and an AMD Ryzen, plus an external JBOD. Now my consumption has dropped to minuscule amounts most of the time. I can still crank it up when I'm gaming, but the polar bears can sleep easier at night and the little disc in my leccy meter is no longer a blur!

      1. Oglethorpe

        Re: On fire

        I thought (and please correct me) that, unless you have something exotic like a thermosiphon, water cooling will always be noisier than air, given that the same amount of air has to be used to dump the same heat. My case has 6 fans set up to maintain a positive pressure with magnetic filters on the intakes. I just vacuum the filters when I'm cleaning the office, and I've only seen negligible dust on large air coolers that have been in daily use for years.

        Something else to be aware of with PSUs is the efficiency curve. A 1kW PSU running at a couple of hundred watts will generally be less efficient than a lower-powered one with just enough grunt for the system at full tilt.
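        The efficiency-curve point can be illustrated with a toy model; the parabola and its coefficients below are invented for the sketch (real curves come from a unit's 80 PLUS test data), but the shape, peaking near half load, is typical.

```python
# Toy model of a PSU efficiency curve: peak near ~50% load, falling off
# towards the extremes. The coefficients are invented for illustration.

def efficiency(load_fraction: float) -> float:
    # Crude parabola peaking at 92% efficiency around half load.
    return 0.92 - 0.5 * (load_fraction - 0.5) ** 2

def wall_draw(dc_watts: float, psu_rating: float) -> float:
    """AC power drawn from the wall for a given DC load."""
    load = dc_watts / psu_rating
    return dc_watts / efficiency(load)

# A 200 W system on a 1 kW PSU (20% load) vs on a 400 W PSU (50% load).
print(f"1 kW PSU: {wall_draw(200, 1000):.1f} W from the wall")
print(f"400 W PSU: {wall_draw(200, 400):.1f} W from the wall")
```

        With this toy curve, the oversized 1kW unit pulls roughly 11W more from the wall for the same 200W load.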

        1. Nick Ryan Silver badge

          Re: On fire

          Like many things, this depends.

          A water cooling system tends to have a much higher thermal mass than a cooler/fan system and therefore it can hold much more heat which allows it to cover spikes in heat generation better.

          If a water cooling system has a large radiator (large surface area) and large, slower fans blowing air through it then this can be quite quiet. However, should the amount of heat getting stored in the water get too much then the fans need to be run faster, and therefore noisier.

          When a water cooling system doesn't have a large enough radiator then the heat dissipation mechanism needs to be more active, which tends to mean louder.

          We're not quite at the stage where it's worth seriously considering integrating heating systems into a property but with the way the CPU and GPU manufacturers are going it won't be too long. 800W of heat dumped into underfloor heating from a PC will provide quite a thermal mass to warm and the heat may as well go somewhere useful. Not so good during warm periods though...

        2. Tom 38

          Re: On fire

          One of the key benefits of water cooling is that you can move the heat to where it can be more efficiently dispersed. The typical air cooled case has 3 fans at the back (typically one case exhaust fan, one PSU exhaust fan and one GPU exhaust fan) and relies on negative pressure to pull cool air from the front, over the disks (somewhat less of an issue now with SSDs) and to the CPU heatsink and fan. This then blasts the hot air from the heatsink all around the interior of the case. With this kind of setup, you can't cool the CPU/GPU any cooler than the case temperature, and this depends on how efficiently you can dump all that heat outside the case and draw fresher air in.

          Oglethorpe mentioned having 6 fans and filters on the intake - intake fans in general add very little in terms of cooling; it's better to have more exhaust fans, which will draw the air in. 3 exhaust fans and 3 intake fans is actually going to run pretty much the same as 3 exhaust fans, but with twice the noise.

          Your next problem with air cooling is that of fan size. To cool the case, you need to exhaust a large volume of air. The larger the fan's blades, the more air it can move. The faster it spins, the more air it can move, and the louder it gets. So you have a trade-off between fan size, fan speed and noise. With air cooling, you're constrained by fan size because of the dimensions of the case, graphics card slot size, etc.

          With a liquid cooling solution, you can move the heat immediately to the edge of the case. You can then get rid of that heat out of the case using very large, quiet fans, and because there is nothing venting heat within the case, your baseline for cooling is the room temperature rather than the case temperature.

          So, no, you have the same amount of heat to move, but you require less volume of air to move that heat, as the air entering the radiator is cooler. Plus, you can typically use larger more efficient fans that can produce a higher airflow per decibel than the case fans, and you eliminate the CPU fan, which doesn't exhaust heat at all in an air cooled case.
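          That last point can be put into numbers with the standard heat-removal relation Q = rho * V * cp * dT; the wattage and temperatures below are illustrative, but the physics is exactly why a radiator fed with room air needs less airflow than a heatsink fed with warm case air.

```python
# Back-of-envelope airflow needed to remove heat: Q = rho * V * cp * dT,
# so volume flow V = Q / (rho * cp * dT). Cooler intake air gives a larger
# usable dT up to the same exhaust temperature, so less air must be moved.

RHO_AIR = 1.2    # kg/m^3, air at roomish temperature
CP_AIR = 1005.0  # J/(kg*K), specific heat capacity of air

def airflow_m3s(heat_watts: float, delta_t: float) -> float:
    """Volume of air per second needed to carry heat_watts away
    while warming the air by delta_t kelvin."""
    return heat_watts / (RHO_AIR * CP_AIR * delta_t)

# 300 W of heat, exhaust air capped at 45 C: a radiator fed 22 C room air
# gets dT = 23; a heatsink fed 35 C case air only gets dT = 10.
room_fed = airflow_m3s(300, 45 - 22)
case_fed = airflow_m3s(300, 45 - 35)
print(f"room-fed: {room_fed * 1000:.1f} L/s, case-fed: {case_fed * 1000:.1f} L/s")
```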

    4. aerogems Silver badge

      Re: On fire

      Well, when you need to heat ships against the cold of the void because we all live on giant space ships, you'll be happy to have all that waste heat from CPUs!

      1. Anonymous Coward
        Anonymous Coward

        Re: On fire

        Heat buildup is a problem when in a vacuum, because it can only be radiated away from the craft :)

        1. Roland6 Silver badge

          Re: On fire

          Heat build-up is also a problem when dealing with extremes of temperature. It is quite a challenge to keep electronics happy in Antarctica: protecting it from the extreme cold whilst also not letting it overheat.

          Obviously, the extremes of space present further problems - see the design of the James Webb telescope.

  4. cornetman Silver badge

    We have been here before. Intel is most likely trying to steal the thunder of AMD's parts, which claim over 5GHz boost with standard cooling and power draw.

    Expect Intel to have extreme cooling on these parts for anything over 5.8GHz.

    I call complete BS on 7 or 8GHz. Not a chance. We have all been here before.

    1. katrinab Silver badge

      I think the 8GHz will be a very selectively binned 13900K part with liquid nitrogen cooling and extreme over-clocking by the very best engineers Intel has to offer.

  5. anonymous boring coward Silver badge

    "Ladies and gentlemen, start your engi.. fans"

  6. AMBxx Silver badge

    When THz

    When I started in IT, chips were running in MHz. I wonder if I'll live long enough to see THz?

    1. Peter2 Silver badge

      Re: When THz

      I too personally remember using chips measured in single digits of MHz. Whether we'll see THz probably depends on how it's measured; I doubt we'll see single-chip performance hit 1THz.

      However, AMD already offers 64 core processors at a maximum clock of 4.3GHz, which is 4.3 * 64 = 275.2GHz worth of grunt on a single chip. So if you could use that lot as a single core (which obviously you can't) then we'd be more than a quarter of the way there today, and I'd fully expect to live to see either 128 8GHz cores or 256 4GHz cores, which would break the THz barrier if we were to accept that particular counting method.
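      The counting method above is trivial to write down; the 128- and 256-core parts are hypothetical, as the comment says.

```python
# "Aggregate clock" counting: core count times boost clock. It isn't
# single-thread speed, but it shows how a many-core part crosses 1 THz
# (1000 GHz) by that measure.

def aggregate_ghz(cores: int, clock_ghz: float) -> float:
    return cores * clock_ghz

print(aggregate_ghz(64, 4.3))   # today's 64-core part
print(aggregate_ghz(128, 8.0))  # hypothetical future part
print(aggregate_ghz(256, 4.0))  # hypothetical future part
```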

      1. Snowy Silver badge

        Re: When THz

        Maybe when chips go optical?

    2. Steve Davies 3 Silver badge

      Re: When THz

      When I started in IT, chips were running in kHz. I wonder if I'll live long enough to see THz?

      There fixed it for you.

    3. katrinab Silver badge

      Re: When THz

      Probably not. When did we first see a 5GHz CPU? It was the AMD FX-9590 in 2013.

      How much have clock speeds increased by since then?

      Compared to previous 9-10 year periods.

      Unless there is some completely new way of doing chips, we seem to be pretty close to peak GHz.

    4. Ken Hagan Gold badge

      Re: When THz

      The wavelength of 1THz light in vacuo is 0.3 millimetres. On silicon, probably nearer 0.1. Probably achievable, but your CPU die will have to be a cluster of tens of thousands of essentially independent CPUs, each with many times fewer transistors than we have now, because we can't shrink transistors *much* further than we do now. (They are already only a few dozen atoms across.)
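      The figure quoted comes straight from lambda = c / f; the slower on-die propagation velocity below is a rough assumption, not a measured value.

```python
# Wavelength at a given frequency: lambda = velocity / frequency.

C_VACUUM = 3.0e8  # m/s, rounded speed of light

def wavelength_mm(freq_hz: float, velocity: float = C_VACUUM) -> float:
    """Wavelength in millimetres for a given frequency and medium velocity."""
    return velocity / freq_hz * 1000

print(wavelength_mm(1e12))       # ~0.3 mm in vacuum
print(wavelength_mm(1e12, 1e8))  # ~0.1 mm at an assumed slower on-die velocity
```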

  7. LybsterRoy Silver badge

    Can anyone tell me why the majority of PCs will need this sort of speed? The main complaint I hear from people with very old laptops (eg a T4800 (I think) running Vista) is the boot-up speed, not the program speed. Most PCs already spend most of their life wondering what to do with the left-over CPU cycles, so why give them more?

    1. Julian 8

      To run MS teams......

      1. Steve Davies 3 Silver badge

        re: To run MS teams

        To run any MS software, TBH. Once upon a time it was lean and mean, but these days it is as slow as paint drying.

    2. This post has been deleted by its author

      1. hoola Silver badge

        It is a vicious circle. As CPU performance increases, software developers care even less about how resource-hungry the monstrosities they produce are.

        We see this when something is upgraded (now of course it is an online update) and promptly starts running at a slug-like speed. More common is the gradual reduction in responsiveness as each update adds more shite that runs in the background to make things happen faster.

        Why so many of the communication tools are so resource heavy beats me. They just sit there spewing endless pop-ups, bleeps and windows.....

        Ah, answered my own question.

        Teams is the work of the devil and I really fail to understand why it is such a pile of rubbish. It is as though MS gave 10 different groups of developers bits of a brief without any of them seeing the overall picture.

        1. Richard 12 Silver badge

          Teams is based on Sharepoint and Electron, which are both wasteful spawns of Satan himself.

    3. Roland6 Silver badge

      Marketing and bragging rights.

      Intel and AMD are in competition, and with most things IT-related, 'speed' is an important yardstick; whether it is relevant to everyday users...

      As we've discussed elsewhere on ElReg, the majority of homes don't actually need particularly fast broadband (ie. anything over 100Mbps), yet that hasn't stopped the speed-based marketing and competition between ISPs.

    4. Nick Ryan Silver badge

      Why give them more? So poor quality programmers can waste them.

      There's a perpetual argument that most programmers shouldn't invest any time in optimisation because the impact of their optimisation will be too marginal to be noticeable.

      Unfortunately this is a point of view put out by the terminally short sighted, and also likely those running systems that are perpetually top of the range.

      When code is executed thousands or millions of times over, the net effect of optimisation is highly important. When programmers don't care and, for example, just use variants for all of their variables, the CPU overhead to handle all of those variants is tremendous. Multiply this by thousands of operations, and use a less than top-of-the-range system, and the impact is serious.

      However for the likes of Microsoft, software optimisation is known as "buy more and faster hardware".
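      A crude way to see the per-operation cost of variant-style variables is to simulate the runtime type check in Python; the dispatch below is a stand-in for what a real variant type forces on the runtime, and no timings are claimed here since they differ per machine. Run it to see the gap on your own box.

```python
# Toy illustration: a runtime type check on every value (what variant-typed
# variables force) adds cost to each of many thousands of operations.

import timeit

values = list(range(100_000))

def sum_typed(vals):
    total = 0
    for v in vals:
        total += v
    return total

def sum_variant(vals):
    # Simulated variant handling: inspect the runtime type before
    # every single operation.
    total = 0
    for v in vals:
        if isinstance(v, str):
            total += int(v)
        elif isinstance(v, float):
            total += int(v)
        else:
            total += v
    return total

print("typed:  ", timeit.timeit(lambda: sum_typed(values), number=5))
print("variant:", timeit.timeit(lambda: sum_variant(values), number=5))
```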

    5. Anonymous Coward
      Anonymous Coward

      Bluntly? The majority don't need it, as the majority of PCs are basically used as a web browser. There are certainly use-cases where the performance is needed/required, though - code compilation, image editing (raw camera images), video editing (if it's not offloaded to the GPU), scientific simulation, games, etc.

  8. Richard 12 Silver badge

    Will you start the fans, please!

    That's going to need some pretty effective cooling.

    Is it a new socket too?

    Seems like forever since you could upgrade a CPU after 3-5 years without swapping the motherboard as well :(

    1. Solviva

      Re: Will you start the fans, please!

      Richard 12, not O'Brien?

      Better hope you collect significantly more gold than silver tokens if you want to pay the leccy bill after running 8 GHz for a few mins!

  9. Chris 15


    Ladies and gentlemen, start your engines Leafblowers

    (to try and keep these beasts cool)

  10. Anonymous Coward
    Anonymous Coward

    So with the 70% VM performance hit of the latest mitigation bug fix for Intel CPUs, that should equate to what? Not quite 3 GHz?

  11. Binraider Silver badge

    With MS Teams eating 4GB, and Excel running at over 27GB for a trivial spreadsheet, I am sure our friends at MS will find a way to burn those cycles up on frivolous rubbish while not actually improving functionality...

    1. TeeCee Gold badge

      If you've got Excel chewing 27GB, that spreadsheet is not trivial. It may be simple and have a fuckton of imported data to analyse, but that's a different kettle of fish.

      I just fired up a genuinely trivial spreadsheet to sanity check that apparent cobblers. 66MB total Excel memory use. Most of that's overhead too (i.e. the executable and its raft of associated DLLs) as, drilling down, it's allocated a whole 140KB of working memory.

      1. Binraider Silver badge

        This was handling approx 100MB of flat-file CSV data with some simple formulas applied. I have screenshots of it shooting up to 27GB!

  12. Roj Blake Silver badge


    8GHz is an extra 33% on top of 6GHz, not 25%
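    The correction checks out, and the headline's 25% only works in the downward direction:

```python
# Checking the arithmetic: 6 GHz up to 8 GHz, and 8 GHz down to 6 GHz.
increase = (8 - 6) / 6 * 100
decrease = (8 - 6) / 8 * 100
print(f"up: {increase:.1f}%")    # 33.3% more than 6 GHz
print(f"down: {decrease:.1f}%")  # 6 GHz is 25.0% less than 8 GHz
```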

  13. aerogems Silver badge

    I feel old

    I remember seeing a headline on an IT rag about the Pentium 133 and it breathlessly claiming how "you may never see the hourglass again!"

  14. Anonymous Coward
    Anonymous Coward

    Man, ads are getting out of hand if they require a 6GHz proc to run now.
