Honest question
Anyone else getting a lot of P4EE and PR-rating flashbacks suddenly?
Intel says its 13th-Gen Raptor Lake CPUs will do 6GHz at stock settings and top 8GHz when overclocked, according to slides shared during the company’s Tech Tour in Israel this week. This would give Intel a clock-frequency lead — 300MHz — over AMD’s Ryzen 7000-series CPUs, announced late last month, which top out at 5.7GHz. …
Yawn, yet another tech-head who thinks that architecture matters (ARM, Linux) on the desktop, rather than user application compatibility (read: user preferences and experience).
Because 30+ years of proof of this concept from lack of significant desktop sales penetration (OS/2, again Linux, et al.) apparently... doesn't really prove anything at all.
Apple makes scalability compromises to accomplish their chips' performance. Apple Silicon will never be suitable for extremely large workloads. It's not what the architecture is trying to accomplish and it's not a market that Apple has the slightest interest in.
The x86 chips also have compromises to accomplish their performance. They work best when they're bulky and running hot.
ARM servers already exist and have good uses, but they're not yet replacing what x86 is good at.
What scalability compromises? That's just a typical ill-informed opinion repeated in PC message board echo chambers with no basis in reality. Please point to the flaws in Apple's designs that compromise their suitability for such workloads.
The only reason Apple Silicon isn't posting huge numbers in "extremely large workloads" is that Apple hasn't designed, and isn't going to design, a CPU with 128 threads like the high-end Intel and AMD stuff. There's nothing special about x86, or Intel/AMD's designs, that makes them any more or less suitable for such workloads than ARM, or Apple's designs.
If there is anything that would limit the performance of a hypothetical 128-core Mac, it would actually be macOS, which (probably) hasn't had the kernel work done to address the pain points that only start causing performance issues with dozens to hundreds of cores - work that Linux, and to a lesser extent Windows, have had, because Linux and Windows servers with more cores than the upcoming Mac Pro have existed for well over a decade.
It was only a few years ago that people like you were claiming Apple's chips were only suitable for phones, that they couldn't handle a "desktop workload", and that if Apple dropped x86 as rumoured, Macs would be permanently behind PCs performance-wise. That bar has been moved a couple of times since, because it was based on nothing other than wishful thinking from those who think x86 is somehow the ultimate expression of computing power.
The fundamental design of Apple's Arm chips is that each is a monolithic system-on-chip.
That's the scalability compromise. It's all in a single package - CPU, GPU and RAM.
That means two and a half things:
The TDP of the entire system is limited to what can be dissipated from a single package. So it cannot ever be as fast as a system where these components are physically separated, because it cannot dissipate the heat (rough numbers after this list).
It cannot ever be upgraded. The RAM and GPU are fixed at SoC manufacture, and thus the only options possible are the ones the chip manufacturer chooses to supply. If your workload requires more RAM or a better GPU, tough. Can't buy one. (They might be able to reintroduce external GPU over USB-4, but never RAM.)
No 32bit software support. At all.
(The first two of these are specific choices by Apple. You could make an SoC with x86-64 cores or a discrete system with Arm cores.)
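Back-of-the-envelope on the first point, using wattage figures of the sort quoted elsewhere in this thread (illustrative, not measured):

    # Illustrative package-power budgets (not measurements).
    soc_package_w = 100        # M1 Ultra-class SoC: whole system in one package
    discrete_cpu_w = 250       # high-end desktop x86 CPU under load
    discrete_gpu_w = 450       # high-end discrete GPU under load

    discrete_total = discrete_cpu_w + discrete_gpu_w
    print(f"one package, one cooler:       {soc_package_w} W budget")
    print(f"separate packages and coolers: {discrete_total} W budget")
    print(f"thermal headroom ratio:        {discrete_total / soc_package_w:.0f}x")

Whatever the efficiency per watt, the single package simply has less heat budget to spend.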
None of these really matter for a cheap (to build) commodity consumer grade laptop, but they do elsewhere.
The fixed configurations scale just fine within those limits. Your complaint seems to be that Apple doesn't offer the endless variety of CPU SKUs that Intel and AMD do, but instead offers just a few fixed configurations.
If your workload requires more CPU than you can buy from Intel or AMD, or more GPU than you can buy from Nvidia, what then? Everyone runs into a limit at some point, so your problem is that Apple has chosen lower limits?
Apple is not designing for the balls-out, cook-an-egg-on-your-PC-case market. You're right that by lumping CPU and GPU on the same MCM they are limited - but they have chosen to limit themselves well short of that ceiling, so the design decision has nothing to do with the products they are offering. The Mac Studio's M1 Ultra doesn't draw even 100 watts, while Intel and AMD are releasing CPUs able to draw 200 to over 400 watts in Intel's case, and Nvidia is selling GPUs that draw 600 - and those numbers are before any overclocking! As a result, power supplies offering 1,000 or even 1,500 watts are the fastest growing segment of the DIY market. So yeah, the x86 PC world has a higher high end, but you pay for it with a system that sounds like a jet taking off from an aircraft carrier.
If you define "scalable" as "I can buy systems that..." then yeah, Apple will always be behind, but that's got nothing to do with their chip design. That's just what they have chosen to target. Using the same logic you could claim Apple can't scale downwards, because they don't sell any $300 MacBooks while you can find any number of $300 Intel and AMD laptops.
It's not clear if you are an Apple marketing droid, or a troll incorporating a most excellent impersonation.
Apple make shiny, nice things for people who don't want, or can't understand, customisation. They are OK for what they are, and by some metrics and applications they are interesting.
At the top end of performance, you will need customisation to get the best results; and so far the heat dissipation alone is going to seriously dent Apple chip performance.
Now if Apple release the new "iLiQuiD" nanocooling-crystals upgrade for the inevitable £329,000 pricetag (+£1999 to purchase the revolutionary set of iWheels to allow iMovement), I'm sure it would be wonderful.
I'm no Apple fanboy at all - more a hater...
But as a techie, I can only admit they have made big strides in performance on ARM. And with that experience, it wouldn't be much of a problem to pare down to CPU-only designs and start expanding into the ARM performance market.
I'd really hate to see that, but it could very well be coming...
Wow, you're really engaging in full-on doublethink and redefining terms in the middle of sentences.
Please learn the meanings of terms like "scalable" before you make a further fool of yourself.
For example, upgrading a Mac Mini M1 to have 32GB of RAM is totally impossible. Compare that difficulty with any Intel or AMD server, desktop (or most laptops).
While Apple's M-series chips are impressive, they do take a number of shortcuts, like putting the RAM in the package and the GPU on the die, to get those impressive results. That's not likely going to fly for a lot of use cases.
Personally, I'm keeping an eye on RISC-V. Virtually all the benefits of ARM, without the licensing fees and with more freedom of design. As long as the instruction set doesn't fragment as a result of that openness, I expect it will be starting to eat into ARM's market by the close of the decade.
I expect to see increasingly large-scale RISC-V in phones pretty soon. Having a license-free core has to help the bottom line and the customisability on such a platform is probably a distinct advantage. Expect to see this in China first perhaps and then growing from there.
ARM are currently king, but I think there is limited life there.
The world is burning and Intel trumpets increased clock speed and commensurate power demands. Oh dear, but then that's humanity for you - some of us really don't give a shit.
Everything's fine until the ship has gone to the bottom and all the lifeboats have been found to be rotten.
I would say that FLOPS/Watt matters more, especially when dynamic clock speeds and the ability to independently idle cores are involved. Consider that a slow processor taking more time to do a task means running all the other components in the computer, as well as the peripherals, for longer.
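A toy model of that race-to-idle effect (every wattage and timing here is invented purely for illustration):

    # Toy race-to-idle model - all figures invented for illustration.
    def task_energy_wh(active_w, idle_w, platform_w, active_s, window_s):
        """Energy used over a fixed window: busy first, then idling."""
        idle_s = window_s - active_s
        joules = (active_w + platform_w) * active_s + (idle_w + platform_w) * idle_s
        return joules / 3600.0

    # Fast chip: 120W for 1 minute. Slow chip: 45W for 5 minutes.
    # Both keep 30W of platform (RAM, screen, SSD, fans) powered while awake.
    fast = task_energy_wh(active_w=120, idle_w=5, platform_w=30, active_s=60, window_s=600)
    slow = task_energy_wh(active_w=45, idle_w=5, platform_w=30, active_s=300, window_s=600)
    print(f"fast-then-idle: {fast:.2f} Wh   slow-and-steady: {slow:.2f} Wh")

Despite nearly triple the peak draw, the fast part uses less energy over the window, because the whole platform gets to idle sooner.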
I had a monster tower case setup at one point: 1,000W PSU, dual 8GB graphics cards and 64GB memory on a multi-core i7. You know what I got fed up with? Hoovering out the case every week, and the noise, despite liquid cooling most of the system. So I gave it to my Dad and bought a gaming laptop with 64GB, an 8GB gfx card and an AMD Ryzen, plus an external JBOD; now my consumption has dropped to minuscule amounts most of the time. I can still crank it up when I'm gaming, but the polar bears can sleep easier at night and the little disc in my leccy meter is no longer a blur!
I thought (and please correct me) that, unless you have something exotic like a thermosiphon, water cooling will always be noisier than air, given that the same amount of air has to be used to dump the same heat. My case has 6 fans set up to maintain a positive pressure, with magnetic filters on the intakes. I just vacuum the filters when I'm cleaning the office, and I've only seen negligible dust on large air coolers that have been in daily use for years.
Something else to be aware of with PSUs is the efficiency curve. A 1kW PSU running at a couple of hundred watts will generally be less efficient than a lower-powered one with just enough grunt for the system at full tilt.
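As a rough sketch (the efficiency curve here is made up, loosely shaped like typical 80 Plus published figures; real PSUs publish their own):

    # Illustrative efficiency curve - load fraction -> efficiency.
    def wall_draw(dc_load_w, rating_w, curve):
        """Interpolate efficiency at this load fraction, return AC draw."""
        frac = dc_load_w / rating_w
        pts = sorted(curve.items())
        for (f1, e1), (f2, e2) in zip(pts, pts[1:]):
            if f1 <= frac <= f2:
                eff = e1 + (e2 - e1) * (frac - f1) / (f2 - f1)
                return dc_load_w / eff
        return dc_load_w / pts[-1][1]   # outside the table: clamp (fine for a sketch)

    curve = {0.10: 0.82, 0.20: 0.88, 0.50: 0.92, 1.00: 0.87}  # made up but plausible
    for rating in (500, 1000):
        print(f"{rating}W unit at 200W DC load draws {wall_draw(200, rating, curve):.0f}W at the wall")

Run as-is, the hypothetical 500W unit draws slightly less at the wall for the same 200W DC load, purely because it sits nearer the sweet spot of its curve.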
Like many things, this depends.
A water cooling system tends to have a much higher thermal mass than a cooler/fan system, so it can hold much more heat, which allows it to cover spikes in heat generation better.
If a water cooling system has a large radiator (large surface area) and large, slower fans blowing air through it, then it can be quite quiet. However, should the amount of heat stored in the water get too high, the fans need to run faster, and therefore louder.
When a water cooling system doesn't have a large enough radiator, the heat dissipation mechanism needs to be more active, which tends to mean louder.
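Rough sums on the thermal-mass point (loop volume, spike power and radiator figures all illustrative):

    # How long a loop soaks up a spike - all figures illustrative.
    water_kg = 1.5           # ~1.5 litres of coolant in a biggish loop
    c_water = 4186.0         # specific heat of water, J/(kg*K)
    spike_w = 450.0          # heat entering the loop during a load spike
    radiator_w = 250.0       # what the radiator sheds at quiet fan speeds
    allowed_rise_k = 10.0    # coolant temperature rise we will tolerate

    seconds = water_kg * c_water * allowed_rise_k / (spike_w - radiator_w)
    print(f"~{seconds / 60:.1f} minutes of {spike_w:.0f}W spike before the fans must spin up")

A few hundred grams of heatsink metal buffers far less (copper holds roughly a tenth the heat per kilogram that water does), which is why air coolers have to react to spikes almost immediately.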
We're not quite at the stage where it's worth seriously considering integrating heating systems into a property, but with the way the CPU and GPU manufacturers are going, it won't be too long. 800W of heat dumped from a PC into underfloor heating will warm quite a thermal mass, and the heat may as well go somewhere useful. Not so good during warm periods, though...
One of the key benefits of water cooling is that you can move the heat to where it can be more efficiently dispersed. The typical air-cooled case has 3 fans at the back (typically one case exhaust fan, one PSU exhaust fan and one GPU exhaust fan) and relies on negative pressure to pull cool air from the front, over the disks (somewhat less of an issue now with SSDs) and on to the CPU heatsink and fan. This then blasts the hot air from the heatsink all around the interior of the case. With this kind of setup, you can't get the CPU/GPU any cooler than the case temperature, and that depends on how efficiently you can dump all that heat outside the case and draw fresher air in.
Oglethorpe mentioned having 6 fans and filters on the intake - intake fans in general add very little in terms of cooling; it's better to have more exhaust fans, which will draw the air in. 3 exhaust fans and 3 intake fans will actually run pretty much the same as 3 exhaust fans alone, but with twice the noise.
Your next problem with air cooling is fan size. To cool the case, you need to exhaust a large volume of air. The larger the fan's blades, the more air it can move. The faster it spins, the more air it can move and the louder it gets. So you have a trade-off between fan size, fan speed and noise, and with air cooling you're constrained on fan size by the dimensions of the case, graphics card slot size and so on.
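The standard fan affinity laws put numbers on that trade-off (idealised; real fans deviate, but the trend holds): airflow scales roughly linearly with speed, while sound power rises with roughly the fifth power of speed.

    # Idealised fan affinity laws - real fans deviate, but the trend holds.
    import math

    def speed_change(rpm_ratio):
        airflow = rpm_ratio                      # flow scales ~linearly with speed
        noise_db = 50.0 * math.log10(rpm_ratio)  # sound power scales ~speed^5
        return airflow, noise_db

    for ratio in (1.2, 1.5, 2.0):
        flow, db = speed_change(ratio)
        print(f"{ratio:.1f}x speed -> {flow:.1f}x airflow, about +{db:.1f} dB")

Doubling the speed buys double the airflow at roughly +15dB, which is why one big, slow fan beats small, fast ones for the same volume of air.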
With a liquid cooling solution, you can move the heat immediately to the edge of the case. You can then get rid of that heat out of the case using very large, quiet fans, and because there is nothing venting heat within the case, your baseline for cooling is the room temperature rather than the case temperature.
So, no, you have the same amount of heat to move, but you require less volume of air to move that heat, as the air entering the radiator is cooler. Plus, you can typically use larger, more efficient fans that produce a higher airflow per decibel than the case fans, and you eliminate the CPU fan, which doesn't exhaust any heat from the case in an air-cooled setup anyway.
Heat build-up is also a problem when dealing with extremes of temperature. It is quite a challenge to keep electronics happy so that they are protected from the extremes of Antarctica whilst also not overheating.
Obviously, the extremes of space present further problems - see the design of the James Webb telescope.
We have been here before: Intel most likely trying to steal the thunder from AMD, whose parts claim over 5GHz boost with standard cooling and power draw.
Expect Intel to need extreme cooling on these parts for anything over 5.8GHz.
I call complete BS on 7 or 8GHz. Not a chance. We have all been here before.
I too personally remember using chips measured in single digits of MHz. Whether we'll see THz probably depends on how it's measured; I doubt we'll see single-chip performance hit 1THz.
However, AMD already offers 64-core processors at a maximum clock of 4.3GHz, which is 4.3 x 64 = 275.2GHz worth of grunt on a single chip. So if you could use that lot as a single core (which obviously you can't), we'd already be more than a quarter of the way there, and I'd fully expect to live to see either 128 8GHz cores or 256 4GHz cores, which would break the THz barrier if we accept that particular counting method.
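For what that (dubious) counting method is worth:

    # Aggregate clock by the cores-times-frequency counting method.
    configs = [("64 cores x 4.3GHz (today)", 64, 4.3),
               ("128 cores x 8GHz (hypothetical)", 128, 8.0),
               ("256 cores x 4GHz (hypothetical)", 256, 4.0)]
    for name, cores, ghz in configs:
        total = cores * ghz
        marker = " <- past 1THz" if total >= 1000 else ""
        print(f"{name}: {total:.1f}GHz aggregate{marker}")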
The wavelength of 1THz light in vacuo is 0.3 millimetres; on silicon, probably nearer 0.1. Probably achievable, but your CPU die would have to be a cluster of tens of thousands of essentially independent CPUs, each with many times fewer transistors than we have now, because we can't shrink transistors *much* further than we do now (they are already only a few dozen atoms across).
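The sums behind those figures, for anyone checking - the on-silicon number assumes signals propagate at around a third of c, which is only a ballpark:

    # Wavelength at 1THz. The vacuum figure is exact; the on-chip figure
    # assumes signals travel at about a third of c (a rough ballpark).
    c = 3.0e8     # speed of light, m/s
    f = 1.0e12    # 1THz
    print(f"in vacuo: {c / f * 1e3:.2f} mm")
    print(f"on chip:  {c / 3 / f * 1e3:.2f} mm (assuming v ~ c/3)")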
Can anyone tell me why the majority of PCs need this sort of speed? The main complaint I hear from people with very old laptops (eg a T4800 (I think) running Vista) is the boot-up speed, not the program speed. Most PCs already spend most of their lives wondering what to do with the left-over CPU cycles, so why give them more?
It is a vicious circle. As CPU performance increases, software developers care even less about how resource-hungry the monstrosities they produce are.
We see this when something is upgraded (now, of course, it is an online update) and it promptly starts running at a slug-like speed. More common is the gradual reduction in responsiveness as each update adds more shite running in the background to make things happen faster.
Why so many of the communication tools are so resource heavy beats me. They just sit there spewing endless pop-ups, bleeps and windows.....
Ah, answered my own question.
Teams is the work of the devil and I really fail to understand why it is such a pile of rubbish. It is as though MS gave ten different groups of developers bits of a brief without any of them seeing the overall picture.
Marketing and bragging rights.
Intel and AMD are in competition and, as with most things IT-related, 'speed' is an important yardstick; whether it is relevant to everyday users...
As we've discussed elsewhere on ElReg, the majority of homes don't actually need particularly fast broadband (ie. anything over 100Mbps), yet that hasn't stopped the speed-based marketing and competition between ISPs.
Why give them more? So poor-quality programmers can waste them.
There's a perpetual argument that most programmers shouldn't invest any time in optimisation, because the impact of their optimisation will be too marginal to be noticeable.
Unfortunately this is a point of view put out by the terminally short-sighted, and likely also by those running systems that are perpetually top of the range.
When code is executed thousands or millions of times over, the net effect of optimisation is highly important. When programmers don't care and, for example, just use variants for all of their variables, the CPU overhead of handling all those variants is tremendous. Multiply this by thousands of operations on a less-than-top-of-the-range system and the impact is serious.
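You can see that class of overhead with a crude Python experiment - a million dynamically-typed boxed values versus one typed buffer (numpy assumed available; the exact ratio varies by machine, but it is the same kind of per-element dispatch cost that variants impose):

    # Crude illustration of per-element dynamic-type overhead.
    import timeit
    import numpy as np

    boxed = list(range(1_000_000))                 # each element a tagged heap object
    typed = np.arange(1_000_000, dtype=np.int64)   # one contiguous typed buffer

    t_boxed = timeit.timeit(lambda: sum(boxed), number=20)
    t_typed = timeit.timeit(lambda: typed.sum(), number=20)
    print(f"boxed {t_boxed:.2f}s vs typed {t_typed:.2f}s ({t_boxed / t_typed:.0f}x)")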
However for the likes of Microsoft, software optimisation is known as "buy more and faster hardware".
Bluntly? The majority don't need it, as the majority of PCs are basically used as a web browser. There are certainly use-cases where the performance is needed/required, though - code compilation, image editing (raw camera images), video editing (if it's not offloaded to the GPU), scientific simulation, games, etc.
If you've got Excel chewing 27GB, that spreadsheet is not trivial. It may be simple and have a fuckton of imported data to analyse, but that's a different kettle of fish.
I just fired up a genuinely trivial spreadsheet to sanity check that apparent cobblers. 66MB total Excel memory use. Most of that's overhead too (i.e. the executable and its raft of associated DLLs) as, drilling down, it's allocated a whole 140KB of working memory.