i7... 253W TDP... nuts.
In an office environment? At a certain point you have to think of your HVAC system.
Intel doubled down on "more power is better" with the launch of its 13th-gen Core processors at its Innovation event this week. With a 253W thermal design power (TDP) for its latest i9 and i7 desktop processors, water cooling might as well be a requirement. Even Intel's consumer-focused i5 now has a 180W TDP. That's 10W more …
Yet the RTX 4090 makes that i7 look like it's battery operated. ;-)
For a lot of people, their office computers sit mostly idle, interrupted by bursts of number crunching. It's the gamers who are going to feel the power demands of these new-generation parts the most.
I don't think you'll be putting 24-core CPUs in most office desktops. A lot of office work can comfortably run on a low-end laptop CPU, so even if you prefer desktops, you can find a saner option. If you do need that performance, it might be worth considering whether you can get it from a server in a room with better temperature control, with a weaker machine driving it. Once you've restricted these parts to the users who need that performance right next to them, your thermal requirements will be much easier to manage.
Unless you need the computing power, I agree it is nuts.
I have been looking at processors with a low TDP, yet the Intel processors with a 35W TDP are either all-in-one solutions or a NUC. I need a system that is flexible with regard to storage options, and it must be Intel because of the programs I run.
My lounge ceiling has six inset spotlights that were originally 50W tungsten GU bulbs. Switched them on and felt the warmth instantly. Replaced the bulbs with 5W "daylight" LED units. The overall lighting is about the same, but it now gives an unnerving feeling of the room going colder.
"The overall lighting is about the same - but it now gives an unnerving feeling of the room going colder."
The daylight lamps can be very blue. Look for something that mimics tungsten lighting around 3200K color temperature. Sometimes it's enough to fool you into thinking it's a bit warmer.
Most of the time what I'm doing is not compute intensive, but from time to time I need some iron to get things done. This has led me to use a laptop for doing things like commenting on El Reg or maintaining a database, and only firing up my loaded cheese grater (small holes) when I need it. If I could have one machine where I can tell it to idle a certain number of processors or cores, I'd be happy. I mean really idle them, so that until I expressly ask for them to be on, they aren't.

Keeping things sync'd is a pain. Sure, there are options to sync things online, but very few if you want to keep the data in-house. I view anything cloudy with trepidation and avoid it wherever possible. I don't want to receive visitors in dark suits who want to quiz me about a phone number in my contacts or something on my calendar that I abbreviated in a way that raises flags.

I have had calendar entries for things like rallies and protests, not because I'd go, but because I want to remind myself to stay the hell away from that area that day. Since I do field service work, I want to avoid getting stuck in traffic jams or unruly mobs protesting shortages of toilet paper or something stupid.
You can disable some CPU cores, but in most cases, it's not saving much if any power. Modern processors already put cores into an idle mode when they're not in use. Many don't have a facility for turning off the power to a core directly, so if you disable one from a cluster but continue using the others, it will just be permanently in that sleep mode. Many of the ways to keep a CPU's power usage down involve limiting how it responds when coming out of idle, for example not allowing it to clock up as fast as it can to limit peak power consumption. Having tried and mostly failed to find good statistics on CPUs' idle power draw, I understand that this answer isn't necessarily useful.
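For anyone curious what "disabling" a core actually looks like in practice, here's a minimal sketch assuming a Linux box with the standard sysfs CPU-hotplug interface (the script and its function names are mine, and it needs root):

```python
#!/usr/bin/env python3
"""Toggle Linux CPU cores on/off via the sysfs hotplug interface.

Note: "offline" here just removes the core from the scheduler; on most
desktop parts the silicon still sits in its normal idle (C-state) power,
so don't expect large savings -- which is the point made above.
"""
from pathlib import Path

SYSFS_CPU = Path("/sys/devices/system/cpu")

def set_cpu_online(cpu: int, online: bool) -> None:
    """Bring a logical CPU online (True) or offline (False). Needs root.

    cpu0 typically cannot be offlined, so start from cpu1.
    """
    node = SYSFS_CPU / f"cpu{cpu}" / "online"
    node.write_text("1" if online else "0")

def list_online() -> str:
    """Return the kernel's own summary of currently online CPUs."""
    return (SYSFS_CPU / "online").read_text().strip()

if __name__ == "__main__":
    print("Online before:", list_online())
    set_cpu_online(3, False)   # park logical CPU 3
    print("Online after: ", list_online())
    set_cpu_online(3, True)    # and bring it back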
Depending on the idle draw of a more powerful machine, it could be acceptable to leave it operational with little or no load. If that's not good enough, you might consider having a remote wake function (depending on your hardware, there are a few ways to do that). That way, you can start your powerful machine whenever you need its power and shut it down when you're able to use your laptop alone, with the cost that you'd have to wait for it to boot. You can speed up that process by using a sleep or hibernate mode.
At that point, you don't have many options short of putting some lower-power hardware between the wall socket and the power supply so that it too can be toggled remotely. I know there are at least some devices like that you can buy, and it's a relatively easy DIY project if you like those, but at that point it's probably worth asking whether having two remote-wake systems, one for the power supply and one for the computer itself, justifies the effort.
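For the remote-wake route mentioned above, Wake-on-LAN is the usual mechanism (enable it in the BIOS/NIC settings first). A minimal sketch of sending the standard magic packet from the laptop; the MAC address is a placeholder:

```python
#!/usr/bin/env python3
"""Send a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by the
target NIC's MAC address repeated 16 times, broadcast over UDP."""
import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

if __name__ == "__main__":
    wake("aa:bb:cc:dd:ee:ff")  # placeholder MAC of the big machine
```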
With many people worrying about energy pricing and sustainable computing, it seems an odd choice to push such a power-hungry chip family in 2022.
You could kinda excuse it in the top-end i7/i9 processors, as you're paying a performance premium, but in i5 land, is a 20-thread, 180W-TDP part really a good choice for what will end up being the go-to business desktop?
Combining that with talk of ever-increasing clock speeds, it kinda feels like Intel has gone full circle back to the Pentium 4 era, where clock speed was king, efficiency be damned, and chips ran hot enough to melt motherboards (and gave AMD a unique and compelling USP with its "more is less" Athlon XP range).
It's a long way from the original Intel Core design ideas.
What else can you do? Office workers and home (non-gaming) users have been covered by existing chip technology for the past decade; compute is at a point where they're happy with what they have and don't need any more.
So the only real reason to buy a new one is when the old one dies or you need the power for high-end gaming or data crunching. It's a small niche, but you have to appeal to them. Though I'm sure there's some halo effect they're after too, for those who just need a replacement.
I built a machine when I emigrated eight years back, and it'd still do the job for the vast majority of what I do, and I'm a data analyst. The replacement I built a couple of years back was more because I enjoy building computers and had the spare cash. Functionally, there's little difference in how many Chrome tabs they can keep open, and both run the decades-old video game I play. The new one might run Photoshop a tad faster for my photography hobby, and it's nice to render video that bit faster, but it's far from necessary.
Hell, the one I left behind would probably still take a half-decent crack at it.
I'm thinking more or less along similar lines: what happened to the push for efficiency? When AMD announced their Ryzen 7000 series and the large boost in TDP, I thought it was a misprint at first, and now Intel is doing the same thing. So much for efficient and cool-running CPUs and their less costly cooling requirements. It's no wonder ARM is making inroads in the data centre, small though they may be.
They seem to have forked, with the laptop variant being focused on power efficiency. If you get the laptop version in mini-PC form, you get a decently fast desktop PC that's far less power hungry, and you still get a full desktop experience.
https://www.aliexpress.com/wholesale?catId=0&SearchText=ryzen+9+mini+pc
DESKTOP: Ryzen 9 7950X, 170W TDP.
LAPTOP: Ryzen 9 5980HX, 45W TDP - a tiny fraction of the desktop variant.
Sure, the desktop processor has twice as many cores (16 vs 8), but it uses ~4x the power!
I really don't need this performance; even the 15W-TDP Ryzen 7 5800H in this box is fast enough to run all my office stuff / IntelliJ developer software and 3D stuff like Design Spark Mechanical.
Yeh, 15 Watts.
We won't know until Intel lets these out for testing, but on the AMD side plenty of reviews also showed the results of putting the CPUs into 65W mode. The spoiler there is that the 6- and 8-core CPUs gave up almost nothing in performance, but ran considerably cooler. The 12- and 16-core CPUs did leave some performance on the table, but it really wasn't a *lot* in the grand scheme of things. And they expect 65W mode to be available in most BIOSes for home users to play with, and potentially Dell et al. might use it.
PCWorld (not the UK retailer) did a good comparison of the ECO modes in the new Zen 4 CPUs. link
The findings were basically:
Tests were done with Cinebench R23
Single core: no drop in single-core performance in the 105W or 65W ECO modes (the single-core score is also higher than the old 5950X and the i9-12900K).
Multi-core: the 7950X ran about 10% slower in 105W mode than in standard mode, and around 25% slower in 65W mode.
But critically, even in 65W ECO mode, the 7950X was still faster than both the 5950X and i9-12900K, with those two having no power limits applied!
For comparison: The Cinebench R23 multithreaded scores were:
i9-12900K (no power limit): 27,283
5950X (no power limit): 25,600
7950X (no power limit): 37,973
7950X (ECO 105): 34,300
7950X (ECO 65): 28,655
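Just to put rough efficiency numbers on those scores - a quick points-per-watt sketch using the figures quoted above and the nominal power limit of each mode (these are the advertised labels, not measured package draw, so treat it as back-of-the-envelope only):

```python
# Back-of-the-envelope Cinebench R23 points per nominal watt, using the
# scores quoted above. Power figures are the advertised TDP / ECO limits,
# not measured package draw, so this only shows the trend.
scores = {
    "7950X (stock, 170W TDP)": (37_973, 170),
    "7950X (ECO 105)":         (34_300, 105),
    "7950X (ECO 65)":          (28_655, 65),
}
for name, (score, watts) in scores.items():
    print(f"{name}: {score / watts:,.0f} points/W")
# Trend: roughly 223 -> 327 -> 441 points/W as the limit drops,
# i.e. the lower the power cap, the better the efficiency.
```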
It'll be interesting to see where the i9-13900K lands in all this.
My theory is that they're listening too much to people who like treating the processor market as a religion. For a while, AMD's been touted as having far superior parts. In some ways, they did; AMD was able to put a ton more cores on the chips and clock them high enough to get better benchmark ratings. AMD's chips were also logical choices for cases where high-performance machines were needed. For the average user machine though, AMD versus Intel wasn't that important.
I think at this point, Intel decided that, instead of sticking to producing chips that are functional but unimpressive, they had to take that benchmark trophy off of AMD. Their manufacturing methods aren't in line with the TSMC ones that AMD is using, so while they're trying to fix that, they just pumped up the power usage of their chips and added cores to get their numbers up. AMD, seeing this, decided that they also needed the benchmarks up, so they've done it too. In the end, anyone who puts the flagship chips from either company into a machine doing an average workload is likely to find they've wasted power and money.
As with computers or mobile phones generally, companies can't resist the idea that, even though their mid-range products from five years ago are fine for most people, they can get a lot of interest if they just turn some number up high enough. That might even work if they paid attention to the things that come along with the advancement. It hasn't worked for the phone manufacturers who have been playing the game of how many megapixels you can get from a camera sensor so it can throw most of them away, at least partially because they keep raising the prices. I doubt it will work for CPUs either.
There was a chart tucked away at the end of Anandtech's review of the Ryzen 7000 that showed what happened to performance when the 170W 7950X was restricted to a 65W operating profile using what AMD calls "ECO mode", here. They seem to be pretty efficient processors, being pushed hard for performance.
I wonder what happens to these Intel CPUs when you do the same, or even if you can.
Whilst this may allow them to take the performance lead in some metrics, that performance per watt ratio is not gonna get any better whilst still stuck on the 10nm process.
To a certain extent this is bad for them, as the newer Zen 4 is an OEM's dream - build the cooling to whatever price point you want and the performance will taper accordingly - the point being that Zen 4 will allow some OEMs to bodge the cooling but still get a reasonable return in performance. If they were to do the same with these Intel CPUs it would cost more performance - basically, these are going to need expensive cooling solutions.
I think Intel have made a tactical mistake - it looks like they aren't going to do a revised 6P/0E version like they did for the lower-end Alder Lake parts. Ironically, this is the market segment where a revised 13th-gen core design would do the most damage in terms of stealing custom / OEM orders from AMD.
The 6-core 12400 was already trading blows with the 5600X, and the 12100 wasn't that far behind - it's probably the best quad-core option right now - so a 13th-gen version may actually tip the balance, especially as the cost of the Zen 4 6-core right now is prohibitive and the only AMD option for a CPU with an IGP on AM4 is the cache-stripped 5600G/5700G.
More performance for a similar power envelope and not much more in pricing would have probably been a good move.
Worse still, AMD can now move to position AM4 as the value option and start reducing the pricing of some of the higher-end parts (e.g. the 5800X) to fend off Intel's mid/lower-range parts, with more cores and similar IPC.
Instead Intel have chased the performance crown, which they might win in some tests if their IPC improvement claims are correct... But it's a costly win and they've created a product that few OEMs will want and even enthusiasts will have some doubts about.
Remember the AM3+ FX-9000 series - due to its high TDP and hefty cooling requirements it died in the marketplace. Yeah, for sure its lower comparative performance also had a major impact, but being difficult to package was never going to help. Intel at least have a competitive product, and I guess a better customer / fan base, as well as better influence over many OEMs, so I suspect it won't be as bad. But even if the i9 ends up as king of the hill, I suspect it will be comparatively rare - the 13600K is the product to watch from this.
I'm not sure who the 13700K is really aimed at... The same TDP max as the 13900K but lower turbo frequency, half the E cores and less cache... It will need to be aggressively priced.
More like 1.5-1.8V into the CPU, with further on-chip regulation. I had a quick scan through the 12th-gen i7 datasheet just now, and for the higher-end parts in that family it was indicating maximum input currents of up to 280A...
Which is one of the reasons why modern processors have so many pins/lands/contacts/whatever you want to call them for connection to the motherboard - partly it's to provide all the connections required for all of the integrated peripherals (USB, PCI, graphics etc.) which would formerly have been part of the external support chipset, but partly it's to cope with the increasing power demands. e.g. on the LGA1151 sockets used a few generations ago, around 1/3rd of the pins were there to deal with the power supply side of things.
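To put a rough number on why those currents and pin counts balloon, here's the back-of-the-envelope Ohm's-law arithmetic; the 253W figure is the one from the article, and the core voltages are just assumed example values:

```python
# I = P / V: rough supply current needed at the CPU's core voltage.
# 253W is the article's max turbo power; voltages are assumed examples.
power_w = 253
for vcore in (1.0, 1.2, 1.5):
    print(f"At {vcore:.1f} V: {power_w / vcore:.0f} A")
# ~253 A, ~211 A, ~169 A -- in the same ballpark as the 280 A
# maximum input current mentioned above, hence all those power pins.
```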
No, the CPU power draw mainly comes from a 12V source using DC-DC conversion to generate the >1.5V they use.
Principally this is fed via the main ATX power connector on older, lower-power setups, and via the EPS 4- or 8-pin power connector on more modern systems.
There may be some minor usage of power from the 5V and 3.3V rails related to memory and I/O circuits, but for the most part this is almost inconsequential.
This is why power supplies are moving towards deriving almost all usable power from the 12V rail, as GPUs also use 12V from the PCIe-specific power cables and convert that down for their needs.
Get these tech companies to refocus their efforts on power efficiency and chips that are good enough for normal use, instead of what crypto miners might want. Some of the recent TDP figures from Intel, AMD and Nvidia are just absurd and sickening at a time when energy supply can't keep up with demand.
Nvidia's new 40-series GPUs draw so much power through the PSU's wires that they actually pose a fire hazard unless you have an upgraded ATX 3.0 power supply.
Problem is, processor TDP is an artificial number which doesn't have any standardised measure. It roughly correlates to the thermal load on a cooler from a given manufacturer under whatever load they choose to put on the chip, but it's certainly neither 'how big does my power supply need to be to avoid the voltage dropping out' (that's a significantly bigger number) nor 'if my cooling solution can physically dump this many watts of thermal energy at a steady state of X degrees difference between package temp and ambient, my chip will never limit itself'.
"Or some other metric". Find some way of quantifying a piece of computer hardware's energy efficiency within its class, and then start punitively hitting the worst offenders.
It is absolutely plain to see that the likes of Intel, AMD, Nvidia et al are greedily going after the crypto miners with their most recent products and it is not something that should be allowed to slip by unchallenged by governments during an energy crisis.
Technically, less efficiency means higher energy usage. In most countries, energy is taxed for most normal people and businesses. So the more you use, the more you are paying in both energy costs and tax.
But keep in mind that tax money is unlikely to actually go towards making industry improvements, so what is the point?
I'm all for penalising stupid, but not as a side reward for useless/lazy/corrupt (delete as appropriate for your country's government).
The UK (and other countries) use taxes on cars to push people to less polluting models. The UK's vehicle excise duty becomes extremely punitive further up the scale. Not only does this encourage people to buy lower-CO2-emission vehicles, but it encourages industry to cater to that demand by selling lower-CO2-emission vehicles in the first place.
The same should be happening for computer equipment, to push consumer demand towards lower-power kit. It is an absurdity that Nvidia is producing graphics cards that draw so much power that they can consume 600W by themselves (and set your machine on fire if your PSU is ATX 2.0). It is an absurdity that this generation of Intel processors draws 30-80W more than its 12th-gen counterparts.
TDP is already taxed, it's called an electricity bill!
Also, TDP has nothing to do with efficiency. TDP is primarily about what cooling you need; efficiency is about the amount of work done for a given amount of consumed energy.
Each generation of chips is more efficient than the last; that's always been the case, and is unlikely to change any time soon. Yes, the TDP has risen, but the work being done for that consumed power has increased to a greater extent, ergo more efficient.
As an example, the latest Zen 4 CPUs have ECO modes, in which they run with a lower TDP setting. A Zen 4 in the lowest ECO mode can do around the same amount of work as a Zen 3 CPU while only consuming around 30% of the power. Removing the power restrictions will increase power consumption, of course, but the work done will also increase.
It looks like the 7000 chips are a bit of a monster, and in ECO (65W) mode can get 80% of peak performance. And they have not yet announced the 3D cache versions. They also have room to drop prices, so 13th gen may well be a difficult sell, especially as there is no upgrade path: 14th gen will be a new socket, while AMD will have the 8000/9000 series on the same AM5 motherboard.
Yes, the new Zen 4 looks very interesting, and from what I've read and watched, the actual architecture hasn't changed massively from Zen 3 - a few tweaks here and there, some optimisations etc. - but most of the gains have come from the new TSMC N5 node. Zen 5 is rumoured to be a more major architecture update, and should also be on TSMC's N3 node (Zen 5 due 2024).
I'm on AM4 currently. Did consider moving to AM5, but wasn't keen on being an early adopter for AM5, plus I don't really 'need' a big upgrade. Plus of course the cost of at least needing a new motherboard, new DDR5 memory, and the CPU all adds up.
As I mostly game on this PC (I work on a laptop), I instead ordered a 5800X3D to replace my now oldish 3800X. That should see me through for a good few years, perhaps even long enough to see AM6 coming out! Especially considering the CPU is rarely the bottleneck in gaming.
I was sceptical of the benefits of massively power-hungry components, and then I (reluctantly) upgraded to a 280W (high mid-range!) 3070 Ti when they actually became available at a tolerable price. While the performance is nice, the fan noise to keep it cool under load is horrible. It seems liquid cooling (or truly massive heatsinks/fans) is now required if you don't want to sit next to a hair dryer. Fortunately I paired it with a modest 65W AMD 5700X - adding one of these new 250W monsters here sounds like an awful idea for a whole host of reasons (both practical and moral).
I very nearly went for a NUC-style barebones PC from Zotac (packaging high-end laptop parts into a less thermally and power constrained box), but the pricing was a big deterrent. I think next time I probably will. Loss of upgradability is a bit of a downer but compared to a 750W space heater ... I think I can cope.
No, they have even more complicated versions of AVX that run on their GPGPU based products. CPUs actually have less complex AVX implementations now.
I think Intel realised that a) their AVX-512 design was not power efficient and at this point they haven't got an alternative ready (especially in terms of adding it to the E cores), and b) very few standard workloads need large-scale AVX at all outside of data analytics or science applications, at which point they'll happily sell you a Xeon Phi or similar.
Almost all other desktop software such as Blender, OpenSSL, etc., which can use AVX-512 can also use AVX1/2 as well.
I expect it will return to Intel CPUs once they have a) got an E-core implementation that still retains reasonable efficiency, even if the throughput is a bit crappy, and b) worked out a better way of having the FPUs process the data more efficiently, instead of downclocking or using a lower power limit when processing certain instructions on the P cores.
It's not like Intel are going to settle for AMD having better features on the market.
It's possible that it may end up like the situation last time, with Intel having MMX and AMD having 3DNow extensions to the CPU. Although it will be odd, because it's Intel that started with AVX-512 and AMD has just joined in, whereas Intel seems to have dropped it for desktops.
Quote: "To this end, Intel released flurry of cherry-picked internal benchmarks that show its new chips besting the two-year-old 5950X and going toe-to-toe with AMD's SRAM-stacked 5800X-3D in a selection of games and productivity apps."
Also worth mentioning that they included the specs of the test environments for these benchmarks, linked on the slides, and Intel basically didn't set a level playing field. (Shocked, I tell you! Said no one.) Copy of the slide here (on Reddit).
The new Intel system was sporting fast, premium DDR5-5600 memory.
Whereas they fitted the Ryzen systems (5950X and 5800X3D) with DDR4-3200 memory.
Lots of people are complaining about this, as anyone who knows Zen (at least gens 2 & 3) knows you generally fit 3600 memory [*], otherwise you gimp the performance of the CPU. You can easily drop around 5-15% in game FPS by using 3200 RAM instead of 3600 RAM.
Also, the fact that Intel focused on the 5950X for gaming benchmarks is just odd! The 5950X is not a gaming CPU, with other chips in the range actually being faster for gaming (even before the 5800X3D). Basically, if you're gaming, you want no more than 8 cores, so that you have a single CPU chiplet. More than 8 cores means two chiplets, and the increased latency between them impacts gaming (although it doesn't really impact productivity-type workloads, and the impact depends on the specific game).
The 5800X3D data is in there, but almost as an afterthought: they just added a tiny little mark on the chart, rather than a new bar, almost as if they were hoping people wouldn't notice it! Someone on Reddit actually put the bars back in :-) link
With the nobbled RAM, the 5950X and 5800X3D scores likely need to be at least 5% higher than they currently are. Which means, overall, the 5800X3D still leads.
As always, for real data, wait for someone like Hardware Unboxed or Gamers Nexus to get hold of the new Intels (they already have the new Ryzen CPUs tested). As they actually know how to do benchmarks!
* In case anyone doesn't know, running the Infinity Fabric frequency at a 1:1 ratio with the RAM clock gives the best performance on Zen. So 3600 RAM, being double data rate, runs at 1800MHz, the same speed as the Infinity Fabric. You can also do 3733 RAM with the Infinity Fabric at 1866MHz for even better performance, but 3733 RAM is less common and some systems can have stability issues.
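For what it's worth, here's the arithmetic from that footnote as a tiny sketch (the speeds listed are just the examples from the comment above):

```python
# DDR memory is double data rate, so the memory clock is half the rated
# transfer speed; Zen 2/3 runs best with Infinity Fabric (FCLK) at a 1:1
# ratio with that memory clock.
def fclk_for(ddr_rating: int) -> int:
    """Memory clock (and matching 1:1 FCLK) in MHz for a DDR rating in MT/s."""
    return ddr_rating // 2

for ddr in (3200, 3600, 3733):
    print(f"DDR4-{ddr}: memory clock / 1:1 FCLK = {fclk_for(ddr)} MHz")
# DDR4-3200 -> 1600 MHz, DDR4-3600 -> 1800 MHz, DDR4-3733 -> ~1866 MHz
```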