Maybe I'm reading this page wrong, but it seems all the Intel systems are packing Nvidia RTX 3080 Ti GPUs, won't that skew the rendering performance somewhat??
I mean sure, you can build a laptop with one in, but still...
Intel has said it has put "desktop-caliber" silicon in a mobile package to provide its fastest 12th-generation Core laptop processors yet. The chipmaker unveiled the seven 12th-generation Core HX laptop chips at the Intel Vision event Tuesday, where the company also revealed AI chips meant to challenge Nvidia and a roadmap …
Even if you're considering the laptop a "portable desktop", that seems pretty absurd; you'll be lucky to get more than a few tens of seconds at peak before everything thermal-throttles, especially if there's a GPU even remotely matching the workloads implied by the CPU's feature set. And as for performance on battery... Shudder.
For comparison, an MBP 16" M1 Max with 64GB RAM (the most power-hungry model in the range) idles in macOS at around 7-8W and I've rarely managed to push it past about 70W at peak, which included GPU usage. I think the true peak is more like 110-120W, which is itself pretty high, but this includes all GPU cores running at full tilt as well - the Intel specs are for the CPU alone.
This compares apples (no pun intended) and oranges, perhaps, but the efficiency of the Intel offering here is almost comically bad. I struggle to see the point of a "desktop class" CPU intended for mobile use when it has power specs like that. Maybe you could build a small form factor PC in the style of Apple's weird Mac Studio or similar; otherwise, surely it'd be cheaper and provide much more reliable performance (in every sense of "reliable") to just use a normal form factor machine with the desktop CPU inside.
Anyone have suggestions about the kind of device where the CPU's peak performance would actually be useful and sustained? TIA!
You've misunderstood - 55W is not the idle power consumption but the "regular use" TDP, with 157W being the temporary maximum under boost.
For example, I'm typing this on an i7-11850H, which has a "base" of 45W and "peak" of 100W - it's currently sat at 2.8GHz using 8W whilst running the usual guff (Windows, browsers, Office, etc). If I give it something time-consuming to do, it'll turbo-boost to ~4.5GHz and go up to 100W for a second or two, and then come down to 45W for a sustained period of time (the exact frequencies being determined by the number of cores under load, etc).
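If it helps to picture the mechanism, here's a toy Java sketch of that budgeting (the class name, the numbers and the token-bucket-style accounting are all mine, purely for illustration; as I understand it the real firmware compares a weighted rolling average of package power against the PL1/PL2 limits):

    // Toy model of the "burst then settle" behaviour described above.
    // NOT Intel's actual algorithm (the real thing tracks a weighted average of
    // package power against its PL1/PL2 limits); this is a token-bucket-style
    // sketch with made-up numbers, just to show the shape of the curve.
    public class TurboBudgetSketch {
        public static void main(String[] args) {
            final double PL1 = 45.0;   // sustained limit, W (the "base" TDP)
            final double PL2 = 100.0;  // short-term boost limit, W
            final double TAU = 3.0;    // roughly how long the boost lasts, s
            final double DT  = 0.5;    // simulation step, s

            double budget = (PL2 - PL1) * TAU;   // boost headroom in joules
            for (double t = 0; t < 20; t += DT) {
                double demand  = (t < 15) ? PL2 : 8.0;                // all-core load, then near-idle
                double granted = Math.min(demand, budget > 0 ? PL2 : PL1);
                budget += (PL1 - granted) * DT;                       // drains above PL1, refills below it
                budget  = Math.max(0.0, Math.min(budget, (PL2 - PL1) * TAU));
                System.out.printf("t=%4.1fs  package=%5.1f W  headroom=%6.1f J%n", t, granted, budget);
            }
        }
    }

Run it and you get the pattern I described: a short burst at PL2, a long flat stretch at PL1, and the headroom only refills once the load goes away.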
As for use case ... think portable workstation (someone who has to work in multiple locations, needs a fair amount of processing power, and can't rely on network storage :)).
The problem is that the work I do as a developer isn't bursty; it's long-running services and multi-threaded Java code, and it'll suck the life out of any CPU it can get hold of.
"Efficiency cores" are therefore USELESS to me, and so is Intel's whole approach to trying to improve their power curves.
Face it: Intel couldn't design an efficient processor if they had the MIC's budget to do it...
I think you'd be searching for a very long time to find an office task where a few seconds of burst performance actually matters; mostly-idle "office" workloads barely need it in the first place.
I have a fat, terribly designed "high performance" work-prescribed Dell laptop sat here. I routinely prop it up on two bits of scrap to improve airflow; that's the difference between a particular batch job taking 7 minutes and taking 13. I can bring the time down a bit more if I point a desk fan at it. The same job on a "non high spec" office laptop runs comparably.
If you want processing grunt and have to tolerate (endure?) Windows, get a real machine. Far better value and performance.
Intel going down the road of throwing more TDP at its systems is just a throwback to when its solution to keeping the P4 and Itanium competitive was to consume more power, because it was out of ideas.