Is that 400 Intel Watts? That's about 1kW in real money
And here's Intel's Epyc response: Up to 56-core, 4GHz 14nm second-gen Xeon SP chips, Agilex FPGAs, persistent mem
In a highly orchestrated global maneuver, Chipzilla today launched, to much of its own fanfare, its second-generation Xeon Scalable Processors for servers – chips previously codenamed Cascade Lake. A while ago, executives at Intel-rival AMD, which made a big splash of its own with its 32-core Epyc server-class CPUs, told us …
COMMENTS
-
Wednesday 3rd April 2019 00:37 GMT Kevin McMurtrie
Step 1 - Invest in ultracapacitors. Lots of CPUs cycling between 20 and 400 watts is going to mess with the low-frequency mains transformers unless there's a big capacitor bank on the intermediate power lines inside each server.
Step 2 - Use wealth of investment to prepare for The Rise of the Machines.
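For scale, a rough ride-through sizing for that capacitor bank, with made-up figures (a 12V intermediate rail allowed to droop to 11V while covering the full 20-to-400W swing for 10ms):

```c
/* Rough ultracapacitor sizing for the scenario above.
 * All figures are illustrative assumptions, not vendor specs. */
#include <stdio.h>

int main(void) {
    const double delta_p = 400.0 - 20.0; /* W: worst-case load swing per CPU */
    const double hold_s  = 0.010;        /* s: ride-through time to smooth the step */
    const double v_nom   = 12.0;         /* V: intermediate rail */
    const double v_min   = 11.0;         /* V: lowest tolerable droop */

    double energy_j = delta_p * hold_s;  /* energy the bank must supply */
    /* From E = 0.5 * C * (v_nom^2 - v_min^2): */
    double cap_f = 2.0 * energy_j / (v_nom * v_nom - v_min * v_min);

    printf("%.1f J buffered -> about %.2f F per CPU: ultracap territory\n",
           energy_j, cap_f);
    return 0;
}
```

That works out to roughly a third of a farad per CPU on a 12V rail - far beyond ordinary electrolytics, which is rather the commenter's point.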
-
Thursday 4th April 2019 10:02 GMT Anonymous Coward
"Lots of CPUs cycling between 20 to 400 watts"
It's likely to present a very similar load to existing systems - it appears to be Intel's take on two dies, one package, targeting HPC. Intel haven't released socket information as far as I can tell (strange...) and Intel are suggesting that systems will be liquid cooled and compute focused (i.e. not supporting maximum RAM capacities).
On top of that, DCs are rarely space-limited. They are either power-limited (if designed correctly) or cooling-limited (if it's not economic to upgrade cooling to match available power). If the DC is power-limited, you're likely just stuffing fewer boxes into a rack.
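A toy illustration of the power-limited case; the rack feed and per-server draws below are assumptions, not vendor figures:

```c
/* Boxes per rack under a fixed power budget; all numbers are assumptions. */
#include <stdio.h>

int main(void) {
    const int rack_budget_w = 12000; /* assumed 12 kW feed per rack */
    const int old_server_w  = 800;   /* a typical dual-socket 1U box */
    const int new_server_w  = 2400;  /* hypothetical box: two 400 W CPUs plus the rest */

    printf("old boxes per rack: %d\n", rack_budget_w / old_server_w); /* 15 */
    printf("new boxes per rack: %d\n", rack_budget_w / new_server_w); /* 5 */
    return 0;
}
```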
-
Tuesday 2nd April 2019 18:29 GMT Duncan Macdonald
So - 56 cores instead of 64
The EPYC Rome processors go up to 64 cores (128 threads), whereas Intel's 56 cores are available in only one SKU (the 9282), with 48 cores in another (the 9242) - all the other processors have the same or fewer cores than the current first-generation EPYC, which tops out at 32.
As the previous commentator mentioned - a 400W Intel power consumption rating implies a much higher peak power draw. A PSU with over 1200W output is needed for each 9282 chip (an 8-socket system would need over 10kW of power supply - BEFORE peripherals!!!)
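Spelling out that arithmetic (the 3x peak-to-TDP ratio and the 8-socket box are the comment's assumptions, not Intel specs):

```c
/* Duncan's PSU arithmetic. The 3x peak-to-TDP ratio and the 8-socket
 * box are his assumptions, not Intel specifications. */
#include <stdio.h>

int main(void) {
    const double tdp_w      = 400.0; /* Xeon Platinum 9282 rated TDP */
    const double peak_ratio = 3.0;   /* assumed peak draw relative to TDP */
    const int    sockets    = 8;     /* hypothetical 8-socket system */

    double psu_per_chip_w = tdp_w * peak_ratio; /* 1200 W per chip */
    double system_w       = psu_per_chip_w * sockets;

    printf("%.0f W per chip; %.1f kW for %d sockets - before peripherals\n",
           psu_per_chip_w, system_w / 1000.0, sockets);
    return 0;
}
```

That lands at 9.6kW before drives, RAM and fans, hence the "over 10kW" once peripherals are added.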
Icon for the heat dissipation ->
-
Wednesday 3rd April 2019 17:45 GMT muhfugen
Re: So - 56 cores instead of 64
There are massive architectural differences: the number of cores that share L3 cache, for one; the ability of large VMs (or thread pools) to do work without spanning NUMA boundaries and incurring the latency penalties, for another; and the number of sockets they can scale to.
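For the curious, this is what keeping a worker inside one NUMA node looks like on Linux; a sketch in which the CPU range for node 0 is assumed rather than queried:

```c
/* Confine the calling thread to one NUMA node's CPUs so its memory
 * traffic stays off the inter-socket links. The CPU range for node 0
 * is an assumption; read /sys/devices/system/node/node0/cpulist for
 * the real layout. Linux-specific. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = 0; cpu < 28; cpu++) /* assumed: node 0 = CPUs 0-27 */
        CPU_SET(cpu, &set);

    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    puts("worker pinned to node 0: no cross-NUMA latency penalty");
    return 0;
}
```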
-
Tuesday 2nd April 2019 18:34 GMT druck
Patching nonsense
“They can’t patch all of it, because the only way to completely get rid of it is to completely get rid of speculative execution in caching, and if you do that, your shiny modern Core i7 performs as well as a ‘286.”
What nonsense. Speculative execution didn't even come in until the Pentium Pro. An i7 without it would work more like the older non-speculative Atoms, which is bad enough, but still orders of magnitude faster than a 286.
-
Wednesday 3rd April 2019 07:52 GMT Lee D
Re: Patching nonsense
And you don't need to completely remove speculative execution.
You just need to make sure that when you speculatively execute, you apply the same memory-security principles as when you don't.
The problem Intel had was not "You're trying to think ahead", it was "When you think ahead, you're doing so by bypassing all the security".
It might still mean a change in chip design, rather than a software fix, obviously, but it's not as drastic as "you can't speculatively execute".
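For readers who haven't seen it, this is the canonical Spectre v1 shape being discussed, plus the branch-free index-masking style of fix (a sketch in the spirit of the Linux kernel's array_index_nospec, not a complete mitigation):

```c
/* The canonical Spectre v1 gadget: the bounds check is architecturally
 * respected, but a mistrained branch predictor can run the array access
 * speculatively with an out-of-bounds x, leaving a cache footprint. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

uint8_t array1[16];
uint8_t array2[256 * 64];
size_t  array1_size = 16;

uint8_t victim(size_t x) {
    if (x < array1_size)               /* the check the CPU may "think past" */
        return array2[array1[x] * 64]; /* leaks array1[x] via cache state */
    return 0;
}

/* Lee D's point, applied: keep speculating, but make the security hold on
 * the wrong path too. Branch-free masking clamps the index as a data
 * dependency, so even a speculated load can't reach out of bounds. */
uint8_t victim_masked(size_t x) {
    size_t mask = ~(size_t)0 + (size_t)(x >= array1_size); /* all-ones in bounds, 0 otherwise */
    if (x < array1_size)
        return array2[array1[x & mask] * 64];
    return 0;
}

int main(void) {
    printf("%d %d\n", (int)victim(3), (int)victim_masked(3));
    return 0;
}
```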
-
Wednesday 3rd April 2019 10:03 GMT TeeCee
Re: Die Size
Would make sense. IIRC Intel's planned die-shrink has been canned as a) AMD are already fabbing at smaller sizes than their target[1] and b) they couldn't get their current architecture to sample in quantity.
[1] ...and if you really must run the Red Queen's Race it's bad form to come second.
-
Wednesday 3rd April 2019 10:28 GMT phuzz
I suspect that'll be a lot less of a problem than you might think, because these will be used in servers and will probably only be powered down a handful of times in their entire lives. Also, BGA is fine if it's manufactured well.
It's much more of a problem if a cheaply built chip is in a games console that's going through big thermal cycles every day.
-
Thursday 4th April 2019 01:50 GMT tcmonkey
True, although you will still get thermal stress induced by changing loads on the chip and the sudden energy burnt when the workload puts its foot down.
It also has the hugely negative downside of not being able to replace/upgrade the two components separately, which does sometimes happen, even in server-land. We did CPU upgrades on some VM hosts last year, for instance.
-
Wednesday 3rd April 2019 04:14 GMT cb7
"They can’t patch all of it, because the only way to completely get rid of it is to completely get rid of speculative execution in caching, and if you do that, your shiny modern Core i7 performs as well as a ‘286"
A slight exaggeration, but I'll say it again. There's merit in developing cheaper memory that doesn't need 16 clock cycles to get dressed every time it's asked to go fetch some data.
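That "16 clock cycles" is roughly DRAM CAS latency; a crude pointer-chase shows the cost of each dependent trip to memory. Sizes and RNG below are arbitrary; it's a sketch, not a proper benchmark:

```c
/* Crude pointer-chase: every load depends on the previous one, so the
 * loop time is dominated by memory latency, not bandwidth. POSIX timing;
 * sizes and RNG are arbitrary. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N ((size_t)1 << 24) /* 16M pointers, 128 MB: far beyond L3 */

int main(void) {
    size_t *next = malloc(N * sizeof *next);
    if (!next) return 1;

    for (size_t i = 0; i < N; i++) next[i] = i;
    /* Sattolo's shuffle: one big cycle, so the chase never repeats early
     * and the hardware prefetcher can't guess the next address. */
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (size_t i = 0; i < N; i++) p = next[p]; /* serialized loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("~%.1f ns per dependent load (sink: %zu)\n", ns / (double)N, p);
    free(next);
    return 0;
}
```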
-
Wednesday 3rd April 2019 10:07 GMT Anonymous Coward
Re: Where I'd like to see Optane go...
As far as I'm aware, the Optane secret sauce (for performance anyway) isn't in the memory cells, it's in the interface and the position in relation to the CPU to reduce latency, which is why it requires CPUs that support Optane.
So no, you won't see it in microSD.
-
Wednesday 3rd April 2019 07:47 GMT IGnatius T Foobar !
dozens of cores and oodles of memory...
...it's basically turning into a mainframe, which makes sense because that's what a cloud data center really is. With a chip like this, a hosting provider (a real one, not AWS) can fit into a rack what used to take up the entire room. Commoditization is a wonderful thing sometimes.
-
Wednesday 3rd April 2019 09:27 GMT _LC_
Re: dozens of cores and oodles of memory...
Bear in mind that those "56 cores and 112 threads" are usable in single-user environments only. Thanks to the multitude of Spectre bugs, this chip cannot separate users (Intel is affected much more than others, as they cheated the most with "speculative execution"). In other words, if you are running a big box with various compartments, this isn’t for you, as your users would be able to access each other’s data. ;-)
-
Wednesday 3rd April 2019 14:34 GMT _LC_
Re: dozens of cores and oodles of memory...
They are claiming fixes for only a few. Others have already been described as "not fixable" by the researchers. That is, they would require a change in hardware design in order to mitigate the problem. The change would have to be more drastic than what Intel wants to put itself through.
-
Thursday 4th April 2019 02:41 GMT doublelayer
Re: dozens of cores and oodles of memory...
First, at least some of those have been patched in software. Second, it doesn't really impact the main point, because people are currently doing multi-user environments on the existing Intel chips with the same vulnerabilities. For all of those people, the security landscape is the same as it is right now. If you don't care about the vulnerabilities enough to stop using a multi-user system with the old chips, the consolidation allowed by all these cores could be useful. It might also help if you have a relatively large datacenter, as you could consolidate multiple internal servers onto a smaller number of VM hosts running on these. I'm not sure it's worth the investment, but it makes some sense.
-
Wednesday 3rd April 2019 09:25 GMT Korev
Memory bandwidth
There doesn't appear to be a significant increase in memory bandwidth; it'd be interesting to see if the massive core count translates into good throughput in real applications. There's also a high likelihood that network, storage etc. will just become more of a bottleneck.
Bring on the benchmarks :) -->
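In that spirit, a minimal STREAM-style triad is the usual first-order check on sustained memory bandwidth (array size below is an arbitrary assumption; the real STREAM benchmark is far more careful):

```c
/* Minimal STREAM-style triad: bytes moved / elapsed time approximates
 * sustained memory bandwidth. Array size is an arbitrary assumption. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N ((size_t)1 << 24) /* 16M doubles per array, 128 MB each */

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i]; /* triad: two reads, one write per element */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs  = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double gbyte = 3.0 * N * sizeof(double) / 1e9;
    printf("triad: %.1f GB/s (a[0]=%g)\n", gbyte / secs, a[0]);
    free(a); free(b); free(c);
    return 0;
}
```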
-
Wednesday 3rd April 2019 10:12 GMT BugabooSue
Buying Intel?
Nope, still not buying.
If it were not for the likes of AMD, ARM, and others providing some competition, Intel would not even be selling processors at this level. They would still be strangling the end-users for every damn dollar they can, using lesser silicon.
I’m not saying other firms are any better, but competition obviously works. I will continue supporting the ‘underdogs’ as my long-term future in computing depends on it.
If Intel had got their way, I truly believe that we would not be above 2GHz dual-cores on the desktop, let alone the 3GHz+ Ryzen monster I am running today.
I’m all for making a profit, but stifling innovation (and milking the dumb users) to do it - that really sucks.
-
Wednesday 3rd April 2019 12:19 GMT johnnyblaze
All I can say is: go AMD. EPYC will offer far more performance for the buck, and it wouldn't surprise me if AMD's 64C/128T EPYC is half the price of the high-end Xeon Platinums. AMD are on track to secure 10% of the lucrative server market - and growing. Intel's monopoly days are over, and they're now actually having to do some work.
-
Wednesday 3rd April 2019 12:23 GMT Lusty
Heat
At 400W per socket, a measure I will call "Wockets" from now on, it's a wonder that cloud providers are not cashing in on the heat. My hot tub only needs 3kW to heat it, so at 400 wockets we'd only need a couple of beefy servers to run it. Extend that out and we could have Azure health spas next to every data centre, with hot tubs, saunas, steam rooms, pools etc. all heated by the DC. With the right setup and enough wockets you could probably have a bakery running using some clever heat exchanger. Or a pizza joint. Yes, the more cores, the more pies you get cooked for free. This is the future!
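For the record, the hot-tub maths checks out, assuming every watt in becomes heat out (for a CPU, essentially true) and a hypothetical quad-socket "beefy server":

```c
/* Lusty's hot-tub arithmetic, assuming every watt in becomes heat out
 * and a hypothetical quad-socket "beefy server". */
#include <stdio.h>

int main(void) {
    const int tub_w      = 3000; /* hot tub heater */
    const int wocket_w   = 400;  /* one 400 W socket, i.e. one "wocket" */
    const int per_server = 4;    /* assumed sockets per beefy server */

    int wockets = (tub_w + wocket_w - 1) / wocket_w;       /* ceil: 8 */
    int servers = (wockets + per_server - 1) / per_server; /* ceil: 2 */

    printf("%d wockets = %d beefy servers per hot tub\n", wockets, servers);
    return 0;
}
```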
-
Wednesday 3rd April 2019 15:36 GMT Anonymous Coward
True story*
The toilet cubicles at Intel all have glass walls, floors and ceilings. When someone complained that they felt uncomfortable with this, they were told by marketing and HR not to worry about privacy because, despite initial appearances, the cubicles are fully and properly segregated by walls, and most of the staff have learned to look straight ahead only.
(* allegedly).
-
Wednesday 3rd April 2019 17:34 GMT Paul Shirley
encrypted DRAM
Optane memory also features hardware-based encryption – something no DRAM device is capable of
If you go dumpster diving for DRAM there won't be much data there to decrypt...
Presumably persistent DIMMs are a problem for encrypting data before it leaves the CPU, if you ever have to read the DIMM somewhere else. Have Intel opened a whole new set of security 'opportunities'?
-
Thursday 4th April 2019 02:39 GMT zb42
Am I the only one cynical enough to think that persistent memory is inevitably going to lead to situations where you power the computer off and back on and it remains stuck in an unintended dysfunctional state?
I'm sure there are unusual cases where it is really useful; I just can't see it being worthwhile for typical computer use.
-
Thursday 4th April 2019 05:46 GMT Anonymous Coward
If I played the lottery and won, I'd like to have one with almost entirely NVDIMM in it, so I could take my Design and Implementation of BSD book, reset to ground, and build as near as I can get to an ACID-compliant OS.
I differ from a lot of my contemporaries and their successors in that I've always thought of any system that stores a value somewhere/somehow as having a database, and I build accordingly. This would be taking it down to the silicon level, immediately or eventually. One of my Holy Grail projects, and not in the Monty Python-esque sense. [Which is still one of my top favorite films.]
Weird? Yep. That's me! General reaction to above? {See icon}
-