WTF?
This is *not* the first PCIe Gen 4 server CPU as Lisa claimed. Not by a long shot. IBM's POWER9 was shipping PCIe 4.0 in servers back in 2017, so AMD's two years late for that title.
AMD needs fact checking these days!
Chip biz AMD today, after months of teasing, officially debuted the second generation of its Epyc server processor family in San Francisco, promising performance, efficiency, throughput, and security improvements. At a keynote presentation on Wednesday afternoon, Lisa Su, president and CEO of AMD, claimed the second generation …
It's not your sentence, it's AMD's statement. You did a great job putting the required qualifier on it, but Lisa said in the live stream, unless I missed something, "first server CPU to ship PCIe 4.0".
Here's an earlier AMD statement too, with a similarly weak qualifier:
"will be the world’s first PC platform to support PCIe 4.0 connectivity"
https://www.amd.com/en/press-releases/2019-01-09-amd-president-and-ceo-dr-lisa-su-reveals-coming-high-performance
Unless all those Chromebooks that use ARM aren't considered PCs (which, granted, may be the case on a technicality, since they lack a PC BIOS -- but in common vernacular they're still referred to as such, and note that by that measure modern UEFI x86 systems aren't "PCs" either), that statement isn't limited to x86 machines, and is as a result similarly misleading from AMD.
+1 to you guys for making sure the qualifier was in your article. That's what I get for listening to the AMD stream, getting a bit pissed off at AMD, then commenting immediately. :) Always appreciate the Reg's fact-checking vultures!
Ah right, ok. She might be thinking of PCIe 4 expandability. I dunno.
In any case, I'm kinda more concerned about second-gen Epyc's RAM latency, which is offset by the large caches and prefetchers.
C.
In any case, I'm kinda more concerned about second-gen Epyc's RAM latency,
Indeed. Compared to some of the other players it's not the best, and I wonder how much of that comes down to the chiplet architecture and L3 coherence. Prefetchers aren't a panacea, in fact oftentimes they're a downright nuisance -- I've been there on a few different chips over the years and have the scars and T-shirt to show for it.
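If anyone wants to put numbers on the latency, a quick and dirty way is a pointer-chase over a growing working set -- a minimal sketch, assuming you have lmbench's lat_mem_rd installed (the -t flag for a randomised pattern is from my memory of the tool, so double-check it):

  # walk a 1 GB array in 128-byte strides; -t randomises the order
  # so the prefetchers can't paper over the DRAM latency
  lat_mem_rd -t 1024 128

The latency steps up as the working set falls out of L1, L2 and L3, and the flat bit at the end is your real DRAM latency.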
Still, the biggest item I have a problem with is that blasted mandatory PSP. In an era where data security matters more than ever, the PSP is a giant step backward. Just think about the interaction of the AMD-signed PSP with the USA CLOUD Act: that particular avenue should give everyone processing medical data etc. nightmares...
Presumably your concern is that the PSP might exhibit flaws much as Intel's Management Engine did a while back, representing an easy way to snoop around inside a machine? Well, one would expect it to have been high up on their "Got to get this right" list.
However, if your concern is that it makes complying with requests for access difficult, then as I understand it, it's up to OSes or hypervisors whether or not they use it. VM encryption is optional, not unavoidable. And I dare say that the PSP would be of big interest to everyone else...
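For what it's worth, on a Linux host you can at least see whether SEV is switched on at all -- a quick check, with paths as I remember them from my own boxes rather than anything authoritative:

  # does the kernel report SEV, and has kvm_amd actually enabled it?
  dmesg | grep -i sev
  cat /sys/module/kvm_amd/parameters/sev    # Y/1 = enabled, N/0 = off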
The concern stems primarily from the former. The latter (the tech supposedly protecting against requests for access) has not only been broken multiple times already in the real world (in theory allowing malicious hosting providers to peer in on VMs thought to be safe), but is permanently dependent on an AMD-held master key by design. Which means AMD itself can be forced to assist with requests for access.
AMD has refused to release the PSP as open source. Or allow anyone to audit it. Or allow anyone to remove it from their system (the EFI toggle doesn't remove it, BTW). And it's already been found to have a number of fun bugs.
Furthermore, AMD is a USA company, and the USA has the CLOUD Act. AMD can create malicious firmware (or just cooperate by signing someone else's malicious firmware) that will be accepted by any compatible AMD system already sold.
And no, they didn't get it right and probably couldn't care less about getting it 100% right, seeing as they're not on the hook when PSP bugs leak customer data or allow hijack. Just like Intel isn't on the hook when the ME leaks data or allows hijack.
It all comes down to "just trust us, we know what's best for you". Can you say "false sense of security"?
I want one. GimmiegimmiegimmiegimmiegimmiegimmieGIMMIE!
I'll take an even 128 of the highest end variant, mount them in the same machine, & finally be able to run a decent game of Nethack!
Let's see your framerate lag now! Muh Hahahahahahha!
*Cough*
I need a hobby. =-)P
"finally be able to run a decent game of Nethack!"
I've been running Nethack decently for a decade and a half, ever since ATI (pre-DAAMIT) came up with the ASCII-accelerated GPU.
It would be nice to have 2x 64-core CPUs and watch "make -j 256" do its thing, but I'm afraid of just how much money that would take. I'd probably need 128GB of RAM, a RAID of 3 NVMe SSDs, that 1Gbps Ethernet gear is going to feel like dialup if it's not upgraded, and I'd need a graphics card...
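Measuring it would be the easy part, mind -- something along these lines, with a kernel tree just my example of a build big enough to feed 128 threads:

  # time a parallel build with one job per hardware thread
  make clean
  time make -j $(nproc)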
Personal test results: X470 Taichi Ultimate, 64 GB DDR4-2400 ECC RAM, Ryzen 7 2700X vs. Ryzen 9 3900X in the same machine. Upgraded because of heavy x264/x265 usage.
Result for x264/x265: the 3900X is 85% faster; I expected 60% from the core count and other improvements.
With a bit of OC and DDR4 at 3000, it's 95% faster. More OC gives >100% speed improvement, but also >90°C CPU cores on these hot summer days.
Translating that to Epyc, I'd expect an epic speed improvement.
Oh, and I did compare against my old i7-4960X, which is still in use: 65% more speed per core per clock cycle.
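For anyone wanting to reproduce: something along these lines works -- my sketch rather than the exact commands from my runs, and it assumes an ffmpeg built with libx264/libx265; clip choice and settings will obviously move the numbers:

  # encode speed for x264 and x265, discarding the output
  ffmpeg -benchmark -i clip.mkv -c:v libx264 -preset medium -f null -
  ffmpeg -benchmark -i clip.mkv -c:v libx265 -preset medium -f null -

Compare the reported fps (or the utime from -benchmark) between the two CPUs.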
Intel has no technical answer to this, but their real answer is "who cares?".
At the end of the day, there's an incredible amount of inertia in the medium-size enterprise space that seems to rely entirely on Intel - maybe deliberately, maybe not, but they take the CPUs that come in the servers in their price bracket.
Where are the Cisco blades with EPYC? Where are the droves of ProLiants and PowerEdges with EPYC?
It's a shame, because this time AMD has really delivered. More than twice the cores, better IPC, much lower power usage for performance delivered. It delivers in a big way, in every available metric, at a lower price. I hope the market wakes up to this.
Intel, continuing to deliver mediocrity (full of security holes), doesn't deserve the success it currently enjoys.
"Intel has no technical answer to this, but their real answer is "who cares?"."
Intel has a technical answer to this: had Intel managed to hit their roadmaps, Intel 10nm and AMD 7nm parts should have been able to provide similar performance, although Intel would still segment features, giving AMD a cost advantage.
The problem is Intel's 10nm process. Pretending it works by shipping a few mobile products is just an attempt to meet statements made to the stock markets and avoid upsetting regulators...
AMD: moves from 14nm to 7nm, doubles core counts, cache and improves clock speeds by ~10%
Intel: moves from 14nm to 10nm, core counts remain unchanged, some hardware security fixes added to avoid performance loss, clock speeds down ~10%
['Cos it's always dark in Data Centres]
Almost an afterthought that Epyc2 gives more bang per GW. It's almost as if the datafarm industry is in denial that it's becoming an increasing part of the problem of getting to net zero carbon emissions?
Their growth is a double whammy - consuming power to compute and then consuming more power to cool the computers. You can't just sit back and piggyback on the power industry's drive to renewables... or can you?
Still, any efficiency improvement is welcome - but it would be good to see it headlining rather than being a bootnote.
The IO module should hopefully resolve the NUMA issues, as it reduces the memory paths from 3 distances across 8 domains to 2 distances across 2 domains.
There is also the option to pin memory channels to CPUs to avoid the worst-case latency hits - I know it's available as a compiler option and I suspect it will be exposed as a BIOS option too.
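In the meantime you can already steer placement by hand from userspace -- a minimal sketch, assuming numactl is available (./your_app standing in for whatever you run):

  # show the NUMA domains and the distance matrix
  numactl --hardware
  # bind the process to node 0's cores and node 0's memory
  numactl --cpunodebind=0 --membind=0 ./your_app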
Good for the competition and all that, and apparently good for innovation. AMD has led in the past on important things, and now Intel is following yet again on multi-chip cores. Personally I'm not following CPU innovation that closely any more, but if AMD can shake things up I'm all for it.
I did note one thing though: "Twitter is able to run 40 per cent more cores per rack (from 1280 to 1792) while maintaining the same power and cooling".
To me that means that Twitter is running more cores, not using AMD's Epyc to cut its power and cooling costs. That, in turn, means that Twitter needs that much more CPU power to handle all those 200+ character tweets flooding the Internet.
That is a frightening thought.
It's just semantics. It doesn't mean they will instantly increase their core count by 40% by replacing all their current servers. What it means is that in the future, when they have increased compute demand or do a HW refresh, they can decommission ~3 racks of Intel CPUs and replace them with only ~2 racks of AMD CPUs, a significant saving for both CapEx and OpEx.
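Back-of-the-envelope: 1,792 / 1,280 = 1.4, so the same number of cores needs 1/1.4 ≈ 71% of the racks -- three racks of the old kit collapse into roughly 2.1 racks of Epyc.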
Bottom line is 40% higher CPU density is a huge deal at large scale and warrants some risk taking.
It all comes down to how much incentive the HP/Dell sales drones have to push AMD over Intel
The IT industry is eager for more hardware competition, said Moorhead, but AMD, lacking the investment Intel has made in the enterprise value chain, still needs to lean on vendors like HPE, Dell and Lenovo, none of which have much of a recent track record creating demand for AMD kit.
Or maybe Intel needs to stop giving the vendors backhanders for not using AMD chips.
ONE Combined CPU/GPU/DSP Super-Server Chip !!! 60 GHz GaAs !!! 1.2 PetaFLOPS at 128-bits wide !!!!!!!!! Sitting on the desk at work! 'NUFF SAID !!!! WE WIN !!!!
WE UTTERLY BLOW AWAY INTEL, AMD, IBM and ARM COMBINED !!!!!
.
P.S. The Supercomputer version is 595 times more powerful than Summit at 119 ExaFLOPS SUSTAINED !!! Blows away the ENTIRE Top500 COMBINED !!!!
Again, WE WIN !!!
.