But ...
Will it run Crysis Remastered?
AMD is officially launching its hotly anticipated next-gen X3D desktop processors based on the Zen 5 architecture today, which means The Register can let you know about the 8-core Ryzen 9800X3D we've personally taken for a spin. The X3D series comes specially packaged with an additional slice of L3 cache on top of the pre- …
gave similar results.
He did point out that if you're buying this for gaming, the bottleneck at high settings isn't the CPU but the GPU.
His summary was: yes, it's good, but you'd be better off buying a lower-spec part and putting the savings into better monitors, GPUs and memory.
While the gaming community may be larger than engineers who develop and run numerical codes on their desktops, the time stepping in a scientific simulation is quite similar to in-game dynamics but without the game. In fact, since a numerical run has no graphics-card bottlenecks, the X3D series CPUs with full-width AVX512 are astonishingly fast for small-scale computations.
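The point about time stepping can be made concrete with a minimal sketch (illustrative values only, nothing from any benchmark): the update loop below is structurally what both a game physics engine and a small numerical code run every step, minus the rendering.

```python
# Minimal sketch: semi-implicit Euler time stepping for a damped
# harmonic oscillator. A game physics engine runs essentially the
# same update each frame; a scientific code runs it without a GPU
# in the loop. All constants here are illustrative.

def simulate(steps: int, dt: float = 1e-3) -> tuple[float, float]:
    x, v = 1.0, 0.0          # position, velocity
    k, c = 4.0, 0.1          # spring constant, damping coefficient
    for _ in range(steps):
        a = -k * x - c * v   # acceleration from the force model
        v += a * dt          # update velocity first
        x += v * dt          # then position (semi-implicit Euler)
        # a compiler targeting AVX-512 would vectorise this kind of
        # loop when it runs over arrays of many particles at once
    return x, v

x, v = simulate(10_000)      # ten simulated seconds at dt = 1 ms
```

With damping, the amplitude decays from its starting value of 1.0, which is why small desktop runs like this are CPU-bound rather than GPU-bound.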
Woohoo! If more science and debugging can get done faster on the desktop, that leaves leadership-class supercomputing available for larger runs.
Okay. Time for the old Nomex knickers - because I'm going to innocently throw some shade here and ask: what's wrong with these numbers? But first, the proviso…
The proviso is that I think these new CPUs are great for games (because the games are written for Windows, not for any other reason). I think these CPUs might be great for some highly specific business use cases (because the software is written for Windows, not for any other reason). But for general business and software-development tasks? Even server tasks? No. The day of x86 and x64 is done. The future belongs to ARM, and not only for its parsimonious power consumption.
Let's look at some numbers. Note that this comparison was done on the CPU Monkey site, which as far as I can tell has no particular bias for one manufacturer over another. Note also that I picked the very cheapest M4 chip that money can buy - and it still toasted the Ryzen in many benchmarks where a test has been run. Had I specced it up a wee bit to a Pro or a Max then it would have smoked the Ryzen across the board (but that would open me up to criticisms of cheating). Note also that I have a couple of Ryzen-powered machines (and a Xeon, and an M2) and I like 'em all.
https://www.cpu-monkey.com/en/compare_cpu-amd_ryzen_7_9800x3d-vs-apple_m4_8_cpu
What have I missed? Because it looks to me as if the chip in a £599 Mac (which is actually a 10 core, not the 8 core that I used in the comparison on CPU monkey) is faster than a £400 CPU that just gets you the CPU. I mean yes, the PC is upgradable after purchase - which is certainly not nothing - but most users don't upgrade their computers from purchase to chuck time, so that's only a valuable function for people like you and me. Not for the general purchaser.
Scoring - down vote with a valid comment is a valid downvote, and counts me as wrong. Down vote without a comment or with a valueless comment counts for nothing. And I'd genuinely like to understand what's going on here.
I don't want to start a holy war here, but what is the deal with you Ryzen fanatics? I've been sitting here at my freelance gig in front of my Ryzen 9800X3D rig for about 22 minutes now while it attempts to build a simple VB.NET program. 22 minutes. At home, on my 8MHz Dell 286 running Windows XP and Visual Basic for DOS, which by all standards should be a lot slower than VB.NET on the Ryzen, the same operation would take about 2 minutes. If that.
In addition, during this compile, I can’t edit my other files. And everything else has ground to a halt. Even the window is straining to resize as I type this.
I won't bore you with the laundry list of other problems that I've encountered while using this Ryzen, but suffice it to say there have been many, not the least of which is I've never seen programs run faster than those running on other computers. My ZX80 with 1K of RAM runs software faster at times. From a productivity standpoint, I don't get how people can claim that Ryzen is a "superior" CPU.
Ryzen addicts, flame me if you'd like, but I'd rather hear some intelligent reasons why anyone would choose to use Ryzen over other faster, cheaper, more stable CPUs.
We know that the ARM implementation in Apple's Mx CPUs is much wider than the ARM CPUs you find in even high end mobile devices, and we know that ARM is a very efficient architecture. I suppose it simply took a behemoth like Apple to convince the world that ARM is (or always was) good for more than low power devices. Additionally, there's a lot of technical and historical debt still present in x86, despite the best efforts of Intel and AMD over the years to make it more "RISC-like" and efficient.
Apple has, presumably, optimised its CPU cores specifically for the workloads for which they're commonly used. You'd expect them to be pretty good in video and image processing because that's a market Apple targets.
Although there are so few working comparisons on that page that you can't draw much of a conclusion.
Try CPUBenchmark instead. The 9800X3D isn't on there yet, but the similarly priced 9900X has over 100 samples.
The 9900X blows the M4 10-core out of the water: more than double the benchmark result, with the 9900X achieving 54,824 vs 25,026 for the M4 chip.
https://www.cpubenchmark.net/compare/6040vs6171/Apple-M4-10-Core-vs-AMD-Ryzen-9-9900X
The single-CPU score for the M4 is 4,533; for the 9900X it's 4,685. (I would guess that if clocked the same they'd be closer still.)
The 9900X has 12 cores and 24 threads; the M4 has 4 performance and 6 efficiency cores, with no hyper-threading. Yet on 2.4x the threads, the AMD's multi-core score is only about 2.2x higher, so per thread the M4 actually comes out ahead.
That's not blowing anything out of the water. (We won't even start on the power consumption...)
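Working the scores quoted above through (numbers as given in the thread; per-thread division is my own rough normalisation, not anything the benchmark site publishes):

```python
# Per-thread arithmetic on the quoted CPUBenchmark scores.
amd_multi, m4_multi = 54_824, 25_026    # multi-core scores quoted above
amd_single, m4_single = 4_685, 4_533    # single-core scores quoted above
amd_threads, m4_cores = 24, 10          # 12C/24T Ryzen vs 4P+6E M4

multi_ratio = amd_multi / m4_multi         # ~2.19x overall
per_thread_amd = amd_multi / amd_threads   # ~2,284 points per thread
per_core_m4 = m4_multi / m4_cores          # ~2,503 points per core
single_ratio = amd_single / m4_single      # ~1.03x single-core
```

So the Ryzen's multi-core lead is roughly proportional to its thread count, and the single-core scores are within a few percent of each other.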
Not suggesting the AMD isn't fast, if I was replacing my 5900X I'd be getting a 9900X now, the 5900X has been superb and is super quick. But the Apple stuff is very quick too. I have an M1 mac mini, and compared to the AMD when you factor in core count and whatever, it's no slouch.
The Mac chips are massive SoCs. They're very fast and efficient as long as your task fits in the unified memory. That $/£599 Mini you're quoting has 16GB of memory forever. Some people need to run more than a browser. I've already been wondering whether to upgrade one of my computers from 4×32 GB to 4×48 GB sticks, or wait for 4×64 GB to be possible with desktop DDR5. I added a Gen5 M.2 stick for swap as a stopgap and decided to wait for bigger desktop DDR5 sticks.
The AM5 socket AMD CPUs don't have the best possible RAM throughput, and that's where the "3D" models come in. That extra cache is a cost-effective solution for desktops compared to adding a ton of expensive wiring like a server system would.
@45rpm
The problem with the Mac Mini is cost.
If you just want the base model, it's incredible value.
But you want another 8GB of RAM? That'll be £200
TWO HUNDRED FUCKING QUID.
Put that in context: that will get you 64GB of DDR5 on a PC.
An extra 256GB for an SSD upgrade
Again £200!
Well, you can get a PCIe 5.0 1TB M.2 and still have change
So base is good value, but every upgrade is a scam.
I was looking at the reviews of the Mac mini, and whilst I have no need for any kind of mac... the price of the base model is pretty impressive for the specs.
All you really need to do is hook up some decent external storage and suffer with the smaller amount of ram.
A 4TB SSD in an external enclosure could be yours for less than £200... well, it was the last time I checked. I picked up an 8TB Samsung 870 SSD for £300 last year, then just as I was about to buy some more to upgrade my media server's oldest 6TB HDDs... prices doubled and have remained over £500 ever since.
To put the education-edition Mac Mini into perspective:
Mac mini with 16GB RAM, 256GB SSD: $500
Double the RAM and SSD: $1,100
You could buy TWO MACHINES for less than the cost of doubling the RAM and SSD.
Adding 16GB of RAM and 256GB of SSD alone costs $600 - 20% more than an ENTIRE MACHINE with 16GB RAM and a 256GB SSD.
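Running the education prices quoted above through the arithmetic (prices as given in the thread):

```python
# Sanity-checking the quoted education-edition Mac mini pricing.
base = 500          # Mac mini, 16 GB RAM / 256 GB SSD
doubled = 1_100     # same machine with 32 GB RAM / 512 GB SSD

upgrade_cost = doubled - base       # $600 for +16 GB RAM, +256 GB SSD
premium = upgrade_cost / base - 1   # upgrade costs 20% more than a
                                    # whole second machine
two_machines = 2 * base             # $1,000: two base minis, still
                                    # cheaper than one doubled one
```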
The simplest way to put it: games need faster access to memory/cache than, say, more business-related tasks such as you'll find with content creation software. Stacking extra cache on the CPU die sped up that access by a huge margin and increased the cache size enormously.
But with the original design the stacked cache sat on top of the die, with filler material that effectively caused heat issues. So those X3D parts ran at lower clocks and couldn't be overclocked at all. I've got a 5800X3D in my gaming rig and it's still a beast of a CPU even after 4 years.
With the 9800X3D, they've moved the stack below the die and eliminated the filler layers entirely. As a result it runs much cooler and more efficiently, hits the same frequencies as its non-X3D counterparts, and even allows some overclocking if you want to.
Now, for me, overclocking is dead... the days of getting 20-30% extra out of your CPU pretty much died with the arrival of multicore. I discount the early-to-mid 2010s, when Intel chips could be clocked higher only because Intel was deliberately selling CPUs below their capabilities, thanks to the utter lack of competition from AMD, who had only the shite FX line between 2011 and 2017 after the huge success of their Phenom II chips.
These days, if you can get a 5% uplift without using 20% more power, it's considered good.
I actually miss the old days: my 600 MHz P3 would run at 900 MHz; my 1.2 GHz Thunderbird would run at almost 1.6 GHz; the AMD Barton-core 2500+ would run at 3200+ speeds simply by changing the front-side bus from 166 to 200 MHz. The Phenom II 955 Black Edition with the C2 stepping would overclock to almost 4 GHz without any extra voltage. And how many people remember overclocking their AMD CPU with the graphite-pencil trick, or unlocking cores on some of the Phenom II models?
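The Barton front-side-bus trick above is just multiplication: effective clock = multiplier × bus speed, and the 2500+ and 3200+ shared the same multiplier (11x, stated here as the assumption behind the arithmetic):

```python
# Why bumping the FSB from 166 to 200 MHz turned a 2500+ into a 3200+.
multiplier = 11
stock_fsb, bumped_fsb = 166, 200          # MHz (nominally 166.7 on stock)

stock_clock = multiplier * stock_fsb      # 1826 MHz, the 2500+'s clock
bumped_clock = multiplier * bumped_fsb    # 2200 MHz, the 3200+'s clock
```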
I even remember undervolting my Ryzen 7 3800X when I dropped it into my media server, because I didn't need an 8-core 4 GHz CPU to transcode and serve up media around the home... that thing ran at 3 GHz and less than 1 V until last year, when I replaced the CPU cooler and the chip came away stuck to it; I dropped it and bent some pins. Replaced it with a 5600G and removed the GPU, so the system now runs cooler, quieter and just as fast.
AMD are marketing it as a gaming chip. The Mac Mini, and indeed anything from Apple, isn't really suitable for gaming, but yes, they are better at some other workloads, so get the right tool for the job you want it to do. There are some workloads where my iPad Air 4 beats the pants off my Threadripper Pro with 512GB RAM. There are other workloads where it just isn't capable of even completing it.
This! 100%.
I run a school classroom with iMacs for the students.
It runs everything I need: VS Code, the Arduino IDE, Creative Suite etc.
Homebrew for python etc.
But Gaming? You need Windows or Console.
That’s why I have gaming machine with a 49” ultrawidescreen and a 4090 in the corner of the lab as well.
I'm still sitting here with my 5800X3D CPU, which, combined with my 6900XT GPU, 32GB of RAM and good use of the FSR/RSR/FMF tech now available, is giving me a solid 150-200 fps or higher in all the games I play... that's on highest in-game settings @ 1440p on a 165Hz monitor.
Some of the games I've been playing recently
God of War Ragnarok
Jedi Survivor
Red Dead Redemption
Cyberpunk 2077
If my now-ancient CPU is still performing that well with the advances in frame generation, and given that I don't use ray tracing (couldn't give a flying fuck about the tiny bit of cosmetic enhancement it offers whilst destroying performance), then I sure won't be building a new gaming system any time soon... not until that CPU becomes the actual bottleneck in modern games. I might even look at a GPU upgrade next year to push its life out further; if not, then perhaps late 2026 might see me do a new build. I started this current build in April 2020 during lockdown, with a 3800X and a 5700XT GPU from my old system.
So seeing the constant improvements to the X3D line, and the fact that these new ones are fully unleashed by the new stacking changes, freeing up the extra speed and overclocking that the 5xxx/7xxx 3D parts weren't able to offer, is very promising. Intel simply has nothing to compete with the performance and efficiency in gaming-focused systems.
My next build will be an X3D-based system... and if anything happens to this one to force me into an early rebuild, the 9800X3D will be the only choice I'll consider.
I've got a near-identical setup to yours: same 5800X3D, 6900XT, 32GB RAM. Only diff is I'm on a 3440 x 1440 ultrawide @ 144Hz.
I built the system originally back in 2019 with a 3800X and a RTX 2080, and 16GB RAM. (card was a hand-me-down from an earlier i7 system).
Only upgrade I'm considering is a GFX card upgrade, but I'm waiting to see what AMD release next year. Don't really care for nVidia, and I'm on Linux these days, rather than Windows.
You've picked a good analogy there.
Whilst the top speeds of Fords may not have increased recently, their engine and fuel efficiency have.
The 13th and 14th Gen Intel parts were like old "muscle" cars with engines so overtuned and ramped up that they were literally damaging themselves beyond natural wear and tear.
Intel realised they had hit the limits and had to reset.
Whilst the 200S series may seem lacklustre, they are still good chips for most compute tasks, and they form a solid base for the future.
You also have to remember that Intel also have to build experience with their new chiplet/tile designs. Much as AMD have had to with theirs.