Those are some attractive prices for some very desirable specs. I'm impressed, and sorely tempted!
Ever-ambitious chip maker AMD has started shipping the third generation of its Ryzen CPUs, built on a 7nm process. These are the first desktop processors to support the PCIe 4.0 standard, which doubles the per-lane signalling rate to 16 gigatransfers per second compared with PCIe 3.0 – something that's increasingly important for …
Nice that the new Zen 2 Ryzens will work in a current mobo (450 chipset) with the AM4 socket, so I have some Ryzens on order.
I also bought an interesting (bargain) Core i5-9400F, and was happy until I realised that although it has the LGA 1151 socket of the previous-gen Intel mobos, infernal have tweaked the microcode so that it will only run on their latest 3xx mobos, so I have to buy a new £90 mobo for Intel. (I can't buy a refurb, as I would need to update the BIOS before it would run the 9400F, or borrow a cheap eighth-generation Intel CPU from Amazon for a couple of hours to do the job.) Further, once I have changed my Intel CPU and its new mobo, I guess Microsoft will want me to buy a new Windows licence, as I will then have substantially changed my setup!
AMD is so much more reliable and less gougy, and their RX 5700 XT is amazing value for 1440p gaming .... tho' PCIe5 is coming soon, so hold off on the big $$$ mobo upgrades
" tho' PCIe5 is coming soon, so hold-off on the big $$$ mobo upgrades"
PCIe5 is likely 2+ years away from commercial release, and likely more for the PC market. If Intel's plans happen, they would release it on their 7nm node, BUT I suspect they will drop back to PCIe4 on 10nm/7nm if AMD market-share increases push PCIe4 into mainstream components. Intel is betting on the market being prepared to wait.
PCIe4 is very close to the limits current CPUs can provide for IO bandwidth (i.e. Ryzen at 40 PCIe4 lanes giving ~80GB/s vs memory at ~35GB/s). While I wouldn't rule out doubling this in the future, PCIe3 met pretty much all PC IO bandwidth requirements aside from NVMe/SSD and multiport Thunderbolt/USB3.1, which could be covered by PCIe3 8x/16x add-on cards.
From a practical perspective, PCIe4 uses too much power for mainstream PCs, based on early X570 board issues - PCIe5 would dramatically increase this again (X470 @ 7W, X570 @ 11-15W depending on the number of slots, PCIe5 likely >20W depending on the number of slots supported)
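The lane arithmetic above is easy to sanity-check. A quick back-of-envelope sketch (the 16 GT/s rate and 128b/130b encoding come from the PCIe 4.0 spec; the 40-lane count is just the figure quoted above):

```python
# Back-of-envelope PCIe 4.0 bandwidth check.
# 16 GT/s per lane with 128b/130b encoding -> usable bytes/s per lane.
GT_PER_S = 16e9
per_lane = GT_PER_S * (128 / 130) / 8  # bits -> bytes: ~1.97 GB/s per direction
lanes = 40                             # the Ryzen lane count quoted above
total = per_lane * lanes
print(f"{total / 1e9:.0f} GB/s across {lanes} lanes")  # → 79 GB/s across 40 lanes
```

Which lands right on the ~80GB/s figure quoted against ~35GB/s of memory bandwidth.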
AMD is only impacted by Spectre and Spectre v4.
AMD has added hardware-based mitigations for both variants of Spectre, which should reduce the performance impact.
This assumes the use of the latest operating system available at the time of release (Windows 10 version 1903 with patches, or Linux 5.2 to get all Spectre patches plus all Ryzen optimisations, although 4.15+ should be sufficient for most).
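On Linux you don't have to guess which mitigations are active: the kernel reports them itself. A minimal sketch reading the standard sysfs interface (present since 4.15; the output naturally varies per machine):

```python
# Read the kernel's own report of CPU vulnerability mitigations.
# Linux 4.15+ exposes one file per issue under this sysfs directory.
from pathlib import Path

def vulnerability_report():
    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    lines = ["=== CPU vulnerability status ==="]
    if vuln_dir.is_dir():
        for f in sorted(vuln_dir.iterdir()):
            lines.append(f"{f.name}: {f.read_text().strip()}")
    else:
        lines.append("sysfs interface not available on this system")
    return lines

print("\n".join(vulnerability_report()))
```

On an AMD box the spectre_v1/spectre_v2 entries typically read "Mitigation: ...", while meltdown reads "Not affected".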
... aaand you just pinged the very reason why we're very keen to ditch Intel as soon as there is enough market volume to make sure spares are not a problem (and they fix that pesky Linux 5.0.9 kernel problem that stops the newer distros from booting up).
I am glad AMD have shot so far past Intel that their only remaining move is the dropping of prices, and possibly trousers.
Frankly, I'm hoping AMD will now casually wander into the server market as well, where performance per watt matters as much as raw performance itself.
"Frankly, I'm hoping AMD will now casually wander into the server market as well, where performance per watt matters as much as just the raw performance in itself"
Seems more likely to happen than it was a couple of years ago... Some folks who consume a huge amount of compute have noticed a drop in "per-core" throughput on the Xeon boxes across three generations in a row that isn't being offset by "TurboMode", cache size increases, or the increase in cores per cubic foot. Some of it is down to poorly tuned code, some down to chip errata, and to make matters even more fun, the performance varies pretty wildly from box to box, or even run to run on the same box. Makes tuning very tiresome for everyone involved, and all the effort that goes into tuning new boxes to make them nearly as quick as the ones they replaced isn't going down well.
The "stable performance" thing is critical for big workloads - you need to be able to predict how long stuff will execute in order to hide dispatch latency and startup costs... Throughput that can take a 25% hit on 100% loaded boxes depending on the phase of the moon makes life much harder and more expensive for everyone.
Fun fact about Spectre mitigation, from some recent testing I saw: top-end AMD chips take about 150-200 clocks to context switch; top-end Intel chips, post-Spectre, take 1000-1200. These exploits really pulled the architectural rug out from under Intel; AMD, by luck or design, pretty much skated through.
We knew the context switching on Intel gear was pretty bad, but had no idea that AMD was so much better - thanks for that ray of sunshine. In some cases post-fix we've had to reduce the process count on a box, as if the app had suddenly become memory-bandwidth limited despite running on hardware that delivers more bandwidth; expensive context switching would explain that anomaly.
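The per-switch clock counts above come from dedicated test rigs, but you can get a rough feel for round-trip switch cost on your own box. A minimal Unix-only sketch using the classic byte-through-pipes trick (this is a generic micro-benchmark, not the methodology from that testing):

```python
import os
import time

# Rough feel for context-switch cost: parent and child ping-pong one byte
# through two pipes, so each round trip forces at least two switches.
# Unix-only (uses fork); absolute numbers depend heavily on kernel,
# mitigations and CPU - treat them as illustrative only.
def pingpong_round_trip(rounds=2000):
    r1, w1 = os.pipe()  # parent -> child
    r2, w2 = os.pipe()  # child -> parent
    pid = os.fork()
    if pid == 0:  # child: echo every byte straight back
        for _ in range(rounds):
            os.read(r1, 1)
            os.write(w2, b"x")
        os._exit(0)
    t0 = time.perf_counter()
    for _ in range(rounds):
        os.write(w1, b"x")
        os.read(r2, 1)
    elapsed = time.perf_counter() - t0
    os.waitpid(pid, 0)
    return elapsed / rounds  # seconds per round trip

if __name__ == "__main__":
    print(f"~{pingpong_round_trip() * 1e6:.1f} us per round trip")
```

Running it on pre- and post-mitigation kernels (or with `mitigations=off` on a test box) makes the overhead discussed above directly visible.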
With respect to Intel's misfortune, it was a result of design. AMD took the view that they shouldn't evaluate permissions *after* doing the access. Intel chose to evaluate permissions during/after access - presumably to mitigate memory access latency. Pretty sure Intel aren't unique in taking that option; there were quite a few papers on reducing memory access latency in the mid-to-late 90s.
"Boo-ho no fair."
Seriously, long years ago I was shown an implementation of an 8080 processor in AMD bitslice. The idea was that if you had an 8080 design and wanted to double or triple the speed with minimal investment (i.e. not switch to a different architecture and cross-compile and all that stuff) you could implement it in bitslice parts and get that speed boost without any software changes.
We weren't interested - but it's a reminder that AMD has been trying to beat Intel for nearly 40 years.
2901, that was it.
I once worked with a system where for some reason the 2901s (4 of them) were very close to the edge of a rack-mounted board with an amazingly grippy backplane connector. In removing it one day, the board flew out of my hand and the ceramic-cased edgemost 2901 hit the floor, cracking open. Fortunately we had some in the lab, but it was a nerve-racking solder job before anyone noticed.
I was amazed at how tiny the die was.
"it's a reminder that AMD has been trying to beat Intel for nearly 40 years."
Around 2004 they were beating Intel. AMD had just released the Athlon 64, which not only was the first mainstream* CPU to use a 64-bit architecture (whilst being fully backwards compatible with existing 32-bit stuff), but was also much quicker than Intel's CPUs at the time, and the price was right too.
Of course that inspired Intel to ditch Netburst and come back with their 'Core' architecture, which were a bloody good series of chips themselves, so at least the competition worked in the customer's favour again.
* No, of course Itanium doesn't count: it was server-only, and only high-end servers at that. Also it was rubbish and not backwards compatible.
I sure hope that this means more mainstream laptops and desktops become available with AMD options. It's ever so difficult to buy a pre-built AMD machine.
It is ever so difficult to buy a pre-built AMD machine, isn't it?
Let's look at the situation ten years ago:-
"It's ever so difficult to buy a pre-built AMD machine."
The expectation is that pre-built AMD machines will be more common. One of the major limiting factors with recent generations has been the lack of motherboard options - the release of such a broad range of parts with the latest Ryzens shows motherboard manufacturers have invested in them this time around.
You'll probably have to wait until Christmas to see the majority of them and 2020 to judge clearly.
And ongoing graphics crashes as well:
Just one of many similar posts. In my case (and for at least one other guy I know) it is still not fixed, so I'm buying an Nvidia card instead. Really, AMD, why can't you provide working drivers/support?
If AMD never existed, I'd shudder to think where we would be now.
Perhaps still trundling along with 32 bit architecture and quad cores for only the richest of the rich?
Although I grant Intel wouldn't have seen the need to make the instruction set so pitifully inefficient, in one of their many attempts to stifle competition.
We'd all be using Intel Itanium. That was Intel's path for 64-bit CPUs, but AMD64 spoiled those plans in a big way.
We'd all be using Intel Itanium. That was Intel's path for 64-bit server CPUs, but Intel spoiled those plans in a big way by failing to deliver; AMD simply capitalised on Intel's failure and delivered efficient 64-bit for servers AND the mainstream. These days, AMD is showing Intel how to beat 14nm (by shipping 7nm Belgian waffles)
"We'd all be using Intel Itanium."
If Intel had persisted with Itanium for 64-bit support, I suspect the world would have moved off x86.
It's hard to say which with hindsight. Given that POWER/SPARC would likely have retained more commercial value, ARM would have been the likely alternative, but we would have had to wait 10+ years...
Intel is definitely behind now. When the first generation of Ryzen came out, though, AMD had doubled its floating-point resources, to correct the issue that Bulldozer had only half as much floating-point throughput as Intel chips. Unfortunately, Intel doubled what it put in its chips too, so while Ryzen was in general a huge improvement on Bulldozer, in this area it made no relative progress.
This was supposed to have happened again: AMD designed the current Ryzen chips so as to be competitive against the 10nm Intel chips with AVX-512 support. It's only because those Intel chips aren't here yet that AMD is ahead... instead of behind with too little, too late, once again.
So when the next generation of Ryzen comes out in a little over a year, I don't expect another floating-point doubling to catch up with Intel.
Maybe Intel needs to wake up, but it seems to me that so does AMD.
I'm not so sure. AMD have done a couple of deeply structural things that not only allow them to scale quite well, but also avoid bottlenecks that have plagued us for a while - all while doing it on 7nm, when Intel hasn't even managed to get 10nm production-stable yet.
So more power, more speed, fewer bottlenecks and all of that at lower power demands. Oh, and fewer backdoors, let's not forget that one.
Intel will have to work very hard to catch up, let alone overtake AMD.
Intel created AVX-512, and thus has to invest silicon area that largely goes unused in the first generations in order to build a critical mass. IOW, it is seeding it to get people to use it.
AMD didn't, so it only upgrades the vector unit when wider vectors actually become commonly used, and spends the area on whatever is needed now (either making the chip cheaper or adding other features).
Keep in mind that there is barely any end-user AVX-512 silicon out there atm, and worse, the standard is hopelessly fragmented into sub-standards.
Some Xeon servers had an older sub-standard, but clock down when it is heavily used, or implement a 512-bit instruction using two 256-bit pipes (much as AMD implements 256-bit AVX2 with two 128-bit pipes in the current Ryzen 2x00).
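The fragmentation is visible in the CPU flags themselves: each sub-standard shows up as its own `avx512*` feature bit. A small sketch that pulls them out of a `/proc/cpuinfo` flags line (the sample string is illustrative, a Skylake-SP-style set, not any specific chip's output):

```python
# List the AVX-512 sub-extensions a CPU advertises, given the "flags"
# line from /proc/cpuinfo (flag names are the kernel's own spellings).
def avx512_subsets(flags_line):
    return sorted(f for f in flags_line.split() if f.startswith("avx512"))

# Illustrative sample - on a real box, read the flags line from /proc/cpuinfo.
sample = "fpu sse2 avx avx2 avx512f avx512dq avx512cd avx512bw avx512vl"
print(avx512_subsets(sample))
# → ['avx512bw', 'avx512cd', 'avx512dq', 'avx512f', 'avx512vl']
```

A Ryzen 2x00 flags line would list `avx2` but return an empty list here, which is rather the point.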
Intel performance seems to be taking a big hit due to Spectre/Meltdown mitigation. My i5 machines crawl these days compared to a few years ago. The architecture advantage Intel used to have seems to have been based on fragile inter-core security, especially Hyper-Threading - which is bad news for an i5 laptop, a 2-core part with 2 extra Hyper-Threading threads.
I recently gave away my old AMD Phenom box to a friend's kids. (Replaced with a Ryzen shortly after they were released, as the Phenom box was about a decade old, with periodic upgrades of the graphics card etc.)
Apparently it's currently running Fortnite at a decent resolution on the "epic" graphics settings, unlike a friend's two-year-old Alienware box that struggles and sits on lower settings. And my old box loads faster and has a better framerate, apparently. You can imagine the sort of crowing those kids are doing.
And this is from a box that did run Crysis when it was a new game and "can it run Crysis?" was a current joke. By any benchmark it should be utterly destroyed by the modern gear that everybody has, yet apparently Spectre/Meltdown has reduced the new chips' performance so much that it's competitive again.
Having followed AMD's resurgence closely before and since their seismic Zen processors, I can assure you that Intel's cupboard has already been emptied of potential countermeasures.
They have been in trouble since Zen in early 2017, and at best have mounted some decent rear-guard actions (as you would expect after so many years of almost total domination of a conservative market) during what, to insiders, has clearly been a rout.
CPUs have always been a very monopolistic duopoly (both firms started at about the same time in roughly the same neighbourhood ~50 years ago), so this role reversal is one of the most amazing business stories of all time.
As can be imagined, Intel has plenty of moats embedded in the market, chiefly control over the predominant pre-built PC suppliers. The savvy, who have the freedom to switch - the millions who assemble the 6-10 sub-components themselves - have long been flocking to Zen: recent available numbers (mindfactory.de, e.g.) indicate an AMD/Intel split of ~60/40 by volume and 50/50 by revenue in this "canary in the coal mine" market.
They have been fortunate to retain a diminishing but plausible case among big-spending competitive gamers, whose sole metric is what drives their expensive graphics cards fastest at lower resolutions. With the new Zen 2, neither dominates the other.
The climactic perfect storm is now upon Intel. A spectacularly superior Zen 2 has arrived after they have badly betrayed their pre-built and other partners, gouged and treated customers with contempt, suffered a huge fail in going from years ahead to years behind in manufacturing process, and run into very serious security problems arising from ill-advised hot-rodding shortcuts in their past designs.
Incredible as it sounds, Goliath is in deep, deep trouble. The bigger they are, the harder they fall. It's all about confidence, and their halo is rapidly tarnishing. Even the ~PC-illiterate are getting the message - Japan, e.g., has always been an almost all-Intel market, and now seems split equally in the DIY segment.
To top it all, they cannot resort to the key weapon that overwhelming scale usually grants - predatory pricing.
Perhaps the killer aspect of the Zen architecture is scalability. They can take a single, relatively simple, high-yielding processor chip and combine it in a myriad of multiples to address the widely differing power levels of all segments of the market - from laptops to mega servers.
Contrary to other posters here, it is in the data centre where AMD is the biggest threat of all. There are now extremely few use cases where Intel have a competitive pitch to make. Intel's moats don't work nearly as well with big boys like the big seven cloud providers.
It is that market where the real money is. Consumer PCs are a relative sideshow, but a very nice complement for AMD. Servers demand perfect chips. AMD's modular architecture means lower-binned or partly defective chips can be combined into competitive products. An entry-level desktop 6-core CPU, e.g., is two 4-core units combined, each with one core de-activated. Almost nothing is wasted.
This ingenious aspect of the Zen architecture gives AMD an overwhelming cost advantage. Whatever Intel offer, AMD can simply match it with 30% more power at 30% lower cost, and still make 50%+ margins at Intel-set market prices.
Their failure in manufacturing has deep consequences too. They now have eye-wateringly expensive plants unable to produce competitive product. This, at best underutilised, capital is a huge burden for them.
While they had a technical lead, owning factories gave them huge market power. Now that they are years behind specialists like TSMC and Samsung, who do subcontract manufacturing for the likes of AMD, the factories are a huge liability.
All Intel can do is concede market share in an orderly retreat, and keep as much of their high margins as they can for as long as they can - keep that share price propped up until insiders can sell their shares without a panic.
A curious twist to Intel's problems is a recent mini-boom for them, for the inglorious reason that their server chips are now up to 20% slower due to security patches - which of course means servers need ~20% more chips installed to do the same workload.
This has led to some deceptively good recent revenue results for Intel.
Me and my son did this too. Although I never lost faith in AMD. All my main PCs (gaming etc) have run AMD since the Athlon days. (About 1999?)
Never felt it was a good thing to let one giant dominate, and AMD always had a good upgrade path philosophy, and good bang for buck (even when the ultimate speed wasn't there). Cheap decent motherboards were always available, for example, and they didn't force expensive RAM on you just to have the latest, shiniest.
The only reason why I chose Intel when I last upgraded my personal workstation was because it allowed me to save quite a significant amount of money by not having to replace all the DDR3L RAM modules from the previous system. This of course made me feel a bit silly when Meltdown got revealed just about a year later... Come next upgrade, it's time to return to the AMD fold.
I've been AMD for years here. Got a couple of Intel boxes, but both picked up second-hand for a song. My desktop is an FX8350. I think it's about time to recycle some of those parts down to rebuild my home server (put in a more frugal CPU), and take the Athlon X3 from the server and build an unRAID box for backups.
That'll leave me room at the top to build out a Ryzen machine.
As I was explaining to a colleague yesterday, I'd take a bit of a performance hit and a bit of extra cost to have AMD, just to give Intel some competition. They need to be kept honest-ish.
Yep, and you both misunderstood. If I'd been clearer I'd have said that, if I had to, I'd take a bit of a performance hit and a bit of extra cost just to put my money AMD's way. I know that makes me a brand whore, but hey-ho.
In this case it appears to be a bit of a no-brainer.
Single core or multithreaded benchmarks?
It does depend on what you need it for. Video rendering sounds great on it. With the majority of game engines STILL only using a low number of threads, you may still get better performance out of a lower rated chip with better single core performance*
*if there is one - I'm not diving into benchmarks as I currently have no upgrade plans for my PC, except maybe a GPU refresh at some point.
I just want a mid-range machine which can last the six or seven years my old laptop did.
This would have meant an Apple laptop, not too long ago, well, before those idiots turned into the idiot tax operation (making the battery not user-swappable, then soldering RAM and SSD).
The soldered components have turned out to be a boon for us: it is nigh impossible to reformat a properly locked-down MacBook, so they're now mostly stolen for parts.
We now have finders fee labels on the machines and the welcome screen, and SmartWater (also labelled) makes component re-use that little bit more worrying.
The resulting savings in both insurance and GDPR fine avoidance make the non-upgradability a complete non-issue.
Hmmm, not sure.
a. I still don't like having a machine stolen - even less, actually, because I personally don't have the volume to make insurance cheaper, so it's usually a choice between being careful and taking the hit if it happens, or paying over the odds because less careful people push up the actuarial statistics. And I could actually stick one of these "reward if found" things on the bottom - it's not something I see often and I like the idea (although someone could nick it then for the reward :).
b. I don't expect a laptop to last more than four years, not because I'm hard on gear but simply because technology moves on. Thus, I plan for replacement. If I need to do this in less than 4 years because of theft or me doing something stupid, OK, but usually four years is fine.
c. I tend to buy the one-but-top model, because that's usually the one with the best balance between price, features and longevity (the top models usually carry an extra markup because they're top), and I have been doing that since I started buying laptops. That means I have always had reasonably powerful and costly laptops, but I have changed parts in that time exactly once, and that was in the days when SSDs started appearing in the middle of my renewal cycle.
Valid points, but I will just mention in passing that when I have bought a new laptop, more or less the first thing I have done is clone the hard drive onto a better one (server-grade before; nowadays an SSD) and put the old drive in the safe.
It's a relatively cheap rescue plan and provides a simple and effective diagnostic if anything goes wrong; if restoring the hard drive restores function, it isn't a hardware fault.
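The clone-then-verify idea is simple enough to sketch. Here ordinary files stand in for the drives - on real hardware you'd point the same logic (or `dd`) at `/dev/sdX` devices as root, and any device names would be assumptions, so check yours with `lsblk` first:

```python
# Clone-and-verify sketch: block-copy a source "drive" to a target and
# compare checksums. Ordinary files stand in for /dev/sdX devices here.
import hashlib
import shutil

def clone(src, dst, bufsize=4 * 1024 * 1024):  # 4 MiB blocks, like dd bs=4M
    with open(src, "rb") as s, open(dst, "wb") as d:
        shutil.copyfileobj(s, d, bufsize)

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo with a stand-in "old drive" image file.
with open("old.img", "wb") as f:
    f.write(b"old-disk-data")
clone("old.img", "new.img")
print("clone verified" if sha256("old.img") == sha256("new.img") else "mismatch")
```

Keeping the checksum alongside the stored drive gives you the same cheap diagnostic described above: restore, re-hash, and you know whether the fault is in the data or the hardware.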
I bought the Dell Inspiron 13 7000 2-in-1 touchscreen with an AMD Ryzen 7 2700U, 12GB RAM and a 256GB SSD last year. Nothing came near it in Intel space for price and performance. Ryzen has moved on since to bigger, better, faster, but dammit, for $600 (its current Amazon price) I love this thing.
It's light, it's fast, and it lets me go back and play Skyrim when I'm on the move. The memory cap (16GB, I think) may limit you, but there are some damn good Ryzen laptops out there which go higher if portability isn't your biggest concern.
You just need to shop around. I've never seen this with Ryzen in the UK so I bought mine on a US trip. Then bought another the next trip for my kid.
Success is basically going to boil down to pricing.
The Ryzen chips are looking good - single-threaded performance is almost up there with Intel, and multithreaded scenarios can be better. For a general desktop/media box they deserve serious consideration; for gaming it's a bit less clear. Provided they can fix what's looking like an RDRAND issue preventing recent Linux kernels from booting, that is - microcode update soon, no doubt.
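For the curious, that RDRAND bug made the instruction return the same all-ones word on every call, which is why affected kernels and distros stopped trusting it. A toy sketch of that kind of sanity check (`rdrand_source` is a hypothetical stand-in for the real instruction; the actual kernel check is different and more thorough):

```python
# Toy version of an "is RDRAND stuck?" sanity check. The Zen 2 bug made
# RDRAND return 0xFFFFFFFF on every call; a real RNG repeating the same
# word many times in a row is (practically) impossible.
def rdrand_looks_broken(rdrand_source, samples=8):
    values = {rdrand_source() for _ in range(samples)}
    return len(values) == 1  # every sample identical -> distrust the source

stuck = lambda: 0xFFFFFFFF  # models the buggy pre-fix behaviour
print(rdrand_looks_broken(stuck))  # → True
```

systemd's boot hang came from exactly this failure mode: it consumed "random" values that were never random.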
Navi - that's a bit more uncertain. I wasn't expecting much, but I have to admit they're better than I expected:
Faster than Vega 64 whilst using much less power, and being cheaper than most of the cards.
At least as fast as RTX2060, sometimes somewhat faster, at the same price as an RTX2060.
Unfortunately, also:
Poor initial driver support for both Windows and Linux
Hot, noisy 5700XT. 5700 appears to be a bit better.
OpenGL support on Windows still lacking
No RTX features
Personal verdict: wait for third-party cards and hope for price cuts; Nvidia is still the better choice here. Whilst AMD are improving, it's unimpressive that even after moving to a smaller process node, Nvidia are still wiping the floor with them on noise and temperature.
It will not be efficient if it does. AMD have made a lot of noise about their 'special' relationship with MS and the way it allocates threads across cores. YMMV, but the main issue will be motherboard drivers and trying to get the USB ports to work. I had this issue with a Ryzen 5 1600: it was not possible to install XP, though I got it working with Win 7 eventually.
Might just be me, but I stopped using AMD CPUs about 15 years ago when I had several machines with "issues" / sudden failures that had AMD CPUs. That's made me wary of them since, though the Ryzen 7 chips appeal.
Always had AMD GPUs (dating back to when they were ATI and the Rage series when everyone else and their dog instead wanted a 3Dfx Voodoo card)
3 machines later and I'm still using the used HD6770 graphics card, albeit it's getting long in the tooth, and I'm considering a 5700 Navi card as a replacement; I don't really want an Nvidia card (too many others with the same thing, and competition is bringing down prices).
Current machine is a used HP Z600, upgraded with a pair of X5670 Xeon CPUs, 48GB DDR3 ECC and a SATA SSD - not silent, but a heck of a lot quieter than the previous Core2Quad box.
That there seems to be such a big lead time on delivery (ordered mine on Monday morning; got told it would start shipping in 4-7 working days. At least it gives me time for the rest of the kit to turn up).
Normally I wouldn't get so taken, but I was about to upgrade to a 2700X since it seemed about right pricing-wise, and the 3700X, from the reviews and tests I've seen, takes cake, eats cake, then taps Intel on the shoulder for more - so I'm making the investment. It will also be the first CPU since the late 90s that I've not immediately paired with an aftermarket cooler, since even the bundled cooler seems better than Intel's offerings by a (profit) margin.
It's also the first time in years I've bought something so close to launch day... I feel slightly dirty even admitting that.
There is a YouTube video that puts a Ryzen 5 with an RX 580 up against an FX 6350 with the same GPU. Most of the games tested yielded a 20% increase in favour of the Ryzen, and most of the time the GPU was the bottleneck either way. Most people use their PCs for gaming and have a dedicated rig for video production etc. I was nearly tempted into upgrading until I watched that video.
I cannot justify spending money on kit when my six-year-old $60 processor keeps up with the latest AAA games. I think it is all hype to get you to keep up with the Joneses.
"In internal testing, AMD's overclocking team used liquid nitrogen to push a 3950X past 5.375GHz. It will have a suitably eye-watering price of $749."
While I totally applaud AMD's return to the table, those LN2 testings don't get me excited at all.
LN2 is NOT a viable way of cooling anything day-to-day; it's a one-way thing, unlike any air or water loop.
So, since no-one will ever use it for gaming/computing at all, why even mention it ?
Biting the hand that feeds IT © 1998–2020