Gamers?
Is there even one game that benefits from having more than 4 cores? And 4K video editing, really? I thought professional-grade video editing suites made use of GPUs for rendering.
Intel's offered some more detail about the Core i9 range of desktop CPUs it announced in May. Here's what Chipzilla has planned for us all when these chips start to go on sale. The 12-core products will appear as of August 28th; the 14-to-18-core kit will go on sale as of September 25th. (The article's spec table, headed by the Core i9-7980XE, is truncated here.)
I am told by some of the hardcore gamers in my sphere that if you want to do VR at 240Hz then having 8+ cores @ 2.8GHz or better is usually required. As I'm poor, and still working on a video card from 3 years ago and a Sandy Bridge-era CPU, I cannot confirm this.
Apparently VR is a thing that some people do. I don't understand. Why do you need VR to play Scorched Earth?
This is probably one of those questions that you're best not trying to answer unless you've got plenty of money. If you try it and realise why you need it, you'll resent the expense if it's out of your reach.
Well, setting aside the fact that most games don't need more than 4 cores, gamers tend to bolt on the following background tasks...
* Stream encoding to upload to Twitch, etc.
* Watching streams, YouTube
* Downloading torrents, etc
Just because the game only uses a subset of cores doesn't mean the rest of the system isn't churning away on other processes.
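For anyone who wants to see this on their own box, here's a minimal sketch (assuming the third-party psutil package is installed, e.g. via pip) that prints per-core load once a second. Run it while the game plus the usual Twitch/YouTube/torrent background tasks are going and you can watch which cores are actually busy.

```python
# Per-core load monitor; assumes psutil is installed (pip install psutil).
import psutil

try:
    while True:
        # cpu_percent blocks for the interval, then returns one figure per core
        per_core = psutil.cpu_percent(interval=1.0, percpu=True)
        print("  ".join(f"core{i}: {load:5.1f}%" for i, load in enumerate(per_core)))
except KeyboardInterrupt:
    pass
```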
"Well, setting aside the fact that most games don't need more than 4 cores, gamers tend to bolt on the following background tasks..."
Except it's cheaper and easier to do most of that with another machine!
I've got an old clunker with a dual-core AMD something or other in it, and a lump of RAM, which deals with the day-to-day of torrents, media streaming, etc. It's pretty much always on, uses very little power, and produces even less noise.
If I wanna play a game, then I'll spin up one of the big boys with a good GPU, play the games, then turn it off again, leaving the old clunker still streaming and torrenting.
I can't remember the exact words, unfortunately, but he did say something about it during a TV interview with Sue Lawley back in the late 80s. It was in a discussion where he explained Microsoft philosophies such as "New releases are not there to fix bugs, they are there to add features", and he mentioned that either 640K was not a barrier to software development or that 640K was adequate for the intended use of a PC. If I recall correctly this was about the time of the release of the first extended memory boards, and of a version of Lotus 1-2-3 that demanded the extra memory. It was also the time that the business unit I was in flipped to using Macs with 4 or 8 MB of memory and Excel, because 1-2-3 was creaking at the seams.
"I think there is a world market for maybe five computers."
... Thomas Watson, IBM, 1943 and it isn't apocryphal.
"I think there is a world market for maybe five Clouds"
Which is worrying. 19th C. potato famine comes to mind.
By the late 1970s it was evident that there would be a clock speed limit and that more performance would need a network of CPUs. Except the bottleneck is RAM and I/O: not enough L1 cache per core. Also the transputer was inherently a better architecture, with local RAM per CPU. A serial interconnect needs to run 32 or 64 times faster than a parallel one, but at high speed a serial interconnect is far easier to design (parallel traces need matched delays) and uses less chip area and fewer pins. So I/O to slots, RAM, peripherals and additional CPU slots should all be serial, except on chip. Even then, if there are many cores with shared I/O, using serial might be the same speed per word and use less chip area.
Ivor Catt did some good articles in the 1980s on this.
Pity that Thatcher sold off Inmos.
Did Tony Hoare write Occam, or design it or just write papers about it?
If Inmos was going nowhere it was due to fixation on Intel and lack of investment in Tech in UK, where companies relied on Military or BT spending and increasingly owned / controlled by asset strippers or bean counters with no vision.
"Did Tony Hoare write Occam, or design it or just write papers about it?"
None of the above. Tony Hoare (now Professor Sir C. A. R. Hoare) originated the theory of Communicating Sequential Processes, which was the foundation of the transputer concept. He is listed as "the inspiration for the occam programming language". David May created the architecture of the transputer and the development of Occam is not credited other than to "Inmos". However my friend was the person who wrote the Occam compiler.
"If Inmos was going nowhere it was due to fixation on Intel and lack of investment in Tech in UK, where companies relied on Military or BT spending and increasingly owned / controlled by asset strippers or bean counters with no vision."
I'm not convinced by the above explanation. Thorn EMI had underestimated the scale of investment needed and didn't realise until too late that booming transputer sales had been achieved by shipping as much product as possible while not investing in development. It was a slightly cynical exercise in making the company look a bargain for investors. My friend blamed the point-to-point link technology as the real bottleneck.
If you are interested in a potted history, including the financial, political and management cock-ups see the Inmos Legacy page by Dick Selwood on the Inmos web site.
"Ivor Catt did some good articles in the 1980s on this."
Ivor Catt wrote some interesting articles but I wouldn't describe them as good.
His ideas got as far as the Ferranti F-100/L microprocessor which had an internal serial architecture. The trouble was, compared to the TI 9989, another military microprocessor of the era, it was treacle to liquid helium. I know because I was on a project which used both of them.
One place where Catt went very wrong was his assumption that power doesn't scale with clock speed. Another was that timing jitter wasn't fixable. With TTL and ECL there was truth in this; if you could clock an ECL circuit at 500MHz it would be hard to parallel due to timing problems, and it didn't use 10 times the power of the same circuit at 50MHz - because most of the ECL power consumption was in its analog circuitry, even at DC.
The coming of VLSI and CMOS destroyed both of Catt's assumptions; it became possible to parallel 64 data lines with clock speeds in the GHz range, which he never foresaw. As CMOS power scales very roughly with clock frequency for given design rules, a good parallel one will always beat a good serial one.
It isn't a pity that Thatcher sold off INMOS, but it was a disaster that she didn't save ICL. Politicians used to recite the mantra that they couldn't pick technology winners, but for some reason that never applied to companies that made things that went bang, only to things that were slightly beyond the grasp of civil servants with degrees in Classics and a poor maths O level.
Yes, power consumption is non-linear with clock in practice, roughly a square law. Higher speeds have been achieved by lowering the operating voltage and shrinking the (related) gate area to reduce capacitance. That's partly why 14nm isn't 14nm in the sense that 90nm was 90nm: not all aspects have been scaled down.
That's why, over the last 15 years, the number of cores and the architecture, rather than the actual clock, have been the biggest change.
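For reference, the textbook first-order CMOS dynamic power relation the two posts above are circling (a standard formula, not something from the article or the comments):

$$P_{\mathrm{dyn}} \approx \alpha \, C \, V_{dd}^{2} \, f$$

At a fixed supply voltage this is linear in clock frequency f; the square law sits in V_dd, and because reaching a higher clock generally demands a higher supply voltage (with leakage on top), power in practice climbs much faster than linearly with clock.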
Have you re-read Catt lately, or tried to design a motherboard?
The issue isn't on chip (Catt wasn't espousing the F100L, which was rubbish) but BETWEEN chips. PCB design of CPU to RAM is a horror story at high clocks and wide buses.
ICL was moribund long before Inmos. The UK was first with commercial computing, but by the 1960s it was already destroying that lead, along with its consumer electronics industry. Read "The Setmakers".
'The government never should have let the public own computers..'
My Dad, to his son with a career in infosec, on why he didn't need to know anything about protecting himself/identity or get on the internet*.
*yet oddly not so dedicated to his beliefs that he won't call that son on his copper-line, push-button, corded phone to look something up for him on that same evil, should-be-banned internet...
No, from running xosview while playing a ton of steam games, I can say games do not use more than 2 cores. For example, KSP uses one core, then a tiny bit of another for mods like MechJeb. The new Oddworld uses 2 cores pretty heavily, but nothing else does. (note: I don't play FPSes, so I have no data on things like Modern Warfare or Call of Duty. I mostly do "sandbox" games.)
The rest of my 8 threads sit there idle.
I was interested in this because I wanted to see where the bucks I spent on my machine and my graphics card were being used.
However, video editing tools like avidemux2 just munch on all the cores when transcoding, as ffmpeg is written to use multiple cores well.
Many of those lanes will also be consumed by NIC and other on-board devices. It's probably the same two x16 and one x8 setup Intel has rinsed and repeated for a decade - still wondering when they'll get the memo that people want general purpose slots to plug items into their general purpose computer...
Even with Ryzen you'll see better performance in games, but it particularly shines when streaming or recording too. Having more cores just generally keeps things a lot smoother.
The problem I increasingly have with Intel isn't cores; it's artificially locking down functionality on boards behind paywalls purely to market them as different models. That's why my next CPU will be AMD. Right now I've got an i7-6700K, which is no slouch for video processing, but there's little reason to head back to Intel and pay the premium.
Uhm, no. Hardware encoding is usually better quality as it does the exact same thing but is much faster and therefore can use more iterations...
Uhm, double no. Video encoding is almost always a three way trade-off between speed of encoding, visual quality of the outcome and bitrate of the outcome.
Hardware encoding is more limited in terms of codec features and options, because putting the algorithm in hardware reduces the amount of options compared to the flexibility of software. Especially so in consumer hardware encoders, which are small independent dedicated pieces of silicon in the CPU/GPU.
Now, this is dead easy to see because of CRF (Constant Rate Factor) in x264. You can tell the encoder that you want the visual quality of the output held at the indicated level. It is trivial to produce one encoding using x264 and one encoding using a hardware encoder, both with the same CRF setting. The outputs will be visually comparable in quality terms, but the hardware-encoded video will be larger in size.
So hardware encoders: faster output, same visual quality, higher bitrate. These are lower "quality" videos than a software encoder would produce, for a given meaning of "quality". For "scene" releases, no-one is using hardware encoders, because they produce lower quality videos.
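For anyone who wants to run the comparison described above, here is a rough sketch. It assumes an ffmpeg build with libx264 and NVIDIA's h264_nvenc available on the PATH; "input.mp4" is a placeholder clip, and the x264 CRF and NVENC QP scales are similar but not identical, so treat the size difference as indicative rather than exact.

```python
# Same source clip, software x264 vs hardware NVENC, then compare file sizes.
# Assumes ffmpeg with libx264 and h264_nvenc; "input.mp4" is a placeholder.
import os
import subprocess

SRC = "input.mp4"

# Software encode: x264 at a constant-rate-factor quality target.
subprocess.run(["ffmpeg", "-y", "-i", SRC,
                "-c:v", "libx264", "-preset", "veryslow", "-crf", "20",
                "-c:a", "copy", "sw_x264.mp4"], check=True)

# Hardware encode: NVENC in constant-QP mode (roughly comparable quality target).
subprocess.run(["ffmpeg", "-y", "-i", SRC,
                "-c:v", "h264_nvenc", "-rc", "constqp", "-qp", "20",
                "-c:a", "copy", "hw_nvenc.mp4"], check=True)

for name in ("sw_x264.mp4", "hw_nvenc.mp4"):
    print(name, os.path.getsize(name) // 1024, "KiB")
```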
What broadcasters use is not relevant to how consumer video encoding offload chips function.
You think broadcasters use one of NVENC (Nvidia), Quick Sync Video (Intel) or Video Coding Engine (AMD)? Evidently not; as you know, they use high-end hardware encoders like DVEO that bake the algorithm into silicon.
I clearly stated that I was talking about consumer hardware video encoders, and I'll repeat it again: for a given bitrate, software encoders produce higher quality output than consumer hardware encoders. The only thing that consumer hardware encoders do better than software encoders is speed.
If you are arguing otherwise, and don't want to appear foolish, an hour spent reading doom9 might help.
Your clients probably aren't so interested in overall quality, so they're willing to sacrifice quality for speed (and thus turnover). OTOH, if you were say a BluRay mastering firm with a more generous time budget, you'd probably take a different approach.
Also, historically, GPUs are less suited for a job like video encoding because the balance of quality and speed produces workloads that are less conducive to parallelization (think divergent decision making that can hammer memory or spike the workload).
"Hardware encoding is more limited in terms of codec features and options, because putting the algorithm in hardware reduces the amount of options compared to the flexibility of software"
Flexibility != quality. Given a requirement, you can design a hardware codec to do whatever codec/settings you want; it will be much faster in hardware.
"So hardware encoders; faster output, same visual quality, higher bitrate."
But therefore, for the same given encoding time, a hardware encoder will give higher quality output and/or a lower bitrate.
"But therefore, for the same given encoding time, a hardware encoder will give higher quality output and/or a lower bitrate."
No, not really. The hardware encoder cannot trade extra encoding time for better compression the way a software encoder can.
Encoders have "presets", ways of controlling how the encode works, and "levels", what features are available to use in the targeted decoder. Eg, streaming to a STB you might have level 5.1 content, but streaming to a mobile you might have level 3 content.
Software encoders tend to have many presets to determine how much prediction/lookahead to use in encoding a frame. The more lookahead you use, the more efficient the encoding can be, and the smaller each frame can be whilst still encoding the same visual quality. Therefore, in software encoders you can optimise your encode to give the lowest bitrate for the chosen quality. Most videos that are made for distribution are encoded using the preset "ultraslow", because this reduces the file sizes significantly at the expense of a lot of speed.
Consumer hardware encoders don't do this. They have short lookaheads, which keeps the speed high. They use fixed length GOPs, (i-P-B-B-P...), where as x264 will use irregular ones (better quality, better compression). You can't really make it go slower with higher quality per bit (although you can make it go faster with lower quality per bit).
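To put numbers on the preset trade-off, a small sketch along the same lines (assumes ffmpeg with libx264 on the PATH; "clip.mkv" is a placeholder): encode the same clip at the same CRF with a fast and a slow preset and compare wall-clock time against file size.

```python
# Same clip, same CRF, different x264 presets: the slower preset should give
# a noticeably smaller file at roughly the same visual quality, at the cost
# of a much longer encode.
import os
import subprocess
import time

for preset in ("veryfast", "veryslow"):
    out = f"out_{preset}.mp4"
    start = time.perf_counter()
    subprocess.run(["ffmpeg", "-y", "-i", "clip.mkv",
                    "-c:v", "libx264", "-preset", preset, "-crf", "20",
                    "-an", out], check=True)   # -an: skip audio to keep it simple
    elapsed = time.perf_counter() - start
    print(f"{preset}: {elapsed:.0f} s, {os.path.getsize(out) // 1024} KiB")
```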
Execution threads != cores, especially the way Intel implements them. For most workloads I care about, the benefit of these threads is at best modest, and at worst negative.
The more important issue is that these 18 cores are supposed to see the main memory through just four memory channels shared between them. Given that you can saturate this memory subsystem with just two cores, and will almost certainly saturate it with half a dozen, the benefits of having another twelve cores sitting around are questionable for any real-world usage. Cramming more and more processing elements at the end of a thin straw connecting you to the memory system is not a solution; there must be a more sensible way of using these transistors.
I am sure these CPUs will perform fantastically well on a few carefully-selected benchmarks and will look amazing in demos. For real-world usage, you'd be better off with a quarter of the CPU cores and a few extra bucks in your pocket.
Lastly, i9s are ridiculously expensive, so more GPU capacity may be better value, in part because GPUs may be better for parallel signal processing.
Yes, most of those i9 cores will probably choke unless reserved for only 2 hyper-threads mostly working with code and data in the core's L1 cache; the more context switches and L1 cache misses, the slower the code will run!
"Given that you can saturate this memory subsystem with just two cores, and will almost certainly saturate it with half a dozen, the benefits of having another twelve cores sitting around are questionable for any real-world usage"
That's what the large chunk of on-CPU cache memory is for.
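A crude way to get a feel for the numbers being argued about here: a single-threaded streaming add in numpy (assumed installed) gives a ballpark figure for how much memory traffic one core can already generate, which you can then set against the platform's theoretical four-channel bandwidth.

```python
# Very rough single-thread memory bandwidth probe; numbers are ballpark only.
import time
import numpy as np

N = 64 * 1024 * 1024                 # 64M float64 values, ~512 MiB per array
a = np.ones(N)
b = np.ones(N)

start = time.perf_counter()
c = a + b                            # streams a and b in, writes c out
elapsed = time.perf_counter() - start

moved_gib = 3 * a.nbytes / 2**30     # approx traffic: read a, read b, write c
print(f"~{moved_gib / elapsed:.1f} GiB/s from one thread")
```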
From my point of view there's the 'bang for your buck' factor to consider as well, at least for the 'average' home user.
I have a desktop I game on, occasionally stream my rubbish gaming, and do all the things gamers do while playing. Granted, I don't have much time to game these days, but more cores/threads makes that experience smoother, and I found the recent AMD Ryzen 5 1600 (OC'd to 3.85GHz) to be a good match, especially considering the relatively low price.
Would I like an 18 Core/36 Thread i9? Probably. Could I justify the significant extra cost for my, probably quite common, use case? No. Same goes for the Ryzen 7 1700/1800 though - the extra cores don't add up to a useful performance boost for the price, in my use case.
As always your mileage may vary - but the real winner of all this is the consumer. Actual competition between Intel and AMD is a GOOD thing. Whichever camp you prefer.
I love competition. Do you really think Intel would release these if not for the AMD Ryzen Threadripper? I can't wait for actual benchmarks from independent testers on both the i9 and Threadripper. These are obviously niche products, but it puts pressure on the prices for mainstream products, which means our wallets win.
We need to remember how good a design Ryzen is. Rumors are that Ryzen yields are great. But the beauty of the design is that AMD can link cores together in a mesh. So when Intel needs a 16-core CPU, they have to make one large die, and the larger the die, the lower the yields. When AMD needs to make a 16-core CPU, they just make two 8-core ones and mesh them together. I can buy a 16-core Threadripper for $999, or a 10-core i9 for $999. The choice is easy. But the best thing is I actually have a choice. Intel must copy AMD's mesh design, but even if Intel started today, it would still take over a year to get to market.
The next thing I hope is that the Vega video card is a winner. We need to put pressure on NVidia's prices now. I love competition: lower prices and better products. What is not to like?
According to this table (from PCGamer) it can manage 3.4GHz with 18 cores, 4.2GHz with two cores.
You could run WinNT quite nicely on that, although you'd need to tweak a BIOS setting to disable hyperthreading or else the number of cores on the top-end part would be too large.
I don't know if you could run Win95 on it. Do these things still have a mode where they can run 16-bit instructions?
I haven't really cared about desktop CPU performance in years. The limiting factor on performance these days seems to be whether the bloated apps have consumed all the memory and started thrashing, whether the badly written JavaScript has got stuck in an infinite loop, or whether the anti-virus has taken it upon itself to scan every DLL load.
More cores is nice, but only because it gives you a working core to use to kill the aforementioned JavaScript process.
Or is the 10 core budget version the highest performer?
Highest base clock, highest boosted clock, lower rated power usage.
For everything but the most obscure workloads, individual core performance is likely to trump the number of cores once you get above, say, 8 (possibly 4 or fewer), especially with two threads per core.
The only answer that anybody could give you and be right is: It depends.
Heavy multithreaded CPU workloads that aren't being palmed off to the GPU will definitely benefit from the extra cores at a lower frequency. Also, there will probably be similar potential for overclocking across the chips, so you'd probably be able to get any of these chips screaming along at 5GHz+ with watercooling.
If you're gaming, most games will bottleneck on the graphics first, even down at the mid i5 range, which reaches similar frequencies anyway. Bringing streaming into the mix, more cores are handy as it means that any encoding and CPU-managed network activity isn't using the same core(s) you're gaming on.
Video editing is dependent upon your setup but there will probably be some part of the workflow which is CPU intensive.
VR gaming could probably use more cores due to the number crunching required to prevent motion sickness but is also highly GPU dependent.
so yeah. It depends.
Xeons are slightly different beasts. This price range of ~$2k gives you 14 cores with a base speed of 2GHz, turboing up to 2.8GHz and Xeons don't overclock in the same way as the Core processors. You'll also get all sorts of datacentre gubbins and probably improved warranties etc.
For a Streamer, the i9 processors are a better deal.
The short version (and I'm trying to get this comment in before the Intel and AMD fanbois start fighting) is that AMD hasn't been making competitive CPUs for a few years, until now.
They've been making cheap CPUs, but Intel are still making the fastest. With their new Ryzen (silly name) architecture, it seems like AMD are finally at least in the same race as Intel, so I suspect you'll start to see them being used by more OEMs. AMD are cheap because otherwise nobody would buy them, they've been losing money to stay in the game.
Unless you're looking at the high end, the AMD chip will probably be better value for you.
AMD has better value CPUs if you don't need the absolutely fastest available. It has had this for a long time now. With Ryzen they may actually now compete, or beat, Intel in the top performance level too.
Cheap motherboards for AMD are easier to find, and AMD traditionally has had good upgrade paths for faster CPUs on older motherboards (i.e. sockets), meaning RAM and mobo investments can often be kept for longer.
Sadly Ryzen isn't available for AM3+ sockets, so there is a definite break with the previous generation AMD CPUs. (AM3+ has had a good run though).
I have run AMD in all my PCs for the last 18 years, so someone may want to add Intel info and correct me on the value aspect..
P.S: There was a debacle about Intel's compilers rigging the binaries they produced to run much faster on Intel CPUs, in effect making benchmark software (as well as actual applications) favour Intel. IRL AMDs are quite fast.
P.P.S: "Is there something about AMD i am missing - and why don't vendors use AMD more ?"
There is a lot of business decision making going on, with lock-ins, Intel leveraging its size, sales trickery, and so on. Comparable to MS vs the rest.
P.P.P.S: The value of having at least one other player competing with Intel is immense. That's one reason I never abandoned AMD.
Because Intel gave out rebates to those that used Intel only. At one point Intel was giving Dell close to a billion dollars a year in rebates. When AMD offered to give HP 1 million free CPUs, HP turned it down because the amount of money they would lose from Intel was too great. Things have started to change in the last 5 years though.
Until recently AMD haven't been comparable except at the low end - their APU offerings are ok because the standard of bundled GPU is better (for desktops) than the Intel alternative. They also do some interesting embedded options.
With Ryzen, at the high end they're not quite as good as Intel, but a lot cheaper. If you don't need the absolute fastest single threaded performance they're a decent deal.
They haven't kept up with virtualisation enhancements like Intel, though, aside from the new encrypted memory options, which is a pity at the server end.
For a low end box, I'd have no issue using AMD. For a reasonably high end desktop that's mostly concerned with running lots of processes, but also needs to be quite fast, I'd also consider AMD. For an all out gaming box I'd go Intel, and for virtualisation I'd look at a Xeon.
For an embedded firewall I'm looking at an Alix APU2. AMD Jaguar core, fanless, decent encryption support on chip.
The "high end" of this HEDT thing is all about bragging rights: Intel scrambled to release an 18 core part because they weren't about to have AMD deliver "moar coars" .
So now they have problems getting the heat out of this thing and the AMD part has an advantage there because its "heat generating area" is larger and distributed (because it has 2 or 4 widely spaced die under a bigger slug of copper).
So I guess we're about to find out if Intel's "process advantage" is real ... right ??
Intel's i9 is a solution in search of a problem. As a gamer who builds his own boxes, unfortunately, I don't even remotely see how i9 helps gamers.
There are basically two big bottlenecks to gamers today: PCIe lanes (aka more graphics cards and m.2 SSDs please!) and memory bandwidth.
Throwing more cores at the problem is, at best, just making things worse. It lowers the top-end GHz. (To date that still matters a lot. It's why we OC. Duh!) Gamers need the opposite of the i9: fewer cores on a larger die with a better thermal interface and lower voltages, so that they can push the need for speed with a chip that OCs well.
If Intel really wants to help gamers, they need to ditch the more cores = better concept and get back to basics: Faster is better, bandwidth is your bottleneck, and cooling is king. It's a recipe as easy as π.
18 cores. Yum.
I've had a 4-core i7-6700 since 2015. I also slapped in 32GB of DDR4-3200.
Can I justify upgrading? Not really. Doesn't matter. I want one of these babies. I'll get one in 2019 probably. With 64GB of DDR5 (by then).
I'll be able to push 7 Days to the full 3840 x 2160 of my widescreen. Finally.
Of course, by then another game will come out that will bring my rig to its knees. As usual.
To be honest, the bottleneck in most PCs is the GPU, not the CPU. I've been running a Haswell-E 5820K with six cores clocked to 4.5GHz per core for a while now, and I have tried 3 different GPUs (GTX 980, TITAN X, GTX 1080 Ti) on an ASUS IPS G-Sync 144Hz 27".
Each card is happy to run everything at 1080p at 144Hz. At QHD 2560 x 1440 the 980 won't hit 144Hz in modern FPSs with all the goodies, but the other two are happy, and at 4K UHD the TITAN X drops below 144Hz every now and then, but the 1080 seems flawless unless you start plugging in second monitors.
For 4K playback/encoding the 980 is utter shit, the TITAN X is OK, but the 1080 wins hands down (and the CPU or how many cores makes no difference).
Spend your cash on a good monitor and GPU, and spend what is left on a mediocre CPU... works a treat.
Simple rules for gamers
1. Always spend more on your monitor and GPU than anything else.
2. You only need SLI if you have more than one monitor, or you are trying to get 2 crappy cards working, which is usually pointless.
3. You don't need a 60-core x 100GHz to run Dota2/LOL/WOW/CSGO.
4. "VR ready" does require Big Blue or quantum computing for a 10-minute on-rails zombie train shooter.
Paying a higher price for a CPU to get the highest possible performance - particularly when the cost of the rest of the system reduces the percentage extra one is paying for higher performance - is not irrational. Which is part of why Intel can get away with its current pricing.
My use case is cross-platform and embedded development; my daily driver for this is a dual-socket Xeon E5-2697 v2 box (24C/48T) from the Ivy Bridge era. If you have thousands of source files to compile for multiple targets, or for the edit-compile-debug loop, it makes it a relative breeze. Going to the more mainstream i7-7700K or even Ryzen 7 1800X is really quite a disappointment for productivity (relatively speaking, of course!)
I strongly suggest you look at ccache. If it is an option (it should be), you may cut compile times by a factor of 100.
In case you cannot use ccache and have some money to splash, Naples has 32 cores (64 threads) per socket. Dual socket systems should be available, that's 128 threads for you.
To put it simply, a thread is a single line of commands that are getting processed, each application has at least one thread, most have multiples. A core is the physical hardware that works on the thread. In general a processor can only work on one thread per core, CPUs with hyper threading can work on up to two threads per core. For processors with hyper threading, there are extra registers and execution units in the core so it can store the state of two threads and work on them both.
"For processors with hyper threading, there are extra registers and execution units in the core so it can store the state of two threads and work on them both."
Are there though? Pretty sure the horrible inefficiency of logical cores I see on most systems (especially craptops) comes down to that NOT being the case. It's just trying to execute two threads in the same compute resource and, frankly, there just ain't enough to go around. It's why in my software I tend to limit execution by physical cores. Logical cores are only good for background processes and services. ;)
(Which, in today's OSes at least, is a useful thing to have. But only helps in that it frees up the junk processes to run in their own hell of ineptitude so that everything else has real compute power to run on.)
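For what it's worth, a minimal sketch of the "size the worker pool by physical cores, not logical ones" approach described above (assumes psutil is installed; the busy-loop worker is just a stand-in for real compute-bound work):

```python
from concurrent.futures import ProcessPoolExecutor
import psutil

def spin(n: int) -> int:
    # Placeholder CPU-bound task.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    physical = psutil.cpu_count(logical=False)   # real cores
    logical = psutil.cpu_count(logical=True)     # includes hyperthreads
    print(f"physical cores: {physical}, logical threads: {logical}")

    # One worker per physical core, leaving the hyperthreads for background junk.
    with ProcessPoolExecutor(max_workers=physical) as pool:
        results = list(pool.map(spin, [5_000_000] * physical))
    print(f"finished {len(results)} tasks")
```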
Yes -- the OP is incorrect; execution resources are shared and not particularly increased for the second thread. The register files are doubled up, but it is about utilising a higher fraction of a fixed superscalar resource.
However, most cores on most code are idle most of the time, waiting on memory. The sharing of execution resources isn't really why most people see "hype threads" as not working so well. The more fundamental problem is the sharing of the L1 cache and the bus to L2. L1 thrashing in particular can be as painful to watch as two Keystone Cops vying for the same door and neither making it through.
However, I use Macs & Final Cut Pro X for a living, and the loads appear to be distributed between GPU & CPU (couldn't tell you *exactly* what is doing what) from seeing the performance of a variety of machines that I use... so yes, a big fast new chip (I assume this is what's going in the forthcoming iMac 'Pro') would be very much welcome. Also, on the audio processing side of things it's 100% CPU load.
Most people seem to recognise that more cores/threads is not important above a certain number. The issue, as always, is getting enough data into and out of the CPU.
I would have thought that the deficit Intel have regarding PCIe lanes is far more important to the high-end people than actual core count. Ryzen will have 128 PCIe lanes while Intel are still stuck with 44; that amounts to a potentially huge data throughput deficit for Intel i9 processors, and there is no indication from Intel that this will change.
The caveat is wait for real systems to turn up and crunch the numbers on, but on a system wide basis, it's looking far better for AMD than it has for years. Of course the system builders and MoBo manufacturers have to take advantage of those extra 84 PCIe lanes for Ryzen to really shine.
"If you are willing to pay that amount of money ... is it not better to go for a Xenon CPU ?"
If the year falls before 2022, yes. Technically, these are the old/current-gen Xeons rebadged for desktop; I see nothing different from the exact 14c parts I have, except that you aren't guaranteed anything with these i9s (can they even run in parallel?). I've been running 2 OK-ish 14-core ES chips for nearly 2 years; they cost $300 USD total for both, plus $720 for 128GB of Hynix and $450 for a Supermicro mobo. Still less than one of these CPUs.
I think if I cared about gaming and streaming and all that, I would clearly buy AMD (actually I might ditch these CPUs and buy AMD anyway).
There are fundamental limits on what may be achieved with multi-CPU architectures.
It may not mean anything soon.
Graphene transistor could mean computers that are 1,000 times faster
Less than a month since the above link we have this:
And this:
SAMSUNG Electronics Presents a New Graphene Device Structure