very droll, AMD, very droll :-D
AMD today shed light on its upcoming server/workstation roadmap, revealing details of its first six-core processor, expected to be released next year, and a 12-core offering due by 2010. The upcoming dodeca-core chip will use AMD's next-generation socket platform, dubbed "Maranello." In the second half of '09, AMD plans to …
Four, then eight, then sixteen do seem like natural numbers of cores to offer, and so AMD is taking a risk in bucking this psychology.
But I can see their goal: instead of always being slightly behind Intel in introductions as the technology advances, they will occasionally leapfrog ahead of Intel.
As to why the great concern over AMD's Barcelona problems: unlike Intel, AMD is in a position where it can't afford to stumble. AMD needs to do very well so that we continue to have an alternative.
As supply of and demand for 8-core processors are still climbing slowly... very slowly...
I don't see a great need for any multi-core processor beyond 4 cores...
Up till now, multi-core processors still have a lot of bottlenecks and issues waiting to be resolved...
First, the interconnections inside the processor and between it and the main board.
As more and more processor cores are combined into one package, designers and programmers will need to work hard to ensure that data exchange, load balancing and resource sharing among the cores are smooth and fast enough to keep up with the processing power.
Take the quad-core processor as an example: connections between the cores are simple, like a cross in a square box. Each individual core can have its own L2 cache and/or share it with the others, yet the total still comes to less than 12MB. And both HT 3.0 and PCIe 2.0 are able to handle the large volume of data transactions between the processor and the main board.
Now look at the chips with more than 4 cores. Whether it's the 8-core from Intel or the coming 6/12-core processors, they need proper, more complex transaction lanes and routing. Since both chip-makers have integrated the memory controller into the processor, bandwidth will be the key. Looking at the current dual-channel RAM strategy, I'd guess the channels will be split up to support different core groups: cores 0, 1, 2, 3 in one group using channel A, while cores 4, 5, 6, 7 use channel B. Programmers for both the processor and the OS will need to make sure the workload stays balanced, by adding functions such as virtually downgrading from 8 cores to quad-core when the OS doesn't natively support 8 cores or the software isn't multi-threaded (a sketch of that kind of core-group pinning follows below).

Last, power consumption and thermal control. Engineers are trying very hard to reduce processors' power consumption nowadays by shrinking the transistors, but at the same time they're squeezing more and more transistors into a smaller package. That means the more work the processor can do, the more heat it generates.
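To make that core-group idea concrete, here is a minimal Linux C sketch that pins a process to a hypothetical "channel A" group of cores 0-3 using sched_setaffinity (a real Linux/glibc call); the core-to-channel mapping is purely illustrative, since no shipping chip is confirmed to split its memory channels this way.

    /* Sketch: pin this process to cores 0-3, the hypothetical "channel A"
     * core group from the comment above. The mapping of cores to memory
     * channels is an assumption for illustration only. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        cpu_set_t group_a;
        CPU_ZERO(&group_a);
        for (int core = 0; core < 4; core++)   /* cores 0, 1, 2, 3 */
            CPU_SET(core, &group_a);

        if (sched_setaffinity(0, sizeof(group_a), &group_a) != 0) {
            perror("sched_setaffinity");       /* e.g. machine has < 4 cores */
            return 1;
        }
        printf("pid %d now restricted to cores 0-3\n", (int)getpid());
        /* ... run the channel-A half of the workload here ... */
        return 0;
    }

An OS doing the "virtual downgrade" described above could apply the same mask system-wide, so software that only scales to four threads never migrates across groups.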
Isn't putting two dies in one package taking a leaf out of Intel's book?
Surely then AMD won't be able to say it's a 'true' 12-core chip (going by their line that Intel's quad-core isn't a real quad-core).
Personally though I'm not fussed. I can't find owt to tax the quad core chip I have now.
People make a big deal of it because they're all secret AMD fans. :) And it's extremely disappointing when the underdog screws up on something as elementary as getting the damned thing to work as it's designed.
How can you be competitive with a leviathan like Intel if you claim to make 'true' quad cores and then screw up the manufacturing process?
As for the 6, 12, 24 et al core ideas... I honestly don't see the point in progressing that way. Surely the chip manufacturers should be working on increasing clock speeds efficiently and stably before they add more energy-consuming cores that go unused by the majority of applications?
I've got a dual core on my desk at work - even Excel 2007 uses both cores. Running multiple applications uses multiple cores as well, so my machine IS using the 2 cores properly. At the moment I have 9 applications in the task bar, and god knows how many in the background (antivirus, firewall, software audit, etc).
My PC at home that plays games & also does PVR duties is using both of its cores quite happily too; the main game I play is multi-core enabled, and the PVR software can do multiple things in parallel.
We've got servers that I manage at work with 32+ cores... And they are all used.
To quote a standard internet comment: "You are not the world".
Also, increasing clock speeds isn't as simple as turning a crank and getting some more MHz out of it - the fab method that the companies are using at the moment has a theoretical maximum speed that we're now approaching. This is the main reason they're going multi-core (to get the performance increase another way)... although the fact that power consumption is proportional to the speed SQUARED really doesn't help.
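For what it's worth, the first-order CMOS dynamic-power relation behind that remark is (a textbook model, not anything vendor-specific):

    \[ P_{\text{dyn}} \approx \alpha \, C \, V^{2} f \]

where \(\alpha\) is the activity factor, \(C\) the switched capacitance, \(V\) the supply voltage and \(f\) the clock frequency. The squared term is actually voltage, but since voltage generally has to rise to reach higher clock speeds, power grows faster than linearly in \(f\) either way, which is exactly why two slower cores can beat one fast core on performance per watt.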
Your 12-core laptop will indeed have most of its cores sitting idle while waiting for main memory, without ever pushing your memory bandwidth all the way to 100%.
What they'll be waiting on is latency as each cache miss shuts down a virtual hyperthread for dozens of cycles.
The two ways to increase the percentage of your memory bandwidth that is actually used are to either pre-fetch everything you might possibly need (and then not use most of it), or to throw so many threads on the grill that by the time one of them runs out of steam, another will have had its memory request served.
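The first of those two approaches can be sketched in a few lines of C using GCC/Clang's __builtin_prefetch intrinsic; the prefetch distance below is an illustrative guess, since the right value depends on the machine's actual miss latency:

    /* Sketch: request cache lines a fixed distance ahead of use, so the
     * memory system works in parallel with the running sum. PREFETCH_AHEAD
     * is an illustrative guess, not a tuned value. */
    #include <stddef.h>

    #define PREFETCH_AHEAD 16   /* elements; 128 bytes ahead for 8-byte longs */

    long sum_with_prefetch(const long *data, size_t n) {
        long total = 0;
        for (size_t i = 0; i < n; i++) {
            if (i + PREFETCH_AHEAD < n)
                __builtin_prefetch(&data[i + PREFETCH_AHEAD], 0 /* read */, 1);
            total += data[i];
        }
        return total;
    }

The second approach (piling on threads) is what hardware multithreading automates: the core itself swaps in another ready thread while a missed load is still in flight.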
If only we had a reasonable hard real-time OS, we could absorb most of the processing power in the machine into the megamulticore monster and hollow out the rest of the PC (Winmodems, dumb graphics chipsets, controllerless multi-gigabit Ethernet or wireless, and so on).
I'm not quite sure why people keep going on about this bug in the Barcelonas. Firstly it's fixed, secondly no one I know has hit the bug, nor did Tom's Hardware manage to in their testing. I agree it's doing AMD no favours but as someone else said, I've seen the same from Intel.
Back on to cores: I'd be interested to find out how scaling compares between a multi-core and a multi-CPU set-up. I've seen some evidence that for a properly multi-threaded application like Sybase (or Oracle, but I've done fewer tests with that) you benefit more from extra CPUs than from extra cores. I imagine this is to do with the large volume of IO operations a database generates. I would speculate that less IO-driven multi-threaded applications may benefit more from cores than from CPUs.
Anyone any experience with this?
"Your 12 core..... for dozens of cycles." by Henry Cobb
Well, it's true that whether those cores are busy or idle is what matters. Personally, I've done a lot of testing on many computers, laptops and servers with most of the common processor models, ranging from the old 386 series to the quad-core Xeon, and from AMD's old K6-2 to the Turion. What Henry says here is a fact. Most of our computers, when running everyday programs like MS Office, Windows, media players, internet browsers and so on, actually rely more on transfers between the hard drive and RAM. Most of the time we spend waiting for a program to load is spent seeking data rather than processing. On average, none of our computers' workloads reach even 50 percent, if you ever calculate it. So multi-core doesn't seem to be a very critical technology for us.
Multi-core matters more to those who need data processing and calculation power - for example, Folding@home or earth simulation. These computing tasks can work with a very small or fixed set of data while using up all the processing power, all the time, until the job is done. For them, the data transfer rate between RAM and processor, and from core to core, plays a very critical role. If you run Folding@home or any similar grid-computing software, you'll find it has been designed to run separately on each individual core. Honestly, that does bring a number of benefits.
"Anyone any experience with this?" BY Matt
Currently I have a few databases in-house, on MS SQL and Oracle. Databases don't require much processing power unless they need to do some calculation. What they need is HDD and RAM access bandwidth, which includes HDD seek time, transfer bandwidth, RAM access latency, and overall system I/O handling. The larger the database and the query range, the longer it takes to dig out the data you need. The database vendor may tell you their system can run or process 32, 64 or more queries at a time, but don't forget one thing: if the database is stored on a single HDD with single-channel RAM, you might as well forget about the simultaneous job handling. And even if you have the server running RAID-5 with 6 or more high-speed SAS/SCSI HDDs, the maximum I/O and the transfer latency between the storage controller and the RAM/processor will limit your performance too.
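A rough back-of-the-envelope number shows why, using illustrative figures rather than any particular drive's spec sheet: a 15K RPM SAS disk pays roughly 4 ms of seek plus about 2 ms of rotational latency per random read, so

    \[ \frac{1}{(4 + 2)\,\text{ms}} \approx 170 \text{ random IOs per second} \]

Spread across 64 supposedly simultaneous queries, that's fewer than three random reads per query per second, no matter how many cores are waiting on the results.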
But you will see rapidly diminishing returns on a *desktop* machine. There are only so many processor-intensive things that need to run simultaneously, after all, though at least having a dozen cores will let you cope with a much higher volume of crapplets and AV/anti-malware programs at once.
Servers can, and pretty much always will, benefit from being able to run more processes... many web apps (for example) can be trivially parallelised, although filesystem and database access can't be sorted out so easily.
The next trick is going to be blazingly fast concurrent IO to keep up with all these processes. It won't be much good if everyone and their dog can get a handful of 12-core+ monster chips for pocket money if they then have to invest in eye-wateringly expensive 10GigE or InfiniBand interconnects and an armful of SSDs running in parallel.
6 cores will be a 3x2 pattern, 12 would be a 3x4.
8 cores would be 4x2, so why not go for 9 and have 3x3?
Chips are square, so it makes no sense to have an 8-core chip, except that computer people are used to counting in base 2. AMD have proven this is not relevant here by releasing 3-core chips.
It may also cost less: the 12-core could be a 16-core 4x4 grid with the "broken" cores switched off. When creating a processor of that size in one go, I bet they will get defects; they may just be planning for that (see the arithmetic below).
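That salvage argument holds up with simple arithmetic, using an assumed (made-up) per-core yield just to show the shape of it: if each of the 16 cores comes out good independently with probability 0.95, then

    \[ P(\text{all 16 good}) = 0.95^{16} \approx 0.44, \qquad P(\text{at least 12 good}) = \sum_{k=12}^{16} \binom{16}{k} 0.95^{k}\,0.05^{16-k} \approx 0.999 \]

So a perfect 16-core part could be harvested from under half the dies, while a 12-core-with-spares part could come from nearly all of them. AMD's tri-core chips are exactly this trick applied to quads.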
Personally, I think that given we already have quad core and nothing to do with it all on a desktop, a radical redesign of the front-side bus is required.
I don't think 24 cores is at all feasible if all 24 are plugged into the same 64-bit interconnect; massive lag would probably occur at that point.
It looks like someone is going to have to design a multi-lane 64-bit FSB to cope with the data streaming to/from the cores and the RAM/IO thingies.
And THAT will be a major engineering feat.
Maybe an on-board SATA interconnect between each core and each peripheral? Or are we going to see segmentation between cores - one group with HDD priority, one group with GPU priority, and the rest taking whatever is left when available?
The possibilities are endless, but plonking 24 cores on today's bus is a non-starter.
Yes I know the intended focus was the server market. However there are possible benefits to be gained in the 'common' PC market also.
Not completely useless. Many Mac Pros, for example, have had eight cores available for a while now, and there is often a significant speed increase between a Mac Pro with two quad-core processors and one with a single quad-core processor. So if there is a noticeable difference between systems with eight cores and those with 'only' four in the world of Apple, surely it cannot be out of reach for the Windows world. And in any case we would expect the Linux world to seize this opportunity. For the moment the big issue seems to be hardware-related - the classic chicken-and-egg problem: there are not many 'common' motherboards for the PC enthusiast that accept (for example) two quad-core processors.
It's not only a multiple-processor problem but also a memory issue: when will a mainstream version of Windows deal with more than 3GB of RAM without requiring tweaking? Recently we upgraded our PCs to 4GB without the users experiencing much benefit, whereas when the Mac Pro machines were upgraded to 32GB it turned out to be a great help to them.
GNU/Linux terminal servers love many processes, a bunch for each user. I regularly run 700 processes on a single core in 2GB. Multiple cores are very useful on these machines, and they are more like desktop machines than web servers.
All of the terminal servers I build have a bottleneck somewhere. On my current machine it is the LAN, because I can't get my PHBs to give me a gigabit/s backbone. With gigabit/s you can easily handle 50 X clients on one NIC, and boards with three or four NICs are available. Multiple 64-bit cores push the bottleneck back into the CPU/memory subsystem. 2 cores is limiting, very limiting; 4 cores is great; dual socket with 4 cores each is heavenly. When they talk about 8 or 12 or 16 cores, I know the bottleneck will be in the CPU/memory area. I think it is clear they need to stop ramping up the cores and start doubling the cache instead. If you can keep entire processes in the cache, you win big-time. I think this can already be seen with Vista: it runs fairly decently on an Intel Celeron M chip with a large cache and is slow as molasses on a dual-core AMD64 with a smaller cache.
With GNU/Linux adoption growing at 50% per annum and LTSP a great way to run GNU/Linux, AMD should look at a chip designed to run huge numbers of processes, with lots of cores and huge caches. Otherwise we may have to go with Intel, as painful as that may be... Intel is a near-monopoly. If they destroy AMD we will pay dearly, so AMD has to give us a reliable option. They did with the Athlon and the AMD64. Can they keep doing it?
Hi. AMD originally told us 8 cores, but since an Intel partner accidentally leaked the new Intel 6-core discrete CPU, it's logical that AMD would go to 6 cores first. A desktop computer would not benefit much with regular apps (except for gaming; the Unreal engine supports 4-core CPUs now), but when software starts parallel processing with all the cores at once, wow! I predict we will all have cluster computers some day... but we need the software too. If AMD really wanted 8 cores they could easily do a dual quad-core CPU; the difference is the HyperTransport 3.0 connecting them to prevent bottlenecks.
The future of multi-core is to run multiple instances. As hypervisors become commodities, OS licensing will have to follow, and we'll be running discrete apps on their own mini-OS. No more conflicts when each app has its own personal OS... hell, dedicate a core to an instance - plenty to spare when you have a dozen!
The people beating AMD over their bugs and praising Intel are retarded, that's plain and simple...
How many of you were around and remember the many Intel fiascos? Every single one of which Intel refused to acknowledge; I mean absolutely bald-faced lied and refused to admit there was a problem. I remember like it was yesterday when Tom's Hardware and AnandTech brought Intel to its knees over the P3 1133 fuck-up. Even after they had pinpointed the issue, Intel still refused to admit it, sent engineers to Tom's labs and tried to silence him. It was only after they realized they were nailed dead to rights and weren't going to shut Tom and Anand up that they admitted there was a bug and recalled 80,000 CPUs.
Then let's not forget the Rambus fiasco, or the many chipset screw-ups...
At least AMD has enough honor to say to the world: hey, we fucked up, our $x$ CPUs have an error, we can't ship them, we'll let you know when they're fixed and ready...
Mine's the green one with the "Opteron, The Smarter Choice" logo on the back.