very droll, AMD, very droll :-D
AMD today shed light on its upcoming server/workstation roadmap, revealing details of its first six-core processor, expected to be released next year, and a 12-core offering due by 2010. The upcoming dodeca-core chip will use AMD's next-generation socket platform, dubbed "Maranello." In the second half of '09, AMD plans to …
Four, then eight, then sixteen do seem like natural numbers of cores to offer, and so AMD is taking a risk in bucking this psychology.
But I can see their goal; instead of being slightly behind Intel in introductions, as the technology advances, they will occasionally leapfrog ahead of Intel.
As to why the great concern over AMD's Barcelona problems: unlike Intel, AMD is in a position where it can't afford to stumble. AMD needs to do very well so that we continue to have an alternative.
Supply of, and demand for, 8-core processors is still climbing slowly... very slowly...
I don't see a great need for any processor with more than 4 cores...
Up till now, multi-core processors still have a lot of bottlenecks and issues waiting to be resolved...
The first one: the interconnect inside the processor and between it and the main board.
As more and more cores are combined into one processor, designers and programmers will need to work hard to ensure that data exchange, load balancing and resource sharing among the cores are smooth and fast enough to keep up with the processing power.
Take the quad-core processor for example: the connections between cores are simple, like a cross in a square box. Each core can have its own L2 cache and/or share it with the others, yet the total comes to less than 12MB. And both HT 3.0 and PCI-E 2.0 are able to handle the large volume of data moving between processor and main board.
Now look at the chips with more than 4 cores. Whether it's the 8-core from Intel or the coming 6/12-core processors, they need proper and complex transaction lanes and routing. Since both chip-makers have integrated the memory controller into the processor, bandwidth will be the key. Looking at the current dual-channel RAM strategy, I guess the channels will be split up to support different core groups: cores 0, 1, 2, 3 in one group using channel A while cores 4, 5, 6, 7 use channel B.

Programmers for both processor and OS will need to make sure the workload is balanced, by adding functions such as virtually downgrading from 8 cores to 4 when the OS doesn't natively support 8 cores or the software isn't multi-threaded.

Last one: power consumption and thermal control. Engineers are trying very hard to reduce processors' power consumption nowadays by shrinking the transistors, but at the same time they're squeezing more and more transistors into a smaller package. That means the processor can do more work now, and generates more heat.
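The "virtual downgrade" idea above, i.e. running fewer workers than the chip has cores when the software can't use them all, can be sketched in a few lines. This is a minimal illustration, not any vendor's actual mechanism; `max_supported` is a hypothetical per-application limit.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def worker_count(max_supported=4):
    """Size the pool to the smaller of the machine's core count and
    what the software actually supports -- the 'virtual downgrade'
    described above. max_supported is a hypothetical app limit."""
    cores = os.cpu_count() or 1
    return min(cores, max_supported)

def crunch(chunk):
    # Stand-in for real per-core work.
    return sum(x * x for x in chunk)

# Eight chunks of work, but never more workers than the app supports.
data = [list(range(i, i + 1000)) for i in range(8)]
with ThreadPoolExecutor(max_workers=worker_count()) as pool:
    results = list(pool.map(crunch, data))
```

On an 8-core box with `max_supported=4` this behaves like a quad-core: all eight chunks still complete, just with at most four in flight at once.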
Isn't two cores on one chip taking a leaf out of Intel's book?
Surely then AMD won't be able to say it's a 'true' 12-core chip (going by their line that Intel's quad core isn't a real quad core).
Personally though I'm not fussed. I can't find owt to tax the quad core chip I have now.
People make a big deal of it because they're all secret AMD fans. :) And it's extremely disappointing when the underdog screws up on something as elementary as getting the damned thing to work as it's designed.
How can you be competitive with a leviathan like Intel if you claim to make 'true' quad cores and then screw up the manufacturing process?
As for the 6, 12, 24 et al. core ideas... I honestly don't see the point in progressing that way. Surely the chip manufacturers should be working on increasing clock speeds efficiently and stably before they add more energy-consuming cores that go unused by the majority of applications?
I've got a dual core on my desk at work - even Excel 2007 uses both cores. Running multiple applications uses multiple cores as well, so my machine IS using the 2 cores properly. At the moment I have 9 applications in the task bar, and god knows how many in the background (anti-virus, firewall, software audit, etc).
My PC at home that plays games & also does PVR duties is also using both of its cores quite happily, the main game I play is multi-core enabled and the PVR software can do multiple things in parallel.
We've got servers that I manage at work with 32+ cores... And they are all used.
To quote a standard internet comment: "You are not the world".
Also, increasing clock speeds isn't as simple as turning a crank and getting some more MHz out of it - the fab process the companies are using at the moment has a theoretical maximum speed that we're now approaching. This is the main reason they're going multi-core (to get the performance increase another way)... although the fact that power consumption is proportional to the speed SQUARED really doesn't help.
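The power penalty in that last sentence can be made concrete with the commonly cited CMOS dynamic-power relation P ≈ C·V²·f; since voltage usually has to rise with frequency, power grows faster than the clock does. The numbers below are purely illustrative, not figures for any real chip.

```python
def dynamic_power(capacitance, voltage, frequency):
    """Classic CMOS dynamic-power estimate: P = C * V^2 * f.
    Illustrative only -- real chips add leakage and other terms."""
    return capacitance * voltage ** 2 * frequency

# Baseline: 2.0 GHz at 1.2 V (made-up numbers).
base = dynamic_power(1.0, 1.2, 2.0e9)
# A 30% clock bump that also needs ~10% more voltage.
turbo = dynamic_power(1.0, 1.32, 2.6e9)

ratio = turbo / base  # (1.1**2) * 1.3 = about 1.57x the power for 1.3x the speed
```

That disproportionate cost is exactly why adding a second core at the same clock is a cheaper route to more throughput than chasing MHz.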
Your 12 core laptop will indeed have most of its cores sitting idle while waiting for main memory, without pushing your memory bandwidth all the way to 100%.
What they'll be waiting on is latency as each cache miss shuts down a virtual hyperthread for dozens of cycles.
The two ways to increase the percentage of your memory bandwidth that is actually used are to either pre-fetch everything you might possibly need (and then not use most of it), or to throw so many threads on the grill that by the time one of them runs out of steam, another will have had its memory request served.
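The second option, oversubscribing with threads so stalls overlap, can be demonstrated with a toy benchmark. Here a short `time.sleep` stands in for a memory stall (seconds rather than cycles, and thread scheduling rather than hardware hyperthreading, so this is an analogy, not a simulation):

```python
import time
from concurrent.futures import ThreadPoolExecutor

STALL = 0.01  # stand-in for a cache-miss stall (seconds, not cycles)

def fetch(i):
    time.sleep(STALL)  # the thread blocks, as a core would on a miss
    return i * 2

def run(n_threads, n_requests=16):
    """Time n_requests stalls serviced by n_threads workers."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        list(pool.map(fetch, range(n_requests)))
    return time.perf_counter() - start

serial = run(1)      # one thread eats every stall back-to-back
overlapped = run(8)  # spare threads keep things busy during stalls
```

With one thread the sixteen stalls add up end-to-end; with eight threads they overlap and the wall-clock time collapses, which is the whole argument for throwing more threads at a memory-bound machine.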
If only we had a reasonable hard real-time OS, we could absorb most of the processing power in the machine into the megamulticore monster and hollow out the rest of the PC. (Winmodems, dumb graphics chipsets, controllerless multi-gigabit ethernet or wireless, and so on.)
I'm not quite sure why people keep going on about this bug in the Barcelonas. Firstly it's fixed, secondly no one I know has hit the bug, nor did Tom's Hardware manage to in their testing. I agree it's doing AMD no favours but as someone else said, I've seen the same from Intel.
Back on to cores: I'd be interested to find out how scaling compares between multiple cores and a multiple-CPU set-up. I've seen some evidence that for a properly multi-threaded application like Sybase (or Oracle, but I've done fewer tests with that) you benefit more from extra CPUs than from extra cores. I imagine this is to do with the large volume of I/O operations from a database. I would speculate that less I/O-driven multi-threaded applications may benefit more from cores than CPUs.
Anyone any experience with this?
"Your 12 core..... for dozens of cycles." by Henry Cobb
Well, it's true that this trade-off matters. Personally, I've done a lot of testing on many computers, laptops and servers with most of the common processor models, ranging from the old 386 series to quad-core Xeons, and from the old AMD K6-2 to the Turion. What Henry says here is a fact: most of our computers, when running everyday programs like MS Office, Windows, media players, web browsers and so on, actually depend more on hard drive and RAM transfers. Most of the time spent waiting for a program to load is time spent seeking data and loading modules. On average, none of our computers' workloads reach even 50 per cent, if you ever calculate it. So multi-core doesn't seem to be a very critical technology for us.
It matters more to those who need data-processing and calculation power: for example, folding@home or earth simulation. These computing tasks can work on a very small or fixed set of data while using up all the processing power until they get the job done. Therefore the data transfer rates between RAM and processors, and between core and core, play a very critical role. If you run folding@home or any similar grid-computing software, you'll find it has been designed to run separately on each individual core. Honestly, that does bring a number of benefits.
"Anyone any experience with this?" BY Matt
Currently I have a few databases in-house, from MS SQL and Oracle. Databases don't require too much processing power unless they need to do some calculation. What they need is HDD and RAM access bandwidth, which includes HDD seek time, transfer bandwidth, RAM access latency, and overall system I/O handling. The larger the database and the query range, the longer it takes to dig out the data you require. The database vendor may tell you that their system can run 32/64 or more queries at a time, but don't forget one thing: if the database is stored on a single HDD with single-channel RAM, then you might as well forget about the simultaneous job handling. And even if your server runs RAID-5 with 6 or more high-speed SAS/SCSI HDDs, the maximum I/O and the transfer latency between storage controller and RAM/processor will limit your performance too.
But you will see rapidly diminishing returns on a *desktop* machine. There are only so many processor-intensive things that need to run simultaneously, after all, though at least having a dozen cores will let you cope with a much higher volume of crapplets and AV/anti-malware programs at once.
Servers can, and pretty much always will, benefit from being able to run more processes... many web apps (for example) can be trivially parallelised, although filesystem and database access cannot be sorted out so easily.
The next trick is going to be blazingly fast concurrent IO to keep up with all these processes. It won't be much good if everyone and their dog can get a handful of 12-core+ monster chips for pocket money, if they're then going to have to invest in eye-wateringly expensive 10GigE or InfiniBand interconnects, and an armful of SSDs running in parallel.
6 cores will be a 3x2 pattern, 12 would be a 3x4.
8 cores would be 4x2, so why not go for 9 and have 3x3?
Chips are square, so it makes no sense to have an 8-core chip, except that computer people are used to counting in base 2. AMD has proven this is not relevant here by releasing 3-core chips.
It may also cost less: the 12-core could be a 16-core 4x4 grid with the "broken" cores switched off. When creating a processor of that size in one go, I bet they'll get defects; they may just be planning for that.
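That salvage argument is easy to put numbers on with a simple binomial model. Assuming defects hit cores independently with some per-core survival rate (a big simplification of real yield models, and the 0.95 figure below is invented), the gap between "all 16 cores work" and "at least 12 work" is large:

```python
from math import comb

def p_at_least_good(cores, good_needed, p_core_ok):
    """Chance a die with `cores` cores has at least `good_needed`
    working ones, assuming independent per-core defects (a crude
    simplification of real yield models)."""
    return sum(
        comb(cores, k) * p_core_ok ** k * (1 - p_core_ok) ** (cores - k)
        for k in range(good_needed, cores + 1)
    )

p = 0.95  # assumed per-core survival rate (illustrative)
perfect = p_at_least_good(16, 16, p)  # die sellable only as a 16-core
salvage = p_at_least_good(16, 12, p)  # die sellable as a 12-core part
```

Under these made-up numbers fewer than half the dies are perfect, yet nearly every die clears the 12-good-cores bar, which is exactly why fusing off broken cores makes economic sense.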
Personally, I think that given that we already have quad core and nothing to do with it all on a desktop, a radical redesign of the front side bus is required.
I don't think 24 cores is at all feasible if all 24 are plugged into the same 64-bit interconnect. Massive lag would probably occur at that point.
It looks like someone is going to have to design a multi-64 bit lane FSB to cope with the data streaming to/from the cores and the RAM/IO thingys.
And THAT will be a major engineering feat.
Maybe on-board SATA interconnect between each core and each peripheral ? Or are we going to see segmentation between cores - one group with HDD priority, one group with GPU priority, and the rest take whatever is left when available ?
The possibilities are endless, but plonking 24 cores on today's bus is a non-starter.
Yes I know the intended focus was the server market. However there are possible benefits to be gained in the 'common' PC market also.
Not completely useless. Many Mac Pros, for example, have had eight cores available for a while now, and there is often a significant speed increase between a Mac Pro with two quad-core processors and one with a single quad-core. So if there is a noticeable difference between systems with eight cores and those with 'only' four in the world of Apple, surely it could not be out of reach for the Windows world. And in any case we would expect the Linux world to seize this opportunity. For the moment the big issue seems to be hardware related, the classic chicken-and-egg problem: there are not many 'common' motherboards for the PC enthusiast which accept (for example) two quad-core processors.
It's not only a multiple-processor problem but also a memory issue: when will a mainstream version of Windows deal with more than 3 GB of RAM without requiring tweaking? Recently we upgraded our PCs to 4 GB without the users experiencing much benefit, while when the Mac Pro machines were upgraded to 32 GB it turned out to be of great help to the users.
GNU/Linux terminal servers love many processes, a bunch for each user. I regularly run 700 processes on a single core in 2 GB. Multiple cores are very useful on these machines, and they are more like desktop machines than web servers.
All of the terminal servers I build have a bottleneck somewhere. On my current machine it is the LAN, because I can't get my PHBs to give me a gigabit/s backbone. With gigabit/s you can easily handle 50 X clients on one NIC, and boards with three or four NICs are available. Multiple cores on 64 bits push the bottleneck back into the CPU/memory subsystem. 2 cores is limiting, very limiting. 4 cores is great. Dual socket with 4 cores is heavenly.

When they talk about 8 or 12 or 16 cores, I know the bottleneck will be in the CPU/memory area. I think it is clear they need to stop ramping up the cores and start doubling the cache instead. If you can keep entire processes in the cache, you win big-time. I think this can already be seen with Vista: it runs fairly decently on an Intel Celeron M chip with a large cache and is slow as molasses on a dual-core AMD64 with a smaller cache.
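The "keep the process in cache" claim boils down to a working-set comparison, which can be sketched as a back-of-envelope check. Every number here is hypothetical; real behaviour depends on access patterns, associativity and sharing, not just sizes:

```python
def fits_in_cache(working_set_kb, cache_kb, overhead=1.2):
    """Rough check: does a process's hot working set (padded by a
    fudge factor for code and stack) fit in the CPU cache?
    Purely illustrative -- real cache behaviour is far messier."""
    return working_set_kb * overhead <= cache_kb

# Hypothetical figures: a chip with a 1 MB L2 versus one with 512 KB.
big_cache = fits_in_cache(800, 1024)   # working set fits -> few misses
small_cache = fits_in_cache(800, 512)  # constant eviction -> thrashing
```

When the check fails, every context switch refills the cache from RAM, which is one plausible reading of why a large-cache Celeron M can feel faster than a small-cache dual-core on the same desktop load.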
With GNU/Linux adoption growing at 50% per annum and LTSP a great way to run GNU/Linux, AMD should look at a chip designed to run huge numbers of processes with lots of cores and huge caches. Otherwise, we may have to go with Intel, as painful as that may be... Intel is a near-monopoly. If they destroy AMD we will pay dearly, but AMD has to give us a reliable option. They did with the Athlon, and AMD64. Can they keep doing it?
Hi. AMD originally told us 8 cores, but since an Intel partner accidentally leaked the new Intel 6-core discrete CPU, it's logical that AMD would go to 6 cores first. A desktop computer would not benefit much with regular apps (except for gaming; the Unreal engine supports 4-core CPUs now), but when software starts parallel processing with all the cores at once, wow! I predict we will all have cluster computers some day... but we need software too. If AMD really wanted 8 cores they could easily do a dual quad-core CPU. The difference is the HyperTransport 3.0 connecting them to prevent bottlenecks.
The future of multi-core from here on is to run multiple instances. As hypervisors become commodities, OS licensing will have to follow, and we'll be running discrete apps on their own mini-OS. No more conflicts when each app has its own personal OS... hell, dedicate a core to an instance; there are plenty to spare when you have a dozen!
The people beating AMD over their bugs and praising Intel are retarded, that's plain and simple...
How many of you were around and remember the many Intel fiascos? Every single one of them Intel refused (I mean absolutely bald-faced lied and refused) to admit was a problem. I remember like it was yesterday when Tom's Hardware and AnandTech brought Intel to its knees over the P3 1133 fuck-up. Even after they had pinpointed the issue, Intel still refused to admit it, sent engineers to Tom's labs and tried to silence him. It was only after they realised they were nailed dead to rights and weren't going to shut Tom and Anand up that they admitted there was a bug and recalled 80,000 CPUs.
Then let's not forget the Rambus fiasco, or the many chipset screw-ups...
At least AMD has enough honour to say to the world: hey, we fucked up, our $x$ CPUs have an error, we can't ship them, we'll let you know when they're fixed and ready...
Mine's the green one with the "Opteron, The Smarter Choice" logo on the back.
Biting the hand that feeds IT © 1998–2021