fee fie fo fum
I'm certain Intel is thinking only in terms of fees and keeping them out of the hands of its rivals. Oh and as large and as many as possible, naturally.
The name "Many Integrated Core" doesn't roll off the tongue – and even Intel doesn't know whether to pronounce MIC as "mick" or "mike" – so with the future "Knights Corner" x86 coprocessors intended to thwart the coprocessor plans of Nvidia and its Tesla family, Chipzilla is settling on the brand name of Xeon Phi to peddle its …
"What both Intel and Nvidia are focused on – and what AMD seems to have forgotten – is that the HPC market is projected to grow at more than 20 per cent in the next five years"
Try doing some research next time rather than copying Intel press releases verbatim; it might make you look smarter.
Or pop over to www.SemiAccurate.com and read one of their latest articles: "AMD Announces CodeXL At AFDS", or the other story about them working with Autodesk and Maya....
AMD has been pushing its Fusion/APU concept for the low end, and add-in cards for the high end of HPC, for years – and pushing development of OpenCL as the open, portable way of getting its GPUs to work as co-processors for heavily parallel maths jobs.
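For anyone who hasn't touched OpenCL: its pitch is a data-parallel model where you write a "kernel" that runs once per index of a global range, and the runtime fans those out across whatever device you have – CPU, GPU, or co-processor. A minimal sketch of that model in plain Python (not the real OpenCL API; the kernel/dispatch names here are illustrative only, and the sequential loop stands in for the parallel launch):

```python
# Sketch of OpenCL's data-parallel model: a "kernel" runs once per
# work-item (one global index). On a GPU the runtime launches these
# in parallel; here we loop sequentially just to show the semantics.
def saxpy_kernel(gid, a, x, y, out):
    # Each work-item handles exactly one element -- there are no loops
    # over the data inside the kernel; the global range supplies the
    # parallelism.
    out[gid] = a * x[gid] + y[gid]

def enqueue(kernel, global_size, *args):
    # Stand-in for the runtime's parallel dispatch over the range.
    for gid in range(global_size):
        kernel(gid, *args)

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
out = [0.0] * 3
enqueue(saxpy_kernel, 3, 2.0, x, y, out)
# out is now [12.0, 24.0, 36.0]
```

The point of the portability claim is that the same kernel source targets any OpenCL device, which is why AMD keeps pushing it as the vendor-neutral route.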
It is Intel, with its complete failure of a graphics engine (formerly called Larrabee), that is late to the party.
Who the F could think Intel is ahead of AMD in the co-processor arena? Intel is afraid to release its Xeon Phi because even at 22nm it can't compete with half an AMD 7970 – which isn't even a compute-focused part.
The truth here is that Intel is so afraid of losing the CPU part of HPC to more open licences that it's ready to sink billions into a failed architecture just to say it's in the market AND get vendor support to lock customers into crap tech once more.
Fact of the matter is, a Pentium core sucks as a co-processor; doing vector processing and pure maths on a multipurpose CPU core can never be interesting – it makes no sense at all.
On the other hand, this article was most likely posted just to flamebait people into commenting - way to go el reg.
Are you serious? What CPU + massively parallel compute platform exactly does AMD offer that matches Intel Xeon + MIC, or even Xeon + Nvidia Tesla? PowerPoint slides about a "heterogeneous system architecture" that will fuse together x86 and GPU ISAs somewhere down the road (coupled with AMD's past record of delivering on roadmaps) don't impress me.
I won't go as far as saying it's a bad idea but the Intel MIC just doesn't make sense to me:
i) Using x86 architecture cores for HPC; isn't that going to mean a lot of redundant x86 instructions that need to be implemented in the hardware/microcode but which will be rarely used in typical HPC workloads?
ii) 8GB RAM seems grossly inadequate for 50+ x86 cores; for 50 cores that's just 160MB/core, which doesn't seem enough if they're going to be running x86 application code, which will itself need some degree of OS support running on the same core and sharing that 160MB allocation.
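The 160MB figure is just straight division (using decimal gigabytes; in binary units it comes to about 164MiB per core):

```python
# Back-of-envelope: per-core share of 8 GB spread across 50 cores.
total_mb = 8 * 1000          # 8 GB in MB (decimal units)
cores = 50
per_core = total_mb / cores  # MB available to each core
print(per_core)              # 160.0
```

And that 160MB has to hold the core's slice of application data plus whatever OS support runs alongside it, which is the crux of the objection.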
iii) The benefit of being able to run existing x86 code unchanged on the MIC seems highly questionable and hardly efficient. Is this not a brute force approach when a more elegantly designed MPP solution would be far more efficient?
Intel are very clearly not stupid but I'm beginning to think that, having failed to replace x86 themselves, with their own iAPX, i960 & Itanium IA-64 architectures, the only horse they have left to flog is the ageing and inelegant x86. I can't really see this as a viable long-term proposition.
Please explain how Intel can make No. 150 in the Top500 with an MIC test system (that provides 1176 MFLOPs/W) if x86 co-processors are a fundamentally bad idea.
Maybe – please just consider the possibility – it's the "RISC elegance" crowd, which has been dealt blows and setbacks time and time again over the last 15–20 years, who are wrong, and not the victims of a vast anti-RISC Intel + MS (and possibly IBM, NWO etc.) conspiracy.
Your paranoia is showing through there; you've got yourself all in a tizzy about RISC vs. x86 when that wasn't what I was talking about.
x86 as an HPC 'co-processor' doesn't seem like a good idea because it's a general-purpose instruction set. HPC is all about efficiency, but incorporating a large, comprehensive instruction set – a significant proportion of which won't be used in typical HPC workloads and which just wastes space – is inefficient.
You are aware that, for a considerable number of years now, x86 has been implemented via microcode on what is essentially (but not purely) 'RISC'y hardware? Furthermore, if Intel were so satisfied with x86 why did they bother with iAPX, i960 and IA-64? Do you think that all those organisations that are using large systems based upon POWER and even SPARC are doing so just out of spite and resentment at x86?
No, x86 is fine for general purpose computing, even though it _is_ clunky and inelegant, but this is because it's comprehensive, well known and widely supported. Only a tiny proportion of people involved in software development (mostly compiler devs) need to care about the underlying architecture; the rest of us only interact via relatively high-level abstractions.
Btw, similar arguments apply to the embedded space too; this is another area where x86 doesn't make much sense and isn't widely used (the i960 did make some inroads into this area, but it is now largely dominated by ARM and IBM Power architectures).
About the only place I can see this being of benefit over a GPU-style co-processor is in ERP/web-type loads, where you have highly threaded, highly parallel, small-integer workloads.
i.e. large parallel payroll jobs, or very dense web servers, where each thread does very little work but you have a lot of users.
But for HPC not a chance.
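The workload shape being described – many independent jobs, each doing a trivial amount of integer work – can be sketched with a thread pool. This is a hypothetical illustration (the payroll rule and figures are made up), not anything from the article:

```python
# Sketch of the "many tiny integer jobs" workload: lots of independent
# work items, each cheap, fanned out across threads. This is the shape
# that favours many simple general-purpose cores over a GPU's
# wide-vector floating-point units.
from concurrent.futures import ThreadPoolExecutor

def payroll_line(gross_pence):
    # Trivial integer work per "employee": a flat 20% deduction,
    # computed in pence to stay in integers. Hypothetical rule.
    return gross_pence - gross_pence * 20 // 100

gross = [100_000, 250_000, 80_000]   # made-up gross pay, in pence
with ThreadPoolExecutor(max_workers=8) as pool:
    net = list(pool.map(payroll_line, gross))
# net == [80000, 200000, 64000]
```

Each task here is branchy, scalar, integer work with no shared data – exactly the kind of thing a GPU lane handles poorly but an x86 core handles natively, which is the commenter's point.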
This depends on the meaning of "x86 instruction set" in Intel's marchitecture for this "MIC The Knight" chip.
If it is x86 compute and control-flow instructions only – no luck. You want the whole protection, descriptors, paging, etc. shebang for a multiuser general-purpose compute job like web servers or payroll. If this chip has had that stripped out in the name of HPC, you might as well chuck it in the bin; it will be useless for heterogeneous general-purpose computing.
I think that this is the most significant issue. However, I believe that one of the benefits that Intel has claimed for MIC is that it'll run existing x86 code, which implies that it has the 'full' x86 architecture.
Having said that, I guess it's possible they've removed some of the more general-purpose underlying hardware to make it more efficient for HPC, then re-implemented the missing functionality in slightly less efficient, more complex microcode to make it fully x86-compatible again.
Either way though, it doesn't seem that it can excel in both scenarios.