yah-de-yah
As long as it runs DOOM5 at 100FPS, I couldn't give a toss
AMD's new graphics architecture isn't merely about painting prettier pictures. It's about changing the way computers compute. As first revealed last month at AMD's Fusion Developer Summit, the chip designer has gone out of its way to ensure that its future APUs – accelerated processing units, which is what the company calls it …
It's commonly known that Intel's top end CPUs are a decent bit faster than AMD's top end offerings...
But Intel doesn't even compete in the same universe as the top end AMD GPU offerings...
So I could tell you exactly how those benchmarks would look. Anything that is heavier on the GPU, AMD would win, anything that is heavier on the CPU, Intel is likely to win. Shock!
First, it is not "they" that need to support Linux-based OSes.
Second, Linux may very well get support (or at least better support) for IOMMUs before Windows.
If it's open, it's usually available. That's why nVidia drivers exist for Linux, but Radeon drivers are still "experimental": ATI didn't support an open driver interface.
It would be "they" if they chose to keep the critical details of the hardware secret, so that no one could program it themselves.
Fortunately, that's pretty unlikely if they intend the thing to be used for general-purpose programming, rather than just for one Direct X driver to be written for it, with everyone else just calling it (or complaining that there isn't also an OpenGL driver).
But since it's derived from video card technology, I can understand the fears, even if they are wholly unwarranted in this case, as you note.
So in summary....
In AMD's FSA the VLIW-based SIMD is replaced with a GPGPU architecture, built around compute units (CUs) that scale tasks across their vector units on a per-wavefront basis. This gives great QoS for the GUI, and it will be called GCN, according to the AMD CTO.
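Or, as a rough conceptual sketch in plain C (not real GPU code; the function names and the loop are mine, purely to illustrate the idea): you write the scalar per-work-item body once, and the hardware runs it 64 lanes at a time per wavefront across a CU's vector units.

    #include <stdio.h>

    #define WAVEFRONT 64   /* AMD wavefront width: 64 work items in lockstep */

    /* You write the scalar body once, per work item... */
    static void work_item(int i, float a, const float *x, float *y)
    {
        y[i] = a * x[i] + y[i];
    }

    /* ...and the hardware conceptually does this: one wavefront of 64
       work items mapped across a compute unit's vector units. */
    static void run_wavefront(int base, float a, const float *x, float *y)
    {
        for (int lane = 0; lane < WAVEFRONT; ++lane)
            work_item(base + lane, a, x, y);
    }

    int main(void)
    {
        float x[WAVEFRONT], y[WAVEFRONT];
        for (int i = 0; i < WAVEFRONT; ++i) { x[i] = i; y[i] = 1.0f; }
        run_wavefront(0, 2.0f, x, y);
        printf("y[10] = %g\n", y[10]);   /* 2*10 + 1 = 21 */
        return 0;
    }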
Right?
AMD moans about Intel but Intel does one thing that AMD never does...Advertise.
So when folks go to buy a PC they get a choice of Intel or AMD.
Well they've never heard of AMD but they get to hear the Intel jingle at least three times a day on TV so they buy from the one brand they have heard of.
Simple.
AMD needs to sack their marketing team, get a new one and start spending on some jingles etc. After all, there is only one other company doing it in their field, so how hard can it be?
Even Acer has adverts in the UK.
As for no-one wanting AMD, I build my budget PC boxes with AMD CPUs in them. Why? The saving of using AMD enables me to put a 60GB SSD in the box. Customers don't notice the difference between an Athlon II and an i5, but they notice the SSD. They also get USB 3.0/HDMI/usable integrated graphics, which they don't get on the budget Intel motherboards.
Since AMD took over ATI the drivers have improved a vast amount, and since the days of the 3000 series cards they have been just as good as nVidia on Windows, and recently they have been better at just about everything.
In the last year or so they have provided Linux drivers on par with, or maybe even better than, NV's. True, their latest cards may need the Linux crowd to wait a few weeks, but that is because they are NEW chips which need NEW drivers, as opposed to NV, which has done naff all apart from re-naming old chips with the occasional speed bump due to process maturity.
Oh dear, I've really got my mortarboard in a tizzy this morning...
"You program it in a scaler way" - that would be "scalar".
On a more relevant note, a company that has become all but irrelevant tries to claw its way back by designing things that nobody wants and that simply don't perform.
(Now sits quietly and waits to receive the ceremonial Inverted Thumb of Death from the AMD fanbois)
You, sir, are of the highest intelligence because you happen to have, with the unusual sharpness of a Moriarty-like mind, recognized, in a flash as it were, what capitalism is all about.
Except for the "all but irrelevant", "things that nobody wants" and "simply don't perform" parts.
Have a cookie. It's a bit mouldy.
I heard that game and multimedia developers choose the Intel compiler for its excellent optimization characteristics, and that the resulting executable tends to check for GenuineIntel before it will perform well. Otherwise, on AMD, it is horrible.
So, if it's true, how will AMD manage to convince developers to use their CPU/GPU optimizations while they sit idle with thousands of high-end apps ignoring their old-fashioned CPUs?
As a user who hates the Intel brand itself, I am almost convinced to go with an i5 + Nvidia GPU on Windows 7 because of the sad fact above. Oh, I also decided to boycott ATI because it refuses to do a very easy Win7 recompile of its Vista driver.
For without AMD, Intel would still be selling the 500MHz Pentium 5 at $1,000.00 per unit.
AMD is far from irrelevant. Now VIA is irrelevant.
AMD is taking RADEON and changing the core of computing.
Intel has Sandy Bridge which is a core duo glued to crummy graphics.
Intel has no graphics capability. RADEON was an expensive purchase but just maybe they knew what they were doing when AMD overpaid for it.
All that is needed for many tasks is an FPGA co-processor. Load an FPGA image that does exactly what you want, the CPU sends whatever data is needed to the FPGA, and the FPGA squirts the result out of the other end. For stuff like transcoding video this would be a PoP. You could have FPGA images in a repository, just like you do with Linux packages. This is not difficult to implement. You can have several versions in the repos, one for each vendor (synthesis and place-and-route are different for each vendor), just like you do with i386, x64, ARM, MIPS, PowerPC...
"All that is needed for many tasks is an FPGA co-processor."
Indeed. I've wanted a motherboard (or add-in board on a fast bus) with a few smallish FPGAs to play with for a while too... load a transcoding image into these two, a decrypt image into that one, code-generation ones into this guy - swap 'em around when you've finished one job, etc. It wouldn't cost pennies, but it shouldn't be too pricey either - there's probably more work in getting the memory access sorted and the IO traces properly matched than anything else on the hardware side, and that's just fiddly rather than expensive.
Obviously this would be mostly, though not entirely, just to play with - but surely there's nothing wrong with that !
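Just to make the idea concrete, the host-side flow might look something like this in C. Everything here (the /dev/fpga0 node, the ioctl number, the image name) is invented for the sake of the sketch; a real board would have its own vendor-specific interface:

    /* Hypothetical sketch of driving an FPGA co-processor from the host.
       The device node and FPGA_LOAD_IMAGE ioctl are made up for illustration. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    #define FPGA_LOAD_IMAGE 0x4601   /* made-up ioctl number */

    int main(void)
    {
        int fd = open("/dev/fpga0", O_RDWR);       /* hypothetical device node */
        if (fd < 0) { perror("open"); return 1; }

        /* Tell the driver which bitstream to load (e.g. a transcoder image
           pulled from a distro-style repository). */
        if (ioctl(fd, FPGA_LOAD_IMAGE, "transcode-h264.bit") < 0) {
            perror("ioctl"); return 1;
        }

        unsigned char in[4096], out[4096];
        ssize_t n;
        while ((n = read(STDIN_FILENO, in, sizeof in)) > 0) {
            write(fd, in, (size_t)n);              /* squirt data into the FPGA */
            ssize_t m = read(fd, out, sizeof out); /* read the processed result */
            if (m > 0) write(STDOUT_FILENO, out, (size_t)m);
        }
        close(fd);
        return 0;
    }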
Errrrrrr.... there are a lot of hardware evaluation boards designed for people to experiment with CPU and FPGA hardware architectures - which sounds very much like the thing Tim is expressing an interest in. These boards usually ship with a Linux kernel and a GNU toolchain, as these tools, and open source in general, are a very powerful and effective way of enabling the intended experimentation and evaluation.
Is that ok with you?
AMD's problem is getting major software developers to code for this new architecture. The vast majority of developers code with Intel / NVidia in mind, as they have the lion's share of the market.
Without market share and the marketing power that comes with it, AMD can build an amazing processor and still fail, as no-one will be interested in taking advantage of it, for fear of marginalising their software.
I am doing work with AMD GPUs, and OpenCL is what I currently use. There are pros and cons, but it provides me with the ability to run my code on/across a wide range of compatible systems. My devbase runs on the big servers at work and on a ~200UKP desktop system with a ~70UKP GPU at home.
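For anyone wondering what "a wide range of compatible systems" means in practice, here's a minimal C host sketch using the standard OpenCL calls (clGetPlatformIDs, clGetDeviceIDs, clGetDeviceInfo) just to list whatever devices a box happens to have. The same code builds and runs unchanged on the work servers and the cheap desktop (link with -lOpenCL):

    /* Minimal OpenCL device-enumeration sketch: lists the platforms and
       devices available on whatever machine it runs on. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint nplat = 0;
        clGetPlatformIDs(8, platforms, &nplat);

        for (cl_uint p = 0; p < nplat; ++p) {
            char pname[256];
            clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                              sizeof pname, pname, NULL);

            cl_device_id devices[16];
            cl_uint ndev = 0;
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devices, &ndev);

            for (cl_uint d = 0; d < ndev; ++d) {
                char dname[256];
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                                sizeof dname, dname, NULL);
                printf("%s: %s\n", pname, dname);
            }
        }
        return 0;
    }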
"The vast majority of developers code with Intel / NVidia in mind, ..."
You what? The only people who code with NVidia in mind are the folks who write drivers for their cards. A high-nines proportion of the programming community *can't* write NVidia-specific code in their usual coding environment. Similarly, although it is slightly easier to write CPU code that runs on Intel but not AMD hardware, hardly anyone does it, because most of the compilers for most languages won't let you, and for the rest it is fairly tedious.
There is no catch-22 for this or any other radical architecture. Here's why...
If AMD can create a C compiler that generates decent code for the heterogeneous system (and by decent, I mean faster than just using the homogeneous part, even for algorithms that aren't embarrassingly parallel) then the Linux kernel will be ported within a month of the systems being available in shops. Others will provide cfront-style front-ends for all common languages. In many cases, this is already how compilers for such languages are written, so this will be trivial. Once you have a complete Linux distro running, faster than is possible on a homogeneous offering, Microsoft will announce a Windows port, because if they don't then they'll spill oceans of blood in the server market, where Linux already has enough market share to be taken seriously.
If AMD can't create such a compiler, then their new architecture just isn't as great as they are claiming and no-one (outside AMD) will care.
There is nvidia specific kit in the highend desktop computation market. These are expensive machines with multiple special purpose GPU cards. They utilize a proprietary nvidia API. However they are by no means general purpose machines. They are very specialized, much like a computing cluster.
This is a small niche. Seems to do well for particular problems. Not at all general purpose.
AMD seems to be trying to take that approach onto the desktop. Dunno if will really matter there though. Between Doom and XBMC, most people seem to be pretty well set with the current kit.
"If AMD can create a C compiler ..."
I agree with everything you said, however, playing along with the nonsensically exaggerated title of the blog post, your recipe can't/won't topple nuttin'.
It's high time AMD goes all in. Start from scratch. Throw out everything everyone assumes about how software and systems "have to" be developed and operate - forget the bad stuff, improve on the best stuff to create a top-to-bottom re-imagined stack and life cycle - complete with OS (not another clone of a clone of a clone of Unix pretending to be special), GUI and software development tools (compilers? There's a dinosaur), and apps and an unprecedented "contract" with the user.
Make it run everywhere, but run best on AMD chips. Focus on mobile, but support old-fashioned non-mobile devices too. :-) "Sure, Android devices are good, but have you seen Android+ ?"
~$100,000,000 <5 years. Completely underground.
Risky, but toppling isn't for the timid.
@DanFarfan
If I can just reply to myself...
"Microsoft will announce a Windows port because if they don't then they'll spill oceans of blood in the server market, ..."
I'm *assuming* that this heterogeneous thingy is fully virtualisable. Current GPUs are not, which is why you don't get bleeding edge games performance in a VM. However, it is obvious (to me!) that if you are depending on the GPGPU for performance on general benchmarks then you won't be considered AT ALL in the modern server market unless you preserve that performance advantage under virtualisation.
So, um, AMD could block their main route to market if they choose to omit virtualisation support in the first release.
AMD currently has a slightly larger market share than Nvidia for desktop/laptop graphics doesn't it? So is Intel / NVidia "the lion's share of the market"? Just as likely to be Intel / AMD I'd have thought... unless I'm missing the point and you're talking about something other than graphics.
Agree with the basic issue you raise though - the nuances of the dominant cpu (Intel) are far more likely to get an optimised code path compared to carrying out a major re-write to support a radical improvement (AMD) but with a smaller installed base. AMD need to make it very easy for developers to get a decent advantage from this new hardware.
Aren't we at the Itanium/x86_64 point again?
Surely the problem with all of these APUs or GPGPUs is that suddenly we will have processors that are no longer fully compatible, and may run code destined for the other badly, or possibly not at all!
The only thing that x86 related architectures have really had going for them was the compatibility and commodity status of the architecture. For a long time, things like Power, PA, Alpha, MIPS, Motorola and even ARM processors were better and more capable than their Intel/AMD/Cyrix counterparts of the same generation, but could not run the same software as each other and thus never hit the big time.
Are we really going to see x86+ diverging until either AMD or Intel blink again?
I'd be pretty pissed if my password was being stored hashed without a salt. I'm not naive, I know it happens, but it should be illegal under the DPA for lack of due care.
Not saying this is an infallible solution, but I'm always amazed when a service warns users that their passwords have been compromised.
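Salting isn't exactly rocket science either. Here's a minimal C sketch using POSIX crypt() - the "$6$" prefix selects SHA-512 crypt on glibc, and the salt is stored alongside the hash. The salt here is hard-coded purely for illustration; a real system would generate a random one per password (link with -lcrypt):

    #define _XOPEN_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <crypt.h>

    int main(void)
    {
        const char *password = "correct horse battery staple";
        const char *salt = "$6$Wl2bN7qa$";   /* illustrative only, not random */

        char *hash = crypt(password, salt);
        if (!hash) { perror("crypt"); return 1; }

        printf("%s\n", hash);                /* salt + hash, stored together */
        return 0;
    }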
Since I couldn't understand most of the rest of the article I dropped into Pedant Mode.
I know the Reg writers don't have time for things like correct spelling, grammar or semantics; but when you give links to definitions like TLA could you please use a correct (or more correct) definition?
In this case the Wikipedia definition allows for the fact that in most instances of TLA, the "A" stands for abbreviation, not acronym as PCMag seems to think.
GCN is a three letter abbreviation, not a three letter acronym.
LOL - you may not like Wikipedia - that's irrelevant.
My point is that the author of the article, and PCMag.com (and apparently you and AC) do not understand the difference between an abbreviation and an acronym. They are not synonyms.
Here is a link you can use... www.dictionary.com
Like I said - I'm in pedant mode, but TLA is not a TLA by your definition.
"...do not understand the difference between an abbreviation and an acronym. They are not synonyms."
What gave you that impression?
An acronym is a pronounceable abbreviation.
I always assumed that TLA *incorrectly* stood for "Three Letter Acronym".
The dictionaries may have corrected that oversight but it doesn't make them right in terms of their popular usage, and the usage I have seen and heard for the past 20 years contradicts your assertion.
"TLA is not a TLA by your definition"
And why not? Are you saying that "TLA" is not an acronym?
Using your dictionary link, "acronym" means: "a word formed from the initial letters or groups of letters of words in a set phrase or series of words"
So regardless of what "A" stands for, "TLA" is an acronym; and a three-letter one at that. Granted, acronyms are also abbreviations (but not vice versa).
I really don't understand what you're being pedantic about.
There are already some Arm systems (CPU+GPU on die) available using similar, if not the same, techniques, and we are due many more. Available from NVidia now, I believe.
However, these are a bit pants; but by 2013 there will be quad-core Arm CPUs with built-in GPUs (with vector cores and associated HW to help with encode/decode, 2D/3D etc), running at tiny wattage, that will blow away Intel and AMD chips with their high power consumption (and in some cases LESS performance overall).
And these will be mobile devices with the performance we currently see on the desktop.
Just use the Khronos APIs to code for them (OpenGL, OpenCL, OpenVG etc), and it doesn't matter whether you are on Arm or x86.
Should be fun!
I've always known TLA stood for "Three Letter Acronym"; if you stop being a pedant for a minute it actually makes sense to be called "Three Letter Acronym" instead of "Three Letter Abbreviation"!
TLA is actually a Three Letter Acronym itself, and in no sense an abbreviation!
It doesn't matter what the expansion of TLA is - although I have to question the logic of your remark...
"The dictionaries may have corrected that oversight but it doesn't make them right in terms of their popular usage, and the usage I have seen and heard for the past 20 years contradicts your assertion."
...that's just silly talk. But if you're doing modern English GCSEs I can imagine how you came to that conclusion.
My original point, which appears to have completely passed you by, is that the author of the article (Rik Myslewski) indicated that GCN is a TLA (see the bottom of the first page of the article).
But since the link that Rik used points to a definition of TLA as "Three Letter Acronym" - then GCN is not an acronym, it's an abbreviation.
@ByeLaw101 - what on Earth are you on about? How can you "pronounce" GCN? Are you Polish? They're the only people I know that seem to be able to pronounce words without vowels.
In the past, ATI GPUs, although they could be used for computing, didn't seem to me to have the kind of support for that application that Nvidia GPUs had.
I'm really happy to hear news like this; being able to put multiple cores on a chip meant that it was possible to design chips with significantly improved floating-point capability, and so I've been waiting for that to happen.
If you ask me, all Intel does is blow hot air... I have been building computers for 15 years and in the real world, AMD has always worked better for me, seemed faster and more reliable... not to mention a hell of a lot cheaper... CPU+GPU is gonna be a winner... now would be a good time to buy some stock...
The original Athlon scared Intel and Opteron/x86-64 really made Itanium look stupid. Intel, though, have always caught up quickly and crushed them with brand strength (like no-one ever got fired for buying IBM or Microshaft). I suspect Intel's fabrication plants also have far better economies of scale. AMD frequently makes losses whereas Intel keeps on coining it.
Uh sorry to be a denier Zog, but while it is true that AMD really had a very good chip with the Athlon compared to Intel's Pentium IV, they then did exactly what Intel had done and sat on their laurels to maximise profit. Because facilitating good research is obviously a waste of money if your current chip is better than your rival's. Hey, can't blame 'em: Intel does the same. Funnily enough, Intel only got back in it by accident with Core (from the mobile guys) and the tables were turned. And we haven't moved since (4GHz anyone?) and are now in hype mode trying to flog parallel processing.
Silicon CPUs won't go much above 4GHz for good physical reasons. Assuming otherwise was the big mistake Intel made with the P4 design, which let AMD get ahead for a while.
Parallel processing of one sort or another is the only way we're ever going to get chips 10x or 100x the power of a current Intel or AMD CPU.
It doesn't help that Intel's compilers disable optimizations on non-Intel CPUs. Thus many of the benchmarks are invalid if they are using Intel compilers to build the executables.
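To show the kind of check being talked about (this is just an illustration in C of a vendor-string dispatch, not Intel's actual compiler code): the vendor ID comes straight out of CPUID leaf 0, so keying the fast path on "GenuineIntel" rather than on the actual feature flags is trivial. Uses GCC/Clang's <cpuid.h> helper, x86 only:

    #include <stdio.h>
    #include <string.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        char vendor[13] = {0};

        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
            return 1;

        /* Leaf 0 returns the vendor string in EBX, EDX, ECX order. */
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);

        printf("vendor: %s\n", vendor);
        if (strcmp(vendor, "GenuineIntel") == 0)
            printf("would take the 'optimised' path\n");
        else
            printf("would fall back to the plain path\n");
        return 0;
    }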
People need to remember, too, that Intel bribed and threatened PC makers with high chip pricing if they did any business with AMD. It did a lot of damage to AMD, but now, after losing in court and after AMD has released their APUs, bully Intel is beginning to pee in their pants.
Intel is, as always, spending millions of dollars on misleading advertising for things it hasn't released yet, much like its misleading and failed efforts: Larrabee, Itanium, etc.
The 3D transistors they advertised so heavily, claiming to be the first to have them, are all hot air, because they still don't have them, and TSMC will have them right before, or close to when, Intel does.
Any decent dictionary is regularly updated to match a word's current popular usage. This is perhaps the only time when public opinion is never wrong! Wikipedia is pissing in the sea if it seriously thinks it can tell everyone to change their understanding of an expression's meaning.
Indeed, for me (and just about everyone I know) it was 'Three Letter Acronym' several decades before there WAS a Wikipedia. Whether we should call it that is irrelevant - the fact is we do!
"@ByeLaw101 - what on Earth are you on about? How can you "pronounce" GCN? Are you Polish? They're the only people I know that seem to be able to pronounce words without vowels"
Ah... I see your point, my bad. Although I suppose you could pronounce TLA as "Ta...LAAAAAA!" ;)
Also, you should note that everyone really loves a pedant ;)
Unless you are a lady, in which case you are a...lady. Have a pint!
But I see that Will Godfrey is now completely missing the point. I knew it was a mistake to quote Wikipedia!
Will...please put aside the actual meaning of TLA (as you point out - you consider semantics to be a matter for popular vote). Forget that people don't give a crap about the documented meaning of words these days. Blimey - I hope you don't write mission critical code for a living!
In the article the author pointed to a reference that said TLA stands for "Three Letter Acronym". He was referring to GCN, which is not an acronym, it is an abbreviation.
Ironically, if he had pointed to the Wikipedia entry instead of the PCMag entry he would have been fine, because Wikipedia allows both expansions of the abbreviation TLA (which is, by the way, consistent with the OED definition of TLA).
Flippin' heck! It was only supposed to be a mildly pedantic comment in the first place! Now it's turning into Vogon Poetry!