"the jump to C++ AMP is minimal"
Ok... I'll bite.
But I warn you, I'll spit it out if I don't like it :P
How much then will it cost, hmmm ?
(I haven't even upgraded to 2010)
Microsoft has announced a new technology designed to help C++ developers build massively parallel applications. Known as C++ Accelerated Massive Parallelism – or C++ AMP for short – the technology will be included in the next version of the company's Visual C++ compiler, and Microsoft plans to open up the specification for …
Methinks what Sutter and Microsoft are not telling you, for perfectly legitimate proprietary intellectual property protection and exclusive exploitation reasons ie the boundless fortune available to C++AMP Followers and SMART Programmers, is that the jump also requires a quantum leap into alternative mindset systems for the easy Presentation of Future Projects and Virtual Reality Promotions and Premieres ......... CyberIntelAIgent Launches for Power and Control of the SMART IntelAIgent Space which Shares for Exploitation and Ab Fab Lab Research and Development.
But that is a field which all SMART Intelligent Services are also busy kitting out with their own user-friendly magic buttons and invisible levers/virtual connections and deep underground channels of communication.
Microsoft, and Uncle Sam, late to the party again .... and playing catch up/gossip gather up for metadatabase reverse engineering of phished private stock options to discover Base Mine and Core Ore Sources ...... aka Creative Universal Meme.
re: "the jump to C++ AMP is minimal", amanfromMars 1, Thursday 16th June 2011 08:14 GMT
Methinks you are some kind of a nuTBot?
key words: Ab Fab Lab Research and Development, Base Mine, C++AMP Followers, Core Ore Sources, Creative Universal Meme, CyberIntelAIgent, SMART IntelAIgent Space, SMART Intelligent Services, SMART Programmers, Virtual Reality Promotions, alternative mindset systems, boundless fortune, deep underground channels of communication., exclusive exploitation reasons, invisible levers/virtual connections, legitimate proprietary intellectual property protection, metadatabase reverse engineering, phished private stock options, quantum leap, user-friendly magic buttons ...
"Nvidia's CUDA, for example, is tuned to its own GPUs, and Sutter admitted in a post-keynote Q&A that "if you want to get the absolute best performance from one vendor's GPU, you will hardly be able to do better than that vendor's GPU stack". Then there's open-source OpenCL – hardly a vendor-specific approach to GPGPU computing."
This is one of the delights of C++ - there are so many options to choose from. I prefer my code to be as portable as possible - across hardware, OS and compilers. Obviously, this cannot always be achieved, but there's nothing about parallelism that requires Windows and Visual Studio. Selecting this technology would just make it harder to deploy on Linux, OS X, etc.
Windows is hardly the first thing that pops into people's heads when they're looking for compute cluster platforms, but it can do a perfectly reasonable job especially if you already have the devs and sysadmins and license agreements to hand. This is probably more of a dogfooding exercise; I'll bet MS have been using it internally for their own stuff and see no reason not to provide it to everyone else too.
I wonder why OpenMP wasn't good enough though. Maybe it just didn't cover enough of the parallel tasks they wanted. Nonetheless, OMP is well supported by several compilers across several platforms (including several versions of Visual Studio), and OMP code can be built just fine in a single threaded fashion using non-OMP capable compilers. I'll bet AMP doesn't have that same advantage.
I believe the man was just pointing out that it is getting easier all the time to write massively distributed and computationally intensive applications, which makes previously hard problems solvable by a larger group of developers (of which I imagine some whizzkid teenager could be part).
But I guess if you want to intentionally ignore his point to allow you to make a snide comment, then that's your call.
is always speed. I come from a background in high performance scientific computing, and I'm just about old enough to remember HPF and how poor it could often be when running on distributed memory machines. We nowadays use MPI, which requires significantly more programmer effort than either HPF or the more modern openMP but is almost always significantly more scalable. For our main fluid code we could only scale a given problem realistically to about 32 processors in HPF, about 64 in openMP, and to the entire machine (600 processors) using MPI. The same code has more recently been scaled to tens of thousands of processors in its MPI form. Even the best compilers are no substitute for good human design, and if you're talking about hundreds of thousands of cores then I just don't see any scheme like openMP or a rejuvenated HPF ever being useful unless your workload is embarrassingly parallel.
I don't really know that much about non-scientific parallel workloads, but for the type of things that I work with better algorithm design to improve data locality will always help more than more sophisticated compiler design.