Can Larrabee Lazarus stunt Nvidia's Tesla?

Our pal TPM writes here about how Intel is re-targeting the Larrabee (or a Larrabee-like) processor from being a graphics card replacement to providing HPC processing power in a tidy package. It would act as a co-processor a la Nvidia's Tesla or AMD's GPUs or even FPGAs. The key difference between Larrabee and other solutions …

COMMENTS

This topic is closed for new posts.
  1. This post has been deleted by its author

  2. Stephen Booth

    Is x86 important?

    I'm not sure x86 compatibility is important. HPC codes are not written directly in x86 assembler and, to a very large extent, are re-compiled for each system rather than being distributed as binaries.

    Historically the HPC market has been happy to migrate to whichever architecture provides the most value for money. The current dominance of x86 chips in HPC is due more to the price/performance benefits driven by the mass market in x86 chips than to any requirement for x86 compatibility from the HPC market. This would argue that graphics chips have the advantage due to selling in two markets (unless of course Intel subsidise their HPC-only chip).

    Programmability is an issue, but the PGI compiler already targets Nvidia GPUs to accelerate normal Fortran codes. There are problems with targeting attached co-processors with their own memory, but these would also be experienced if the attached processors are x86 cores.

    Of course x86 compatibility does mean you might be able to build a large MPP system out of ONLY Larrabee chips which might save some money.

    1. Steve Roper
      Thumb Up

      Add to that

      the fact that x86 architecture is comparatively inefficient, with lots of overhead and clock-cycle-hungry instructions thanks to its CISC structure, whereas modern GPUs are much more streamlined, and I can't see Larrabee becoming a big market player in the HPC arena any time soon.

      1. Eddie Edwards
        Thumb Down

        1990s called

        1990s called, they want their CISC vs RISC argument back. You have been following x86 arch since, say, the 486, right?

  3. sdsantini

    x86 compatibility is ultimately important

    It's all about the applications, and if someone can readily port to the Larrabee-like CPU and take advantage of the CPU power, then that solution will win.

    Supposing the cost of conversion were reduced to something trivial (as it is today), so that your old code runs unchanged on the new CPU, or can be recompiled to use its advanced features, it would be game over.

    There will always be specialty processors for specific tasks, but if the existing ecosystem can be carried over to a CPU of that caliber, then the future will be interesting.

    1. Robert Hill
      Boffin

      I suspect...

      that you have never really coded for GPUs, or know much about what they are used for.

      Think weather simulations, nuclear explosion modelling, discrete element analysis, crypto, financial modelling, numeric simulations of other random but difficult types, and you get the idea.

      In short, stuff that is usually highly custom, highly secretive, and very much NOT your off-the-shelf applications such as those that run on your standard x86 processor. So, porting your existing x86 applications is really not all that interesting...and most of them would not be massively parallel applications to begin with.

      In the GPU space, most "legacy" code is written for Nvidia's CUDA programming model, only because they got there first....well, actually MasPar got there first with commercialised SIMD (Single Instruction Multiple Data) processing back in the early '90s, but all I can do by mentioning them is date myself...as usual. ;-) And while the custom SIMD processor arrays that powered MasPar's boxes were interesting, they didn't follow the cost/volume curves of commercial GPUs, and hence are but a footnote of computing history...

  4. ROlsen

    Other advantages than x86

    One of the things I was looking forward to with the Intel chip is that it is less parallel than the GPUs but more parallel than a few cores. I have a simulation that has gained somewhat from porting to GPU, but it really requires an intermediate amount of parallelism and better random memory access. Intel's chip looked like it would be in the sweet spot for me, and I can't be the only one in this situation.

  5. zooooooom

    @Stephen

    "Programmability is an issue but the PGI compiler already targets nvidia GPUs to accelerate normal fortran codes. There are problems with targetting attached co-processors with their own memory but these would also be experienced if the attached processors are x86 cores.

    Of course x86 compatibility does mean you might be able to build a large MPP system out of ONLY Larrabee chips which might save some money."

    This may reflect the state of play today, but in a world of standardised coherent interconnects (HT/CSI etc) I'm not sure that it needs to be true. Surely the advantage of x86 as the co-processor ISA is that it becomes easier to build a co-processor that simply looks like another logical (set of) cores and can coherently address common memory, without the penalty associated with the accelerator being on the wrong end of an I/O channel (GPU cards) and/or having to manage memory explicitly (Cell)?

  6. Anonymous Coward
    Anonymous Coward

    "Intel, with all its capital and expertise"

    Really? I'll give you they've got Capital, but I'm not sure their expertise counts for much.

    Outside the world of x86, they've delivered a string of failures, from the iAPX 432 to IA64 via the non-event I2O, plus a string of integrated graphics failures. WiMax is dead too.

    At one point they even owned StrongARM, whose relatives are now in almost every worthwhile box of consumer electronics. But they sold it. Fools. Now Intel only survive in the netbook market by dint of being able to leverage their Wintel deals. Once enough folk see what ARM/Android on phones can do, which sensible person will want Wintel netbooks?

    Intel's attempt to define the "industry standard 64bit" market is a laughable irrelevance, on short term life support while customers who value HP-UX, NonStop, and VMS are still allowed (by HP) to buy them. It's already clear that those OSes can run on non-IA64 hardware, it's only internal face-saving and politics that stops them moving to x86-64 as a sensible de-duplication effort. Other than a short term need for a tiny number of massive-SMP massive-memory systems too big for today's biggest Proliant, why do we need Integrity as well? And don't say "RAS" unless you can back it up with real concrete examples that Proliant can't match.

    Intel weren't even going to do x86-64 till AMD forced their hand by showing that it could be done affordably and in a timely manner with AMD64.

    It'll be interesting to see what happens to formerly interesting software companies like Wind River and Virtutech (Simics) now that they are under the dead hand of Intel management.

    Intel look to be heading the same way as Microsoft have gone in recent years. Sell.

  7. Bill Neal
    Joke

    Yes, but...

    can it run Crysis 2?

