Nvidia launches not one but two Kepler2 GPU coprocessors

The wait for the "Kepler2" GPU coprocessors based on the company's GK110 GPUs is over. That's the good news. The bad news is that you may have to hurt someone – or call in a favor at Nvidia – to get your hands on one, because so many people are going to be looking for one to goose their number-crunching. And you'll have to get …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    Two questions:

    1) If GPUs are better at performing calculations than CPUs, why do Intel, AMD, etc. still bother with CPUs? Couldn't they just use GPUs on the motherboard instead?

    2) Where do you plug the monitors in on this graphics card?

    1. Dcope
      Holmes

      1) GPUs are better at these types of calculations (maths/simulation); a CPU has to cope with a much wider range of number crunching.

      2) You don't. This is like leaving an older GPU in your system and setting it in the control panel to be used for CUDA/co-processing; you can do that without a monitor attached.

      1. Richard 12 Silver badge

        To expand on (1)

        GPUs are "massively parallel", because their architecture was originally optimised to do the same set of T&L calculations to every single pixel on your monitors.

        So a General-Purpose GPU (GP-GPU) is really great at applying the same f(x) to a huge dataset, whereas a CPU is good at doing many different tasks on a small dataset.

        If significant parts of your task can be boiled down to "foreach x do f(x)", GP-GPU is going to really speed things up. Otherwise it probably won't.
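
        A minimal sketch of what that "foreach x do f(x)" pattern looks like in CUDA (purely illustrative; the kernel, names and sizes here are made up and not anything specific to these new cards):

          // Hypothetical example: apply the same operation to every element of a big array.
          __global__ void scale(float *data, float factor, int n)
          {
              // Each thread handles one element of the dataset.
              int i = blockIdx.x * blockDim.x + threadIdx.x;
              if (i < n)
                  data[i] *= factor;   // the same f(x), applied to every x in parallel
          }

          // Host side: launch enough threads to cover all n elements, e.g.
          //   scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);

        A CPU would work through that loop largely one element at a time; the GPU throws thousands of threads at it at once, which is where the speed-up comes from.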

  2. IHateWearingATie
    Trollface

    Yes, yes, yes,

    .... but what FPS will I get on Battlefield 3?

  3. CADmonkey
    Alert

    Never mind Battlefield 3, I want more FPS from AutoCAD

    AutoCAD is double-precision and single-threaded. Will these cards help?

    1. Richard 12 Silver badge

      No

      The only thing that can help you is to throw away AutoCAD and get a CAD package instead of a glorified sketchbook. It'll help your customers as well; fewer stupid mistakes...

      (AutoCAD is a drafting package; it's never been CAD. They have tried to fix it, but at heart it's just not solid modelling.)

      1. detritus

        Re: No

        May I ask what you would suggest instead, Richard 12?

        I only ask as I agree with your assertion, is all; AutoCAD is useless to me with the way I work.

        1. Richard 12 Silver badge

          Re: No

          Solidworks and Vectorworks are both good and well respected; which one is better for your business depends on the kind of plugins you need.

          Solidworks still seems to have the better FEA tools, while Vectorworks has by far the best lighting simulation.

          I don't know enough about the other plugins to say either way.

          Solid modelling catches so many stupid errors. I only wish more architects would start using it, and stop putting sprinklers, ducting, low ceilings and my 2m racks all in the same place...

      2. CADmonkey

        Re: No

        For someone who makes effective use of italicisation, you talk a lot of crap.

  4. Anonymous Coward

    At this rate I am expecting the "Tesla" moniker to be slapped on disposable razors like they do with "Platinum", "smooth-chinned Swashbuckler-Delta" and "Glassfibre-reinforced-polyester-VXR-II" :P

    Maybe Greggs could do a "Tesla" jumbo sausage roll? ;)

    Still, thankful for small mercies, at least they didn't call it "Cold Fusion" or "Juicing" or "Atkins-Diet"

  5. Anonymous Coward

    grunt boxes

  6. Angry_Sup
    Coat

    Wide Version Consolidation

    How well will these chips work in the Tesla auto systems?

  7. mhoneywell

    A request for Tim

    I find this stuff fascinating and can happily bore my wife rigid with stories on the fantastical powers of chips nowadays. But I always feel slightly uncomfortable claiming to understand how NVidia got into the HPC game, and how HPC now seems to be working as a rocket up its rear end. I obviously wasn't paying attention when the transition took place. And to be honest, I'm not sure I understand the technologies that make it all fit - I've not been so close to tech for the last ten years.

    So the question, Tim: any chance you could do a history, in layman's terms, and just explain what's going on here? I feel like I need to know more about HPC and these strains of chips that are pushing their way into the market.

    1. tpm (Written by Reg staff)

      Re: A request for Tim

      Hey Marcus

      Here's a great place to start with how Nvidia got here:

      http://www.theregister.co.uk/2011/11/15/sc11_huang_keynote_exascale/

    2. Anonymous Coward

      Re: A request for Tim

      It's one of those meteoric rises a decade in the making. As far as I can tell, things started with some academic research into "GP-GPU" in the early 2000s. NVIDIA took this seriously and started developing the CUDA language and modifying their shader design to support GP-GPU, with the G80 launch in 2006 as the first CUDA-capable product. There were a lot of problems with this first launch: poor support for double precision, no libraries, no ECC memory. But it was good enough to get developers started, and we saw things like PhysX and some financial applications start to pick up.

      Over the next several years, NVIDIA invested a ton of effort in porting various important libraries and applications to CUDA so that HPC developers would have a lower learning curve to use the new technology. They started making GPUs specifically for compute (mainly targeting industrial users), which helped grow the developer base and gave a better sense of the important tradeoffs.

      When the first ECC-enabled cards came out a couple of years back (~2010), all the other necessary pieces were in place for the tech to take off in supercomputers, and the power/perf win was big enough that customers were willing to take a multi-million-dollar gamble. Once the first center in China bought in and became the number 2 supercomputer at a relatively tiny cost, everything opened wide.

  8. Anonymous Coward
    Go

    Yes, but will it run Crysis?

    No, seriously. These cards are obviously intended for "bigger" things than gaming, but could you use one as an additional card to improve performance further? And if so, would you be limited to games that use CUDA functions, or would it be like SLI and apply to everything using the GPU?

  9. Mikel

    That's a lot of perf

    They're getting some pretty premium prices for these things though.

