Hey coders – get a sense of hUMA: AMD to free GPU from CPU slavery

AMD is to manufacture microprocessors that connect their on-board CPU and GPU components more intelligently than ever before. The upcoming chips will utilise a technique AMD calls Heterogeneous Queuing (hQ). This new approach puts the GPU on an equal footing with the CPU: no longer will the graphics engine have to wait for the …

COMMENTS

This topic is closed for new posts.
  1. Michael H.F. Wilkinson Silver badge

    Sounds very interesting

    Anything that allows more flexible access to all this compute power would help extend the range of algorithms you could run on them.

    1. Dave 126 Silver badge

      Re: Sounds very interesting

      You've mentioned before that the applications you use are CUDA-accelerated, Mr Wilkinson. Has anything changed? (I know that some previously CUDA-only applications are incorporating OpenCL in light of the upcoming AMD-powered Mac Pro)

  2. Joerg

    Just more marketing b*ll

    AMD is doing nothing new. Anyone who knows the PCI-Express specs should be able to understand that AMD isn't really adding anything here.

    1. Dave 126 Silver badge

      Re: Just more marketing b*ll

      If all AMD were doing was promoting the technology, that would be important in itself - since the technique requires developers to adopt it.

    2. Roo

      Re: Just more marketing b*ll

      "Anyone knowing PCI-Express specs should be able to understand that AMD is adding nothing really here."

      I'm guessing that you are referring to all that DMA & address hackery stuff that PCI-E provides, and I agree that wouldn't be a new thing for AMD to crow about. I think the "new" bit is attempting to standardise how work is specified and dispatched to the GPU; if this reduces the number of crappy binary-only GFX drivers, that would be a good thing for the developer & customer too. :)
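
      As a very rough sketch of what "standardising how work is specified" could mean, here is a hypothetical dispatch packet in C. The names and fields are illustrative, loosely modelled on the user-mode queuing AMD and the HSA Foundation have talked about, not AMD's actual layout:

      #include <stdint.h>

      /* Hypothetical user-mode dispatch packet: the CPU fills one in and
       * rings a doorbell; the GPU consumes it directly, with no driver
       * call in between. */
      struct dispatch_packet {
          uint16_t header;             /* packet type, barrier/fence bits */
          uint16_t setup;              /* number of grid dimensions used */
          uint16_t workgroup_size[3];  /* work-items per workgroup, x/y/z */
          uint32_t grid_size[3];       /* total work-items, x/y/z */
          uint64_t kernel_object;      /* address of the compiled kernel */
          uint64_t kernarg_address;    /* address of the kernel's arguments */
          uint64_t completion_signal;  /* signalled when the work is done */
      };

      If every vendor consumes the same packet format and queue protocol, the same runtime can sit on top of any of them, which is exactly the sort of thing that could thin out those binary-only drivers.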

    3. Anonymous Coward

      Re: Just more marketing b*ll

      I disagree: while it is possible to implement a similar solution using PCIe trickery, it is complex, might not work as efficiently as you'd expect, and will almost certainly break if you change your hardware setup. AMD are introducing a software API which will presumably be both efficient and forwards compatible.

  3. Anonymous Coward

    How about viruses?

    Doesn't this create a whole new means for virus-writers to infect people's computers?

    Instead of infecting things through the CPU, a new kind of virus would be able to run on the GPU and not be bothered with things like Address Space Layout Randomisation or even Data Execution Prevention.

  4. Paul J Turner

    It's all BS

    No matter how hard you try and dress it up and shout 'New Technology', 'New Architecture', the fact is that main memory is already inadequate for the CPU alone, let alone sharing it with the GPU.

    If that wasn't the case, we wouldn't have THREE damned levels of cache between main memory and the CPU, now would we?

    Given the effectively random addresses that a second accessor of memory generates from the CPU's point of view, those cache lines are absolutely essential. How many levels of cache is the GPU going to get as the next step in trying to make a crap, penny-pinching idea work at last, after all these years?

    1. Anonymous Coward

      Re: It's all BS

      If bandwidth between RAM and the CPU is already a bottleneck, surely it is better to have a RAM>GPU path when appropriate than a RAM>CPU>GPU path?

    2. Bronek Kozicki

      Re: It's all BS

      Actually, you are wrong. The caches are there for all the trickery needed to reduce latency; however, adding another consumer, while increasing bandwidth utilisation, won't necessarily increase latency (as long as demand stays below the maximum, which is almost always the case).

      Obviously, you still need RAM for a frame buffer dedicated to the GPU only, because there the bandwidth demand is very high, but sharing memory between CPU and GPU means you no longer need to copy the textures. Just let the CPU put them somewhere in RAM, and the GPU will use them in place. You should be able to do the same for shaders.
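
      For illustration, OpenCL's existing host API can already express this (CL_MEM_USE_HOST_PTR is a real flag; whether you actually get zero-copy depends on the implementation and the hardware), something like:

      #include <CL/cl.h>

      /* Wrap a texture the CPU has already written into RAM, rather than
       * copying it into a separate device allocation (which is what
       * CL_MEM_COPY_HOST_PTR would do for a discrete card). On a
       * unified-memory part the GPU can read tex_host_ptr in place. */
      cl_mem make_texture_buffer(cl_context ctx, void *tex_host_ptr,
                                 size_t tex_bytes, cl_int *err)
      {
          return clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_USE_HOST_PTR,
                                tex_bytes, tex_host_ptr, err);
      }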

    3. Alan Brown Silver badge

      Re: It's all BS

      Faster RAM is being worked on, but for the last 25 years it's always been "just around the corner", whilst dynamic RAM somehow kept moving the goalposts by going faster and faster.

      If/when new RAM technology shows up, there's a good chance it'll change a lot of things: it may well be not only fast but also non-volatile, and that's something that even static RAM never really managed to achieve.

      I'd say that'd result in lower power consumption, but DRAM power consumption is down in the noise compared to the average display's power draw.

  5. Joe K

    Next-gen

    Sony already pushed this into AMD's custom APU in the PS4, with some unconfirmed reports that Xbox One has it too, though the lack of unified memory in MS's console cripples the concept somewhat.

  6. Robert Grant

    Not as exciting as G-Sync, I reckon.

    Though I'll need a new monitor.

  7. Bronek Kozicki

    interesting

    The most interesting part, for me personally, is how they have implemented the task queue for the CPU, and how it cooperates with the OS scheduler.
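
    A user-mode queue of that kind can be sketched as a ring buffer plus a doorbell. Everything below is a hypothetical illustration in C, not AMD's actual layout; the point is that enqueuing needs no system call, so the OS scheduler only gets involved when a producer decides to block:

    #include <stdatomic.h>
    #include <stdint.h>

    #define QUEUE_SLOTS 256  /* power of two */

    struct dispatch_packet { uint64_t words[8]; };  /* placeholder payload */

    struct user_queue {
        struct dispatch_packet ring[QUEUE_SLOTS];
        _Atomic uint64_t write_index;  /* advanced by the CPU producer */
        _Atomic uint64_t read_index;   /* advanced by the GPU consumer */
        volatile uint64_t *doorbell;   /* memory-mapped GPU register */
    };

    /* Enqueue from an ordinary application thread: no system call and no
     * driver transition on the fast path. The OS scheduler is only
     * involved if the queue is full and the producer chooses to sleep
     * rather than spin. */
    static int enqueue(struct user_queue *q, const struct dispatch_packet *p)
    {
        uint64_t w = atomic_load(&q->write_index);
        if (w - atomic_load(&q->read_index) >= QUEUE_SLOTS)
            return -1;                  /* full: spin, yield or sleep */
        q->ring[w % QUEUE_SLOTS] = *p;  /* copy the packet into the ring */
        atomic_store(&q->write_index, w + 1);
        *q->doorbell = w + 1;           /* ring the doorbell: new work */
        return 0;
    }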
