Are accelerators the cure to video's power problem or just an excuse to peddle GPUs?

There's no denying that streaming video is insanely popular – now accounting for upwards of 80 percent of all internet traffic, according to some estimates. Video is also an incredibly resource-intensive medium. As reported by the Financial Times, an ammunition plant in Norway found that out the hard way. The plant, …

  1. Rich 2 Silver badge

    This makes no sense

    Buying very expensive GPUs to transcode your video (the article doesn't really say transcoding from what to what) might save you some network bandwidth and storage, but even with AV1 that is only going to gain you a year or two before the quantity of traffic bumps up to fill the space - a bit like the argument that "bigger roads just attract more cars".

    And the servers are still going to guzzle megawatts of power.

    So yes - this sounds like a sales pitch to flog GPUs and not an even vaguely useful "solution" to the problems cited.

    1. Disgusted Of Tunbridge Wells
      Facepalm

      Re: This makes no sense

      “bigger roads just attract more cars”

      [eyeroll]

      The argument the anti-car fanatics actually make is that bigger roads worsen congestion.

      Also if you can save bandwidth and either:

      A: deliver increased video quality for the same bandwidth utilisation, or

      B: deliver the same video using less bandwidth

      Then surely that's a good thing - whether or not usage will increase in the long term. And if usage increases, that demonstrates that things are good (just like how increased train use after privatisation, compared with the falling train use and Beeching cuts before it, proves that privatisation was successful).

      The question surely is whether the fancy card costs more than it saves (B), or whether the increased video quality is worth the price of the fancy card (A).
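
      A rough break-even sketch for (B), with entirely made-up numbers, just to show the shape of the sum:

        # All figures hypothetical: the card price, CDN egress rate, traffic and
        # bitrate saving are assumptions, not anything from the article.
        card_cost = 2000.0            # USD for the fancy card
        egress_cost_per_gb = 0.02     # USD per GB delivered
        gb_served_per_month = 500_000
        bitrate_saving = 0.10         # 10% smaller streams

        monthly_saving = gb_served_per_month * bitrate_saving * egress_cost_per_gb
        print(f"saves ~${monthly_saving:,.0f}/month, pays back in "
              f"{card_cost / monthly_saving:.1f} months")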

      1. jmch

        Re: This makes no sense

        “bigger roads just attract more cars”

        Well, duh!! "Bigger" (actually wider) roads = more throughput = less traffic, making it more attractive to use a private car vs public transport, so all other things being equal, they will obviously attract more cars. Whether that will in fact make overall congestion worse depends on many, many factors, particularly whether you are widening just one road without widening the feeder roads.

        Anyway, to return to the bandwidth....

        Bandwidth required for 4K streaming is about 25 Mbps using already-current coding/decoding. Maybe you could, for example, code a video stream at source even better... but the current algorithms are already pretty good; I doubt you could squeeze out more than 5%, maybe 10%, bandwidth reduction. Then you have to encode it once at the server end and decode it 100 times at the user end, so every user device has to support the new format, either in hardware or via software decoding. There's not really much rationale for needing a step-change-more-powerful GPU in every user device to save what is, ultimately, not a lot of bandwidth.

        Just allow standards to gradually improve organically as they mostly always have.
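
        To put rough numbers on the bandwidth point above (25 Mbps per 4K stream, a generous 10% codec gain; the viewer count is made up):

          bitrate_mbps = 25        # ~4K stream today
          saving = 0.10            # optimistic gain from a better codec
          viewers = 1_000_000      # hypothetical concurrent 4K viewers

          saved_gbps = bitrate_mbps * saving * viewers / 1000
          print(f"~{saved_gbps:,.0f} Gbps saved across {viewers:,} streams")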

      2. Rich 2 Silver badge

        Re: This makes no sense

        I wasn't suggesting that aiming for better video compression wasn't a good thing. I just don't see it coming anywhere close to solving the issues being discussed.

        And forget about the car thing - it's not important.

      3. Ace2 Silver badge

        Re: This makes no sense

        Oh hey, Disgusting! Haven’t seen you in a while.

        Those anti-car fanatics… in the planning department at the DOT.

        What an idiot.

      4. Michael Wojcik Silver badge

        Re: This makes no sense

        Yeah, it's not like Braess's paradox is a real thing or anything.

        But, hey, rant away. Why let accuracy and logic stop you?

    2. Groo The Wanderer

      Re: This makes no sense

      I have to disagree. Acceleration of any compute-intensive workload always reduces the total power drain. Trying to run a GPU's workload on a CPU would require such a mammoth cooler to deal with the resulting heat and power dissipation that it boggles the mind.

      Acceleration is about designing circuits tuned to specific tasks; if your workload doesn't require custom circuitry, you likely won't bother with custom silicon to implement it, but will use any of a number of burnable technologies with a "generic" device instead. Because the circuits aren't general purpose, their power requirements and heat dissipation are usually dramatically lower than CPU-based processing of the same workload - i.e. a net gain on power, heat, and cooling requirements.

      The problem is the GPU vendors are trying to convince the public that they "need" to run those compute-intensive loads like ray-tracing in the first place; they don't. It is pure eye candy that doesn't affect playability or watchability when it comes to fun-factor, which is all anyone really cares about with games and such. When I run older DX9 and DX10 games on my 3060Ti, it doesn't even spin up the fans. What has changed is the perceived expectations of the public as to what are "good enough" graphics.
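
      Back of the envelope on the power point, with made-up but plausible figures (only the ratio matters):

        # Hypothetical: a software encode pegging a big CPU vs the same title
        # going through a fixed-function encoder block.
        cpu_watts, cpu_hours = 250, 2.0      # software encode
        asic_watts, asic_hours = 30, 0.5     # fixed-function hardware encode

        cpu_wh = cpu_watts * cpu_hours
        asic_wh = asic_watts * asic_hours
        print(f"{cpu_wh:.0f} Wh vs {asic_wh:.0f} Wh, roughly {cpu_wh / asic_wh:.0f}x less energy")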

      1. Michael Wojcik Silver badge

        Re: This makes no sense

        Acceleration of any compute-intensive workload always reduces the total power drain

        Until you add more work, which would happen quickly. That was, I believe, the original point.

        1. Tomato42

          Re: This makes no sense

          Oh no! People want progress and to do more things! It's the end of the world! We should have kept it at the mechanical-calculator level, not all these computational fluid dynamics models. /s

  2. Anonymous Coward
    Anonymous Coward

    Transcode?

    Transcoding on the server is a waste of energy when it can simply store every resolution. I'm sure billion-dollar companies can do this, or already do.

    The client is where the problem is. VVC/H.266 can certainly help, but not when almost no clients currently support it (besides a few OTA options).

  3. mark l 2 Silver badge

    Disk space is cheap. Even if Netflix needs to keep 50+ copies of the same movie to accommodate every bitrate, resolution and codec it streams to end users, that's a pretty trivial cost in storage, and it means end users should get the best-quality video, provided it's been encoded correctly.

    Whereas, in my experience, GPU-encoded videos tend to be larger in file size and yet lower quality than the same source material encoded on a CPU. So there seems little benefit to anyone other than those streaming a live event in using a video accelerator to encode on the fly.
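
    Rough numbers on the disk-space point - the bitrate ladder below is made up, just to get an order of magnitude for one title:

      # Hypothetical ladder of renditions (Mbps) for a 2-hour film, per codec.
      ladder_mbps = [0.5, 1, 2, 3, 5, 8, 12, 16, 25]
      codecs = 3            # say H.264, HEVC and AV1
      hours = 2

      gb = sum(ladder_mbps) * codecs * hours * 3600 / 8 / 1000
      print(f"~{gb:.0f} GB to store every rendition of one title")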

  4. CommonBloke

    Seems more appropriate for livestreams

    Since static content can be copy-pasted and served, there's no further processing needed for the video proper.

    Livestreams, on the other hand, are more likely to make the best use of AV1 and this GPU acceleration.

    As to what will actually happen: whatever is most profitable and least costly for the companies.

  5. Ideasource

    Loose standards are wasteful

    Instead of mastering to all these different bitrates and resolutions, pick a reasonable set and let market pressure push device manufacturers to comply.

    I pay for Netflix, but I still watch Netflix's shows from the pirate streams.

    The quality is higher and more consistent without VBR mucking things up.

    I still initiate a playback through Netflix so the performing artists can get paid.

    But the whole situation seems rather silly.

    Netflix paying all that money to maintain an inferior experience.

    Seems it would be more efficient for Netflix to operate as a tipping platform and just let the pirate streams carry the experience.

  6. Groo The Wanderer

    I'd rather have a good hardware-accelerated implementation of Tidal's Android player; between that and the antivirus on my phone, I lose almost a third of my former runtime.

    Hardware acceleration and specialized sub-devices, sometimes built with this new "tiling" assembly process, are really where the future has been heading since the '80s. Got a compute-intensive workload? Offload it. It started with "smart" devices that had their own processors and could respond to commands/requests, eventually shifted to discrete floating-point accelerators, and has continued in incremental steps for over 40 years now.

    Realistically, it is the only way to deal with the slowing of Moore's law - reducing the number of compute jobs the CPU itself has to do in order to lengthen its useful lifespan.

    When it comes to media, I'm always against heavy compression. See "Tidal" above.

  7. Ace2 Silver badge

    The article stated that software encoding tends to produce better results. How could that be? If you’re using the same parameters, I would hope the SW and HW solutions would produce very similar output.

    Maybe the claim is that the HW engines tend to have more limited settings.
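
    One way to test the claim yourself - a sketch, assuming an ffmpeg build with libx264, NVENC and libvmaf (exact option names vary by build, and the file names here are made up):

      import subprocess

      SRC = "source.mp4"   # hypothetical test clip

      # CPU (x264) and GPU (NVENC) encodes at roughly comparable settings.
      subprocess.run(["ffmpeg", "-y", "-i", SRC, "-c:v", "libx264",
                      "-preset", "slow", "-crf", "23", "cpu.mp4"], check=True)
      subprocess.run(["ffmpeg", "-y", "-i", SRC, "-c:v", "h264_nvenc",
                      "-preset", "slow", "-cq", "23", "gpu.mp4"], check=True)

      # Score each encode against the source with VMAF (higher is better).
      for enc in ("cpu.mp4", "gpu.mp4"):
          subprocess.run(["ffmpeg", "-i", enc, "-i", SRC,
                          "-lavfi", "libvmaf", "-f", "null", "-"], check=True)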

  8. Michael Wojcik Silver badge

    You could just watch less TikTok or Twitch

    Well, no, I can't. I'm already at zero.

  9. bo111

    Call to action for environmentally aware people

    Let us all manually reduce video resolution unless high resolution is actually necessary. This would allow considerable energy and compute savings (encoding, decoding, transmission, etc.).

    iPhones and Android phones might want to use their proximity sensors to reduce image resolution automatically when no one is watching or the viewer is far away. Dear YouTube, please allow audio-only playback of your videos.
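
    A rough sense of the saving from dropping 4K to 1080p (25 Mbps is the 4K figure quoted elsewhere in the thread; 5 Mbps for 1080p is an assumption):

      mbps_4k, mbps_1080p, hours = 25, 5, 1

      gb_saved = (mbps_4k - mbps_1080p) * hours * 3600 / 8 / 1000
      print(f"~{gb_saved:.0f} GB less transferred per hour of viewing")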
