AI caramba, those neural networks are power-hungry: Counting the environmental cost of artificial intelligence

The next time you ask Alexa to turn off your bedroom lights or make a computer write dodgy code, spare a thought for the planet. The back-end mechanics that make it all possible take up a lot of power, and these systems are getting hungrier. Artificial intelligence began to gain traction in mainstream computing just over a …

  1. b0llchit Silver badge
    Facepalm

    ...have already produced tailored silicon for inference on smart phones, and startups are becoming increasingly innovative in anticipation of edge-based inference.

    The fallacy of optimization: you reduce the energy consumption for the individual and simultaneously increase the number of individuals.

    And it looks like the increase in use is a significant factor larger than the optimizations. Thus, the energy consumption is still increasing. If optimization really would be making a dent (on a larger front), then we'd see a global decline of "carbon usage". Unfortunately, at the large scale, we see an increase or at best a flat line.

    1. Mike 137 Silver badge

      The fallacy of optimization

      @b0llchit

      Well said! This has even happened with LED lighting. In many places we now have increased energy use and more light pollution, as people install many more room lights and swathes of domestic "low energy" outdoor lighting where there was none before.

    2. Binraider Bronze badge

      The Jevons paradox - namely, that making things more efficient increases demand - is very well established, dating back to 1865. Jevons reasonably accurately predicted the downfall of the British Empire on the basis of the rate of coal consumption versus available stocks - only staved off a little by other fuel types coming in.

      It's no different with computing I'm afraid. Max power consumption of a 486 DX2/66 was about 6 or 7 watts.

      Today, an "average" CPU - the Ryzen 5 3500X - has a TDP of 65W. Obviously vastly more potent, but there are many more PCs in circulation today than there were in 1994. Hell, I have 3 in the house (admittedly, not used all the time!). In most use cases, most of that power isn't being "productively" used... though, to be fair, the proportion of 486s used for productivity might not be that far removed from current CPUs.

      AI through brute force of processing I find increasingly distasteful. It's CPU hungry, training it requires experienced scientists and data scientists working together, and calibration mostly consists of tuning models to return the results you expected to see in the first place through other means. At least this is true in the fields of electrical and mechanical engineering. It would be known in other circles as confirmation bias, and frankly, unless you are really prepared to dig carefully, it seems to be a near universal outcome of so many machine learning applications. See also: Monte Carlo models tuned to produce what the manager wanted to hear.

      Case in point. I like books on military history. Amazon runs flashy algorithms to advertise to me things that relate to books I've bought before. But did it really need those flashy algorithms to work out that having bought a few books by James Holland, I would probably be interested in Max Hastings?

      How much power is burned needlessly to do this? And how much needless compute (and therefore power) could be saved by cutting down on bad/expensive/pointless analysis? At least until such time as we have fusion power and literally more power per head than one could rationally use - if the species doesn't destroy itself first.

      Another obvious example is the energy being burned for crypto to obfuscate transaction chains - a role cash used to play, albeit cash is harder to move around. I wouldn't mind if the energy was carbon free; however, when marginal coal/gas has to be burned to push crypto bits around, governments should be stepping in to curtail it. What was it - power use estimates up to levels equivalent to Argentina's national demand? Ridiculous. (All in the name of what has become a Ponzi scheme, too.)

      1. Anonymous Coward
        Anonymous Coward

        You can see this with software as well: as engineering has become more and more efficient and you can have two or three engineers creating something quickly (rather than 20 over the course of months, say) you just give prominence to second-tier disciplines such as UX that (correctly) refuse to become more efficient.

      2. Rich 10

        Cash is still easier to move around - the cost of a cash transfer for payment of an item is minimal compared to, say, selling Bitcoin to pay for something. Cash is also much more easily traceable, because the systems have been in place for so long and the rules are well established. But AI is changing that - if you want to buy something or move crypto around, it's getting a lot easier to track. The tax man must always be paid. Ask Al Capone.

  2. Pascal Monett Silver badge
    Stop

    "Counting the environmental cost of artificial intelligence statistical analysis machines"

    FTFY

    We do not have AI. I will not stop bringing that point up.

    1. Anonymous Coward
      Anonymous Coward

      machines and entropy

      I'd guess we can regard training anything as a process of reducing its entropy, so if we knew something about its number of states and the energy required to switch them, it might be possible to estimate the energy required to reach a given level of functionality, at least in rather general terms.

      I suppose then it is a question of whether Beast-machines or Silicon-machines are more economical for the planet, and who gets to decide which should be propagated...
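      That estimate can at least be bounded from below using Landauer's principle, which gives the minimum energy to erase (or irreversibly set) one bit as kT·ln 2. A back-of-envelope sketch - the parameter count and bit width below are purely illustrative, not from the article:

```python
# Back-of-envelope lower bound on training energy via Landauer's
# principle: erasing one bit costs at least k_B * T * ln(2) joules.
import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
T = 300.0              # room temperature, K
bits = 175e9 * 32      # hypothetical: 175B parameters at 32 bits each

e_per_bit = k_B * T * math.log(2)   # ~2.9e-21 J at room temperature
e_total = e_per_bit * bits          # ideal minimum to set every bit once

print(f"Landauer bound per bit: {e_per_bit:.3e} J")
print(f"Lower bound for all parameter bits: {e_total:.3e} J")
```

      The striking thing is that this ideal bound comes out around tens of nanojoules, while real training runs burn megawatt-hours - many orders of magnitude above the thermodynamic floor, so physics is not what's limiting efficiency here.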

      1. EarthDog

        Re: machines and entropy

        evolution has already optimized Beast Machines

        1. veti Silver badge

          Re: machines and entropy

          Unfortunately, they are optimised for an environment that is long gone, and getting further away every year.

        2. Jaybus

          Re: machines and entropy

          Oh, no. beast machines continue to be optimized by natural selection. Also, the beast machines are optimized by trial and error, just like the silicon ones. And if one considers the millions of years it takes to optimize beast machines, I'm not sure how energy efficient the process is.

    2. jmch Silver badge

      Yep, no real intelligence there, just glorified pattern matching that doesn't know what it's doing. Throwing more power at it won't change the fact that, fundamentally, there is no intelligence there. That's before even beginning to consider that even the most cutting edge brain biology and biochemistry has still eff-all idea of how human intelligence actually works.

      Apropos of nothing, a low-power laptop processor consumes about 40W. A human brain consumes about 20W.

    3. Martin Gregorie Silver badge

      Yes.

      If the device can't explain how it arrived at the answer it just gave you, then it obviously isn't intelligent.

      1. Blank Reg Silver badge

        I guess that leaves me out as I very often know the answer but don't remember why I know.

      2. Throatwarbler Mangrove Silver badge
        Paris Hilton

        "If the device can't explain how it arrived at the answer it just gave you, then it obviously isn't intelligent."

        A lot of people fall into that category, especially when the topic of politics arises.

        1. veti Silver badge

          Yep, this is fairly typical goalpost manoeuvring designed to deny that AI is happening.

      3. LionelB

        Well, octopuses can't do that. Crows can't do that. Chimps can't do that. I can't even do that sometimes. Some pretty good chess-playing software, on the other hand - which no-one would describe as "true AI" - can (in principle) do that.

        It's a remarkably poor criterion for "intelligence".

    4. Il'Geller

      The AI that you see finds information in its context and subtext. That is, the computer literally understands the texts. This is the real AI.

    5. Michael H.F. Wilkinson

      They don't even do statistical analysis. If they did, they would be able to put confidence limits on their classifications at least.

      The power issue is a serious one. Last year we published a "hand-crafted" method for stereo vision for a gardening robot that was perhaps not quite as accurate as some deep learning methods, but ran happily on a Raspberry Pi, as opposed to a powerful laptop with GPU. The accuracy was good enough for the purpose of the robot not running into trees.

      1. Il'Geller

        If one uses NLP one gets statistical analysis: it shows how important information is. Mentioning something in passing, in one phrase in a huge text, has less value than the same phrase in a text of two sentences.

      2. LionelB

        They don't even do statistical analysis. If they did, they would be able to put confidence limits on their classifications at least.

        Not necessarily so; sometimes it is just too damn hard to calculate confidence intervals analytically, and/or too computationally intensive to use surrogate data methods (e.g., bootstrapping). This does not imply that the ML method itself is not statistical.
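        For illustration, here is a minimal percentile-bootstrap sketch of attaching a confidence interval to a classifier's accuracy - the test results are made up (85 correct out of 100), and real use would bootstrap over an actual held-out set:

```python
# Percentile bootstrap CI for a classifier's accuracy, given
# per-example correctness flags from a (hypothetical) test set.
import random

random.seed(0)
# Made-up test results: 1 = correct, 0 = wrong (85% accuracy here)
results = [1] * 85 + [0] * 15

def bootstrap_ci(data, n_resamples=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean of `data`."""
    means = []
    for _ in range(n_resamples):
        sample = [random.choice(data) for _ in data]   # resample w/ replacement
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples)]
    return lo, hi

lo, hi = bootstrap_ci(results)
print(f"Accuracy 0.85, 95% CI ~ [{lo:.2f}, {hi:.2f}]")
```

        The point being: the interval costs only resampling and counting, no analytical derivation - which is also why it gets expensive for models where each evaluation is itself costly.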

        I also suspect (though I'm not sure) that some ML techniques may indeed allow calculation of confidence intervals.

        Agreed that power consumption is a serious issue.

      3. Jaybus

        It's really just a multi-dimensional linear equation, right? xM = y, where x is an input vector and y is the output vector. M is an unknown matrix. Training involves presenting a set of x for which y is known, and it is repeated until xM, for some set of x, is within a defined error margin of the corresponding set of y. The training is statistical, but not the end use. Once trained, that is the coefficients in M are calculated, it could certainly be deployed on a Raspberry Pi as well.
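        A toy sketch of that picture (only the single-layer case - real networks interleave nonlinearities between such layers): "training" solves for M by least squares over known (x, y) pairs, while "deployment" is one cheap matrix multiply per input. The data here is synthetic:

```python
# xM = y picture: fit the unknown matrix M from training pairs,
# then inference is a single matrix multiply.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "true" mapping the model should recover
M_true = rng.normal(size=(4, 2))

X_train = rng.normal(size=(100, 4))   # 100 known input vectors x
Y_train = X_train @ M_true            # their known outputs y

# "Training": least-squares estimate of M from the (X, Y) pairs
M_hat, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)
assert np.allclose(M_hat, M_true, atol=1e-6)

# "Deployment": one matrix multiply per input - cheap enough for a Pi
x_new = rng.normal(size=(1, 4))
y_pred = x_new @ M_hat
```

        This is also why the comment's conclusion holds: once the coefficients are fixed, the forward pass costs a tiny fraction of the training effort.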

    6. LionelB

      AI?

      Whatever you think "artificial intelligence" actually means.

      If you think it means "artificial human-like intelligence", then clearly no, we do not have it. But does it mean that? Should it mean that? Do you think humans have a unique claim to intelligence? How about insect-like intelligence? Octopus-like intelligence? Machine-like intelligence?

      (And what if humans are, after all, basically "statistical analysis machines"? That's not a frivolous question: recent thinking on human cognition and intelligence is leaning in that direction; cf. the Bayesian brain hypothesis.)

      FWIW, I suspect that the future of AI is not likely to be human-like, but something rather alien, that we will nonetheless recognise as intelligence.

      In the mean time, why don't we just stick with "machine learning" - we all know what that means in practice.

    7. Justthefacts

      Semantics, aka does it matter?

      How much of what AI replaces is intelligence anyway?

      Take e-commerce chatbots, replacing a human “customer service agent” on the phone.

      But the human often has extremely limited agency. They only know what is on their screen, and if you ask them to do something non-standard, they mostly simply don’t have the means to do it. The chatbot effectively passes the Turing test because the human is required to act like a robot.

      You as a customer may feel insulted by not being able to talk to a human being. But ultimately that’s just a way to smooth your feathers, while not changing the outcome of resolving your issue.

      TLDR;

      AI is just a more efficient way than employing a human to read the customer-service-agent script.

  3. Primus Secundus Tertius

    Human alternative

    The human being runs on about 75 Watts. The brain - that with which we think we think - uses a fair chunk of that.

    1. jmch Silver badge

      Re: Human alternative

      About 20W I believe.

      Would be interesting to match, for example, a human chess or Go grandmaster against a program that not only runs on 20W, but also includes only models that have been trained with 20W, instead of against beasts like Deep Blue.

      1. Randy Hudson

        Re: Human alternative

        Modern engines running on an iPhone (5W) can outplay any human

        1. anonymous boring coward Silver badge

          Re: Human alternative

          Not sure why you got downvoted. I believe you are correct.

          1. ITS Retired

            Re: Human alternative

            The iPhone is an Apple product, that's why. No other reason. And no, it wasn't me downvoting.

        2. jmch Silver badge

          Re: Human alternative

          The question isn't if the engine can outplay a human with 5W power, I know that it's possible. But that engine has probably been trained through many millions of games in a high-power environment.

          What I'm not sure would work so well is to also train the engine using a low power processor.

          1. veti Silver badge

            Re: Human alternative

            Yes, so? Any serious human player has probably read multiple books distilling hundreds of years' worth of analysis and experience. Let's not pretend anyone works out winning chess strategies from first principles.

        3. EarthDog

          Re: Human alternative

          But of course the human brain is doing much more than that at any one moment: processing visual-spectrum data, maintaining gas exchange processes, pumping and monitoring nutrition and waste products throughout the body, processing sound waves, building planning maps of future events (I'll get done with this and then have a pint), etc.

          I don't think there is a machine yet that could approach this amount of tasking in 20W. You'd basically have to run a giant factory with thousands of subsystems 24/7 on 20 watts to match the human brain.

      2. LionelB

        Re: Human alternative

        Of course for a level playing field, you'd want to factor in the energy consumption of the entire history of human evolution.

    2. cdegroot

      Re: Human alternative

      Just what I wanted to say - the consensus seems to be that the brain uses around 20 watts. I'll call this applied mathematics stuff "artificial intelligence" as soon as they approach the intelligence of, say, a dog using, say, 20 kilowatts.

      1. Mike 137 Silver badge

        The arthropod alternative

        I've been following developments in "artificial intelligence" since the mid-80s and I annoyed several pundits back then by saying "when you can build a bot the size of a bumble bee that can do everything a bumble bee can do with the endurance and performance of a bumble bee, then you'll have accomplished something".

        So far as I know this hasn't yet been achieved, so creating an adequate replica of human mentation still seems quite some way off.

        1. msobkow Bronze badge

          Re: The arthropod alternative

          Personally I took AI back in 4th year university in the fall of '86. Back then, both they and the fusion research teams were saying they'd be in production in "about 30 years."

          It is 2021. As my calculator counts it, that is 35 years.

          And they're both still saying "in about 30 years."

          No, we don't have intelligence on this planet *period* if people think a statistical pattern matcher is "intelligence."

        2. jmch Silver badge

          Re: The arthropod alternative

          That's a very high bar. Certainly a bumblebee level of intelligence is still quite far away (let alone with comparable power use).

          But a bumblebee bot requires far more than rudimentary intelligence. I'm not sure even now we can build a miniature drone that size which has the endurance of a real bumblebee (particularly collision resistance and time needed between feeds/recharges)

          1. LionelB

            Re: The arthropod alternative

            Cf. Orgel's Second Rule: "evolution is cleverer than you are".

            (I believe some wag added a third rule, which is that "Leslie Orgel is cleverer than you are".)

            1. veti Silver badge

              Re: The arthropod alternative

              Evolution takes centuries of time and hundreds of thousands of lives. If we're talking about efficiency, it's many, many orders of magnitude more expensive than any AI training regime.

              1. LionelB

                Re: The arthropod alternative

                Indeed; see my previous post.

          2. Mike 137 Silver badge

            Re: The arthropod alternative

            My observations of bumble bee intelligence suggest it's very like modern AI. They seem to develop a visual/olfactory template reinforced by trial-and-error success rate. When something changes, they're flummoxed. For example: I used to have bumble bee nests in my wood shed. The door was a bit ancient and had a quite small hole rotted in its bottom which they used to get in and out. However, with the door wide open, they hovered about, unable to cross the threshold in either direction.

      2. LionelB

        Re: Human alternative

        And how much energy do you think training the human brain requires? (You might want to factor in not just lifetime learning, but also the entire history of human evolution.)

      3. Jaybus

        Re: Human alternative

        Let's be more generous than that. Dogs are among the most intelligent beings on the planet. I'd be willing to call it AI if it approached the intelligence of a frog, regardless of power consumption. Consider, though, if it were possible to dedicate 100% of the processing power of a frog brain to a single task. I feel certain it would be capable of, say, driving a Tesla.

  4. Neil Barnes Silver badge

    A language processing model might be able to understand

    Understand? Or just transcribe? I know which way I'm voting; with Pascal.

    1. LionelB

      Re: A language processing model might be able to understand

      If you can tell me what "understand" actually means.

      Anyway, I'll be voting with C - Pascal is decent as a teaching language, but not so useful in the wild.

      1. Jaybus

        Re: A language processing model might be able to understand

        To be safe, perhaps we should just vote with paper ballots.

  5. Anonymous Coward
    Anonymous Coward

    a hotdog shaped van

    ... perhaps teach it to recognise both vans and hotdogs? Then it could decide which is more likely.

    Of course the number of "other things with an appearance somewhat like a hotdog" might be quite large, leading to an equally large, albeit slightly different, training problem :-)

    1. EarthDog

      Re: a hotdog shaped van

      https://www.oscarmayer.com/wienermobile

  6. Mike 137 Silver badge

    "we're not doing enough to make AI more energy efficient"

    Has anyone examined the quality of the machine code? AI is probably written in an abstracted high level language - very likely using external libraries - and almost certainly this leads to bloat as in all other software these days.

    1. EarthDog

      Re: "we're not doing enough to make AI more energy efficient"

      Most programmers I've met have been really sloppy. Now imagine if you will layers upon layers of sloppy code from firmware level, to frameworks and libraries, to the actual AI implementation...

    2. Jaybus

      Re: "we're not doing enough to make AI more energy efficient"

      "AI is probably written in an abstracted high level language"

      Worse. They are mostly written in Python.

  7. Edwin

    The bigger processing power waste

    is in cryptocurrency, where the processing load for mining is pure waste.

    Wouldn't mind seeing that addressed, but I suspect most crypto miners don't care and it would presumably be impossible to regulate.

    1. Cav

      Re: The bigger processing power waste

      This is exactly what I was going to comment. At least AI modelling has a useful outcome. If we are concerned about the environment, as we should be, then Bitcoin, etc, mining should be frowned upon far more than AI. Similarly, pointlessly looking at cats online or playing video games.

      1. JDPower666

        Re: The bigger processing power waste

        It is, but this article is about 'AI'

  8. msobkow Bronze badge

    If you want tight code, hire a seasoned programmer who understands algorithm analysis and tuning.

    If you want "good enough" code, hire body-shop programmers to churn something out for you.

    If you have an unlimited CPU budget, throw AI pattern matching at simple analytical problems; that'll learn 'em.

  9. Il'Geller

    The idea is to make personal AIs, using personal computers and phones. If that's done, the demand for them (as well as for energy for the AIs' creation) will be significantly decreased - I believe by many tens of times. Why?

    Such personal AIs can be trained much faster, because all the required annotations can rapidly be obtained by following the owners' textual habits. This allows enormous money to be saved! And it is good for energy consumption, ecology, etc.

  10. EarthDog

    More data !=better models

    Dirty and biased data will lead to a bad model. Adding more dirty and biased data does not help at all. Meanwhile a smaller data set of clean and representative data will give better results.

    It also looks like they are just discovering the concept of parameter sensitivity analysis. If a parameter doesn't cause an appreciable change to the model, either throw it out or stop training it.
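    A crude one-at-a-time sensitivity check along those lines might look like this - the model, threshold, and coefficients below are purely illustrative:

```python
# One-at-a-time sensitivity analysis: perturb each input feature,
# measure how much the model output moves, and flag insensitive
# features as candidates for removal.
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # Hypothetical model: depends strongly on x[0] and x[1],
    # weakly on x[2], and not at all on x[3].
    return 3.0 * x[0] - 2.0 * x[1] + 0.001 * x[2] + 0.0 * x[3]

x0 = rng.normal(size=4)   # baseline input
eps = 1e-3                # perturbation size

sensitivity = []
for i in range(len(x0)):
    x_pert = x0.copy()
    x_pert[i] += eps
    # Finite-difference estimate of |d(output)/d(feature i)|
    sensitivity.append(abs(model(x_pert) - model(x0)) / eps)

keep = [i for i, s in enumerate(sensitivity) if s > 0.01]
print("Keep features:", keep)
```

    In practice you would average over many baseline points (or use a proper global method such as Sobol indices), but even this crude version flags features 2 and 3 as dead weight.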

    1. CrackedNoggin Bronze badge

      Re: More data !=better models

      Some day Mommy won't be around to spoon feed the b-AI-by.

    2. Mike 137 Silver badge

      Re: More data !=better models

      "Dirty and biased data will lead to a bad model"

      Even ambiguous data is a problem, as quite possibly is training and even some underlying algorithms. There's a recognised condition called underspecification, where use of machine learning yields multiple equally weighted alternative predictors in training but they diverge in the operational context.

      It was even posited mathematically in 2014 that any system ultimately running on a Turing machine is intrinsically incapable of resolving certain problems.

  11. Anonymous Coward
    Anonymous Coward

    "More importantly, your autonomous vehicle must be able to stop in dangerous conditions that rarely ever arise."

    Like emergency vehicles sticking out into the road, men in reflective jackets, or an 18-wheeler truck pulling out? Sorry, progress cannot wait.

  12. Anonymous Coward
    Anonymous Coward

    From what I understand, the Google NNs in the study were being used for enormous language models. One proper training run cost a million dollars in electricity.

    But how often was that done? Probably not everyday. Maybe once a month? To me it sounds worthwhile in terms of research. It's a project involving hundreds or thousands of people, and the results are shared.

    In the end most human activity uses resources, and there is a lot of waste in there. How many jetloads of people fly across the country each year to party at Miami Beach? It's many thousands, I am sure.

  13. Draco
    Joke

    Sounds "Dodgy" to me

    (Do I really have to explain how I got from Jesse "Dodge" to the title?)

  14. Robert Grant Silver badge

    It found that the cost of training BERT on a GPU in carbon emissions was roughly the same as a trans-American jet flight.

    Activate journalism mode:

    a) Is that a lot? (El Reg unironically using hard-to-follow units?) How many of those flights happen every day?

    b) What power source is that based on? Lots of these companies use carbon-neutral fuel sources, so do we know the provenance of the energy used for this task?

    Deactivate journalism mode.

  15. a_yank_lurker Silver badge

    Misuse of Artificial Idiocy

    I often wonder how often AI is misused in situations where there is an efficient, simpler solution. In the cooking-rice example, couldn't a relatively simple web search pull up sites with instructions on how to cook rice? Also, as was pointed out above, GIGO still rules even with AI.

    On hot dogs: dachshunds are often called wieners (hot dogs) over here in Feraldom. Plus it is also a synonym for show-off. Context matters - can AI properly discern the context?

  16. ecofeco Silver badge

    There is also the human labor

    Perhaps I skimmed too fast, but I did not see mention of the legion of Mechanical Turks, and their counterparts, who work 24/7 to assist current "AI."

    They too are using stunning amounts of power.
