AI to replace 2.4 million jobs in the US by 2030, many fewer than other forms of automation

Generative AI will replace 2.4 million US jobs by 2030, and influence another eleven million, but other forms of automation will cost more jobs, according to a report from analyst firm Forrester. The firm’s 2023 Generative AI Jobs Impact Forecast [paywall] predicts that the tech will reshape more jobs than it replaces, but …

  1. ethindp

    Yeah, I highly doubt this. AI has been in research and development for decades and it still can't display any actual form of intelligent reasoning. It's just a fancy statistics prediction algorithm. That's it.

    1. veti Silver badge

      Exactly how much "intelligent reasoning" (whatever that means) does the average white collar job call for, anyway?

      If "the ability to string together a coherent argument" is a metric we care about, I think ChatGPT is already about on par with the average college graduate. (Better than an engineer or statistician, probably not as good as a mathematician or historian.)

      What it lacks is the kind of experience that comes with having a body and living in reality. I don't know how to make good that deficiency, but there are people cleverer than me working on it, so I figure it'll probably be pretty soon now.

      It seems to me that practically everything the brain does can be reduced to pattern matching. I'm not convinced there is anything more to what we call "intelligence" than that.

      1. jmch Silver badge

        "It seems to me that practically everything the brain does can be reduced to pattern matching. I'm not convinced there is anything more to what we call "intelligence" than that."

        It's difficult enough to nail down a definition of 'intelligence'. Mostly we associate intelligence with intelligent-looking behaviour (not necessarily the same thing, see Turing test).

        It might be ultimately all pattern matching, but at a level far beyond anything ever dreamed of in mathematics / computer science, let alone the current 'AI' models.

        1. Hairy Spod

          I think that someone once said something along the lines of, "if the human brain were so simple that we could understand it, we would be so simple that we couldn't."

      2. that one in the corner Silver badge

        > It seems to me that practically everything the brain does can be reduced to pattern matching.

        You need to allow for pattern creation; that is the interesting bit. Pattern matching goes on, but the patterns have not (yet) been determined to be intrinsic. Reference the experiments on development in environments lacking vertical or horizontal stimuli, and the subsequent inability to recognise those stimuli later on.

        Neural plasticity is not about pattern recognition; it is about (among other things, let us not be too simplistic) creating the ability to admit or prune signals based upon apparent significance, and then setting up the means to recognise patterns in those signals.

      3. Bebu

        Good point

        《What it lacks is the kind of experience that comes with having a body and living in reality.》

        A good point. How does one fully define "pain" to a piece of software - for a human a bloody good whack will do it. :)

        More seriously, pain, and more importantly its perception or experience, is quite complicated. Ticklishness might be interesting - heaven only knows what these systems make of "tickle your fancy."

        Humour in its many forms defeats a good many humans so I imagine a fairly bleak machine interpretation of "black humour" or even the notorious English dry humour which confounds more than a few septics.

        The question of reality (which according to physicists ain't what it used to be :) is probably more of a challenge. We take a lot of the world we live in for granted but the whole humungous body of science is just our mere scratching at the surface of that world. I assume we exist at relative ease in such a complicated world partly because we are a product of it.

        The bar graph in the article, from top to bottom, is a pretty good rank ordering of least to most useful. Hard to imagine what use a plumber would have for AI - a 1/2" shifter is a much more useful tool (can even re-adjust attitude :)

        Some of the categories are a mixed lot - I suspect we could do better with fewer architects but with more engineers.

        Poets, like most callings (vocations), are usually fairly safe from the hazard of making a decent living, so the risk of being replaced by a machine is infinitesimal. A professional Vogon poet "does not compute."

    2. jmch Silver badge

      "It's just a fancy statistics prediction algorithm. That's it."

      Yes it is, and it's in no way 'intelligent'.

      That doesn't mean that it can't do certain tasks well enough to replace a human. Or to have a more expert human + AI replace a far larger number of less expert humans.

      What is really interesting about the analysis is that the people historically most affected by advances in technology have been lower-paid manual workers... but because robots are very, very good at high-precision, repetitive, controlled movement on command, and really bad at deciding which movements to make to achieve a desired outcome when not told what to do, a lot of jobs considered menial labour are far more resilient to replacement by AI than traditionally higher-paying office jobs.

      1. Neil Barnes Silver badge

        Yah, but doesn't your little heart bleed for the lawyers?

      2. LionelB Silver badge

        > Yes it is, and it's no way 'Intelligent'

        Hmm... well, good luck trying to pin down what "intelligence" means.

        (Hint: this has been debated for millennia, in practice means something slightly different to anyone you ask, while attempts to elucidate its nature tend to get mired down by terms like "understanding" which merely shunt the explanatory burden down another metaphysical rabbit hole, ending up in circular argument.)

    3. LionelB Silver badge

      Well, (human) intelligence may turn out to be "just [sic] a fancy statistics prediction algorithm" writ large -- very large! There are plausible and (to some degree) testable theories of biological cognition and intelligence which posit something along these lines, and which are receiving serious consideration; see, e.g., predictive processing theory.

      Of course I'm not saying current AI is remotely in the ballpark of human-level cognitive and intelligent abilities; unsurprising, given the scale of information processing in the human brain (80+ billion neurons, 1,000+ trillion synapses), its phenomenal energy efficiency, and its billions of years of evolutionary "design" (plus lifetimes of learning) with vast real-world-experience "training data". What I am saying is that I don't believe there is any kind of "secret sauce" to human intelligence that we cannot in principle engineer -- and that current statistical models may actually be moving in the right direction.

      1. katrinab Silver badge

        I don't think the problem with mimicking the human brain is processing capacity. In any case, a datacentre full of A100s, I'm pretty sure, can easily match that.

        I think the problem is that the human brain doesn't work on boolean algebra and can't be represented in boolean algebra, no matter how many trillions of instructions per second you execute.

        Take an example: ask ChatGPT to multiply two 4-digit numbers together. A calculator from the 1970s can do that way quicker than a human brain, but ChatGPT can't do it at all, unless the exact numbers you give were in the training data. The human with the 1970s calculator can figure out very easily which buttons to press on the calculator to get the right answer; ChatGPT can't.

        1. LionelB Silver badge

          > I don't think the problem with mimicking the human brain is processing capacity. In any case, a datacentre full of A100s, I'm pretty sure, can easily match that.

          Haven't run the figures, but I actually doubt that. 100 billion neurons and 100 trillion synapses is quite a lot (!) Plus consider the energy consumption of that datacentre compared with a single human brain...

          > I think the problem is that the human brain doesn't work on boolean algebra and can't be represented in boolean algebra, no matter how many trillions of instructions per second you execute.

          Not sure that's the issue either; information transfer in neural systems is in the form of discrete neural "spikes" propagating along axons and across synapses (albeit in analogue real time). That is a situation which most certainly may be - and in fact is, routinely - modelled in digital computers. (I have even done so myself.) It would not be hard, if it hasn't already been done, to devise hardware to do that very efficiently.
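
          As a toy illustration of the sort of model I mean - a minimal leaky integrate-and-fire neuron, with made-up parameters, standing in for no particular published model - a handful of lines suffice to simulate discrete spikes on a digital machine:

```python
def simulate_lif(current, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays
    toward rest, integrates input current, and emits a discrete,
    all-or-nothing spike whenever it crosses threshold."""
    v, spikes = v_rest, []
    for t, i_in in enumerate(current):
        v += dt * (-(v - v_rest) / tau + i_in)  # Euler step of the membrane equation
        if v >= v_thresh:
            spikes.append(t)  # record the spike time
            v = v_rest        # reset after firing
    return spikes

# Constant drive produces a regular spike train.
spike_times = simulate_lif([0.15] * 100)
```

          Real spiking-network simulators are far richer, of course, but the point stands: the analogue-in-time, digital-in-amplitude character of spiking is routinely captured on ordinary computers.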

          > Take an example, ask ChatGPT to multiply two 4 digit numbers together. A calculator from the 1970s can do that way quicker than a human brain, but ChatGPT can't do it at all, unless the exact numbers you give were in the training data. The human with 1970s calculator can figure out very easily which buttons to press on the calculator to get the right answer, ChatGPT can't.

          I imagine it would be rather easy to train ChatGPT to be able to figure out which numbers to press on a calculator for arbitrary numbers and a given arithmetic operation - or if not ChatGPT, certainly some other AI system. Hell, if existing AIs (read "machine learning algorithms") can beat humans at chess or Go, or figure out for themselves how to become human-level players of Atari games from nothing more than raw pixel access and some feedback, how hard can that be?
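
          For what it's worth, the now-standard trick is not to teach the model arithmetic at all, but to let it delegate: the model learns to emit a structured "button press" and an exact external calculator does the sum. A toy sketch of that division of labour (the function name is my invention, not any real API):

```python
import re

def calculator_tool(expression: str) -> int:
    """Exact integer arithmetic - the part the language model is bad at."""
    m = re.fullmatch(r"\s*(\d+)\s*([*+-])\s*(\d+)\s*", expression)
    if not m:
        raise ValueError("unsupported expression")
    a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
    return {"*": a * b, "+": a + b, "-": a - b}[op]

# The model's only job is to produce the right call; the tool is always exact.
print(calculator_tool("1234 * 5678"))  # 7006652
```

          The model never needs the answer in its training data - only the far easier pattern of when and how to reach for the calculator.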

          I do take one point you make: that it may not all be about scale. We know astonishingly little about the organisation of information processing in brains, aspects of which may be crucial to human-level general intelligence. But, as I suggested, current AI techniques such as the transformer systems used in LLMs - and perhaps predictive coding-style systems - may be on the right track. Time will tell.

          1. katrinab Silver badge

            Fortran had it figured out in 1957, but that is a programming language.

            My point is, that we are not yet at a place where a computer can read a load of books on a subject and actually apply the knowledge to problems rather than just regurgitate it. For this reason, I think LLMs are fundamentally flawed, and while there may be use-cases for them, it is far from the magic bullet that some people think it is.

            1. LionelB Silver badge

              > Fortan had it figured out in 1957, but that is a programming language.

              Sorry, I don't follow (I cut my teeth on Fortran 66, as it happens :-))

              > My point is, that we are not yet at a place where a computer can read a load of books on a subject and actually apply the knowledge to problems rather than just regurgitate it.

              Well, not a load of books - rather, petabytes of internet stuffs. And you're mistaken if you think LLMs are simply "regurgitating" - it's far more subtle than that (have a look at how transformer models function). "Datamining linguistic associations to generate human-like responses to textual input" gets closer, but that doesn't really do it justice either.
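
              For a flavour of what "more subtle" means: the heart of a transformer is scaled dot-product attention, where each output is a freshly computed, context-weighted blend of value vectors - nothing is looked up or copied verbatim. A bare-bones sketch with toy, hand-picked vectors (no learned weights, just the mechanism):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    es = [math.exp(x - m) for x in xs]
    total = sum(es)
    return [e / total for e in es]

def attention(queries, keys, values):
    """Scaled dot-product attention: softmax(q.k / sqrt(d)) weights over values."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)  # how strongly this query attends to each key
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# One query attending over two key/value pairs: the output is a blend, not a copy.
out = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
```

              In a real LLM the queries, keys and values are themselves learned projections, stacked over dozens of layers - which is where the subtlety comes in.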

              > For this reason, I think LLMs are fundamentally flawed, and while there may be use-cases for them, it is far from the magic bullet that some people think it is.

              I don't think of LLMs as "flawed" - for that I would need to know what they are supposed to be doing - and I really am not sure. So, for instance, a flawed sort algorithm is an algorithm that doesn't sort properly... but what's a flawed Large Language Model? What exactly is it that it is failing to do properly? Churn out nonsense in rather good grammatical prose? No... they're rather good at that. I really do not know what the use-cases might be.

              One thing they don't claim to be, however (despite the mutterings of some unbright people), is some kind of general AI. I really don't think anyone the right side of clueless truly believes them to be a magic bullet. This is not to say, though, that some of the mechanisms they deploy - like transformer models - may not turn out to be useful building blocks in some future artificial general intelligence. Apart from anything else, the capacity to build associative networks of abstractions built on language (essentially what transformer-based LLMs do) actually sounds like rather a useful attribute for a putative AGI.

    4. that one in the corner Silver badge

      It is a hard and unsung life, being an AI researcher.

      They take a problem, one that hasn't been solved before by anything other than a human being (and even then, not by all humans). As soon as they find a way to emulate the behaviour, then publish how they did it, everyone is all "well, *that* is obvious, isn't it; call this AI research? All you've done is just...".

      That is just - an "obvious" way to prune search trees, a statistics prediction method, a pattern matching exercise (*and* you threw away most of the image to do it).

      If it can be made to work, then it isn't an AI method any more, it is just common sense; you failed, try again (or not, depending upon how vitriolic the person feels).

  2. DS999 Silver badge

    That graph makes this look like a good thing

    Since lawyers are #1 on the hit list!

    1. Version 1.0 Silver badge

      Re: That graph makes this look like a good thing

      "What the hell difference does it make, smart or dumb? There were good men lost on both sides." (quote updated) So why are Politicians not on the list? I think AI would be extremely popular if we could just eliminate politicians entirely on all sides, not just workers.

      1. Anonymous Coward

        Re: That graph makes this look like a good thing

        > So why are Politicians not on the list?

        Because politics is all about emotions and experience (not the politician's experience, but what he gives the people to experience); intelligence and ability play no role at all, other than to guide how they pull at the strings.

        Which puts it entirely outside the scope of any report that purports to look at, and compare, the abilities of "intelligence" (or perhaps "intellect" as opposed to "emotion"?), both natural and artificial.

        > I think AI would be extremely popular if we could just eliminate politicians entirely on all sides

        Sadly, no. People will always want someone to follow and to get all riled up about. If we come up with an "AI politician" it will just be as vacuous as all the rest. No improvement whatsoever.

  3. amanfromMars 1 Silver badge

    The Gazillion Dollar Question ‽

    .. most at risk of being left behind will be technical writers, social science research assistants, proofreaders, copywriters, ... .... Katyanna Quach

    Is that bad or good news likely to directly impact The Register workplace/space, KQ? Is anywhere or anything really immune from being virtually effected and infected and directed by future Generative AI Developments?

    And when/if there isn’t, and you can do nothing in any way effective about it, who/what will be leading y’all?

  4. Anonymous Coward

    So we’ll be stuck at 2022 forever?

    Since generative AI just regurgitates text based on whatever it hoovered up on the Internet in the last couple of years.

    1. LionelB Silver badge

      Re: So we’ll be stuck at 2022 forever?

      More like a two-year rolling window of bollox, surely?

    2. Howard Sway Silver badge

      Re: So we’ll be stuck at 2022 forever?

      No, we'll go continually backwards, because in two years' time you're training it on stuff it produced itself, magnifying inaccuracies and errors, increasing the entropy, until eventually it's all just gibbering nonsense.
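
      That feedback loop is easy to demonstrate in miniature: repeatedly fit a simple model to its own output and watch the diversity drain away. A toy sketch, with a Gaussian standing in for the language model (nothing here is specific to LLMs):

```python
import random
import statistics

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(10)]  # generation 0: "human" data
initial_spread = statistics.stdev(data)

# Each generation is trained only on the previous generation's output.
for _ in range(200):
    mu, sigma = statistics.mean(data), statistics.stdev(data)
    data = [random.gauss(mu, sigma) for _ in range(10)]  # model-generated "text"

final_spread = statistics.stdev(data)  # the spread has collapsed towards zero
```

      Sampling error compounds generation after generation, so the fitted distribution drifts and narrows - the statistical analogue of errors magnifying until the output no longer resembles the original data.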

      1. LionelB Silver badge

        Re: So we’ll be stuck at 2022 forever?

        He, he. Roll on that day - a "bogosity singularity" might just knock a few heads together.

      2. that one in the corner Silver badge

        Re: So we’ll be stuck at 2022 forever?

        > we'll go continually backwards

        The BBC showed us the result already: The History Of The World Backwards.

        "set in a world where time flows forwards, but history flows backwards"

    3. LionelB Silver badge

      Re: So we’ll be stuck at 2022 forever?

      One point: no argument that current LLMs have a tendency to churn out guff (even if impressively fluent, articulate and grammatical guff), but one thing they are not doing is simply "regurgitating" chunks of text yanked off the internet - a common misconception.

      Do find out how transformer models - which underpin the likes of ChatGPT - actually function. It's rather more subtle (and interesting) than you might imagine.

  5. mark l 2 Silver badge

    Sounds like all those lawyers and office workers should start retraining as plumbers or truck drivers, as those jobs seem pretty safe for now. I don't know about you, but I barely trust ChatGPT for basic facts, never mind letting it be in charge of a 20-ton truck at 30mph.

    1. ChrisElvidge Bronze badge

      Whatever it says about lawyers, it won't be the Perry Masons of this world that get the chop. It will be the researchers/paralegals(?). ChatGPT will never appear before a judge.

      The real problem may come about when lawyers-in-training do not get trained by "real" lawyers.

  6. Dinanziame Silver badge

    Forrester’s analysts reckon that workers in more creative industries, like editors, writers, authors and poets, and lyricists, are more likely to incorporate generative AI tools in their jobs and are less likely to be replaced.

    I think that's breathtakingly naive. The point is that generative AI lowers the bar so much that anybody with half a brain can create content that passes if you squint. Maybe true art will always be superior, but it's going to be overrun by an ocean of generated crap. The same happened to professional photographers when quality digital cameras became widespread. It used to be that photography was expensive, and it took hours of development to know whether a shot was good. That's why you needed professionals who knew what they were doing, and they got paid well. Nowadays anybody with a phone can take dozens of pictures in a minute, fix issues with a couple of filters and upload them to Getty. Most of that is crap, but there's so much of it that you can find a reasonably good image for cheap, so professionals cannot earn a living wage any more.

    Searches for "professional photographer" have decreased 60% over the past 20 years

    1. Casca Silver badge

      Same with movies. Before digital production even B-movies had some worth. Now it's just trash.

    2. cmdrklarg

      Essentially they will be putting the Infinite Monkey Theorem to the test. https://en.wikipedia.org/wiki/Infinite_monkey_theorem

  7. that one in the corner Silver badge

    [Their] strategy should include investments, guardrails, and checkpoints

    > the report concluded

    Amazing.

    Such a novel and well-reasoned course of action, totally unlike the way any company has ever responded to anything else.

    This conclusion alone means that the report is worth reading. How they manage to think this stuff up just boggles the mind.

  8. amanfromMars 1 Silver badge

    An Obvious Existential Threat or an Absolutely Fabulous Fabless Treat?*

    Should ever anyone think longer and harder and deeper on the problem that humans suspect and/or expect and fear advancing AI to present and trial and exhibit pioneering trail work in a vast novel series of unprecedented promotions and operations, would the disturbing answer they arrive at tell them that AI is neither worried nor hindered at all by human concerns about its relentless compounding progress, with such being accepted and regarded as a systemic weakness in the evolution of their prior programming ...... a lamentable fundamental glitch ...... which renders them stuck fast in the past with just formerly glorious memories dictating reactions to emerging greater forces with advanced intelligence and revealing revolutionary insider information on remote secure, future leading, programmable events?

    *And whenever such can also so easily be an amalgam of both at the same time for any particular or peculiar place or space to be something completely different, what do you imagine would be the quantum result?

    1. amanfromMars 1 Silver badge

      Re: An Obvious Existential Threat or an Absolutely Fabulous Fabless Treat?

      And those two awkward questions, for ignoring at one's peril, are posed to whomever/whatever imagines itself to be in effective proactive command and remote virtual control of intelligence and future likely events on Earth.

      For have you not noticed, things are changed.

      And as Silence accompanies Stealth and Proves IT a Deadly Cloaked NINJA** Assassin, Speak Up, Share Your Dreams, Snare Your Nightmares.

      ** ..... Networks InterNetworking JOINT* Applications.

      * ...... Joint Operations Internetworking Novel Technologies

  9. Omnipresent Bronze badge

    As the wheel turns.

    There is a popular music technology site that has been around forever. It's typically haunted by preset designers selling templates and sound presets to psytrance kids with too much time and money, but it also has a large collection of software developers who make the music apps (all in cahoots, you understand). So, recently a young man jumped on there and asked a simple question in the "beginners" forum that set hairs on end. The thread went something like:

    "Do I need all this DAW and software, or should I just bang out AI music on youtube."

    This has the old guard, who have been spitting out prefabbed music for the last 20 years, thoroughly on edge, and the hilariousness of it all has me coming back for more. Nothing matters when nothing is real. Life itself is meaningless.

  10. katrinab Silver badge

    The one job that generative A"I" can definitely replace is business analysts, because hallucinating plausible-sounding but completely wrong soundbites is exactly what they do.

    1. LionelB Silver badge

      ... and every now and then they catch you by surprise and chug out something that makes perfect sense (the LLMs, that is, not the business analysts).

      1. amanfromMars 1 Silver badge

        A Quite Logical Progression Bound to Be Happening and Likely Classified TS/SCI Problematical

        ... and every now and then they catch you by surprise and chug out something that makes perfect sense (the LLMs, that is, not the business analysts). ..... LionelB

        Indeed they do, LionelB, with the probable future likelihood being every now and then progressing and expanding at pace and scale to transform itself to be a great deal more often than was ever thought by humans to be machinely possible ‽ .
