Generative AI won't steal your job, just change it, says UN

Generative AI will probably not replace most current workers, with its impact instead confined to automating some tasks for a minority, according to a report released on Monday by the International Labour Organization (ILO), the United Nations agency that develops standards for the world of work. The report, titled "Generative …

  1. Anonymous Coward

    I'd love to throw our policies and guidance into a generative AI; and then start asking it questions about it.

    Though knowing full well that authoring practices do not enforce cross-checking of cross-references between documents (just like the ISO standards), the outcome is somewhat inevitable.

    Garbage in, garbage out. The best use for generative AI might be to identify and highlight those errors in the underlying sources.

  2. Steve Button Silver badge

    We just don't know.

    The thing about predictions is they are very hard, especially when you are predicting the future.

    Back in the early 1970s, people building cars in factories probably mocked the robots being developed, as they did a pretty crap job. Much like people currently mock the output of LLMs. However, after years of refinement we are now at the point where no sane company would consider mass-producing cars without using robots. A couple of caveats, though: 1) today's cars have way more stuff in them than cars in the 1970s did, and 2) many of the jobs have moved up the value chain into things like design.

    We now have a consistently better product, which does far more than the previous generation of cars could. Cars in the 1970s used to regularly break down / overheat or just plain fall apart from the rust. They would not start on a cold morning. A bit like software today.

    So, perhaps in 20 years time we'll be writing software which does what it's supposed to do, in a much more secure and reliable way? We probably won't spend as many hours writing code or tests as much of it will be done by AI?

    And probably the same thing for accountants, lawyers, cleaners and hundreds of other roles. So, yeah I guess I kind of agree with the UN on this one. Although try telling that to the thousands of car workers in Detroit and many other places who suddenly found themselves unemployable.

    On the other hand I could be completely wrong, as I said predictions are hard.

    1. Red~1

      Re: We just don't know.

      There are only three hard things in Computer Science: cache invalidation, naming things and predicting what AI will do to your job.

      1. Steve Button Silver badge

        Re: We just don't know.

        I thought it was TWO hard things in Computer Science? Cache invalidation, naming things and off by one errors.

    2. DS999 Silver badge

      Re: We just don't know.

      So, perhaps in 20 years time we'll be writing software which does what it's supposed to do

      No, we won't. Because the problem of writing software that doesn't do what it is "supposed" to isn't the programmer's fault. It is the gap between the architects gathering requirements and the users providing those requirements. The users generally don't know what they want, beyond "what the current software does plus a couple of pet features I've always wanted". They generally don't provide useful feedback until you give them something to test, and then they can very easily tell you what they DON'T want it to do - but still not what they DO want it to do, other than "not that!"

      Even if AI advanced to be the equal of the average programmer with 20 years of experience it wouldn't be any better at writing software that does what it is supposed to. It would probably be worse, as it would EXACTLY follow the requirements instead of thinking "this requirement seems a bit limiting, with a bit more work I can make it tons better".

      1. amanfromMars 1 Silver badge

        Re: We just don't know.

        No, we won't. Because the problem of writing software that doesn't do what it is "supposed" to isn't the programmer's fault. It is the gap between the architects gathering requirements and the users providing those requirements. The users generally don't know what they want, beyond "what the current software does plus a couple of pet features I've always wanted". They generally don't provide useful feedback until you give them something to test, and then they can very easily tell you what they DON'T want it to do - but still not what they DO want it to do, other than "not that!" ... DS999

        Smarter architects providing clueless users, without the useless clueless third party user consultation process, their future requirements would solve the enigmatic dilemma, DS999 ...... although admittedly that might be a problem for architects without such missing third party smarts.

      2. Steve Button Silver badge

        Re: We just don't know.

        Agree with most of what you've said there apart from the "No, we won't". You just can't know that.

        It might be that in 20 years' time most software is developed by sitting alongside the people who get to decide what the requirements are. As those requirements are written down, the software is developed (with the aid of AI) and presented to the user in near real time, and they can keep saying "Not that!" until you get fed up with asking "Well, how about like this then?" It might even make guesses based on similar software. At that point the job of the "programmer" becomes steering the AI in the right direction, although at some point in the future they might decide to cut out the middle man.

        As for AI advancing to the equal of a programmer with 20 years' experience, it seems we're a long way off from that with current models, although it *could* keep getting better to the point where, in the distant future, it becomes far better than any programmer. It might take decades or centuries, but it will happen at some point.

        After that we're all just pets.

        1. DS999 Silver badge

          Re: We just don't know.

          I'm not worried at all about AI becoming better than humans in my lifetime. That's a problem for Gen Z or more likely their descendants.

          We don't even understand how human intelligence works, and until we do it's a pipe dream to try to make an intelligent machine, rather than one that's better at collating data in amounts too massive for the human brain, but only in limited ways.

    3. Filippo Silver badge

      Re: We just don't know.

      All true. There's a critical difference between industrial robots in the '70s and LLMs today, though.

      Robots were very well understood, with fairly clear paths to improvement, with most if not all of the obstacles being technical in nature. When they did something bad, you could often point at the misbehaving bit and say "that bit, make it better". Iterate that, and you get better robots.

      LLMs are poorly understood, with no clear path to improvement except making them bigger and therefore more expensive (there are other paths, but they are very murky), and some of the obstacles are at the theoretical level with no definite solutions (e.g. so-called hallucinations, or the inability to learn after the training phase). When they do something bad, all you can do is shrug and have a human take over.

      At any time someone could publish a paper proving that LLMs cannot improve any further merely by being made bigger, and that will be pretty much it until the next big theoretical breakthrough - which might come a year later, or ten, or a hundred.

      Or, hey, the next big theoretical breakthrough could be about to be published right now, and the singularity could follow a month later. All of this is to say that predictions are very hard, and nowadays are even harder than in the '70s.

  3. Flocke Kroes Silver badge

    LLMs will create jobs

    Companies need extra staff to correct the mistakes made by LLMs.

    1. Anonymous Coward

      Re: LLMs will create jobs

      They won’t need staff to correct the mistakes, but rather just to assume liability.

  4. heyrick Silver badge

    AI models cannot do all the work required in most roles

    Of course, what they fail to consider is that an AI model can do part of several roles, so they'll simply consolidate the roles, lay off half the staff, and the rest can work harder to fill in the gaps that AI doesn't do (and harder yet because ultimately AI will be a lousy replacement for a person).

    You can see evidence of this sort of thing already - just try contacting pretty much any large company for assistance. They run skeleton helplines these days, because most of the front line is an automated system designed to maliciously interfere with any hope you had of talking to anybody with a functioning brain. More so if, after the machine, you then have to wrestle with a support droid reading cue cards.

  5. Mike 137 Silver badge

    ISCO classifications are the weakness

    The paper is well researched in general, but the ISCO occupation classifications it uses leave a lot out (e.g. there are no codes for anything explicitly related to risk management). As a result there are serious holes in the profiles (p. 21), and consequently in the overall analysis, because it is precisely such occupations that are most sensitive to AI takeover.

  6. shawn.eary

    Who's in Charge?

    No need to worry about generative AI. It will merely engineer the next strain of COVID. The World Health Organization (WHO) will then needlessly lock you up in your home until you die of obscure diseases (other than COVID) while YouTube censors all free speech and objective academic discussion surrounding the topic.

  7. joannerosanne

    Using AI in Software Testing

    Using AI in software testing is the future of the testing industry. With many outstanding capabilities, artificial intelligence promises to optimize the testing process and the efficiency of the software testing phase.
