back to article Safe CEO: AI is an assistant, not a replacement

If AI can take on the role of a junior programmer, what happens when senior staff start retiring? Industry veteran and CEO of Safe Software, Don Murray, reckons the technology is becoming indispensable, but the human can never be removed from the loop. "I think as an assistant," says Murray, "AI is indispensable. But it is not …

  1. LucreLout

    Not convinced

I'd like things to play out the way the author suggests, but unfortunately I think he's wrong, much as I want him to be right.

AI does do a lot of what juniors do in many professions, including making lots of mistakes. The real debate is: by the time today's juniors become tomorrow's seniors (actual seniors, not juniors with 5 years' experience and a wholly inappropriate job title), what will AI have become? For some, perhaps many, roles it's not inconceivable that it'll be capable of doing senior-level work, and so the need for seniors will have been reduced or absorbed.

    Translators, for example, will never be needed in anything like the same quantity as before AI, if at all by the time a junior translator hopes to become a senior translator in say 15 to 20 years.

In terms of software engineers, it's very hard to argue that AI hasn't outpaced its entire cohort of junior devs in terms of quality of output. There's a credible argument that it won't continue to do so, but there's an equally plausible argument that it will. While personally I think there will still be a need for senior engineers, there is a limit to how much money I'd want to bet on that. What if, instead of investing in today's juniors, a business saves that cash (literally) and uses it to fund hiring tomorrow's scarcer senior staffer? There's no reason to think that those investing in juniors will reap any reward when they become seniors, because they'll go for the money, same as everyone else.

The hype around AI is well overblown, for sure, and it's definitely not going to be all things to all people, but there are a lot of careers never coming back, and there are a lot more still that won't require the same volume of people to do the same level of work. Marketing, basic accounts clerks, etc. won't be needed in future at anything like the level of employment they've enjoyed previously, and for sure the same is likely to be true of some tech jobs too.

    1. Anonymous Coward
      Anonymous Coward

      Re: Not convinced

I am waiting for a major AI mistake to cause real and substantial damage, and the resulting litigation. Maybe then we'll see how "the AI did it" plays out, with or without humans having checked its work. The only real constant one can count on will be the C-suite claiming they had no knowledge and shouldn't be held responsible.

      1. Grunchy Silver badge

        Re: Not convinced

        “I am waiting for a major AI mistake to cause real and substantial damage,” what, you must be kidding?

        Just look toward Tesla “full drunk driving” technology, they’ve killed dozens by now. According to duckduckgo a.i. search “Tesla autopilot fatal crash” there are 65 confirmed deaths so far.

        “That’s not a.i.” any automated system runs some kind of artificial intelligence, even if it’s a simple relay bank “ladder logic” scheme. Practically everything is artificial intelligence of some type or other. My microwave oven can cook a bag of popcorn with a single button press, how does it know? Probably it’s just a preprogrammed countdown timer, doesn’t matter, it’s still a.i.

Look at the 737 Max incidents that killed hundreds at a time, in which all the physical might of the pilot + copilot could not dissuade a determined computer brain that was given high-power hydraulics to ensure its decisions could not be overruled by any human or gorilla.

        1. nobody who matters Silver badge

          Re: Not convinced

I think you are misunderstanding the difference between intelligence and a programmed set of actions - none of the things you have mentioned are even remotely 'AI' (they are not even the fake 'AI' that is currently being pushed at us!).

          Your microwave only knows how to heat your bag of popcorn because a human programmed it to run for a specified time at a specified power setting to do so. The microwave oven certainly doesn't 'know' how to do it.

          1. Ian Johnston Silver badge

            Re: Not convinced

            Some microwave ovens can do clever - if not intelligent - things by measuring the weight of food placed in them and changes in air humidity as power is applied. Mind you, so can my rice cooker, except for the "measuring weight" bit.

        2. rg287 Silver badge

          Re: Not convinced

          What you've listed is not AI anyway. A timer is not "AI" any more than an hourglass or tipping rain gauge is.

          With particular note to this line though:

          Just look toward Tesla “full drunk driving” technology, they’ve killed dozens by now. According to duckduckgo a.i. search “Tesla autopilot fatal crash” there are 65 confirmed deaths so far.

          Whilst I have no desire to play apologist for Tesla/Musk, there is a legitimate human factors issue to look at with driver-assistance aids (from any vendor).

          65 confirmed deaths, from how many million miles of motoring? Tesla Autopilot/"Full-Self Driving" is still safer than unassisted human drivers in terms of fatalities per mile, in the same way that ABS-equipped cars are involved in fewer accidents than cars with non-ABS brakes. What's more, the majority of Autopilot-enabled fatal incidents show major driver fault - e.g. taking a nap whilst they're supposed to be watching the road.

          This falls into the same basic human-factors category as people turning on the cruise control in their RV and then going into the back to make a coffee (which is not an urban myth - it's happened more than once! UK; USA).

Autopilot is basically an enhanced package of lane-departure monitoring, adaptive cruise control, ABS and other systems. But the driver still has to monitor. A lot of people don't seem to understand this, just as certain people didn't understand that "Cruise Control" wasn't a magic self-driving button either.

This is a problem, and it makes great headlines when there's an incident involving the new tech and people wave and say "Boo, yah, look, it's crap - doesn't work!".

          The thing is, these technologies still tend to make the world a safer place overall. But it's impossible to identify the collisions which didn't happen because adaptive braking kicked in when a driver was shouting at their kids in the back seat and would otherwise have rear-ended someone. We only see that the overall figures drop.

          Of course it'd be better if we all had better public transport and there were fewer cars on the road anyway. That'd cut road incidents dramatically.

          Alas, even training doesn't seem to address this fully. Air France Flight 447 involved a professional, licensed pilot stalling a perfectly functional airliner into the ocean because they weren't paying attention when they were "on watch", and when the autopilot detected an error and kicked control back to them they had catastrophically lost spatial awareness.

          If there is a fault, it's not so much the tech, it's that consumer "protection" and advertising "standards" agencies didn't call out the "Autopilot" and "Full Self-Driving" branding as false advertising and force Tesla to call it something like "enhanced cruise control". Which at least explains it in terms of a system most people are now somewhat familiar with - hey, it'll hold your lane, slow down for traffic ahead and probably see stuff before you do. But also, you have to maintain a watch.

      2. Anonymous Coward
        Anonymous Coward

        Re: Not convinced

        By then it will have completed the benchmark of equaling a human...because we're hardly infallible.

        Making dumb decisions that kill people is something humans are very good at.

        Human intelligence is not a good baseline to compare AI against.

        At the moment, AI is being trained on human knowledge and is using human tools...when we get around to designing tools for AI to use, it's game over.

        In the world of programming, we're using AI to write code (in languages designed to be human readable and parseable) which is then interpreted into machine code that a machine can understand...so we're teaching machines to understand code from our perspective in order for it to write code that is then turned into other code that is understandable by machines...the goal here isn't to make machines as good as we are as programmers...the goal is to remove the middle man entirely...at some point in the future, it won't be possible to be a programmer because there may be no human readable programming languages in mainstream use...programming as we know it might become an artisanal craft..."rustic software handcrafted by humans the old fashioned way".
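That pipeline - human-readable source immediately lowered into a machine-facing form - is easy to see even today. A minimal, purely illustrative sketch using Python and its standard `dis` module:

```python
import dis

# Human-readable source: a language designed for people to read and write.
source = "def add(a, b):\n    return a + b"

# The interpreter compiles it down to bytecode - the machine-facing
# representation that no human was really meant to read.
code = compile(source, "<example>", "exec")
dis.dis(code)  # prints the bytecode instructions
```

Whether future systems will skip the human-readable layer entirely is, of course, speculation.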

The same will apply to a lot of things, I'd imagine. If processes become such that humans don't need to read anything... why involve them at all? For example, let's say 3D-printing houses becomes a mainstream thing, which it might - why do we need architects? You can have machines designing the houses in a language that is natural to machines, providing input for other machines that do the printing... boom, architects gone... builders gone... plant hire firms gone... health and safety advisors gone... building inspectors gone... etc. etc.

The reason a lot of human jobs exist is because humans act as counterbalances for other humans... you have an architect produce drawings that builders understand, we have heavy machinery that can be operated by humans, building inspectors use the architectural drawings to check a builder's work, etc. etc... humans, humans, humans... by trying to make AI do things the way we already do them, we're hammering square pegs into round holes and the results are wonky... once we stop doing that and start designing processes that are machine-specific, AI will be a lot less wonky and things will be a lot more automated... for now, we're working alongside AI... we're still training it...

Someone mentioned earlier that AI is currently at the level of a junior member of staff in a lot of cases... that's bang on, because it is... but it can learn faster than a junior... and at some point, juniors become seniors and start developing their own processes... that's when AI starts taking off.

I think we're about a generation of humans away from AI replacing a lot of jobs...which sucks for us, but for our kids...they're going to look at some of the stuff we do now and think it's quaint, just like we look back at fax machines, typing pools, hand cranked engines and think it's fucking nuts etc...we don't have offices full of women on typewriters anymore...in 20 years' time we won't have offices full of programmers...programming will just be a thing we ask machines to do when we have a crazy idea...like it or not, vibe coding is what we're all going to be replaced with. Fortunately though, now is the time to make hay...AI is crap enough that it can't replace us, but good enough that it can enhance us...any fucking nuts idea you've had that you've been putting off...do it now...because pretty soon, any man + dog will be able to do it...we've already entered the era of slop while everyone is trying everything with AI...but as tech guys we still have the edge because we know not only how to build things, but how to deploy them as well...once our edge is gone, we might actually have to get one of them real jobs we've all been dodging for a couple of decades...like shovelling shit, lifting stuff...things nerds aren't built for.

        1. LybsterRoy Silver badge

          Re: Not convinced

          -- The reason a lot of human jobs exist is because humans act as counter balances for other humans --

          I was going to write a long reply but decided to shorten it to "rubbish"

    2. big_D Silver badge

      Re: Not convinced

Having worked as a translator and having tried to use AI to do translations, I can say that AI still can't do translations reliably enough. I wasn't fully trained - I worked as an intern at a local translation company - and, although my translations were readable, and way better than AI's, I was nowhere near good enough to become a professional.

      Even now, a few years later, I have to laugh at a lot of translations that AI throws up. It seems to have real problems going between German and English, for example.

One big problem, which now seems to have been solved, was that formal English would "fool" the AI: saying "do not" would have the AI dropping the word "not" from the translation ("do not open the case, no user serviceable parts inside" became "Gehäuse öffnen, nichts drin" (open the case, nothing inside) - not exactly what you want the customer to read if they have forked out 5K on an industrial terminal). Use "don't" and it would translate correctly.

      Now imagine being at the zoo and seeing a sign that said "do not enter the enclosure" and the AI translation app dropping the "not" from the translation?

      If I am writing a manual and it drops the "not", I can have a good laugh about it, if tourists start wandering into enclosures with lions and tigers in them, because the translation app said they should, that is another matter entirely.

      This is why you need a knowledgeable human in the loop, to sign off on these things.
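As a sketch of what that human sign-off can lean on: a trivial, purely hypothetical check that flags translations where the English source negates but the German output doesn't. The word lists are illustrative and nowhere near exhaustive - which is exactly why the human reviewer still gets the final say:

```python
import re

# Illustrative negation markers only - real coverage would need far more.
EN_NEGATION = re.compile(r"\b(not|no|never|cannot|don't|doesn't|won't)\b", re.IGNORECASE)
DE_NEGATION = re.compile(r"\b(nicht|nichts|kein|keine|keinen|nie|niemals)\b", re.IGNORECASE)

def dropped_negation(source_en: str, target_de: str) -> bool:
    """Flag for human review: the English negates but the German doesn't."""
    return bool(EN_NEGATION.search(source_en)) and not DE_NEGATION.search(target_de)

# The zoo sign gets flagged when the "nicht" goes missing:
dropped_negation("Do not enter the enclosure", "Betreten Sie das Gehege")        # True
dropped_negation("Do not enter the enclosure", "Betreten Sie das Gehege nicht")  # False
```

A filter like this only narrows down what the knowledgeable human has to look at; it can't replace them.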

It is from before AI, but with Windows Vista, Microsoft decided that the workers in Redmond could do a better job of the localisation than the German office, so the properties for network neighbourhood became something like "change your neighbour's nature/character". Unfortunately, it never did do what it said it would. I had to wait until they moved out...

      AI will no doubt improve over time, but if you don't have a knowledgeable expert to double check the results, how will you ever know if it is right or wrong, or if it is improving or getting worse?

      1. LybsterRoy Silver badge

        Re: Not convinced

        -- if tourists start wandering into enclosures with lions and tigers in them, because the translation app said they should, that is another matter entirely --

        Agreed 100%, after all we don't want the lions and tigers scared do we? On the other hand it would reduce the food bills.

        1. Ken G Silver badge
          Trollface

          Re: Not convinced

          As with the crushed pedestrians, we should regard that as the cost of training a better AI model.

    3. Mostly Irrelevant

      Re: Not convinced

      Are you a senior or higher level dev who uses this technology daily?

Because your post reads like it was written by someone with no firsthand experience at all. I use several different AI models and tools every day, and the chance of these text generators replacing any developers, no matter how junior, is laughable. You need someone who understands what they're doing to operate these tools, because they make so many mistakes. Compared with something simple like writing marketing copy, there are so many factors involved in software engineering that are completely absent from LLM-generated code that anyone with real experience of the technology would never write the things you're writing here.

The current LLM tools may enhance productivity, but they require a lot of knowledge to use successfully. It sounds like you have knowledge of something substantially less complex that LLMs can do reasonably well, and are applying that to the massively more complex task of software engineering. Even if the LLMs were good at producing code, that's the easiest part of the software engineering process. The author of this article is massively ahead of your understanding here.

    4. LybsterRoy Silver badge

      Re: Not convinced

      -- What if, instead of investing in todays juniors, a business saves that cash (literally) and uses it to fund hiring tomorrows scarcer senior staffer? --

      This is essentially what happened when the training levies vanished. Companies stopped a lot of the training that had been ongoing (cheaper to train than pay the levy) and started recruiting experienced staff from companies who still trained.

    5. numericcitizen

      Re: Not convinced

And how do juniors learn? They don't do it in a vacuum... they partly learn from seniors... so let's say we get rid of juniors, because AI... how do people become seniors then? I'm asking for a friend.

  2. druck Silver badge

"In terms of software engineers, its very hard to argue that AI hasn't outpaced its entire cohort of junior devs in terms of quality of output."

    Only if you think a junior dev is one that is complete shit and incapable of learning anything.

    1. Bebu sa Ware Silver badge
      Windows

      Life's Mysteries

      "Only if you think a junior dev is one that is complete shit and incapable of learning anything."

In one of the few short stints where I worked with actual engineers, I recall junior engineers as being the irritating little shits that kept asking inconvenient and challenging questions (many of which I couldn't answer - hence the irritation ;)).

      They were in sharp contrast with the senior engineers that never asked anything, clearly knew everything and could not be told anything — consequently could never learn anything.

      One of life's great mysteries is how junior engineers evolve into senior engineers. Brain cell apoptosis I suppose.

      † "Here, borrow my copy of Interconnections. Rada is pretty much the full bottle on that stuff. I want it back" … still waiting.

      1. big_D Silver badge

        Re: Life's Mysteries

        Admitting you don't know everything is the first step to enlightenment.

        1. takno

          Re: Life's Mysteries

          Yes, but step 2 is realising that 80% of people can't tell whether you know everything, nothing or somewhere in-between. Step 3 is getting a job as a guru.

        2. Ken G Silver badge
          Angel

          Re: Life's Mysteries

          and realising you don't know anything is the last step.

    2. vogon00

We were all junior and complete shit at one point. The trick is to avoid becoming senior and shit.

      I cannot imagine anyone junior NOT wanting to improve in their chosen discipline - but then I've always been motivated by 'learn more / get better / adapt':-)

Seniors also have a duty to educate juniors, especially if asked. There are no stupid questions as far as I am concerned, mainly as I remember asking some that, with hindsight, were *incredibly* dumb. That's only 'coz I didn't *know* any better at the time. Fortunately for me, lots of people I worked with were the 'here, let me explain that' type as opposed to the piss-taking type.

      These days, you can't educate anyone who isn't interested. I still have to teach occasionally, and I just spend my time on those who are.

      As the Unseen University's machine Hex said, "All things strive".

    3. LucreLout

"Only if you think a junior dev is one that is complete shit and incapable of learning anything."

      No, you've not properly understood, so you're only half right.

Junior devs are complete shit. All of them. They always were. They always will be. Same reason, old as time: they have no experience to draw upon. Sorry if that stings, but it's absolutely true, and you'd have to be very inexperienced to think otherwise. There is no substitute for experience, and no (human) shortcut to obtaining it.

The question is: what has improved most over the past 2 years? A junior dev? Nope, 2 years in they're still complete shit - sorry, but they all are. Or AI? Well, that really has made significant improvements over the past two years, far faster than any junior dev has.

The second part of the question is: can the AI maintain its pace of improvement over the junior dev, or at what point will the junior dev catch up? Right now a box-fresh junior dev from, say, the May '23 cohort is probably about 3 years behind the AI in terms of ability. Speed we'll ignore, because the AI will always be faster.

My expectation is that about 10 years' experience is where we'll start to see a crossover, with junior devs catching up on LLM advances in terms of ability. That assumes the easy gains are behind us and there isn't another inventive step in LLMs to come.

      1. LybsterRoy Silver badge

        -- The question is, what has improved most over the past 2 years. A junior dev? Nope, 2 years in they're still complete shit --

I will agree that the technology involving LLMs has improved dramatically, but unlike humans - who, when trained on crap, are occasionally able to see that it's crap - I'm not sure LLMs can, or can be developed to do so.

      2. David Hicklin Silver badge

        > My expectation is that about 10 years experience is where we'll start to see a crossover with LLM advances and junior devs catching up, in terms of ability

IF (and that's a big IF) that is true, then you are implying that we only need to employ junior devs with 10 years' experience - but if you did that, then for 10 years none would be employed in the first place to get those 10 years' experience... which is the whole point of the article!

        Sort of a Catch-22.

  3. Bebu sa Ware Silver badge
    Headmaster

    "Experience" is only one dimension

    I was thinking about the senior people in skilled roles.

    While years of experience is one characteristic, another is the diversity of strengths and by implication weaknesses of particular individuals in identical roles.

    One might be a devil for detail and irreplaceable when troubleshooting the most subtle of problems, but have the imagination of an extinguished candle; another might have a double helping of imagination and creativity but blind to a small but critical detail; another blessed with the penchant of asking the right questions - most of which they themselves are incapable of answering; and so it goes.

The intellectual life of people, or at least some, is exceedingly complex (natural languages alone make that quite clear) and undoubtedly requires far more than the trumped-up finite-state automata that pass for AI to replicate.

In the wash-up, after the damage has mostly been remedied, I imagine AI/LLM technology will be just slightly more useful than expert systems were after the somewhat fainter bloom of AI 1.0 in the 1980s-90s. That is to say: there, in the background, doing useful work in very narrow, well-defined domains, but largely invisible.

    1. M. T. Ness

      Re: "Experience" is only one dimension

I have seen at close quarters flesh-and-blood senior beginners in senior expert positions behaving almost like LLMs. They cannot be modulated. They do not take advice, and they do not repair errors even after manifest failure.

      I think design of AI systems will be critical. We need people to be educated for that. We need the brightest people, and they must educate themselves to become real experts. We need cutting-edge knowledge. Ruminations on textbook knowledge or clickbaiting original articles are unlikely to reach expert level.

      1. LybsterRoy Silver badge

        Re: "Experience" is only one dimension

I think first of all we need to stop calling them AI. There is no intelligence, artificial or otherwise.

  4. Anonymous Coward
    Anonymous Coward

    AI is going to replace very mediocre…

    …I think it can write Muzak, but not Mozart.

Hence fiction authors are very safe.

    1. LucreLout

      Re: AI is going to replace very mediocre…

      That's a fair point but it overlooks a few things.

      To summarise them in a nutshell: explain Justin Bieber.

Just because it's not Mozart doesn't mean the masses understand the difference or place any value upon it.

    2. Mike 137 Silver badge

      Re: AI is going to replace very mediocre…

      "I think it can write Muzak, but not Mozart."

      It might write a good semblance of Mozart by statistical chance, but it'll never be able to recognise why it's good. The missing bit will always be understanding, as that requires much more than stats and logic.

      1. Grunchy Silver badge

        Re: AI is going to replace very mediocre…

        “It might write a good semblance of Mozart by statistical chance, but it'll never be able to recognise why it's good,” nah, the a.i. knows what’s “good.” It has no understanding but that’s not necessary. The a.i. already learned what’s good and what isn’t, but not in a way that you and I could ever learn. The a.i. examined a large proportion of all of the music ever produced and has far deeper knowledge than you or I could ever hope to gain.

        https://youtu.be/NKnZYvZA7w4 <- how it works

        https://youtu.be/ubvnZq_eIYg <- better version of Mozart (MY personal opinion, yours probably differs)

        1. Anonymous Coward
          Anonymous Coward

          Re: AI is going to replace very mediocre…

          Yeah, they'll never manage to stuff enough infinite monkeys into those cramped LLMs to have them statistically crank out a full Shakespeare perchance ... they need to horror-backpropagate the bard's tales through the simians first, hard, like in a circus training dungeon routine, with whips, corsets, the whole BDSM flogorama ... Hey, they even got bears to ride motorcycles this way -- very smart, like superintelligent bears, it really works!

          So we're piloting bears as replacement for interns and junior hires now, plus monkeys to assist senior dogs learn new tricks. Goats didn't work out -- they ate all the furniture ... ;)

    3. Ken G Silver badge
      Holmes

      fiction writers, you say?

"The Great Automatic Grammatizator", Roald Dahl, 1953

  5. Ian Johnston Silver badge

    Use of the word "agentic" is a very useful indicator that the piece containing it is pure bollocks.

    1. Steve Davies 3 Silver badge
      Mushroom

      using the word 'agentic'

      I agree that it is nothing more than marketing BS.

I look at the claims for AI as it exists today, and for the next 3-5 years, as nothing more than a new name for Alternative Facts.

      Remember them from the US elections in 2020? BS, Lies and Conspiracy Theories all rolled up into one simple name.

      To the people pushing Agentic systems/services ... See Icon.

  6. Doctor Syntax Silver badge

    "I have this guy who is amazing and has been around forever, and AI for him is very valuable because he can see very quickly what is correct and what isn't correct"

    So does he actually need the AI?

    1. LybsterRoy Silver badge

      -- So does he actually need the AI? --

      Yup. Hopefully it saves wear and tear on the fingers because he's typing less.

  7. Pascal Monett Silver badge

    "There is no automated substitute for experienced staff"

    Somebody needs to email IBM and HP . . .

    1. Nematode Bronze badge

Re: "There is no automated substitute for experienced staff"

One is tempted to ask when sound logic ever influenced the occasional - and in some cases (the oil industry) regular - reductions in experienced headcount, followed before long by panic re-hiring of the same people, now contractors charging a lot more.

  8. david1024

    Agree with Murray

    This is a great way to explain how to use AI:

    "I think as an assistant," says Murray, "AI is indispensable. But it is not an authority."

However, that does mean fewer heads as the ones you already have get more efficient... But if that's just increased velocity, it means more work can be done and more contracts satisfied. Finding the extra work that the team can now take on becomes important.

    AI should be growing businesses and increasing quality, not eating the young.

  9. Anonymous Coward
    Anonymous Coward

    How quaint that he thinks companies think ahead like that. No, they will cut the junior staff to boost the bottom line now, safe in the knowledge (no AI needed) that it will be someone else's problem to fix in 20 years.

    1. trindflo

      Mostly agreed. Management rarely looks farther ahead than 5 years.

      1. ComicalEngineer Silver badge

        Fixed this for you:

        Mostly agreed. Management rarely looks farther ahead than the weekend / their next game of golf.

        1. M.V. Lipvig Silver badge

          Management looks further than that - they look at the quarter's numbers. A big bonus this quarter is worth the whole company next quarter.

  10. Nik_S

    There's an obvious flaw in this

Reading between the lines, it suggests that AI can help experienced staff - something that's typically done by less experienced staff. So if AI replaces less experienced staff, how the hell do people get the experience to become the people whose jobs aren't threatened?

    1. BleedinObvious
      Childcatcher

      Re: There's an obvious flaw in this

      At the moment I fear people will get stupider faster than AI becomes smarter, such that future stupid people won't be able to distinguish AI right from AI wrong.

      1. nobody who matters Silver badge

        Re: There's an obvious flaw in this

        It appears that a lot of them can't tell the difference now.

    2. david1024

      Re: There's an obvious flaw in this

Any tool can be misused. If you only think small - as in "reducing costs is my only goal", not growing the business - AI, or just about any tool, will initially get your head count down as you push that agenda and managers scare everyone into leaving.

      But the tool, used a better way, gets you more production and increased revenue from your existing resource pool and can make you more competitive as a company.

Another way to say it is that new/growing companies would benefit from accelerated growth, while stagnant/declining companies would accelerate in the cost-cutting direction. (It is the classic zero-sum vs rabbit-attack approach.)

  11. Brl4n

People love cheap garbage, so I see AI being successful.

  12. Anonymous Coward
    Anonymous Coward

There is a big difference between the utopian blue-sky wish list of AI zealots and the usual greed and power-hoarding tendencies of the bosses who own these highly centralized systems.

  13. trindflo

    When do they start replacing CEOs?

    Engineering and programming are heavily detail oriented professions where simple mistakes can extract the devil's due from anyone who trusts them blindly.

How much easier would it be to train an AI to perform CEO duties? That seems like a job AI was made for. Think of the cost savings.

    1. Anonymous Coward
      Anonymous Coward

      Re: When do they start replacing CEOs?

      Why not?

      So far, AI is rubbish at golf.

      1. LybsterRoy Silver badge

        Re: When do they start replacing CEOs?

        -- So far, AI is rubbish at golf. --

        Are you sure? Try it on these.

        https://www.gaming.net/best-golf-games-on-pc/

  14. ComicalEngineer Silver badge

    Garbage in / garbage out

    As any fule nose:

    Training LLMs on data which is a mixture of good and bad will inevitably result in a proportion of garbage out.

As far as I can see, LLMs are unable to distinguish good data from bad and will therefore inevitably produce erroneous answers at least some of the time.

As an engineer with over 40 years' experience, I made some mistakes in my early career, most of which were picked up and corrected by more senior engineers before I made any major errors. The problem, as far as engineering disciplines are concerned, is that junior personnel are not being given the level of training that I received and are instead being told to use AI.

    I'm at the stage of my career where I'm regularly involved in training engineers with 8-10 years' experience, and in some cases more senior personnel, in my field of work.

    I actually feel sorry for the younger generation having to deal with the BS being produced around and by AI.

  15. Anonymous Coward
    Anonymous Coward

    People problem?

    I think Mr Murray might be a rare fish. He's looking at AI as a tool to be used by people and sees that he still needs people and will continue to need people. So he's still recruiting people at all levels.

    In the majority of business initiatives involving AI that I have seen, the primary purpose of introducing AI is to reduce costs associated with people by getting rid of people.

    For all the talk about a human in the loop, it's just a salve; if they could get rid of those people too, they would.

    Succession planning with an eye to developing in-house talent over the next 5, 10, 30 years? Really? Most execs are not looking that far ahead: five years maybe at a push, but the next bonus cycle is more likely.

    Even before AI, the retention of key skilled employees has never been a priority for most businesses, which is why you always get a bigger pay increase by leaving than staying.

    1. David Hicklin Silver badge

      Re: People problem?

      >>> Succession planning with an eye to developing in-house talent over the next 5, 10, 30 years? Really? Most execs are not looking that far ahead: five years maybe at a push, but the next bonus cycle is more likely.

      They will be looking at the Sales and Profit figures and the effect on the share price. Nothing else matters.

  16. Locomotion69 Bronze badge

    AI has its merits, as it is quite capable of processing vast quantities of data and finding patterns in them. It is nothing a human being cannot do, but AI comes up with results much faster, leaving the human to check whether the outcome is plausible. I have seen very good use of it in processing medical scan data: AI could spot the out-of-the-ordinary findings that many doctors would overlook. This is a huge benefit, saving time, money and occasionally a life.

    But the fields of application are different - software engineering being no exception. These jobs will be different in the future - but senior and junior levels will be there as they are today.

    1. David Hicklin Silver badge

      > have seen very good use of processing medical scan data, AI could spot the out of the ordinary which many doctors would overlook. This is a huge benefit - saving time, money and occasionally a life.

      But don't forget that for applications like this, the models have been trained on vetted, good-quality data, not data scraped from the cesspit called the internet.

      They also cost a lot of money.

  17. Steevee

    Interesting article, but it raises a fundamental question: how are all these junior engineers being replaced by AI supposed to become experienced, qualified senior engineers when the bottom rungs of that career ladder have been sawn off due to AI?

    1. Anonymous Coward
      Anonymous Coward

      It's not raising it at all, it is flagging that same question again. Same as it gets raised every time agentic AI is discussed.

      But no one answers it beyond a few platitudes about reskilling and new AI-based roles being created. Which is the usual political BS in this situation (i.e. large numbers of people being made redundant due to a tech shift). No one will ever answer it either, because what will happen is that the politicians will promise jam tomorrow while the 'market' is left to its own devices, which will be to not give a shit as usual, and no one in power wants to point that out.

      We'll be left with a lot of highly skilled people fighting over the few low-skilled roles that remain, but at that point I've no idea who will be selling anything to anyone.
