
By 2026
100% of IT consultancy outfits will have been replaced by AI, and nobody will notice.
Recent research details how customer service reps at a Chinese utility's call center often struggled when trying to use an AI assistant, and were forced to make manual fixes. Researchers affiliated with a Chinese power utility and several Chinese universities recently conducted a study of how customer service representatives (…)
What could go wrong - Elevator Recognition
"like cloth-eared Siri or Alexa"
Or as Chairman Benny Hill would have said, now quite politically incorrectly, "like croft eeled Silly or Anorexia."
Very different times and as the Cultural Revolution demonstrated, a different sense of humour.
I can imagine trying to mechanically transcribe oral Mandarin, Cantonese or other regional language into Chinese logographic text might challenge the best LLMs.
For the benefit of those too young to remember Benny Hill, he was a comic whose comedy mostly relied on stereotype, misogyny and xenophobia, and so isn't much mentioned any more.
Still, a few months ago I saw an old sketch which made me laugh out loud. Featuring his "oriental" persona, he was asked why he (the "oriental" character) had come to Britain (not verbatim, this is my recollection):
"I seek knowledge",
"Did you find it?"
"Yes. It's in Norfolk."
-A.
I'm happy to read that Gartner's Kathy Ross holds that "A hybrid approach, where AI and human agents work in tandem, is the most effective", as in Kasparov's Law.
Also, hopefully the "ACM CSCW Poster 2025" version of this Call Center paper will have more than just 1 Figure so that attendees can grasp the gist of this interesting study quickly, visually.
One thing that's missing from the paper at present is a description of the "newly introduced AI assistants" on which the study is focused. Are these the result of in-house development, or are they based on some known LLM or "reasoning" CoT model? What are the assistants meant to help with (per their design/training) -- possibly listed as bullets? Some of this does come through in the paper, but it should be explicitly stated, possibly in the METHOD section, before discussing the interviews and data analysis, imho. Survey results can then be more directly compared to what was expected from the tool(s).
Apart from that, yeah, the AI assistant in question does seem to have some of the characteristics of a half-baked lifelong stoner that has trouble with numbers, accuracy, phrasing, structuring info, emotional connection, rambling verbosity, and the like. The prospect of losing one's job to one of those has to be rather psychologically taxing indeed. The analysis is spot on there!
And I also wonder if the CSRs found value in those AI assistants in terms of companionship, or entertainment ... (maybe I missed it from the study's analysis)
"the AI assistant in question does seem to have some of the characteristics of a half-baked lifelong stoner that has trouble with numbers, accuracy, phrasing, structuring info, emotional connection, rambling verbosity, and the likes.
"Son, I dont think we have a position for you here in customer service. But you ever thought about becoming a manager? I'm seeing all the characteristics, right there..."
Oh, the Covid Boris press conference days. The real-time (automatic?) subtitles would re-mangle his already mangled words. Meanwhile the human BSL signer had to digest what had been said and provide context. They struggled but still made better sense than the original.
NB Alone amongst governments, including the Scottish & Welsh Assemblies, Downing Street refused to provide signers - the broadcasters had to add their own. Perhaps saving the PM from the inevitable feedback from his signer. See icon.
The main beneficiaries of the BSL announcements will be those whose first language is BSL and thus have difficulty parsing written English.
Thinking about teaching deaf children is changing, for the better: rather than forcing them to learn spoken English, it is better to teach them sign - a reversal of the 1960s-70s mindset.
In fact, because babies can learn (simple) signing within weeks of birth, some think we should teach babies to sign and so aid the initial development of their language centres.
Scotland’s two busiest railway stations are first in UK to install British Sign Language screens
From the pictures and video, I doubt the signer on the departure board will be much use at that size, other than to flag that BSL is spoken here. The whole screen really needs to be the signer, so that it can be seen and read at a distance (compare the lady at 1:45 in the above video with the display behind her to see what I mean).
You need a skill level to work effectively with LLMs. But the next reporting period's balance sheet requires immediate gains, as the shareholders won't wait for productivity benefits in two years' time. They want to see reduced people costs now, because that's how their KPIs work: revenue per head, average salary, etc.
"You need a skill level to work effectively with LLMs."
Namely, the skill to figure out if the LLM is, in fact, wrong, and work out how it's wrong. As the study indicated, it's easier (and more accurate) to simply not use it than to try to spot its mistakes and correct it.
"AI-generated outputs introduced structural inefficiencies in information processing because most AI-prefilled content required manual correction or deletion"
I can't help thinking that the effort required to correct any content that had been prefilled by AI would more than outweigh any supposed efficiency gains - reading, guessing at any mis-transcribed words, and filling in missing bits would, in my mind, take much more time than a human typing away while the client is on the phone.
Add to that the misinterpretation of emotional cues - which could also be down to client stress on realising they are talking to an AI bot rather than a real person.
I'm sick of IT hypes and headlong rushes into every new technology, followed by half of it either being backed out or just losing money / providing poorer service. LLMs are great, but not at everything, and they are not intelligent or good at human context understanding. They fake humanity, and you need huge resources to do a good job conversationally. Not bad within narrow confines, but even the humans they put on help systems work to scripts, and you get a whole world of pain when your problem doesn't fit the script. We need more people, and to accept a percent or two on the price of whatever it is. What price sanity?
So in this case (and many others, some I've experienced first hand) AI is little better than a day-one school leaver on the job. They need to be supervised closely, and everything they produce needs to be parsed and corrected. If you've ever managed anyone new, you'll know that dealing with a human trainee assistant is a significant burden. It takes a long time before they reduce your workload. The difference here is that the AI might never get better, or it'll get an upgrade/update so you'll need to train it again and again and again. All while being expected to produce 30% more because you have the AI.
But it doesn't matter, magical thinking has won this one, and the law from management is:
Introducing AI DOES make things better.
Using AI WILL produce efficiencies.
If this does not happen then YOU are the problem.
Speech to text processing is nearly always imperfect & produced text needs checking afterwards.
It often has major hassles with accents.
It sometimes struggles with homophones (usually contextual analysis will sort it out, but not always),
e.g. "broach" and "brooch" sound the same.
If the jewellery item "brooch" is ruled out, then "broach" is a bit harder, as it is polysemous (same spelling, multiple meanings): it could be to start a chat about something awkward, it could be to pierce something, or it could be a nasty loss of control in a boat.
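As a toy sketch of the contextual analysis mentioned above: pick whichever spelling shares more cue words with the surrounding transcript. The cue-word lists here are made up for illustration, not taken from any real recogniser.

```python
# Illustrative homophone disambiguation: the cue-word lists below are
# invented for this sketch, not from any actual speech-to-text system.
HOMOPHONE_CUES = {
    "brooch": {"jewellery", "pin", "silver", "dress", "wear"},
    "broach": {"subject", "topic", "pierce", "barrel", "conversation"},
}

def disambiguate(candidates, context_words):
    """Pick the candidate spelling whose cue words overlap most with
    the surrounding words; ties fall back to the first candidate."""
    context = {w.lower() for w in context_words}
    return max(candidates,
               key=lambda c: len(HOMOPHONE_CUES.get(c, set()) & context))

print(disambiguate(["brooch", "broach"],
                   "she wore a silver pin on her dress".split()))
# Jewellery context selects "brooch"; a "broach the subject" context
# would tip it the other way.
```

Of course, this only works when the context is unambiguous - which, per the point above, it often isn't.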
The idea of an "AI" trying to judge how emotional etc. the caller is sounds (pun intended) very unlikely to work well. E.g. some people may be loud and aggressive in tone / language used when angry (so not too difficult to detect), whereas others may be very controlled, terse and polite, yet a human* listening to this type of angry caller still feels the anger vibe... and as for "AI" dealing with / detecting sarcasm or irony from a caller...
* more difficult for neurodiverse types obviously
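To make the loud-vs-terse point concrete, here is a deliberately naive "anger detector" built on volume proxies (caps, exclamation marks) and a keyword list - all thresholds and words invented for the sketch. It trips on the shouty caller and sails straight past the quietly furious one.

```python
# Deliberately naive anger detection: keyword list, thresholds and the
# "shouting proxy" are all made up to illustrate the failure mode.
ANGRY_WORDS = {"ridiculous", "useless", "unacceptable", "furious"}

def sounds_angry(utterance, threshold=2):
    words = utterance.lower().split()
    score = sum(w.strip(".,!?") in ANGRY_WORDS for w in words)
    score += utterance.count("!")                  # volume proxy
    score += sum(ch.isupper() for ch in utterance) // 10  # shouting proxy
    return score >= threshold

loud = "This is RIDICULOUS! Absolutely USELESS!!"
terse = "I see. Please escalate this to your manager. Today."
print(sounds_angry(loud))   # the loud caller trips the detector
print(sounds_angry(terse))  # the quietly furious caller does not
```

Real emotion-recognition systems are more sophisticated than this, but the controlled-and-polite anger case remains exactly the sort of signal they miss.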
They use it to most efficiently nudge the caller into giving up on any request he/she may have had, abandoning complaints, and accepting defeat in the ultimate realization that, contrary to all objective measures of the human experience of reality, he/she must have been holding it wrong indeed, all along ...
You don't really need AI for that. It's already a refined art
>” Speech to text processing is nearly always imperfect & produced text needs checking afterwards.”
Agree; however, what I find surprising, given the state of the art of continuous speech recognition some 20+ years back and the advances that have been made in the interim, is just how bad it still is at the basic speech-to-text piece, i.e. the bit that is statistical analysis rather than really AI. Without accurate speech-to-text, semantic analysis is always going to be hit-and-miss, and obviously only once you have good semantic analysis can you actually give the AI the correct seed phrase.
Perhaps the answer is like the continuous speech portals we designed back then: we asked slightly different questions to get multiple pieces of information, which were replayed to get confirmation, and thus we were better able to determine what exactly the person was asking for. So perhaps, rather than bowing to the almighty AI, more care should be taken with dialogue design, so that whilst the transcript may contain errors, the summary is accurate.
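The ask-then-replay-for-confirmation pattern can be sketched as a simple slot-filling loop - the recogniser is simulated here, and the field names and prompts are purely illustrative:

```python
# Sketch of confirmation-based dialogue design: each piece of
# information gets its own question and a yes/no replay, so the final
# summary is accurate even when individual recognitions are wrong.
def run_dialogue(ask, confirm):
    """ask(prompt) -> recognised answer; confirm(text) -> bool."""
    slots = {}
    for field, prompt in [("name", "What is your name?"),
                          ("account", "What is your account number?"),
                          ("issue", "Briefly, what is the problem?")]:
        while True:
            answer = ask(prompt)
            if confirm(f"I heard '{answer}' for {field}. Is that right?"):
                slots[field] = answer
                break  # confirmed; move to the next slot
    return slots

# Simulated caller: the first account-number recognition is wrong and
# gets rejected at the replay step; the retry is confirmed.
answers = iter(["Jo Bloggs", "12345678", "12345578", "meter reading wrong"])
rejected = {"12345678"}
result = run_dialogue(ask=lambda p: next(answers),
                      confirm=lambda t: not any(r in t for r in rejected))
print(result["account"])
```

The transcript still contained an error; the confirmed summary did not - which is the whole point.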
That's a real disappointment. What I would have hoped for would be something that can record anything said to it and translate it from their accent into one that I can understand, so I can have a phone conversation with someone in Birmingham or Scotland and understand it. And they can understand me.
A relative has Down Syndrome. She is really hard to understand. Maybe an AI would be able to help her. That could be a massive improvement in quality of life for her.
Anyway, with English homophones etc., when I watch TV with subtitles it is just awful. It is slow - sentences behind. It uses the wrong words. What could help would be mixed written / acoustic output, with a spell checker showing corrected / uncorrected spelling, well tested so it actually helps understanding what the other person says.
"AI tools, such as large language models (LLMs), emotion recognition, and speech-to-text technologies" [paper, introduction para 4]
While assuredly artificial, none of these have intelligence -- they're merely varieties of template matching system.
We do have, and usefully use, not a few genuine AI tools -- indeed in some pretty sophisticated applications -- but they're all essentially "one trick horses" -- expert systems trained on specific quite narrowly defined problems. There's no such thing as general artificial intelligence, and never can be because none of the tools can actually think. We merely contribute to the bullshit and the promoters' bank balances by calling fancy template matchers and autocomplete engines "intelligent".
Oh, and BTW, a UK health service rep recently promised that "AI" would soon be used to write medical case notes ...
AI has been around long enough for it to be patently obvious that it's useless, so why is everyone still so hyped about it? I still get users at my work requesting access to a Copilot license about once a week, and I just have to wonder why. What will they use it for? Will it really speed up their workflow to have a random number generator tied to a dictionary whose only objective is to lie? It's especially scary for me because we work for a non-profit healthcare agency. If I was a sales rep for, like, a paper company, then maybe I could see using AI, for the simple fact that I probably wouldn't care about my job, but Jesus Christ, we're supposed to be helping people, not offloading critical care tasks to Cleverbot.
Well.
In my org we are using LLM stuff with some success to pre-fill long forms that bore humans. A requirements document arrives from the (prospective) customer, and this has to be transformed into something that the computer systems can use; Human operators hate that stuff. Getting an LLM to cut a first draft is a big win so far as they are concerned.
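The "first draft, human reviews everything" workflow described above looks roughly like this. The `draft_with_llm` function is a stand-in (naive keyword extraction so the sketch runs without a model); a real system would call whatever LLM the org uses, and the form fields are invented for illustration.

```python
# Sketch of LLM form pre-filling with a mandatory human review pass.
# `draft_with_llm` is a stand-in for a real model call; field names
# and the sample document are illustrative.
FORM_FIELDS = ["customer", "deadline", "environment"]

def draft_with_llm(document):
    # Naive stand-in for the LLM: pull "Field: value" lines. A real
    # model returns a dict of the same shape, with the same failure
    # modes discussed above (wrong, missing, or mangled values).
    draft = {}
    for line in document.splitlines():
        for field in FORM_FIELDS:
            if line.lower().startswith(field + ":"):
                draft[field] = line.split(":", 1)[1].strip()
    return {f: draft.get(f, "") for f in FORM_FIELDS}

def review(draft, corrections):
    """Human pass: every field is either accepted or overridden."""
    return {f: corrections.get(f, v) for f, v in draft.items()}

doc = "Customer: Acme Ltd\nDeadline: Q3\nNotes: on-prem only"
final = review(draft_with_llm(doc), corrections={"environment": "on-prem"})
print(final)
```

The win is that the operator corrects a draft instead of staring at a blank form - provided, as the study suggests, correcting the draft actually costs less than typing it fresh.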
I dare say that they might think otherwise after being made redundant.
And I rather hope that the org eventually realises that the wetware actually, if only occasionally, made crucial judgments that stopped us being crucified for a particularly stupid mistake.
-A.
It's not a scam, other than how it is being sold right now. I work in a business that has some exceedingly boring AI (ML really) products that work on SCADA-type applications.
Failure prediction and load balancing. It's not exciting, there is no LLM element to it, but it is new and shiny. However, it works, and is quietly saving businesses millions by alerting to risk of failure and saving some very expensive hardware from needing to be replaced.
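That kind of unglamorous ML can be as simple as flagging when a sensor's recent readings drift well outside its historical baseline. A generic sketch - not the commenter's product, and the threshold is illustrative:

```python
# Generic failure-risk flag: alert when recent readings sit well above
# the historical baseline. Sigma threshold and sample data are
# illustrative, not from any real deployment.
from statistics import mean, stdev

def failure_risk(history, recent, sigmas=3.0):
    """Flag if the mean of recent readings exceeds the historical
    mean by more than `sigmas` standard deviations."""
    baseline, spread = mean(history), stdev(history)
    return mean(recent) > baseline + sigmas * spread

healthy_vibration = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.1, 0.9]
print(failure_risk(healthy_vibration, recent=[1.0, 1.1]))   # nominal
print(failure_risk(healthy_vibration, recent=[2.5, 2.7]))   # drifting up
```

Real products layer far more on top (trend models, per-asset baselines, maintenance schedules), but the core idea - statistics on sensor streams, no LLM anywhere - is the same.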
What is a scam is telling all the boards that they can replace all their employees with a subscription.
The trouble is anything that uses statistics or machine learning (eg. Teaching a robot to paint, something being done in the 1980s) is getting a sprinkling of the “AI” fairy dust.
I doubt the failure prediction systems being sold today differ greatly in their implementation from the asset management systems we were deploying circa 2005 across the Utilities, which had a preventative maintenance and failure prediction element.
If we look at the “AI” co-processors Intel et al are delivering, they are just matrix maths accelerators. I suggest given how mind twisting matrix maths (and vector maths) is, it is easier to give it a label that implies some form of “magic” is happening. Hence I do regard the current “AI” bandwagon to be a hype-bubble and scamming people to believe the “black box” is doing more than it actually is.
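To underline the point: the core workload those co-processors accelerate is just matrix (and vector) multiplication, repeated at enormous scale. A naive pure-Python version of the one operation doing all the "magic":

```python
# The operation "AI" co-processors accelerate, written out naively:
# multiply a matrix of weights by a vector of inputs.
def matvec(weights, x):
    """Multiply an m x n matrix (list of rows) by an n-vector."""
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

# One "layer" of a neural network is just matvec plus a cheap
# elementwise nonlinearity; weights here are arbitrary examples.
relu = lambda ys: [max(0.0, y) for y in ys]

W = [[1.0, -2.0],
     [0.5,  0.5]]
print(relu(matvec(W, [3.0, 1.0])))
```

The accelerators do this in hardware, fast and in parallel - impressive engineering, but "matrix maths unit" sells fewer subscriptions than "AI".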
I haven't seen them in action, but I've always wondered if anyone is seriously using those AI meeting minutes generators?
To me it would seem that, since speech recognition is only a minor step up from a random word generator, an AI summary of garbage would result in garbage? Or is it just that most meetings don't actually have any useful content anyway, so nobody notices if transcripts or meeting minutes are just random fluff?