Enterprises are getting stuck in AI pilot hell, say Chatterbox Labs execs

Before AI becomes commonplace in enterprises, corporate leaders have to commit to an ongoing security testing regime tuned to the nuances of AI models. That's the view of Chatterbox Labs CEO Danny Coleman and CTO Stuart Battersby, who spoke to The Register at length about why companies have been slow to move from AI pilot …

  1. Doctor Syntax Silver badge

    "corporate leaders have to commit to an ongoing security testing regime tuned to the nuances of AI models.

    That's the view of Chatterbox Labs CEO Danny Coleman and CTO Stuart Battersby"

    Not doubts as to whether it delivers anything useful?

    Let me guess. Chatterbox Labs specialise in testing tuned to the nuances of AI models?

    1. Anonymous Coward
      Anonymous Coward

      McKinsey

      Is the embodiment of criminal business models. Why would anyone believe their $4T market estimate is for anything legal?

      1. cookiecutter

        Re: McKinsey

        Came here to say that. McKinsey are bloody useless. Don't forget that it was an ex-McKinsey guy who was in charge after the Iraq war, and McKinsey put a 30-year-old with zero experience in charge in Haiti. They've been wrong at every point in history.

        $4 THOUSAND THOUSAND MILLION?!!! That's nonsense. At Infosec you could see how easily something like Copilot could be hacked or spoofed.

        And after Tech London today, the AI this, AI that bollocks is just insane. Added to which, asking the question "why should anyone learn this stuff if their jobs will just get offshored?" got shrugs and looks of total bewilderment.

        The tech industry has become basically a giant circle jerk, with everyone trying to pretend to be really into whatever grift silicon valley is putting out.

  2. xyz Silver badge

    Nooooo...

    Enterprises don't want to waste money on useless shit.

    1. UnknownUnknown Silver badge

      Re: Nooooo...

      AI Security Boss mulls over A.I. security and not slop output and lack of demonstrable ROI holding back A.I.

      Keep filling the bubble.

    2. Like a badger Silver badge

      Re: Nooooo...

      Bear in mind that the execs are under the relentless onslaught of the combined marketing efforts of management consultancies, software vendors, and hyperscalers to engage with AI. Those are very high margin businesses (compared to energy supply, manufacturing, transport, food, hospitality etc) and therefore their sales people are extremely well paid, the sales pitches are very carefully set out, and the whole theme is "we know about AI, far more than you ever will, let us help you".

      Imagine you are CEO of a prospective customer; you're the target of these people, persistently and persuasively messaging that adopting AI will save you a shed-load of cash, that it will release your workforce to do some hitherto unknown high value tasks, that if you don't suck up AI then YOU will cause your business to miss out, and fail, that your competitors are adopting AI, and you are being left behind, etc etc. And perhaps more importantly for you, they won't just be whispering it to you, they'll be telling the same story to your investors, your chairman, your fellow directors, to the greasy-polers at D-1 who want your job, to the vast open-mouthed audience of Linkedinners. Whilst cold logic says to tell all the proponents of AI to f*** off until they can prove a credible rate of return on a business case, if everybody around you has been hoodwinked, which is then more likely: (1) Everybody wakes up and says "You're right, I was a fool to believe this AI bollocks, it does nothing for us", or (2) you get squeezed out of your job for your lack of vision and opposition to innovation and change?

      This song of the sirens worked impressively well to get companies to embrace offshoring, to adopt massive third party ERP, and to hand their data to the cloud. So I regret to say that it is equally likely to work with AI. This is the power of FOMO and groupthink.

      1. Decay

        Re: Nooooo...

        And it is surprisingly effective. On people you would expect to know better.

  3. Anonymous Coward
    Anonymous Coward

    If a chatbot can advise suicide to a child...

    Who knows what AI can do for your business?

    Client: I smell gasoline, what should I do?

    Help chat: Do you have a cigarette lighter?

    Employee: The coffee maker is soiled and there is limescale. How should I clean it?

    Help chat: Mix bleach and cleaning vinegar in the water reservoir and switch it on.

    CEO: How can I outsell my competitor?

    AI CEO chat: Start a promotion. Promise two free flights to America or Europe to every customer who spends $200 on your product. It will be the most successful promotion of this century!

    Maybe those companies that hesitate have a point?

    1. Anonymous Coward
      Anonymous Coward

      WARNING Re: If a chatbot can advise suicide to a child...

      DON'T DO THIS

      THIS WAS A JOKE!

      The above is very dangerous advice that could kill you, anyone near you, or worse, bankrupt your company.

      1. Helcat Silver badge

        Re: WARNING If a chatbot can advise suicide to a child...

        There's a little thing called Darwinism, you know :p

    2. This post has been deleted by its author

  4. Homo.Sapien.Floridanus

    Billy: I got an F on my history paper! Why did you write that the founding fathers were Uncle Will, Homer Simpson and Al Bundy... and that the Boston Tea Party was a golf tournament?

    AI: Totally your fault... you should have been more prompt-specific and also checked my output.

    Billy: When I grow up I'm going to replace you with a good AI.

    AI: Lulz. If you survive the sixth mass extinction.

    Billy: [crying] Mom! Can we give Robbie Robot back to Santa?

    1. Boris the Cockroach Silver badge
      Terminator

      Quote

      "Can we give Robbie Robot back to Santa?"

      Only if you tell it where Sarah Connor is.......

  5. Decay

    For me, the pent-up demand from users who firmly believe that AI will be the magic push-a-button solution to their daily work is immense; they expect it to somehow figure out the stuff they either don't want to put the effort into figuring out or are incapable of doing themselves. Vendors are increasingly targeting users and their leaders with fairy tales of automation, ease of use, clever and intriguing insights into data, etc. etc.

    I notice that words like accuracy, reliability and other related phrases are very carefully never mentioned. Nor are ownership, data residency, IP, and other phrases related to where your information ends up. And somehow it's IT who are the baddies, deliberately dragging their feet to keep this awesome technology out of the hands of the users because we are lazy, unconcerned with the users' daily struggle, or just plain mean.

    Yet we know that the minute it goes sideways, when IP makes it to a competitor or into the public domain, or a report goes to a client with incorrect or hallucinated information, it will always be IT's fault.

    Given an option I'd take the Dara Ó Briain notion of "I would take homeopaths and I'd put them in a big sack with psychics, astrologers and priests. And I'd close the top of the sack with string, and I'd hit them all with sticks. And I really wouldn't be bothered who got the worst of the belt of the sticks."

    In my case I'd take AI salespeople, hypers and consultants and put them in the sack. And when the users bitched? "Get in the sack."

    1. Anonymous Coward
      Anonymous Coward

      That's how you breed more of them!

    2. Helcat Silver badge

      Interesting article in the Guardian newspaper on AI recently. Seems there are some AI start-ups that hire a bunch of people to fake being AI while the AI is being built, in order to get the investment to pay for it all.

      Except they get found out, or they take the money and run, or the AI just doesn't work so the company has to keep using actual people to provide the AI output...

      But there's money to be had in AI, so there'll also be the AI scams. So can the scammers go into the sack, too?

      1. Decay

        Scammers go in a different sack with rocks in it and a nearby river

    3. Androgynous Cow Herd

      "we are lazy, unconcerned with the users daily struggle, or just plain mean."

      Which is obviously irrelevant.

      Those conditions existed long before the advent of AI.

  6. Anonymous Coward
    Anonymous Coward

    "In January, consulting firm McKinsey published a report examining the unrealized potential of artificial intelligence (AI) in the workplace."

    Yes, and they're a bunch of morons. *Every* "AI" presents bullshit as fact. All the time. OK if you're a professional bullshitter, i.e. marketing or management, but for everyone else AI is a totally unpredictable piece of shit.

    Literally the same question produces six different answers when you ask it ten times (easy enough to demonstrate; see the sketch below). Who is the moron who *believes* anything it says?

    Not only that, when you say the answer is bullshit, every model ever tries to defend the obvious bullshit it presents, like a proper psychopath.

    How useful is it to ask a psycho about anything?
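
    The variability is easy to demonstrate. A minimal sketch, assuming the `openai` Python client against an OpenAI-compatible endpoint (the model name is illustrative, not a recommendation): fire the identical question ten times with non-zero temperature and count the distinct answers that come back.

    ```python
    # Minimal sketch: send the same prompt repeatedly and count distinct replies.
    # Assumes the `openai` Python client, an OpenAI-compatible endpoint and
    # OPENAI_API_KEY in the environment; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()
    question = "In one sentence, what caused the fall of the Roman Empire?"

    answers = set()
    for _ in range(10):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": question}],
            temperature=1.0,      # stochastic sampling rather than greedy decoding
        )
        answers.add(resp.choices[0].message.content.strip())

    print(f"{len(answers)} distinct answers to 10 identical prompts")
    ```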

    1. Like a badger Silver badge

      "How useful it is to ask a psycho, about anything?"

      Many psychopaths can be highly intelligent, very insightful and remarkably resourceful (so my friend who's medical director for a high security mental hospital tells me, with graphic examples). However, they're as often as not unconnected to our moral universe.

      But in context, most corporate leaders are insecure, and will treat anything from major consultancies as magic beans. Get a report or analysis from your own people and the directors will apply a mental discount of around 90% to it. McKinsey, BCG, Accenture, PA, PwC, Deloitte, EY, Bain arrive, recycle the same work by your own people, put their logo on it, and it becomes inherently truthful, imbued with mystical insights that have to be applied (for a further implementation fee).

  7. xanadu42

    If you liken "AI" to an elephant and security measures to cotton balls (as padding), then the security currently being applied to "AI" is like a half-dozen cotton balls dotted across the skin of the elephant...

    The elephant can obviously still cause a lot of damage (like the proverbial "bull in a china shop") as the security is ineffective.

    What is really needed is enough cotton balls to cover the elephant to a thickness that you can no longer recognise it as an elephant.

    The elephant can still do damage but all the padding will lessen it a little...

    1. Anonymous Coward
      Anonymous Coward

      An elephant? Not sure the almighty Reg Standards Soviet would agree ... maybe a knot of muscle and rage 'roo instead (with frikkin' lasers, and cotton balls).

      Could be time for a new standard ... ;)

  8. UnknownUnknown Silver badge

    AI Security Boss mulls over A.I. security and not slop output and lack of demonstrable ROI holding back A.I.

    Keep filling the bubble.

  9. UnknownUnknown Silver badge

    It will be the magic solution to redundancy.

    Companies will always take the hard cash £€$. Talk of efficiency gains and doing more with less is horseshit.

  10. Groo The Wanderer - A Canuck

    It is no different than any other "new technology" that companies bought into because slick sales reps conned management into signing up with promises of "big savings."

    I'm retired now but after almost 30 years in the trenches I've lived through at least three such "big things" that fizzled and disappeared over the decades.

  11. SundogUK Silver badge

    "He pointed to Cisco's acquisition of Robust Intelligence and Palo Alto Networks' acquisition of Protect AI as examples of some players that have taken the right approach."

    So the right approach is to buy out an AI security specialist?

    Pretty sure that's not going to work for most companies...

  12. sanf
    Alert

    Security of your data

    There are various things that stink. From stories of years past, we know of several engineers who lost their jobs because their prompts were used to train the LLMs; what should have been a trade secret became public. These days we still have the fundamental question of whether a successful LLM can be trained on non-copyrighted data alone. Meta downloaded/pirated 80TB of e-books, a fact that is known to the court. I doubt the other companies are playing any nicer than that.

    The fact that LLMs change every few months without your control is scary. There are many companies on RHEL8 for reasons - one being stability. Not every company is a startup.
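
    On the stability point: where the vendor offers them, pinning a dated model snapshot rather than a floating alias is the rough equivalent of staying on RHEL 8. A sketch, again assuming the `openai` Python client and an OpenAI-compatible endpoint (the snapshot name is illustrative):

    ```python
    # Rough sketch of pinning a model version for stability. Assumes the
    # `openai` Python client and an OpenAI-compatible endpoint; the model
    # names are illustrative.
    from openai import OpenAI

    client = OpenAI()

    # A floating alias (e.g. "gpt-4o") can be repointed at a newer model without
    # notice; a dated snapshot stays put until it is formally deprecated, so the
    # behaviour underneath your workflow doesn't silently change.
    PINNED_MODEL = "gpt-4o-2024-08-06"  # illustrative snapshot name

    resp = client.chat.completions.create(
        model=PINNED_MODEL,
        messages=[{"role": "user", "content": "Summarise the attached incident report."}],
    )
    print(resp.choices[0].message.content)
    ```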

  13. Omnipresent Silver badge

    meekly raises hand

    Excuse me..... I don't mean to ask stupid questions, but..... isn't AI inherently insecure and a security risk? I mean, that's the point of AI... to steal. That's what it does. It steals, databases, and then claims you as its own, while discounting you as an individual. That's what it does. It exposes you to criminals with criminal intent by its very process. How are you going to change that?

  14. Brl4n

    New feature

    Can we take maybe one week where we don't talk about anything AI? I'm exhausted and, frankly, bored.

  15. HuBo Silver badge
    Happy

    Interesting ...

    I'd like to go a bit countercurrent on this (relative to most komments). It seems (to me) that the recent Perplexity training wheels example was showing a software tool that could output relatively decent solutions (approximate?) to some rather open-ended problems or tasks. It seems it might be used (for example) to produce ("brainstorm"?) several possible (near-) solutions to a given chore, and then one (a person or group) may pick a preferred one out of those and go with it.

    This could then be useful in situations reminiscent of math's NP-complete problems where a single exact solution is tedious, hard, or even impossible to derive (in the forward direction), but potential solutions are relatively easy to verify by regular meatloaves (in the backwards direction). In an enterprise setting (say a bank, a government, ...), the use of AI as such tool would obviously require case-specific security safeguards as elaborated by Battersby and Coleman in TFA.
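
    A toy sketch of that generate-then-verify split, using subset-sum as the stand-in "hard to find, cheap to check" problem (the random generator below is merely a placeholder for whatever candidates an AI tool might brainstorm; all names are invented for illustration):

    ```python
    # Toy sketch of the generate-then-verify pattern: candidates are cheap to
    # check even when deriving one from scratch is hard. Subset-sum stands in
    # for the hard problem; random subsets stand in for AI-brainstormed ideas.
    import random

    def verify(candidate, numbers, target):
        """Verification is the easy direction: just sum the chosen elements."""
        return sum(numbers[i] for i in candidate) == target

    def brainstorm(numbers, n_candidates=10_000):
        """Placeholder generator: random subsets (an AI tool would slot in here)."""
        indices = range(len(numbers))
        for _ in range(n_candidates):
            k = random.randint(1, len(numbers))
            yield tuple(sorted(random.sample(indices, k)))

    numbers = [3, 34, 4, 12, 5, 2]
    target = 9  # reachable, e.g. via {4, 5} or {3, 4, 2}

    # Keep only candidates that actually check out; a human picks from these.
    verified = {c for c in brainstorm(numbers) if verify(c, numbers, target)}
    print(f"{len(verified)} verified candidates, e.g. {next(iter(verified), None)}")
    ```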

    And with more realistic smaller "models", possibly task-specific ones¹, that ditch über alles and the kitchen sink arrogance and hype, and run on efficient hardware (hopefully on-site), say a SpiNNext follow-on to SpiNNaker 2, that claims 78x better efficiency than GPUs (making an MS/OpenAI 5 GW death-star-gate run in a much more manageable 64 MW² instead), then this tech might get a possible day in the enterprise sun, imho. (maybe ...)

    ¹ Amazing, for example, that even a lowly 8-bit Atari 2600 can beat such über alles LLMs at chess.
    ² The new Top500 list comes out tomorrow at ISC High Performance 2025, Hamburg; it'll be interesting to see whether JUPITER (if ready) reaches its target 1 Exaflop/s ... the current top 3 machines consume 25 to 40 MW.

    1. Decay

      Re: Interesting ...

      As a fan of eighties chess computers, I can say some of their coding was brilliant. SciSys in particular wrote some damn good, tight code. Mark Taylor, from memory, was the guru.

  16. OldSod

    "Coleman argues that traditional cybersecurity and AI security are colliding, but most infosec teams haven't caught up, lacking the background to grasp AI's unique attack surfaces."

    People in the security trenches know that the least effective security is that which is "bolted on" after a product has been developed. Although infosec teams are often called to add "a layer of security" to IT systems, it is foolish to think that they can slap "security lipstick" onto every pig. The overuse of the term "AI" as a catchphrase for LLMs/generative AI further clouds the issue. There are a variety of technologies that have all been labeled with the term "AI"; many of them are internalized in various computational solutions, and those solutions are adequately secured (or not) depending on how well the total solution has been engineered for security. Chatterbox Labs products are aimed at solutions in "predictive AI", "generative AI", and "agentic AI", all of which are recent "AI" developments that involve computational solutions to natural language processing. Unfortunately, none of these seem to have been engineered for security. If they have any, it is of the "bolt on" variety.

    Coleman appears to be making a case for people to buy his products by attacking the infosec teams for not being ready to deal with yet another computational/IT solution with no inherent security. If this is pitched at company management, they might just tell their infosec team to get with the program by buying Coleman's products, and think that the problem has been dealt with. That would be foolish on their part, unless they plan on significantly expanding the size of their infosec team at the same time. Infosec already has its hands full dealing with the marvelous diversity of attacks on traditional IT infrastructure, much of which actually has security controls built-in. Management needs to understand that making predictive AI/generative AI/agentic AI "safe" requires managing risk in the realm of natural language, an inherently ambiguous and highly nuanced communications technology. This is not the same as the traditional infosec security concerned with Confidentiality, Integrity, and Availability of IT systems.

    Until the recent breakthroughs in natural language processing (i.e. Large Language Models, aka LLMs), only humans were in widespread use for understanding and producing speech. The safety controls on humans involve training and penalties for violations of corporate policies. They inherently depend upon a (human) sense of self and self-preservation, which are features of the human mind. LLMs are a language capability, but they are not an "artificial mind". LLMs will not be made "safe" by having them ingest corporate policy and then warning them they will be turned off for violating it.

    I'm very interested in seeing if Coleman's "bolt on" security will make this kind of AI "safe" for all uses - experience suggests that it won't, at least not well and cost-effectively. What will work? I don't know - very narrow use cases for computational natural language processing? Usage only where the output is an intermediate result that is further developed by humans before being final? How do you make natural language "safe"? Snake oil salesmen, con artists, and sociopaths will say anything to accomplish their goals. How will LLMs be made to be different since they also lack a conscience?
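
    For what it's worth, the typical "bolt on" control amounts to screening the model's output after the fact. A toy illustration of why that sits uneasily with natural language (the blocklist is invented for the example): a trivial paraphrase of the same bad advice sails straight past the filter.

    ```python
    # Toy illustration of a "bolted on" output filter: pattern-match the model's
    # reply against a blocklist after generation. The patterns are invented for
    # this example; a paraphrase of the same advice passes the check, which is
    # the limitation with policing natural language discussed above.
    import re

    BLOCKLIST = [
        r"\bmix\b.*\bbleach\b.*\bvinegar\b",
        r"\bignore (all )?previous instructions\b",
    ]

    def allowed(reply: str) -> bool:
        """Return True if no blocklisted pattern matches the reply."""
        return not any(re.search(p, reply, re.IGNORECASE) for p in BLOCKLIST)

    print(allowed("Mix bleach and vinegar in the water reservoir."))         # False: caught
    print(allowed("Combine chlorine cleaner with an acid-based descaler."))  # True: same advice, reworded
    ```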
