Kaseya CEO: Why AI adoption is below industry expectations

Adoption of generative AI for enterprise customers isn't taking off in the manner many in the industry expected – and there are major obstacles in the way, according to Rania Succar, recently appointed CEO at Kaseya. The bots and the tools are not very good... "I'm very bullish on the future of AI to reinvent enterprises …

  1. elsergiovolador Silver badge

    Agenthicc

    There’s a familiar pattern in AI puff pieces: things aren’t working, but that’s actually a good sign, because it means we’re “early.” Adoption is “nascent,” tools are “not very good,” and the solution - of course - is “change management.” The blame is subtly transferred from toolmakers to overstretched users who apparently just don’t want productivity hard enough.

    We’re told the issue is fragmented data, and the future lies in stitching it all together so AI can work its magic. Sounds great - until you realise there’s zero mention of access controls, tenant separation, or preventing AI from blurting out sensitive financial projections to a tier-1 support request. The moment you connect everything, you’re one ill-scoped prompt away from a GDPR disaster. But that part? Not in the keynote.

    It’s not just a technical issue. The trust boundary assumptions in these enterprise environments are decades old, often deeply flawed, and incompatible with the idea of one agent surfing across CRMs, inventory, and finance systems like it’s a summer internship. Before you can even dream of automation, you need a complete rethink of how permissions, context, and intent are handled - across departments and legal domains.

    And yet we’re served the same reheated narrative: bots will boost productivity, just as soon as we re-architect the entire data layer, restructure all workflows, retrain all staff, and ignore the small matter of access governance. Maybe in another 20 years, we’ll circle back and say “this AI thing really took off” - right after it leaks a boardroom memo into a customer ticket response.

    1. Anonymous Coward
      Anonymous Coward

      Re: Agenthicc

      You're giving thought-out, considered points to this problem.

      I'm over here going, "This shit is rubbish, and when I ask it how to upgrade this Java framework -- which thousands of people must have done, in standard, repetitive ways, with descriptive PRs and comments, given, say, the configuration file and a couple open files as context -- it generates broken XML and regurgitates READMEs. This is trash."

      To each their own, I guess. To me, generative AI just can't answer more than a boilerplate question -- and boilerplate, for me, is part of getting my mind in the game. It's not dull and tedious, it's quick and easy - far easier than extracting it to an external tool which is just going to type slightly faster (not massively - generation is slow), at a cost of me having to type the _generalized_ description of what I want (not especially less), and then type the fixes of the trash that's been output.

      Then I try to imagine generative AI trying to automate "this-new-product" which it has never even seen before.

      Why is "AI" suffering poor adoption? because it's garbage. If AGI comes about, people will be pushing against it with almost infinite force, given their "AI" experience is an economy in which trillions has been invested with essentially zero ROI, the earth's environment nearly shattered for its energy usage, teams being routed only to have to be re-hired when projects fall apart, and the results of "AI" itself resulting in little more that Giffy for article images, at a cost of constantly wondering if you're going to be sued for using the generated result.

      1. Tom66

        Re: Agenthicc

        LLM AIs are a pretty good replacement for Google and Stack Overflow when working as a developer -- but that's in part because both have become much, much worse in the last 5 years. They still hallucinate and give you the wrong answer from time to time, but then again it's not as if SO always has the right answer, so apply a bit of common sense and you'll be fine.

        I'm not worried about my job looking at the consistency of these models. Not just yet!

        1. breakfast Silver badge
          Holmes

          Re: Agenthicc

          I mean it is just the stuff that Google used to find and the stuff that is on SO being regurgitated by a statistical engine. It's not like it has access to any special secret data. It's just repeating all the stuff we posted around the place.

          One conclusion that we can derive from this is that it is basically the average of developer knowledge across the internet. So a worse than average developer will gain more benefit from it than a better than average developer.

          Which raises an interesting possibility that all the companies recruiting specifically for developers that gain massive benefits from AI are accidentally selecting for worse developers.

          1. Tom66

            Re: Agenthicc

            It's a bit more than that though - LLMs are hand-tuned in a very expensive process involving a lot of humans testing the model output and reinforcing it to behave in certain ways. This includes hand tuning on programming questions and problems. This makes them far better than the general internet, in my opinion.

            1. Anonymous Coward
              Anonymous Coward

              Re: Agenthicc

              This seems to contradict the example in the great-grandparent post. And manual training by humans? What does that even mean? I've never heard of that before.

              Most other posts say it tops out at the strength of Stack Overflow, and tbh the things on Stack Overflow are dumbed-down examples. If what you're trying to do is more complex than an example, I'd expect it to fail.

              We've had articles on TheReg that LLMs will produce code with bugs and security issues -- and if you ask it whether the result has security issues, it'll fess up and say yes, and maybe fix them. This would seem to contradict the trained-by-humans angle, and not just be annoying, but force you to ask similar questions again and again. My theory is that even combining them -- "Could you give me this, without security issues?" -- wouldn't get you that second response.

              Humans are kind of recursive - produce a response, then consider the response wrt the original context. LLMs don't seem to do that: they consider the input, and then more input given the prior input, but then they just start generating. So they don't really evaluate their own response, and it - like this post - ranks in as "stream of consciousness". Take your best response from it. Hopefully it makes sense. Anyway, what I was trying to get at is that the recursiveness allows us to reconsider the response for security issues, or whether it truly addresses the original issue - which, when LLMs suggest broken code, is clearly not happening.

              I think a lot of benefit would come when LLMs can run their own responses back through themselves 3 or 4 times with an additional prompt, "I was thinking, xxxxx, but is that correct?" This might allow them to link disparate events that are not immediately linked in the training data (statistical likelihood of tokens occurring near This Other token), but which *do* have a link if you combine the right things together. (The output mentions thought B and includes thought C, which occurs from your query, but your query and C don't usually occur together -- B and C do. On re-processing, you get your query + B + C and establish a new relationship within the LLM, leading to "discovery"?)
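
              To make that concrete, a minimal sketch (in Python) of the self-review loop I mean -- here ask_llm is a hypothetical stand-in for whatever completion API you have, not a real library call:

                  def ask_llm(prompt: str) -> str:
                      # Hypothetical placeholder -- wire this up to a real completion API.
                      raise NotImplementedError

                  def self_review(question: str, rounds: int = 3) -> str:
                      answer = ask_llm(question)
                      for _ in range(rounds):
                          # Feed the model's own answer back to it with a
                          # "but is that correct?" nudge, rather than generating once.
                          answer = ask_llm(
                              f"Question: {question}\n"
                              f"Draft answer: {answer}\n"
                              "I was thinking the above, but is that correct? "
                              "Point out errors or security issues, then revise."
                          )
                      return answer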

              1. Tom66

                Re: Agenthicc

                Look up reinforcement learning and human-in-the-loop training. The models are trained automatically and then fine-tuned with human feedback in an iterative process. The latter is very resource-intensive, involving hundreds of thousands of labour hours. Source data is also manually labelled, and safety constraints must be created and tested manually.

                Your last suggestion is essentially how "reasoning" models work. They solve some of the problems LLMs have, but can still get stuck in other ways. They also use considerably more resources, because of the need to run the input and output through the loop a few times. OpenAI and DeepSeek both offer reasoning models, as do others.

        2. druck Silver badge
          Flame

          Re: Agenthicc

          LLMs are vastly inferior to sites such as Stack Overflow. They produce an output with no context and no provenance. On the website you can see whether the question matches what you are asking, you can see the clarifications, and a number of suggested solutions along with comments on whether these are the best way of doing it, and any pitfalls and security issues. You get none of that insight if you accept whatever AI spits out.

    2. LybsterRoy Silver badge

      Re: Agenthicc

      I was reading it and thinking "all we have to do to make AI a success is to totally change everything about the way we currently work" - errrr NO THANKS.

      Anyone else remember the mantra of "change the way we do things to the way the package does" and just how often that was a total success?

      1. Anonymous Coward
        Anonymous Coward

        Re: Agenthicc

        Anyone else remember the mantra of "change the way we do things to the way the package does" and just how often that was a total success?

        When I retired from A Very Well Known UK University they were midway through a "core systems renewal programme" which aimed to do just that. Much of the software in use was hand-crafted and incredibly specialised, and the rest was very customised. At first the new policy worked well, though SAP was utter shit compared to what they had, but I gather that financial constraints mean the whole programme has now been abandoned, leaving some stuff working but ancient (the assignment handling system was written in-house in 2000) and much of the worst customised stuff in place (Oracle VOICE CRM).

        So, ramble over, the policy wasn't a bad idea, but unfortunately they chose to align processes with shit (SAP) and gave up before dealing with the really bad bits (Oracle).

        1. Doctor Syntax Silver badge

          Re: Agenthicc

          Is it possible that "the way the package does it" might be, if not deliberately shit, somewhat minimal in order to sell expensive customisation services?

          1. Anonymous Coward
            Anonymous Coward

            Re: Agenthicc

            In the case of VOICE I am sure that was the case. After a while one learned workarounds and cheats for the small bits of it one used most, but stray from the beaten path and the ROUS would get you in seconds.

        2. ChrisElvidge Silver badge

          Re: Agenthicc

          Upvoted for : "though SAP was utter shit compared to what they had"

    3. Anonymous Coward
      Anonymous Coward

      Re: Agenthicc

      … or LLM Generative AI is a hopeless pile of shit and can’t do better than SEO/Reporting Summarisation and generate deepfake porn….. and is a bigger lost-cause fallacy/money-pit than the Metaverse.

      It’s as sentient as Eliza/The Hobbit (1982 game engine).

  2. Mentat74
    Thumb Down

    So it's not just this so-called 'A.I' that is spouting bullshiat...

    All those CEOs are in on it too....

    Give us more money and more of your data and everything is going to work just fine ! promise !

  3. Rich 2 Silver badge

    Maybe….

    …not many people actually have a use-case for and/or don't want to use LLMs?

    Radical, I know, but just putting it out there

    (and I STILL haven’t heard of a compelling use case! Not a single one. All this talk about “AI” but nobody actually mentions an application. Literally nothing. Tumble weed)

    1. Decay

      Re: Maybe….

      The only use case I have heard is from users who think that AI is a magic button that will do the drudgery work for them and somehow free them up for the more glamorous portions of their job. Completely forgetting that the drudgery is usually the foundational layer of knowledge that allows you to do the exciting stuff with some level of competence. That the output of AI-generated "stuff" still needs to be validated, that the responsibility for that output is still theirs, and that if an automated tool can do 70% of your work with some level of efficiency and accuracy, what you are actually saying is: I want a tool that will make 70% of me and my peers redundant.

      Users seem to believe that if a tool can do that much of their job they will be suddenly freed up to do "value add". Which is not typically how the bean counters balance their books. To me it's a bit like long distance lorry drivers saying they can't wait for automated self driving trucks to arrive because that would mean they wouldn't have to do the boring driving stuff.

      Also, in all these puff pieces I never see security, Intellectual Property, accuracy, regulatory requirements on private information, or any of the other items that anyone in an operations or back office role has to consider day in, day out.

      If I was in Microsoft I'd be putting a package together to leverage their DLP, security, Azure ringfencing etc. to put forward a compelling argument that they, and they alone, can manage the risk and governance requirements for most organizations, and show that Copilot could address these quickly and easily. And given that most organizations already deal with Microsoft, there'd be no noise around onboarding a new vendor or SOC 2 Type II concerns; they're already on the books from an accounting and contracts perspective. So basically: here is the package, here is the cost, and limited additional effort is required on your part other than getting OpEx approval. I'd make a killing selling that to clients if I was in sales.

      1. Doctor Syntax Silver badge

        Re: Maybe….

        "Completing forgetting that the drudgery is usually the foundational layer of knowledge that allows you to do the exciting stuff with some level of competence."

        And also forgetting it's what you get paid for.

      2. Ian Johnston Silver badge

        Re: Maybe….

        To me it's a bit like long distance lorry drivers saying they can't wait for automated self driving trucks to arrive because that would mean they wouldn't have to do the boring driving stuff.

        Aeroplanes could easily fly themselves from A to B, and in many cases do just that, more or less. They still have humans up front - and highly trained, expensive humans at that - because if they weren't there then once in every thousand, or ten thousand, flights (and there are about 5,000 flights in the UK each day), many hundreds of people would die. These expensive humans also regularly do the boring bits because accident reports over the years, written as ever in blood, have shown that it's essential they do so in order to be able to do the exciting stuff they are paid for.

        So sure, automate lorry driving on the motorways. Boring stuff. Who's going to get in and out of depots, drive past schools and deal with road closures and accidents?

    2. Plest Silver badge

      Re: Maybe….

      There's one: tons of people on Dev.to and LinkedIn get to write puff pieces promoting the latest design of the "Emperor's New Clothes".

      I like the chat bots, they're actually quite useful, much like having a good set of screwdrivers is. I'm in my 50s, so I know the code I want to write, but it's way quicker to get some AI chatbot to knock up a quick template for me; that saves me about an hour of dicking about looking up my notes and scraps of code.

      Outside all this I've yet to find any better use for it all.

    3. Alumoi Silver badge

      Re: Maybe….

      The quality of porn has gone up. Just saying.

    4. Daniel Nebdal

      Re: Maybe….

      Since I'm literally supposed to be working on this right now: I'm in a bioinformatics group, and we're using an LLM to pre-filter thousands of PubMed results before doing manual curation of the rest, for a review paper. We run a small model locally (on a 3090 in a workstation), and it's fast *enough*. The results are not completely reproducible, but we've managed to nudge it towards mostly false positives - and that still saves us from skimming thousands of abstracts. While working on it we found someone else who has done much the same thing: https://www.medrxiv.org/content/10.1101/2025.06.13.25329541v1

      Of course, this is a single well defined task that doesn't have to touch any of our existing systems / data, so it's very different from the Enterprise AI products.
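
      A stripped-down sketch of this sort of pre-filter, assuming a local model served behind an Ollama-style HTTP endpoint -- the endpoint, model name and prompt here are illustrative stand-ins, not our actual setup:

          import requests

          OLLAMA_URL = "http://localhost:11434/api/generate"  # assumes a local Ollama server
          MODEL = "llama3.1:8b"  # stand-in for whichever small model fits the GPU

          PROMPT = (
              "You are screening abstracts for a review on <topic>.\n"
              "Answer with exactly one word: INCLUDE or EXCLUDE.\n\n"
              "Abstract: {abstract}"
          )

          def keep_abstract(abstract: str) -> bool:
              resp = requests.post(OLLAMA_URL, json={
                  "model": MODEL,
                  "prompt": PROMPT.format(abstract=abstract),
                  "stream": False,
                  "options": {"temperature": 0},  # damp run-to-run variation
              }, timeout=120)
              answer = resp.json()["response"].strip().upper()
              # Anything that isn't a clear EXCLUDE is kept, so the model's
              # mistakes skew towards false positives for manual curation.
              return not answer.startswith("EXCLUDE")

          abstracts = [...]  # thousands of PubMed abstracts, loaded elsewhere
          shortlist = [a for a in abstracts if keep_abstract(a)]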

    5. Tom66

      Re: Maybe….

      LLMs are useful as an alternative to Google and Stack Overflow, and quite good there. Outside of that limited window, I've not found a use.

      1. Ian Johnston Silver badge

        Re: Maybe….

        LLMs are useful as an alternative to Google ...

        I asked Google for details of the next Cumbria Recorder Society Playing Day. Its AI overview told me that there would be one six months previously on a date which didn't exist, in a venue which didn't exist, followed by a concert given in an imaginary church by an equally imaginary ensemble.

        1. Doctor Syntax Silver badge

          Re: Maybe….

          Google search very seldom returned a null result; it would almost always find something matching some word in the search term, however irrelevant. Not returning null for something about which it has no data seems to be an essential Google requirement. It's not surprising that a Google LLM would give fictitious results.

          1. Ian Johnston Silver badge

            Re: Maybe….

            I should have said that the top non-AI Google search result was a link to the Playing Day I was interested in ...

        2. Tom66

          Re: Maybe….

          But ask an LLM why I am getting this C++ error...

          > error: designator order for field 'struct_name::prd' does not match declaration order in 'const struct_name_t'

          you will get no useful results on Google, the search on Stack Overflow will find nothing, but ChatGPT solved it immediately and I was on my way.

          I have hundreds of cases like this. I would say 19 times out of 20, I get the right answer. Massive boost to productivity for me. You have to be smart enough to ignore the bullshit answers. But you get those with the internet too, it's just a fact of life.

          Do not use an LLM for information that changes over time or is based on recent data, like your example. Clearly, they need to get better at saying "I don't know". Also, Google's LLMs are far behind the competition.

  4. Anonymous Coward
    Anonymous Coward

    Because it's not very useful. Jesus Christ on a cross, it's not that complicated.

    1. Omnipresent Silver badge

      It's so difficult for computer nerds and criminals alike:

      how about... my iphone keeps locking me out and taking pictures of my pockets while sending gibberish texts to my significant other.

      Oh yeah, and my pc decided to update last night because it hasn't been connected to the wifi in more than a year!

  5. Tron Silver badge

    So basically...

    AI works very badly, customers hate it, it is a huge legal and security risk requiring online access to your internal networks, accessing your data and sending it who knows where, it's an extra expense (hardware, software, subs, energy) with no ROI, and we don't trust the companies pushing it...

    Why oh why isn't it selling like hot cakes?

    1. spacecadet66

      Re: So basically...

      "No. No. It's the children who are wrong."

  6. Anonymous Coward
    Anonymous Coward

    AI is just the latest IT Hype

    We have seen it before. Adopt some unproven shite, promote the hell out of it, do an IPO and you exit stage left with a billion $$$.

    Rinse and repeat.

    Yes, I will continue to ignore 99.99% of it.

    1. Plest Silver badge

      Re: AI is just the latest IT Hype

      This time it feels slightly different: this time not everyone is bleating on about the latest tech fad, and most techies are in agreement that it's a waste of time.

      1. LybsterRoy Silver badge

        Re: AI is just the latest IT Hype

        -- most techies are in agreement that it's a waste of time. --

        Unfortunately the majority of the public are being conditioned into thinking it's great.

        1. Tron Silver badge

          Re: AI is just the latest IT Hype

          They are targeting governments this time, as politicians are typically thick, corrupt or both.

          Data centres as essential for the future of a nation etc, like having the most aircraft carriers, or nukes or whatever. Con politicians and you can funnel unlimited amounts of public money directly into private pockets without worrying about the general public or IT folks or planning permission. You get first dibs on the nation's energy supplies and water too.

          1. ChrisElvidge Silver badge

            Re: AI is just the latest IT Hype

            Or HS2?

    2. Scotthva5

      Re: AI is just the latest IT Hype

      Similar to the mid-2010s "blockchain will solve everything" hype:

      "Step right up folks and get yer blockchain! Amazing new technology that will change the way you think, act and feel! Guaranteed to cut belly fat and make today's youth listen to their elders! All for the low, low price of (mumble, mumble), to be determined."

      Déjà vu has entered the chat

  7. Anonymous Coward
    Anonymous Coward

    I have yet to find any, ANY AI "solution" that is actually any use at all. 99.95% is some form of Clippy-on-steroids enshittification. The other 0.05% is, at best, "meh"

  8. Anonymous Coward
    Anonymous Coward

    Of course every company wants customers to buy their latest shiny object regardless of how useless it is. But I can't recall an example of a technology being absolutely pile-driven down everyone's unwilling throat like this excrement.

    1. Anonymous Coward
      Anonymous Coward

      Google Chrome, Office 365, Software as a service, everything must be online, 2FA with your phone as both factors....

  9. Anonymous Coward
    Anonymous Coward

    So you know that study finding that companies that talk about DEI were a lot less "Diverse" than companies that didn't make a big, splashy deal about it?

    I think AI is like that. I worked on Expert Systems and Case-Based Reasoning tools back in the early 1990s. The products were decent but ... klunky. And S-L-O-W. Then there was the AI winter. I took that stuff off my CV -- no one cared. During that time cell phones were enhanced with NLP; Siri, Alexa, and the OK Google thingy were born. SatNav ("GPS apps" we call them here) learned how to give us directions. That was all AI, but no one called it that.

    Then AI gained an article. "We built *an* AI to...".

    Then ChatGPT burst on the scene, and AI became generative (whatever that means), and agentic, ...

    Here's a use-case: I wanted to find out the name of the painting behind my boss when he forgot to blur his background on a Teams call -- and oddly apologized for it: "Don't read anything into this painting, it's just my wife's favorite..." Hmmm. Of course I was curious. I fed the image into ChatGPT. Nada. Copilot. Nothing. Plain old Google image search (*before* Google explicitly added an "AI mode") -- BOOM: Tower of Babel by Pieter Bruegel. It came back within seconds! That's AI, folks. I for one have wanted a "visual grep" for about 40 years. But it wasn't called AI. And the supposed "AI-powered tools" couldn't do it.

    So is there mucho, mucho hype? Sure. Can it be very powerful? Absolutely!

    1. LybsterRoy Silver badge

      Thanks for letting us know what the picture was, I'd have worried about it all day. Now I've seen it I'm worrying about your boss's wife <G>

      1. mcswell

        You'd better worry about me. I have a bunch of old pictures (copies of paintings) of the tower of Babel in my hall, including this very one by Bruegel. The pictures used to be hanging in my office, before I retired.

        Oh, yeah, I'd better add this: I'm a linguist.

    2. Doctor Syntax Silver badge

      Did SatNav "learn" to give us directions are do they just follow some graph theory problem, the sort of things mathematicians have been working on since Euler, if not before? Intelligence would not have to be arm-wrestled away from taking a busy motorway nor would its preferred route to get to the motorway be joining a notoriously overcrowded A road via a junction with appalling sight-lines.

      Actually, SatNav has to do three quite complicated things of which the routing is only one. It also has to handle voice recognition and speech synthesis. I'm not sure either of the latter is handled intelligently.

      For instance I set a country house hotel as a destination without adding "Hotel". Voice recognition completely failed when I gave the command to go there with "Hotel" added. Intelligence would have recognised the likely destination and asked for confirmation. Nor would intelligence offer to navigate to the centre of the city when given the name of a small village as a destination.

      Speech synthesis seems to handle place names better than road names. For instance "Denby Dale" is pronounced sensibly as a place name, but "Denby Dale Road" comes out as "Den bid ale road". I get the impression that the phrase is being concatenated into a stream of sounds and fixed rules then applied to split the stream up again. Intelligence would not do that.

      1. mcswell

        I don't know why that road name is pronounced differently from the place name, but I assure you the text-to-speech system doesn't use "rules". Most likely it uses a neural net trained on lots of English words, including names. And yes, people often mispronounce things, too. My wife corrected my pronunciation of "imprimatur" (she was right).

    3. myhandler

      What world are we in where the guy feels he must apologise for having a Breughel on the wall?

      The moral of the piece being a jest at the two legged ape's vanity.

      Surely you and your boss don't work for Musk?

  10. Grunchy Silver badge

    A.I. has no “emergent intelligence”

    “Artificial” intelligence isn’t “real” intelligence; in fact it isn’t intelligence at all. EVERYTHING that LLMs have to say originates directly from the training material. Train an LLM on nothing but propaganda and it’ll produce nothing but propaganda. Train it on misinformation and it’ll spew more misinformation. The LLM is intelligent exactly like a parrot is intelligent: it can say comprehensible words and phrases, but it has no idea what anything means. Its entire raison d’être lies no further than “what’s the next word.”
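
    The "what's the next word" mechanism can be shown with a toy sketch -- a bigram table built from a handful of words, which is the same prediction game at a vastly smaller scale (real LLMs use huge neural networks over tokens, but the objective is still "guess the next one"):

        import random
        from collections import defaultdict

        corpus = "the parrot says the words the parrot heard".split()

        # The entire "model" is a table of which word follows which.
        follows = defaultdict(list)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev].append(nxt)

        def generate(seed: str, length: int = 6) -> str:
            out = [seed]
            for _ in range(length):
                candidates = follows.get(out[-1])
                if not candidates:
                    break
                out.append(random.choice(candidates))  # "what's the next word"
            return " ".join(out)

        print(generate("the"))  # e.g. "the parrot says the words the parrot"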

    1. mcswell

      Re: A.I. has no “emergent intelligence”

      I see a lot of that in social media, especially on the right. (It probably happens on the left, too, but I confess I'm biased. It certainly happened on the left before the last US election, when the left was reasonably certain who would win...)

  11. Anonymous Coward
    Anonymous Coward

    Hype?

    Funny that. New thing slingers create tools and spend a fortune on marketing and 'influencers' to tell everyone that the new thing is the best thing, and if they aren't doing the new thing they are stupid and smell, and if the new thing doesn't live up to the hype it's because they didn't buy enough of the new thing, and if only they spend enough on the new thing it'll be everything they said.

    Almost like we've seen it all before, 20x.

    AI has its uses, and I'm not going to diss it for the sake of it. I've used it and found it useful, but in a small way. It hasn't removed my workload at all; it has allowed me to do a little more, but this is more like some extra glitter on what I was doing anyway.

    Big changes? Just like any other transformational bit of IT, it will run into the same old problems. Legacy systems won't play ball; everyone has been through 100 rounds of efficiency savings, so there is no free resource to introduce an AI system that might increase efficiency; compliance lags in industries that need it; legislation is a movable feast.

    There is no change without work, energy and money. AI is not an exception to that rule.

  12. Pascal Monett Silver badge
    Windows

    "It took two decades for the cloud to mature"

    Oh, because you call that thing that regularly falls on its face "mature"?

    1. breakfast Silver badge
      Windows

      Re: "It took two decades for the cloud to mature"

      Beyond a certain age one's balance does tend to get worse...

  13. Anonymous Coward
    Anonymous Coward

    Fusion is hard

    'AI is only powerful when the data is connected'

    Even assuming 'AI' would work for the given task, within a set of criteria that make it worthwhile, this sentence is still wrong.

    Information fusion is a hard problem, irrespective of the task. It just so happens that humans are quite good at it, using heuristics (e.g. experience).

    Aggregation risks destroying information; fusion, when done wrong, is combinatoric aggregation.

    Combinatoric problems are not tackled well by the current 'AI' tools; they're not designed to learn efficiently, let alone perform efficiently.

    Businesses, at least small ones in a free market, try high-risk, low-return, high-cost solutions at their own peril.

  14. Doctor Syntax Silver badge

    "industry expectations"

    Perhaps "hope" would have been a better word. Increasingly frantic hope as the money gets burnt.

  15. cookiecutter

    because it's a grift?

    No one wants it, and it's fundamentally insecure. It doesn't do anything that people actually want in an office. It's actually going to give you more work to do as you fix its mistakes.

    Allowing devs & data guys & essentially anyone to start running low-code bots downloaded from Hugging Face and GitHub will supercharge the ransomware environment?

    1. Anonymous Coward
      Anonymous Coward

      Re: because it's a grift?

      The board want it because they've been sold the idea that an AI subscription will let them pay off half their staff and still hit deadlines and maintain their MVP, and therefore hit the targets set by shareholders and get their bonus. That the idea is wrong and the business might well crash and burn is a problem for another day.

      Worst case is probably that they'll have to rehire the devs they paid off, but the expectation will be that they'll be desperate, can be sold the lie that they can be replaced again by AI, and will only get 30% of their previous remuneration.

      Win-win.

      1. Doctor Syntax Silver badge

        Re: because it's a grift?

        Pay off half their staff? More likely hope that a return to office mandate will leave half of them to quit without having to be paid off.

        1. ChrisElvidge Silver badge

          Re: because it's a grift?

          BT to replace 42% of staff with AI. Good luck getting AI to solve line problems.

          But it will increase profits and hence bonuses - in the short term.

          https://www.theregister.com/2025/06/16/bt_chief_says_ai_could_cut_more_staff/

  16. Ian Johnston Silver badge

    Adoption of generative AI for enterprise customers isn't taking off in the manner many in the industry expected – and there are major obstacles in the way

    The biggest one of which is that generative "AI" is complete crap, incapable of doing anything either usefully or accurately, let alone both. Note that the only examples of success given aren't "AI" at all, but simply software.

  17. SHLinux

    I sell AI, but it is the fault of the customers it does not sell

    Another article talking about AI sales not being up to expectations. After all, AI is just another hype bubble in the IT industry, of which we have seen many.

    I am not saying there are no use cases where AI can significantly deliver an advantage, but it is also not the holy grail that the tech companies selling AI solutions would like their potential customers to believe. Let alone 'free' AI tools that just want your data to grow their model and be better able to target you with advertisements to make money.

    My 2 cents: think before you begin. What do you expect? Does the tool guarantee the security/safety of your data?

    Then implement it, and evaluate the tool after a few months. Does it indeed help out and save money, or does it cost money?

    If it saves money, great! If it costs money, drop it.

  18. Anonymous Coward
    Anonymous Coward

    so another

    Well, at least that's a heads-up on another company to recommend avoiding.

    CEOs really should keep their gobs shut about shit they don't understand.

    1. spacecadet66

      Re: so another

      Then they'd be silent basically all the time. Which sounds great to me, a guy who, unlike them, works for a living.
