GenAI spending bubble? Definitely 'maybe' says ServiceNow

ServiceNow is trying to assure investors that payback for enterprise GenAI investment is coming, but it may not be soon and biz customers shouldn't expect to get huge returns "tomorrow". Late last week at the Deutsche Bank Technology Conference, Brad Zelnick, head of Software Equity Research at the bank, asked ServiceNow chief …

  1. Pirate Peter

    GenAI is about to enter the dreaded "trough of disillusionment."

    "In Gartner's latest Hype Cycle for Emerging Technologies, the global consultancy noted that GenAI is about to enter the dreaded 'trough of disillusionment.'"

    I think for many who are not still high on the GenAI koolaid it's at the bottom of the trough and out the plughole.

    I have yet to see any benefit of GenAI apart from poorly written text or code.

    1. Irongut Silver badge

      Re: GenAI is about to enter the dreaded "trough of disillusionment."

      I asked a tech provider's chat bot what the units of an object property were the other day, it slowly replied "the units are floating point type" which is complete drivel.

      I knew the property was a floating point type, what I needed to know was the units but it had no idea.

      (I only asked the bot because I figured it would be quicker than a support ticket, in the end I used trial and error to determine the units.)

    2. werdsmith Silver badge

      Re: GenAI is about to enter the dreaded "trough of disillusionment."

      I laugh at the way Idris Elba so sincerely promotes generative AI, most likely without any clue what he's talking about (he is an actor, after all). But the other clueless ones will be those taken in by images of a creative in a modern work studio hammering a fist onto a tablet screen to instantly produce the new world-beating product.

      1. Snowy Silver badge
        Facepalm

        Re: GenAI is about to enter the dreaded "trough of disillusionment."

        He will not be happy when generative AI goes and generates him out of a job!

    3. Groo The Wanderer

      Re: GenAI is about to enter the dreaded "trough of disillusionment."

      Not sure what Nvidia does with DLSS or how it is implemented, but that might be a valid use case. I've also noticed significant improvement in text-to-speech voicing over the past 2-3 years.

    4. John Miles

      Re: i have yet to see any benefit of Gen AI

      I've found the image generation stuff useful - give it some text and it generates some images to liven up the training presentation I was giving. The training was a quick introduction to computer security, mainly related to programming, for non-programmers who might write scripts/small apps in their job, and the last page was on risks related to AI generation and what information you can give away without realising.

      Image generation is a good use example - because even if you have no art skills, you can look at an image and decide if it looks good/will work where you want, and it won't matter if it's not perfect, especially for something used a few times and retired. Now if I was doing something more commercial I'd probably want a more consistent style than I could get with the tools, so I'd consider using an artist with a much better eye for such things - still much easier than searching for clip-art/free-use images.

      Code generation doesn't seem a good use - because you need some coding skills to work out whether it will work and fit - so for anything more than a few lines it's probably easier to write yourself.

      I do wonder if we should be training the AI to be the program rather than write it.

      1. Zippy´s Sausage Factory

        Re: i have yet to see any benefit of Gen AI

        "I do wonder if we should be training the AI to be the program rather than write it."

        A program that writes programs has been something management have been drooling over since the 80s... I remember "The Last One" (as in 'the last one you'll ever buy') being hyped when I was a kid... it didn't live up to the hype, sadly.

      2. O'Reg Inalsin

        Re: i have yet to see any benefit of Gen AI

        I'm not in the business of creating images for non-programmers so I shouldn't comment, but I will - how do those AI images actually get non-programmers to act securely? Do you mean diagrams? Are your pupils so bored you need images to get their attention?

        Anyway, you misunderstand how the LLM works well with code generation. Let me describe one aspect - human programming skills are applicable to any programming language or supporting libraries. However, to actually write in any particular language, that language's syntax needs to be correct, and for any particular supporting libraries, obviously the correct routine names need to be imported and called. There is a lot of drudge work involved in that when dealing with a newly or rarely used language or library, involving a lot of reference and example lookups - AI helps there. AI will make mistakes of course, but because constant testing is part of programming those are picked up quickly. AI is not good at seeing the bigger picture, but it is fast at detailed syntax work and reference lookup.

        ... we should be training the AI to be the program rather than write it - that's a vague, marketing-like statement, something akin to "be all you can be". Applications of extremely precise numeric programming abound in the real world - FFTs and matrix multiplications, etc. In fact, the computation of current LLMs depends on such numeric processing to work at all. So there is an inherent contradiction in your assertion.

        Humans, on the other hand, based on RNA/DNA (another "program", also known as "life"), achieve a remarkable level of intelligence, far surpassing LLMs. They are the closest thing we have to "being the program", and they are remarkably economical, with brains that use only about 20W and require no power connection. Not perfect, of course, and prone to wasteful fads, but still the best we have.
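        The precision point above can be seen in a few lines - a sketch of my own (numpy's FFT is my choice of illustration, not the poster's): a forward/inverse FFT round trip reproduces its input to machine precision, and that exactness is mechanically checkable, which is exactly what LLM output is not.

        ```python
        import numpy as np

        # Deterministic input so the check is reproducible.
        x = np.random.default_rng(0).standard_normal(1024)

        # FFT followed by inverse FFT; the imaginary residue is ~1e-16.
        roundtrip = np.fft.ifft(np.fft.fft(x)).real

        # The round trip matches the input to near machine precision.
        assert np.allclose(x, roundtrip, atol=1e-12)
        ```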

        1. John Miles

          Re: i have yet to see any benefit of Gen AI

          > how do those AI images actually get non-programmers to act securely

          On their own they don't; they have to be part of the communication. In this case they were helping set the scene and make it more relatable - e.g. "meet George, a recent graduate in his first data engineering role" - and having a picture of George working on a computer sets a scene the people in the room can relate to, either because they are/were someone like George or they are managing someone like him. Now I can introduce a couple of things George needs to worry about. Will they remember them? Probably not - but this was an introduction to make them aware of their unknowns, and other training was to follow.

          > but it is fast at detailed syntax work and reference lookup. ...

          Improvements in autocomplete and the showing of expected parameters, the web replacing books, Stack Overflow replacing expert-sex-change (sorry, Experts Exchange - but it rarely seemed to have an answer matching my issues, so I'm not sure about the experts) pretty much mean it's a long time since I couldn't quickly find what I want - and refactoring tools in JetBrains IDEs have quickly taught me changes in the languages I use. So I'm not sure AI will help more than non-AI does.

          >> that's a vague, marketing-like statement, something akin

          Back in the Rapid Application Development (RAD) world you'd have form designers. Now imagine doing similar with AI: instead of defining code to work out what to do when the user presses a button or enters text, you could show it/explain to it what should happen, and that could potentially make the program more tolerant of humans entering the same thing many different ways. Do you want your pay run programmed that way? No - but for replacing some low-code data-entry solutions, maybe.

      3. druck Silver badge

        Re: i have yet to see any benefit of Gen AI

        I hate AI images; there is always something really annoyingly wrong with them, from extra limbs to nonsense text. LinkedIn was infested with them a few months ago, but they have decreased even in that Microsoft-owned pit of AI hell. Obviously others have also reacted badly to the people and companies that use them.

    5. Anonymous Coward
      Anonymous Coward

      Re: GenAI is about to enter the dreaded "trough of disillusionment."

      Helped me get a First for my MA so all good thanks.

    6. Ian Johnston Silver badge

      Re: GenAI is about to enter the dreaded "trough of disillusionment."

      I occasionally look at Google search's AI summary, just for fun. Every time I have done so it has been clearly wrong.

    7. Snowy Silver badge
      Coat

      Re: GenAI is about to enter the dreaded "trough of disillusionment."

      It is working out quite well for Nvidia.

  2. cyberdemon Silver badge
    Devil

    > it's been hard to get customers to commit to spend offsets and the savings to justify the ROI case and the upfront investments around generative AI." Secondly, the tech is presented as the "Holy Grail"

    You chose.. Poorly.

    1. MonkeyJuice Bronze badge
      Pint

      Comment of the month arrived early this time around.

  3. Pascal Monett Silver badge

    "this is something that's going to be long-term"

    Of course the CFO of a company hyping AI is going to say that. He's got investors to think of. He's the chief bookkeeper, he doesn't know the technology, he just sees the revenue.

    The truth will happen soon enough, and then we'll see him bail with a golden parachute, leaving the company, and the hype, in the dust.

  4. Filippo Silver badge

    I'd be very wary of anyone telling me that the ROI on AI is going to be long-term.

    There are fairly strong signs that LLMs are currently as good as they're going to get, or nearly so. Making them bigger and bigger is showing diminishing returns as hallucinations refuse to go away. The amount of available training data is not really growing very much, and may actually shrink by a lot depending on how lawsuits go. The show has been going on for nearly two years, which is quite a lot of time for a software project to show its worth.

    So, what exactly has to happen for this ROI to show up, that couldn't happen in the last year or so?

    If you told me that the ROI on building a space rocket company was long-term, I'd get it, because after just a few years the company is not going to have usable rockets yet. They could show me a clear plan that, if things go reasonably right, will result in working rockets and therefore money.

    What is the corresponding problem for profitable AI, that couldn't be done in two years but could be done long-term?

    'cause if it is "make 'em bigger and hope really hard that our problems go away, even if there's nothing in the underlying science that predicts this", it's not a plan I would put money on.

    1. Falmari Silver badge

      @Filippo "So, what exactly has to happen for this ROI to show up, that couldn't happen in the last year or so?"

      Good question; just don't expect an answer anytime soon. No one knows what has to happen, least of all AI providers such as ServiceNow. But rest assured, investors and customers: while ServiceNow does not know what has to happen for there to be an ROI, they know it will happen sometime in the future.

      The problem is not that ROI on AI is going to be long-term. The problem is that AI providers are unable to offer a business plan that shows an ROI over any term (short or long), just vague promises.

    2. FeepingCreature

      Don't "hallucinations" go down every time they make 'em bigger? That's what I recall seeing.

      Less loss = more integrated model = fewer free variables.

      Anyway, it's a training issue. The models do know they're talking nonsense, but they can't back out because they've already "promised" an answer. STaR will fix it by allowing the model to shift more thinking before the answer.

      1. Anonymous Coward
        Anonymous Coward

        Untitled.txt

        @feeping

        For your own good, spend some time understanding how AI works. Almost nothing of what you wrote applies to the AI model, and it is the opposite of the approach we are taking with SI.

        These LLM models know nothing. They are calculators. Any brilliance or beauty is by chance and is only made so by our observation.

        The Reg and us commenturds had AI in perspective throughout most of this recent bubble. It is nice to be joined by so many of you of late. I preferred the days of old, oh, back in 2016, when even getting the model to run was cause for celebration. Hell, the model was the easy bit. Happy dayz with Denis and chums.

        We pointed out that LLMs have reached a technological dead end and have shown near rock-solid projections - even shown the likeliest way forward, with real-time AI models being the only solution.

        In fact, a lengthy post (ahem) above, which is a bit annoying, does bring another point to the debate: running out of data and the switch to synthetic data (he didn't say the second bit; I added that for his benefit).

        “Two choices”, the Devil said, “Synthetic data on existing snapshot models … or

        Sign here and get real-time models.”

        Sweet Jesus come by me.

      2. Filippo Silver badge

        >The models do know they're talking nonsense, but they can't back out because they've already "promised" an answer.

        That sentence may or may not be true. We don't know. Worse, that sentence is poorly defined. We have no tight definition of what "know" means in that context, or "promised" or "back out"; none of these terms feature in the science of how LLMs work. Because of this, not only do we not know whether that sentence is true, it's not even clear whether we can know.

        Compare this with the rocketry example from my OP. If the rocket company that promised a long-term ROI has a rocket explode on the pad, they might not know right away why it exploded. However, they do have exact definitions of what should have happened and what happened instead, and that provides a path towards understanding exactly what went wrong, and that understanding will provide a path towards preventing it from happening again. There will be tensile strengths to calculate, pressures to adjust, temperatures to measure. All of those things are extremely well-defined. Eventually, you'll find which numbers were wrong.

        For the LLM, there are no numbers. You can't say "okay, this is a grade 11,362 hallucination, which appeared because this signal was 6,11 instead of 6,14". There are model weights, but they are utterly impossible to interpret in a causal sense. All we have is fuzzy pseudo-psychological terms; we don't even know whether they describe what's going on correctly, and even if we did, they provide no hard guidance on how to fix it. So the model "knows it's talking nonsense but can't back out because of a promise" - right, but even pretending that we know exactly what that means, is there anything in there that tells me how to adjust the model weights so that it doesn't happen again? No, there isn't. There is no hard-science path towards fixing the problem.

        Basically, AI investors right now believe they are spending money to solve engineering problems, but they really aren't. I'm a strong supporter of investing in pure research, but one really ought to know what they're doing.

        1. FeepingCreature

          I definitely approve of "know what we're doing" type research, but in the absence of theory we *can* do experiments. And I don't have a link, but I do recall "hallucinations" going down with model size being experimentally confirmed. Similarly, you can experimentally test that mistakes go down if you just let the model talk about the problem before committing to an answer (the much-referenced "chain of thought" technique).

          Historically, experiments predate theory in most fields, including physics.
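          The "let the model talk about the problem first" technique is simple enough to sketch. This is my own minimal illustration, not any vendor's API: the prompts and the idea of a pluggable model call are hypothetical stand-ins, and the only real content is the shape of the two prompts.

          ```python
          def build_prompts(question: str) -> tuple[str, str]:
              """Build a direct prompt and a chain-of-thought prompt
              for the same question. Either string would be sent to
              whatever LLM API you happen to use (stubbed out here)."""
              direct = f"{question}\nAnswer with only the final number."
              chain_of_thought = (
                  f"{question}\n"
                  "Work through the problem step by step, "
                  "then state the final answer on the last line."
              )
              return direct, chain_of_thought

          direct, cot = build_prompts(
              "A bat and a ball cost $1.10 in total. The bat costs "
              "$1.00 more than the ball. How much does the ball cost?"
          )
          # The CoT variant is what lets the model reason before
          # committing to an answer - the experimentally observed win.
          ```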

          1. Groo The Wanderer

            Functioning brain cells in the "researchers" would help a lot in light of the constant hype machine bullshit going around...

    3. O'Reg Inalsin

      Less hype, more help.

      When the ROI is long term then NOW is NOT the time for end users to invest heavily in end applications. That's always been true, and it is true now.

      Conversely, it is a good time for makers to invest in R&D. Less time preaching and marketing, and more time and effort supporting customers who agree to joint R&D, using that information to improve the product in real time. Also, please, apply this to manufacturing, and stop focusing on social media and advertising, which themselves create nothing.

  5. Korev Silver badge
    Terminator

    Can it automate "Please close this incident and raise this as a request"?

    1. Anonymous Coward
      Anonymous Coward

      Please collect the internet at your nearest Argos.

      (bodylikeagod)

    2. StewartWhite

      A "real" GenAI response

      Now if it was really "intelligent" it would also track when the associated request is made and reply with "Please close this request and raise this as an incident".

      1. Korev Silver badge
        Trollface

        Re: A "real" GenAI response

        Are we colleagues?

  6. drand

    C-suite bollocks

    Generative AI is a toy. Resist when the idiots try to foist it upon you.

    AI investment & company valuations are in a bubble. Invest in it and make a decent gain, but get out before it all turns to shit.

  7. anonymous boring coward Silver badge

    Why not ask A"I" if AI is a good investment?

    Oh, that's right, A"I" doesn't actually have any intelligence.

    1. TIM_W
      Big Brother

      AI says No

      I asked our in-house AI if it was a good investment and it basically said "It depends" which as we all know is vendor speak for "No".

  8. Plest Silver badge

    How about ServiceNow stop dicking around with AI and sort out their god-awful interface first!

  9. Ace2 Silver badge

    "Is there a bubble right now on spend?"

    What’s that quote… something like, “It’s very difficult to get a person to understand a thing when their paycheck depends on them not understanding that thing”?

  10. Zippy´s Sausage Factory

    ServiceNow is trying to assure investors that payback for enterprise GenAI investment is coming, but it may not be soon and biz customers shouldn't expect to get huge returns "tomorrow"

    Given legal compliance, I expect contracts are going to start including a "no AI usage" clause in their fulfilment soon enough. It's already happening in niche industries, but I have a feeling it'll creep out from there.

    "Is there a bubble right now on spend? Maybe," said the ServiceNow CFO "But this is something that's going to be long-term. And so we've got to think about what does that mean... every industry is going to shift because of AI".

    Once the executives and the boards realise that if AI can replace Alison from accounts, it can replace them - that's when businesses will suddenly decide that generative AI might be problematic in some way and they can't risk using it because, um, reasons. That, I think, is when the bubble will really burst.
