MongoDB CEO says if AI hype were the dotcom boom, it is 1996

It is 1996 in terms of the business adoption of AI if it were put on the dotcom era timeline, according to MongoDB CEO Dev Ittycheria. Speaking to investors at a Goldman Sachs event earlier this week, the boss of the NoSQL database company argued the tech industry was still early in the rise of AI, a development which - we're …

  1. breakfast
    Holmes

    Looking at the three use cases here:

    chatbots

    For fake social interactions chatbots are alright but as soon as they're on your company site giving your visitors actual information they're liable to either make things up or just return nothing, neither of which are useful. Currently, commercially speaking, worse than an FAQ section.

    research and summarization

    The problem for research is that your LLMs make things up, which means your research will be wrong unless you're actually doing research into LLMs. Meanwhile, for summarisation, a recent Australian study showed LLMs to be worse than humans in every way for document summarisation, so...

    automation

    Machine Learning sits under the general headline of "AI" and might be useful for this. LLMs are only really good for automating interactions with people who you don't care about, as they're obviously LLMs and people will recognise that very quickly.

    There are a couple of things I don't see here: jobs that LLMs are - or ever will be - capable of doing, and anything which will bring in an income within orders of magnitude of the levels of investment in AI we have seen recently.

    Very hard to see it as anything other than a bubble at this point and I'd say that in his analogy we're closer to 1999.

    1. elsergiovolador Silver badge

      Currently, commercially speaking, worse than an FAQ section.

      The point of chatbots is to save money on returns. If you buy something for £50 and there is something wrong with it, you go to customer service chat, and the game is to get you to give up - to decide that the effort is not worth getting £50 back. If you still get through the hoops, the next step is to get you onto a minimum-wage rep on the other side of the world, who typically doesn't even know where they work. So you have to explain everything again until you ask to talk to management or something. They need to put some roadblocks there as well, but once you get there you typically get what you came for. The "management" is typically just a regular customer service person gatekept by the chatbot etc.

      The problem for research is that your LLMs make things up

      Most people don't care. Even if you submit a paper, the reviewers are unlikely to read it; they will put it through an automatic system of some sort that is also going to make mistakes.

      jobs that LLMs are -or ever will be- capable of doing

      At present I am certain an LLM can replace pretty much any junior developer, or any job that requires searching and "making sense" of information from large amounts of text.

    2. Anonymous Coward
      Anonymous Coward

      What you say is certainly true for the earlier language models, but the latest generation of language models are actually very good and the market has become quite mature.

      Consider Gemma 2 (an open model from Google which you can run locally via Ollama) and Google Gemini Pro. Gemini Pro has a 2 million token context window (which means that you can include a whole book in your prompt), and accuracy is greatly improved compared to ChatGPT in the early days.

      I used to do research manually, but these days I use Gemini to give me a head start and then I do the actual fact checking myself.

      Also I need to program in a proprietary programming language of which almost nothing can be found in the public domain. However, I can now just feed the entire programming manual to Gemini and it produces quite good code. This again gives me a head start and I can focus on fixing the mistakes instead of having to decipher the entire manual (which is not very clear at all) myself, write code and then try to fix my own mistakes.
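The workflow described above - stuffing an entire manual into one long-context prompt - can be sketched roughly as below. This is an illustrative sketch only: the function names, the prompt layout, and the 4-characters-per-token estimate are my assumptions, not any real API, and the call to a locally hosted model (e.g. via Ollama) is deliberately left out.

```python
# Sketch: build one long-context prompt containing a full reference manual.
# All names here are illustrative; no real model API is invoked.

def build_prompt(manual_text: str, question: str) -> str:
    """Concatenate the whole manual and a task into a single prompt,
    relying on a large (e.g. 2M-token) context window."""
    return (
        "You are an assistant for a proprietary programming language.\n"
        "Reference manual follows:\n---\n"
        f"{manual_text}\n---\n"
        f"Task: {question}\n"
    )

def rough_token_count(text: str) -> int:
    # Crude rule of thumb: roughly 4 characters per token for English text.
    return len(text) // 4

manual = "MOVE x TO y copies the value of x into y.\n" * 1000
prompt = build_prompt(manual, "Write a routine that swaps two variables.")

# Sanity check before sending: does the prompt fit the stated window?
assert rough_token_count(prompt) < 2_000_000
```

The resulting `prompt` string would then be posted to whatever local or hosted model you use; the point is only that the whole manual travels with every request, so the model never has to rely on what little public training data exists for the language.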

      So all in all, I view it as a colleague with which I can work together. Just like a real colleague it will make mistakes and I need to account for that, but it's still beneficial.

      1. Anonymous Coward
        WTF?

        Stoic rulez!

        "So all in all, I view it as a colleague with which I can work together. Just like a real colleague it will make mistakes and I need to account for that, but it's still beneficial."

        And I said onto Him,

        "Take me home, it's the one place I can rest in peace",

        He looked upon me and spake, "Son, you are already there. Home is with you. Whereve' thou lay-iest thy hat, is thy home."

      2. Anonymous Coward
        Anonymous Coward

        You are either a marketing bullshitter for an AI company or an idiot.

        1. Anonymous Coward
          Anonymous Coward

          If I saw an LLM as something equivalent to a colleague my job would be over - they'd be firing me and employing Lionel Lawrence McComputer instead. Even saying that is asking to be fired because bosses do not care whether an LLM is good at the job as long as they believe it will be cheaper than you.

    3. FF22

      "Very hard to see it as anything other"

      You forgot:

      spreading propaganda

      This cannot be overstated. Right-wing people are easily influenced by seeing that a lot of people apparently say the same thing, as their whole world view depends on authority figures telling them what and how to think, and on the herd mentality of doing exactly that (as opposed to a liberal world view, which values individual thinking and getting to the truth through validation and logical argument). AI can make it appear to these people as if even more people were saying the same thing, and regardless of whether what they say is completely illogical and contradicts actual facts, they'll believe it, because of the former.

      enshittification of news and the web in general

      Even if AI-generated content is not used to spread propaganda, it is used to extract money from the web economy without putting in the actual work. The prime examples are the ChatGPT-like chatbots which were trained on billions of web pages, but millions of new web pages are also born every day with the sole purpose of regurgitating content that was already present on the web in order to show ads alongside it, killing original content creation and generally lowering content quality because of AI hallucinations. Same thing with books/e-books, which are now also written in the thousands if not millions daily using AI.

      1. Paul Crawford Silver badge
        Terminator

        Sadly these are, quite literally, the killer apps for AI.

      2. Anonymous Coward
        Anonymous Coward

        for those who can form clear arguments and favour not wasting their little lives..

        @ FUFU2

        The statement suggests that right-wing individuals are more susceptible to groupthink, especially when influenced by AI-generated consensus. This is due to their reliance on authority figures and a tendency to follow the crowd. Liberal individuals, on the other hand, value individual thinking and logical argumentation.

        1. Bebu
          Windows

          Re: for those who can form clear arguments and favour not wasting their little lives..

          "The statement suggests that right-wing individuals are more susceptible to groupthink, especially when influenced by AI-generated consensus. This is due to their reliance on authority figures and a tendency to follow the crowd. Liberal individuals, on the other hand, value individual thinking and logical argumentation."

          In my experience the doctrinaire left is equally susceptible.

          The (apparently shrinking) middle ground are more likely to apply a skerrick of common sense and question contradictory, unfounded or irrational claims. Whether or not they are liberal in the US sense is only testimony to their relative sanity.

          The questioning and healthy scepticism are the crucial attributes. Even if asylum seekers were consuming pets [they are NOT] out of hunger, isn't the question "why is the wealthiest first world nation allowing anyone within its borders to suffer starvation?" Take it away, Mr Dylan...

          If anyone states "The average <something> ..... <outrageous claim>" I interject "Average? Is that the median or the mean?" It usually stops these galahs before they can take wing. A bit like swallows: african v european. ;)
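The mean-versus-median interjection above is easy to back up: a single outlier drags the mean arbitrarily far while the median barely moves. A minimal sketch (the income figures are invented purely for illustration):

```python
from statistics import mean, median

# Nine modest incomes plus one billionaire: the "average" depends
# entirely on which average you mean.
incomes = [30_000] * 9 + [1_000_000_000]

print(mean(incomes))    # dominated by the single outlier (~100 million)
print(median(incomes))  # unmoved by it (30,000)
```

Which is exactly why "the average <something>" in an outrageous claim deserves the follow-up question.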

        2. katrinab Silver badge
          Megaphone

          Re: for those who can form clear arguments and favour not wasting their little lives..

          Remember that what people in the US call “the left” is actually centre-right. The actual far-left is pretty indistinguishable from the far-right, and lots of them jump over the horseshoe to the far-right.

    4. mpi Silver badge

      > Currently, commercially speaking, worse than an FAQ section

      Commercially speaking, people don't read FAQs, and customers expect a call-center option, like a chatbot.

      Call centers, however, when done well (well-educated and trained personnel) cost a lot of money, and when done badly (poor people exploited into (sometimes forced) labor in some local strongman's warehouse somewhere, barely fluent in the customer's language, reading from a script) may provide worse service than even a purely algorithmic non-AI chatbot.

      Plus, when people argue against chatbots, I always get the impression that they are using the worst examples in the industry. I have seen, and used, some truly amazing pieces of technology out there, which were a blast to interact with. I'd rather have one of those things, that solves my problem in 20 seconds, than wait on hold for some guy I had to read my order number to 4 times because he's wearing a headset from the early 2000s that 42 people wore before him, while constantly wondering if the guy I am speaking to is even allowed a toilet break during his shift, and whether he has to sift through pictures of hate crimes on his second screen to do "moderation" for some "social" media company while talking to me.

      > worse than humans in every way

      Except one: speed. And when the documents in question are not mission-critical, sometimes that's the most important thing. Because our society produces a lot of documents, more than humans can even glance at. Sometimes "good enough" is actually "good enough". And the number of use cases where this is true is pretty astounding.

      > jobs that LLMs are -or ever will be- capable of doing

      The automation aspect is oversold. Let's be clear: yes, I agree we ARE in a bubble, and some VCs will be very disappointed when it bursts. Boohoo.

      What LLMs excel at is not completely automating jobs, but acting as a force multiplier for the people doing them. Best example: test writing. I give it the class and an example test, describe the edge cases I want tested in a few words, click a button, and off it goes. Do I have to check that what it wrote makes sense? Absolutely. Does it get these simple but onerous and time-consuming tasks right most of the time? Yup. Is that an advantage to me, because I get to spend that time more productively? Yes.
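To make the "simple but onerous" point concrete, here is a sketch of the kind of edge-case test boilerplate meant here, written against a hypothetical `clamp()` helper (both the helper and the test names are invented for illustration - exactly the sort of thing an LLM drafts and a human then reviews):

```python
# A trivial helper plus the tedious-but-simple edge-case tests for it.
# Hypothetical example; names are illustrative.

def clamp(value: float, lo: float, hi: float) -> float:
    """Constrain value to the closed interval [lo, hi]."""
    if lo > hi:
        raise ValueError("lo must not exceed hi")
    return max(lo, min(hi, value))

def test_within_range():
    assert clamp(5, 0, 10) == 5

def test_below_and_above():
    assert clamp(-1, 0, 10) == 0
    assert clamp(99, 0, 10) == 10

def test_boundaries():
    # Values exactly on the bounds must pass through unchanged.
    assert clamp(0, 0, 10) == 0
    assert clamp(10, 0, 10) == 10

def test_inverted_bounds():
    # Inverted bounds should raise rather than silently misbehave.
    try:
        clamp(5, 10, 0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

None of this is hard to write by hand; it's just slow, and that is precisely where the force multiplier lives.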

      1. Anonymous Coward
        Anonymous Coward

        >>Customers prefer call centres despite potential for poor service. Chatbots are efficient but lack personal connection. Ethical concerns and job displacement are potential drawbacks of chatbot adoption.

        You might. I don't. Banking AI has improved greatly in the last year. The AI assistant for mine is a godsend. People who work in call centres on the whole don't want to. People who call call centres on the whole don't want to. The only people who want them there are the greedy slimeballs and lower once-factory-floor management. Burn it down.

        Try another company if their AI sucks. It is a USP right up ur ARS3

        1. This post has been deleted by its author

      2. breakfast

        I think programmers get a rather distorted view of how good LLMs are, because there is more working code on the internet than broken - it can certainly be helpful for facile cases, although I tend to find I can write the code faster than the prompt for anything requiring any detail. Unfortunately, the relatively constrained requirements and provable outcomes of code mean that developers have assumed it will work as well with general language tasks, and it does not, because most of the text on the internet is nonsense and that's all it has to learn from. There's no real measure of "working" language other than the way it is understood, and these systems cannot and will never be able to understand anything.

        Perhaps one day we'll make a GAI that can, perhaps some of this type of technology will be part of how it processes language, but these are guessing systems - they can only guess at the next likely word and the next likely phrase. Sometimes those guesses are right, but a lot of people are getting burnt by the belief that they are consistently accurate.

  2. karlkarl Silver badge

    > MongoDB CEO says if AI hype were the dotcom boom it is 1996

    The difference is that we don't actually have AI yet. We are not even close to having AI yet.

    At least the dotcom boom actually had "the technology" in place before it attempted to monetise it.

  3. Noodle

    As someone who was an active user of the web in 1996 I dispute the assertion that it was "not very interesting", as I recall it was an extremely exciting time of rapid innovation and creativity pushing the boundaries of what was then a very new technology.

    1. Dostoevsky Bronze badge

      I wish I were that old. Truly a golden age of tech.

      1. katrinab Silver badge
        Windows

        I am that old. Yes, there was a lot of innovation. There were also a lot of garbage business plans that went nowhere.

    2. Anonymous Coward
      Anonymous Coward

      @NetscapeWin95-9600k-winsockdll-pegaus-WSFTP

      Glory days. But, fear you not, Yung 'uns. It comes around and around. And every time it does it gets better, faster, stronger.

      We are on the cusp, the cusp I say, and 28.8k modems are just coming and US Robotics is one of the coolest companies in town.

      Hell, in a few months we will get IE4! Not all bad, that browser. Some things were nice, like the DOM. M$ can fluke it sometimes, and that DOM saved me when we presented once to a VC.

      This chap from MongoDB is a very smart fellow. I haven't met him but have worked with people who have worked with him. He loves his field of work. He's the type of chap made for what he does. And he has pretty much hit the nail on the head in comparing AI with Web1.

      He avoids many of the landmines about whether AI is I at all. Wise. AI is not intelligent in a human sense. Our intelligence comprises multi-modal models which cascade logic throughout the system. Part of our intelligence has been artificially recreated, and it is better than its parents.

      That was the biggest hurdle. Now the models can distil the abstract process from LLM creation into an 'algorithm' and apply it to any subject - sort of what the new ChatGPT apes.

      We don't need ChatGPT 5. No, we need an abstracted engine that can do to other non-digital and digital stuff what LLMs did to language.

  4. anonymous boring coward Silver badge

    "Netscape was just launched a couple of years earlier. People were excited about the web, but the web was still very basic, static web pages, and it wasn't that interesting."

    It was awesome. WWW is relatively sh*t today.

    1. 'arold

      +1.... +1000 even. The web back then was awesome, exciting and full of promise. There was an optimism and a sense of world wide community. Edward Snowden talks about how helpful people were online back then compared to today.

      Now it's shite, corporate and toxic. Useful apps (maps, streaming and, hmm, that's it), but they're just using the internet as a network layer.

      1. Anonymous Coward
        Anonymous Coward

        I remember when there was a cow there.

        It was magic. Coming into work every day on a first-gen ISP and not knowing what the hell I was talking about, as I was mostly a Mac man from my music work. "No, sir, I have no idea what a winsock.dll is. Do you have any socks with holes in them? Put them on top of the modem and reboot. I'm a Mac man, sir. We don't have any of this. Ask for <another-helldesk-colleague> if you need to call back."

        Still, I think it has got better. Now that Web2 is dead and traps its Ark B passengers-of-the-future, the web, including this Web1 place, has settled back to normal-ish for a nation with an average IQ in the eighties FFS.

  5. mpi Silver badge

    "and it wasn't that interesting."

    What? Of course it was interesting!

    It was wild, wacky, full of new ideas, and a blast to use. People were making things up on the fly. There is a reason we called it "Web-Surfing" back then, because it was that exciting to interact with.

    You know what's decidedly NOT interesting? The sterile landscape of run-of-the-mill, SEO-infused, iPad-interface-wannabe, corporate-feelgood-speech-bulls__t, empty, senseless and soulless garbage that is most of today's internet. The average webpage today is about as surprising as drywall. They look the same, they make the same mistakes, they use the same user tracking mechanisms. And most of it exists purely as a canvas to serve ads. What's really heartbreaking is that, despite the sameness and pre-fabricated looks (which in any other industry would indicate higher standards), they actually got WORSE in terms of usability, information density, and general usefulness. And they, of course, agree that it's perfectly sensible to load 30-40 MiB of data to serve 10 lines of text under a giant "hero picture" that's usually about as heroic as a Fainting Goat.

    The dotcom bubble didn't burst because the web was "not interesting". The dotcom bubble burst because stock markets doing stock market things. That had nothing to do with the capabilities of the technology.

    1. Anonymous Coward
      Anonymous Coward

      Re: "and it wasn't that interesting." it was, yes

      I've questioned the validity of those silver badges of late. Walls of text and bitterness seem to have spewed unabated from there.

      >> There is a reason we called it "Web-Surfing" back then, because it was that exciting to interact with.

      It was. As more and more people joined, it was amazing. When Boots launched their site, I could go there and find out all about them in a way that would have taken days before. I could even send them a message through a web form. Sweet Lord, I felt like Jarvis Cocker in 'Sorted for E's & Wizz' when he peaks on his E and wonders where the spaceship is, as that must be the next logical step for his feelings.

      >>The sterile landscape of run-of-the-mill, SEO-infused, ipad-interface-wannabe, corporate-feelgood-speech-bulls__t, empty, senseless and soulless garbage that is most of todays internet.

      Google ruined it with structured data. Corporate speech is beneath us. I am hoping WebAI improves things a bit, but I'm happy The Reg survived and not The Inquirer, for sure. It was good, but not the same. Web2 social is as soulless as you can get. Even the soulful things there aren't. Everything beautiful is commoditised. Sold. Pimped, and not to the highest bidder either.

      They whored us. Treated us like bitches. As as another right-hook comes in from these scummiest of the scummiest companies ever (worse than slavers), with their temporal AI privacy a@@@ rape. (Sorry for that)

      Localhost a LLM A$AP. Jai1br3ak the W&Bs. Create an AI app that doesn't have internet access. Call it ... Me. Could be a motion-capture model. Any model. Your model. Not Google, OpenAI, Tom, Dicker, Harrishole.

      Brew your own. Recursive loops formed from ineffable prompts that hack the prompt/conv history/preprompts submission encoding to see the inner workings of your model. From there you can see patterns in the 'corruption'. I don't know enough about hacking, but someone said to me this is the modern-day version of the buffer overflow.

      TIP:

      1. Anonymous Coward
        Anonymous Coward

        Re: "and it wasn't that interesting." it was, yes

        Me: Recursive loops formed from ineffable prompts that focus the prompt/conv history/preprompts submission encoding to see the inner working of your model.

        MeAI: Certainly. While using recursive loops and ineffable prompts can be a challenging approach, it's possible to gain valuable insights into a language model's inner workings. Here are some additional considerations:

        1. Experimentation:

        Iterative refinement: Start with simpler prompts and gradually increase complexity to observe the model's behavior.

        Varying loop depth: Explore how different depths of recursive loops affect the model's responses.

        Combining techniques: Experiment with combining recursive loops with other techniques, such as adversarial prompts or prompt injection.

        2. Analysis:

        Pattern recognition: Look for patterns in the model's responses that may reveal underlying mechanisms.

        Error analysis: Identify cases where the model produces incorrect or unexpected results and analyze the reasons.

        Comparison with other models: Compare the behavior of the model to other language models to identify similarities and differences.

        [A bit simple but a starter]

  6. katrinab Silver badge
    Megaphone

    Remind me

    How are your Netscape shares doing?

    Or even for that matter your Microsoft shares. Microsoft as a company is doing fine, but the shares were still massively overpriced at the time.

    Google came along later, so wasn’t an option at the time.

    With the benefit of hindsight, the only good picks were Apple and Amazon. Apple was a recovery stock, and at the time was seen in the same way as other relics of the 1980s personal computer revolution like Amiga.

  7. Anonymous Coward
    Anonymous Coward

    ML works, no ifs, no buts. I've posted this a few times: there are ML tools and products out there that work well and can save industry money. Detecting movement in buildings from drone footage, estimating erosion-risk areas in rivers, predicting failures in industrial systems to allow preventative action. All good, all works, and deeply boring, and it never makes the press.

    Now LLMs are the stuff that gets the headlines, the stuff that makes people think of Terminator, and the stuff that doesn't work very well outside of the boardroom's wet dreams.

    I have used GPT Enterprise many times to test for work. For me it's largely a waste of time.

    I've given it tasks to do that I already know the answer to, to compare. For data analysis, if you're not very careful it will drift off and do its own thing. You have to check and recheck what you are doing and what data you are working with constantly, or it goes off and includes data you've told it not to. It makes up missing data, and when challenged it cannot even say, except in the broadest terms, why it has ignored you or fucked up. It is like working with a 5-year-old genius. It can do the analysis, but getting the thing to do it right is very protracted, painful and frustrating, and at the end you need to check all its work because it is untrustworthy.

    Where have I actually been able to find uses for LLMs? In very small ways: getting it to quickly write an Excel macro for me, getting a custom GPT to collate information into a checklist for a team.

    These were all things where I was quickly able to sanity-check the output, and where the risk from an error was low.

    But these use cases don't justify the cost at all.
