Microsoft's AI Bing also factually wrong, fabricated text during launch demo

Microsoft's new AI-powered Bing search engine generated false information about products and places, and could not accurately summarize financial documents, according to the company's promo video used to launch the product last week. After months of speculation, CEO Satya Nadella finally confirmed rumors that Microsoft was going to …

  1. Anonymous Coward
    Anonymous Coward

    Hype, Hype and yet more Hype

    Please... someone make this ChatGPT craze die a quick and painful death. Already lazy hacks in the media are turning in their droves to this.... monstrosity and creating articles even more bug-ridden and factually incorrect than before.

    I hope that this fiasco serves as a lesson and it gets canned asap.

    If both Bing and Google search engines are controlled by this thing, then we are all doomed to suffer the consequences and idiocy of the developers in all our search results, instead of just the few that get skewed by the vile adverts.

    1. katrinab Silver badge
      Megaphone

      Re: Hype, Hype and yet more Hype

      Actually, I hope the lesson is:

      The Chat GPT algorithm is a compiler, and the training data is the code.

      If your code is random stuff you scraped off the internet, then you will get random results, just like if you copy/paste random snippets of code from Stack Overflow without checking them first.

      Therefore, you need to hire appropriately skilled people to select and curate your training data, just like you need to hire appropriately skilled people to write your JavaScript code.

      1. cyberdemon Silver badge
        Mushroom

        Re: The Chat GPT algorithm is a compiler, and the training data is the code.

        No, it's worse than a compiler - at least a compiler is deterministic: it always produces the same output for a given input. The ChatGPT "algorithm", on the other hand, is stochastic. It's based on randomness, and it will never give the same output for a given input unless you rig its RNG with a seed value.

        Even if you managed to make it get something right 95% of the time, it would still be wrong some of the time, and it may be VERY wrong - to the point where even the most uninformed human would have considered it obviously wrong. That means it's irresponsible by definition to place any kind of responsibility on an AI, especially where that is a life-or-death responsibility.

        Bad journalism - not being able to tell information from mis/disinformation - can cost lives. In the worst case, in our almost completely-connected world, disinformation generated by AI could be used by some miscreant to provoke WWIII. And I worry that this could happen any day now tbh.

        1. Michael Wojcik Silver badge

          Re: The Chat GPT algorithm is a compiler, and the training data is the code.

          It's based on randomness. It will never give the same output for a given input, not unless you rig its RNG with a seed value.

          That is not how ChatGPT (or any of the GPTs) works.

          ChatGPT may appear to be non-deterministic, but that's because the policy is updated by PPO during interaction.

          There are ample grounds for criticizing the GPT series and their applications, particularly ChatGPT and chat-search. We don't need to hallucinate additional ones.

          1. cyberdemon Silver badge
            Devil

            Re: The Chat GPT algorithm is a compiler, and the training data is the code.

            According to some blog about GPT-3, it is not deterministic unless you set the "temperature" parameter to 0.

            > This ensures that output remains the same every time you query GPT-3 for an answer to the same input.

            > However, this comes at a cost, as setting the temperature to 0 removes some of its ability to generate creative responses or explore different possibilities. Instead, it gives out very rigid answers with slight variation and exploration.

            I don't know what "temperature" ChatGPT is set at, but I'd be surprised if they set it to 0.
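
            To make that concrete, here is a minimal sketch of pinning the temperature to 0, assuming the 2023-era OpenAI Python client (the model name, key and prompt are placeholders, not anything from the article):

            import openai  # assumes the pre-1.0 OpenAI Python library

            openai.api_key = "sk-..."  # placeholder key

            def ask(prompt: str) -> str:
                # temperature=0 makes the model pick the most likely token at
                # every step, which is what makes repeat queries (mostly)
                # return the same completion
                response = openai.Completion.create(
                    model="text-davinci-003",
                    prompt=prompt,
                    max_tokens=100,
                    temperature=0,
                )
                return response.choices[0].text

            # With temperature=0 these two should match; raise the temperature
            # and they generally won't.
            print(ask("Summarise the plot of Hamlet in one sentence."))
            print(ask("Summarise the plot of Hamlet in one sentence."))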

      2. doublelayer Silver badge

        Re: Hype, Hype and yet more Hype

        "The Chat GPT algorithm is a compiler, and the training data is the code."

        When I write some code, and the compiler compiles it to a binary that doesn't do the right thing, I have to ask myself whether the code is wrong or the compiler is wrong. The chances are high that my code's at fault, but it is possible for the compiler to have a bug. We fix those bugs. This compiler produces results of such randomness that there will always be problems in the output, and just fixing the training data won't help with that. This is not a compiler. It's a random phrase generator that you can seed with information to make it look less random.

      3. Steve Channell
        Pint

        Death of advertising

        The key point - that LLM AI is only as good as its training data - is highlighted because these models infer information from the data they are given. Fixing the data will have profound consequences.

        "Brand" did not originate as a marketing or advertising concept, but as a proxy for historical information about the ownership and treatment of livestock. Brand marketing has since expanded to include the feeling that Coca-Cola is a sports drink and Guinness is a mineral supplement.

        If you classify training data by {authoritative, feelings, views, speculation}, you change the investment driver for "Branding" from advertising to qualitative research. An AI search that answers "what's the best sports drink" with "water" dilutes the point of advertising Coca-Cola: the "real thing" feeling will never trump health studies into the causes of obesity. If your business model is financed by advertising (as Google's is), AI search is a threat.

    2. elsergiovolador Silver badge

      Re: Hype, Hype and yet more Hype

      ChatGPT has been great if you have a lot of boring code to write. For instance, you can feed it your data structures and describe what functions you need written to deal with these structures and it does it for you.

      If you don't trust it you can ask it to write tests or plot resulting data on a graph.

      1. doublelayer Silver badge

        Re: Hype, Hype and yet more Hype

        So your plan is to have a program write code and then have that same program write the tests for the code, both without having any clue what it's doing. Unit tests are easy if they just have to pass. The trick to writing them well is testing that what does happen is what should happen. People who write them badly, or programs that don't have a clue, just test that what does happen is what does happen, and unsurprisingly all of those pass.
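
        As a rough illustration (a hypothetical function and tests, nothing generated by ChatGPT), here is the difference in Python between a test that merely records what the code does and one that checks what it should do:

        import unittest

        def average(values):
            # Deliberately buggy: integer division truncates the result
            return sum(values) // len(values)

        class TautologicalTest(unittest.TestCase):
            def test_average(self):
                # Tests that what does happen is what does happen:
                # this passes no matter how broken average() is.
                self.assertEqual(average([1, 2]), average([1, 2]))

        class MeaningfulTest(unittest.TestCase):
            def test_average(self):
                # Tests that what does happen is what *should* happen,
                # so the truncation bug actually gets caught.
                self.assertEqual(average([1, 2]), 1.5)

        if __name__ == "__main__":
            unittest.main()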

        1. elsergiovolador Silver badge

          Re: Hype, Hype and yet more Hype

          It's like having a junior dev (and actually better than average) on command. If you write a clear prompt, it produces a correct result most of the time.

          It has not failed me (yet).

          1. Anonymous Cow-Pilot

            Re: Hype, Hype and yet more Hype

            If you have to write functions to interact with your data structures (such as setters/getters, functions to manage lists, etc.) then you're using the wrong frameworks and development tools. I can't remember the last time I manually wrote that type of code.

            This is the issue with ChatGPT (and Copilot, which I subscribe to for personal use but rarely accept the suggestions of) - it's great at producing boilerplate code where everyone writes the same stuff, but modern tools and frameworks don't need you to do that. What it is not good at is writing the code that is specific to your application - which is what developers actually spend more time doing. What ChatGPT and Copilot are actually good at is solving computing problems of the type set at university and in programming exams. This means they look good to the type of people who spend their time assessing developers, rather than actually developing code themselves.

            So far my favorite Copilot suggestion is from when I asked it to create a function to assign a UUID to a component. It created the following code - return "1234567890"; .......
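
            For comparison, the hand-written version is a one-liner. A minimal sketch in Python, purely for illustration (the component object here is hypothetical):

            import uuid

            def assign_uuid(component):
                # A fresh random UUID per call, rather than a hard-coded
                # literal like Copilot's "1234567890"
                component.uuid = str(uuid.uuid4())
                return component.uuid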

            1. Michael Wojcik Silver badge

              Re: Hype, Hype and yet more Hype

              Exactly. Using LLMs to write code is fixing the wrong problem. Move to higher-level abstractions where the repetitive stuff is built-in or inferred.

              Using an LLM to generate boilerplate code is like building a robot to swing a hammer so you can shoe horses faster. It's 180 degrees from the direction software development ought to be moving in.

      2. Sherrie Ludwig

        Re: Hype, Hype and yet more Hype

        "ChatGPT has been great if you have a lot of boring code to write. For instance, you can feed it your data structures and describe what functions you need written to deal with these structures and it does it for you.

        If you don't trust it you can ask it to write tests or plot resulting data on a graph."

        OK, this is sarcasm/satire, right?

    3. NATTtrash
      WTF?

      Re: Hype, Hype and yet more Hype

      But, but, but...

      Everyone wants it to happen right now.

      (Really?)

      Some people are already using ChatGPT as their main search engine, even though the answers may not be accurate. It's just such a superior experience that people can't help but hop on the hype train.

      (Ah. "Experience". Lazy thumb-typing bastards, I assume?)

    4. steviebuk Silver badge

      Re: Hype, Hype and yet more Hype

      ChatGPT is good for quick code though. Yes, you need to check it before using it, but at least it doesn't give sarcastic replies like Stack Overflow. However, it does lie. It told me to use a PowerShell script, but when I told it the cmdlet doesn't exist any more, it said "Yes, you're right, it doesn't - try this instead", and that second one didn't exist either. It then gave up giving PowerShell commands and gave me older built-in commands that did work, though I had asked specifically for PowerShell.

      But it's still useful. Got it to give me one quick line of PowerShell I needed - a beginner question that would have drawn a ton of sarcastic replies on Stack Overflow.

  2. Martin-R

    The Heidelberg Conjecture

    The problem I have is that the ones I've tried all give plausible summaries of the Heidelberg Conjecture, which I just made up - and none have responded with "that doesn't exist".

    1. Persona Silver badge

      Re: The Heidelberg Conjecture

      Odd, I tried it with Bing and got:

      I’m sorry, I could not find any information about the Heidelberg Conjecture on the web. Maybe you could try a different spelling or a more specific query.

      1. Anonymous Coward
        Anonymous Coward

        Re: The Heidelberg Conjecture

        You need to retry once Bing has re-indexed the El Reg forums.

        It will become clear that the Heidelberg Conjecture *is* in fact a thing, being that it is a conjecture that the so-called "Heidelberg Conjecture" does not exist.

      2. Michael Wojcik Silver badge

        Re: The Heidelberg Conjecture

        Because it's highly sensitive to the prompt and the prompt history for that session.

        Here is an example of one prompt where ChatGPT correctly rejects an incorrect question, and an example of another prompt where ChatGPT hallucinates and then doubles down on the hallucination. (The comments to the piece provide some analysis of the possible failure modes.)

    2. davefb

      Re: The Heidelberg Conjecture

      Are you sure it's not real? ChatGPT writes a very convincing article about it :D

      "Write a short article on the Heidelberg conjecture with emphasis on its links to Marx."

      "The Heidelberg conjecture, also known as the "Heidelberg thesis," is a mathematical problem that has remained unsolved for over 70 years. The conjecture was proposed in 1945 by Emil Artin, a renowned mathematician who taught at the University of Heidelberg in Germany.

      The Heidelberg conjecture states that any polynomial equation with integer coefficients, where the unknowns are restricted to integer values, has a solution if and only if it has a solution modulo every prime number. In other words, if a polynomial equation has a solution for every prime number, it has a solution for integers as well.

      The Heidelberg conjecture has proven to be a difficult problem, and despite many attempts by mathematicians over the years, it remains unsolved. The conjecture has sparked a lot of interest in the mathematical community and has been the subject of numerous papers, conferences, and discussions.

      One interesting aspect of the Heidelberg conjecture is its links to the Marxist philosophy. Emil Artin was a Marxist, and he believed that mathematics had the potential to be used as a tool for social change. Artin saw the Heidelberg conjecture as an example of how mathematics could be used to promote social justice.

      According to Artin, the Heidelberg conjecture was a mathematical embodiment of the Marxist concept of "dialectical materialism." Dialectical materialism is a theory that posits that society is constantly evolving and changing, and that this evolution is driven by contradictions and conflicts. In a similar way, Artin believed that the Heidelberg conjecture represented a contradiction between the "local" solutions of a polynomial equation and the "global" solution, which is a solution for all integers.

      While the links between the Heidelberg conjecture and Marxism are interesting, it's important to note that the conjecture is still an open problem in mathematics. Mathematicians continue to work on finding a solution, and it remains an active area of research."

      1. Ken Moorhouse Silver badge

        Re: Emil Artin

        No mention of either Heidelberg or Marx in his Wikipedia entry. Looks like ChatGPT is referring to one of Artin's two conjectures.

        This is pure QI, not AI.

      2. Erik Beall

        Re: The Heidelberg Conjecture

        And he was teaching in America for several years through 1945, but yes, an impressively plausible hallucination. It's so good I'll bet some mathematicians would not think anything amiss if they'd not studied that area. Sounds like a fun parlour game to play with your local college prof!

    3. Anonymous Coward
      Anonymous Coward

      Re: The Heidelberg Conjecture

      I thought that was the Bielefeld Conjecture? Except that the Bielefeld Conjecture exists, but Bielefeld doesn't. Apparently someone has offered EUR1M for proof.

      1. AnotherName

        Re: The Heidelberg Conjecture

        I've been to Bielefeld - it used to have a British Army base.

        1. fairwinds

          Re: The Heidelberg Conjecture

          You’re all wrong. The Heidelberg Conjecture is that Bielefeld doesn’t actually exist.

      2. Anonymous Coward
        Anonymous Coward

        Re: The Heidelberg Conjecture

        Reporting of the prize on offer for proving the Bielefeld Conjecture:

        https://www.bbc.co.uk/news/world-europe-49432677

        This was from 2019. Anyone know if it's been claimed yet?

  3. Howard Sway Silver badge

    The answers may not be accurate. It's just such a superior experience

    What a brilliant summary of this stupid hype. It may produce a load of worthless bullshit, but it's such wonderful sounding worthless bullshit that we can't help but be dazzled by it.

    1. Brewster's Angle Grinder Silver badge

      Re: The answers may not be accurate. It's just such a superior experience

      That's how many politicians have operated since forever.

    2. cyberdemon Silver badge
      Devil

      Re: The answers may not be accurate. It's just such a superior experience

      Reminds me of Huxley's "Soma"

      What you see and feel may not be accurate. It's just such a superior experience ...

    3. TurtleBeach

      Re: The answers may not be accurate. It's just such a superior experience

      Shades of the Firesign Theatre...

  4. Headley_Grange Silver badge

    Almost Human

    I've worked for plenty of bosses who couldn't understand the financial and management information they were provided with and who misrepresented the facts to the workforce and to their own bosses. All MS need to do is up the priority of the AI's self-survival routines and they'll have something that could walk into a job in many of the places I've worked.

    (We need an "it's only funny if it's not true" icon)

  5. Anonymous Coward
    Anonymous Coward

    My take on "AI"

    Yesterday, I asked Google to search for "Vanity basins without units" (note the quotes)

    The very first "hit" was the string "Vanity units without basins" - literally

    And that will keep my board of directors happy that "AI" is shite for another year.

    As yesterday's article noted, SEO and AI are pretty much mutually exclusive.

    1. Mike 137 Silver badge

      Re: My take on "AI"

      "The very first "hit""

      You don't need AI for this -- just give every set of search terms an inclusive OR interpretation, ignoring the quotes, just as almost all the other restriction options are being surreptitiously ignored. Gooooooogle want as many useless clicks as possible because they get paid for many of them. If we got straight to the stuff we intended to search for, there'd be much less chance of a profitable click. So irrelevant stuff high up in the returned results is a potential benefit to them (particularly if said results are clickbaity). For example, I once searched for "bayes theorem" and in the first few results was the heading "Buy Bayes theorem now at best price" from some 3rd party shopping comparison site (which probably subscribes to Gooooooogle advert broking).

      1. Erik Beall

        Re: My take on "AI"

        I think you've hit the nail on the head. Google have only ever been good at exactly one thing: hoovering up ad revenue. They've got a set of slam dunks/grand slams in traditional search, Docs/Gmail, Maps, YouTube and Android, but it's all in support of the same business model - they exist to keep people using search (in whatever domain they serve) and to generate exploitable ad-targeting data that increases the value of ads. They suck at any other business they've dumped tons of money into. And the reason their search has gradually come to suck is that their A/B-testing-led product development has evolved their search into a corner they can't back out of. In short, they've crippled the product for a short-term revenue boost, by promoting crap search results.

      2. druck Silver badge

        Re: My take on "AI"

        For example, I once searched for "bayes theorem" and in the first few results was the heading "Buy Bayes theorem now at best price" from some 3rd party shopping comparison site

        They dropped (or shoved onto page 1 million) useless shopping comparison sites like Foundem and Kelkoo years ago, but ended up getting sued, and when they lost in the EU recently I noticed a few people here cheering that on. Yeah, great, Google lost - but do you want that crap appearing as the first ten pages of your search results again?

    2. katrinab Silver badge
      Mushroom

      Re: My take on "AI"

      And the second link is "Bathroom Vanity Units Without Sink"

      My main problem is when I search for [name of manufacturer] [part number]

      and it gives me other random parts from that manufacturer, and other random parts from other manufacturers that have a part number that looks a bit like the one I supplied.

      1. David 132 Silver badge

        Re: My take on "AI"

        Or those stupid troll sites that just comprise every conceivable search term mashed together into gibberish text. Surprisingly often, hosted on quiet backwaters of reputable sites that have obviously been hacked.

        It's 2023. You'd think Google would have figured out how to cull those by now.

        1. druck Silver badge

          Re: My take on "AI"

          They did over a decade ago, but those troll sites are trying to sue their way back into the search results: https://www.theregister.com/2021/06/23/google_kelkoo_high_court_1bn_lawsuit_order/

  6. alain williams Silver badge

    "AI-powered search is not to be trusted"

    Of course it will be trusted - especially if what it returns supports whatever loony conspiracy theory that someone wants to push.

    1. Michael Wojcik Silver badge

      Re: "AI-powered search is not to be trusted"

      Indeed. Chat-search will be extremely fertile ground for confirmation bias.

  7. Zippy´s Sausage Factory
    Joke

    "That would be a fine narrative if Bing didn't make even worse mistakes during its own demo."

    I think we were all well aware that Bing is pretty much the bargain bucket search engine.

    When you just want results, and you don't care if they're accurate - here's Bing.

    Micros~1's marketing department can have that, free of charge. If they want it, obviously.

    1. David 132 Silver badge
      Thumb Up

      Very relevant (and mostly SFW) Penny Arcade comic from a few days ago.

  8. Dinanziame Silver badge
    Windows

    Big surprise

    Google and Bing have been turning up the occasional factually incorrect answer, and nobody finds it surprising. It's weird to assume that a chatbot fed with the same data would get everything correct.

  9. Johnb89

    More training and users correcting it will fix it?

    So if 1000 users fix 10 errors per day, how long till the near-infinite set of queries are all just so?

    1. Headley_Grange Silver badge

      Re: More training and users correcting it will fix it?

      Depends on who gets to define what an "error" is.

  10. Wilco

    Not really fixable

    The G in GPT stands for "Generative". It generates new content, and so almost by definition it's not always going to be accurate. You might be able to fact check some aspects of the generated responses, but you can't fact check omissions.

    1. Anonymous Coward
      Mushroom

      Re: Not really fixable

      "The G in GPT stands for ..."

      accuracy

      1. Bitsminer Silver badge

        Re: Not really fixable

        How do you spell "accuracy"?

        F....a....

        No, no, no, how do you spell "accuracy"?

        F...a...c...

        No, no no! There's no "F" in "accuracy"!

        What?

        There's no F in accuracy!

        What?

        There's no F in ....

        Ha ha ha! Gotcha!

  11. John H Woods

    Errmm....

    "I think everyone can see the amazing potential for LLM powered search engines" --- Brereton

    Tbh, I can't. With a search engine or document summarizer, what you want generated is analysis, not text.

    This has not yet been achieved with LLMs and I see no evidence it is achievable. On the contrary, I suspect the mechanism of LLMs specifically excludes the generation of insightful analysis, let alone any originality.

    I would be a lot less surprised to see passable, if pedestrian, fiction being generated by LLMs. Or perhaps it's fair to say this has already happened.

    1. 42656e4d203239 Silver badge
      Coat

      Re: Errmm....

      >> passable, if pedestrian, fiction being generated by LLMs

      ChatGPT seems to do ok at sonnet writing... not great, but not truly dreadful either....

      Amidst the rolling hills where grasses sway,

      A flock of woolly sheep doth graze and roam,

      Their gentle bleating fills the air all day,

      As they wander 'neath the bright sun's dome.

      With fleeces white as snow and eyes so kind,

      They nibble on the verdant pastures green,

      Their woolly coats the very image find

      Of tranquil scenes that poets oft have seen.

      Oh, how they frolic in the summer breeze,

      And gather 'neath the shade of ancient trees,

      Their simple lives, a thing of beauty rare,

      So let us pause to watch them for a while,

      And feel the peace that comes with nature's smile,

      As sheep on hillside graze without a care.

      1. 42656e4d203239 Silver badge
        Facepalm

        Re: Errmm....

        though, admittedly, considerably less good when given an IT topic

        ChatGPT, Google, Microsoft, all the same,

        In their own ways, they seek to aid mankind.

        With vast intelligence and power to tame,

        They've transformed the world, and made it refined.

        ChatGPT, with language at its command,

        Can answer questions and help us learn.

        It's always at our fingertips, at hand,

        A font of knowledge that never turns.

        Google, a giant of the search world wide,

        Brings all the answers to our fingertips.

        With its vast wealth of knowledge at our side,

        It helps us navigate this world, so rich.

        And Microsoft, with software it has made,

        Helps us to work and play, in our own shade.

      2. doublelayer Silver badge

        Re: Errmm....

        This depends on your critic. A pedantic poetry critic (and they can be very pedantic indeed) would point out that the bot has violated the rhyming rules of sonnets*, appears to be attempting iambic pentameter but gets it wrong, and has scansion problems. These don't necessarily detract from the quality of a poem, but if those who taught me to write poems saw this, they'd fail the student. (They would correct anything I turned in and make me rewrite it, while telling me that any poem by a respected poet that I didn't like was wonderful and proved my stupidity - which might be why I don't like poetry so much.)

        * The sonnet rhyming rules are ABABCDCDEFEFGG, but this did ABABCDCDEEFGGF. I think the FGGF part actually sounds nicer, but it's not a sonnet.

        1. Michael Wojcik Silver badge

          Re: Errmm....

          The sonnet rhyming rules are

          ... many and varied. Many sonnet rhyme schemes use ABBA ABBA for the octave (your version, ABAB CDCD, is less common); for the sestet almost every imaginable paired scheme has been used at some point.

          Poetics experts are a contentious bunch, in the main, but most recognize pretty much any verse form with an octave and a sestet, and recognizable meter and end-rhyme scheme, as a sonnet. Historically it has been a far more flexible form than some of the other mainstays of Euro-derived verse, such as the villanelle or sestina.

      3. John H Woods

        Re: Errmm....

        That's not really a traditional sonnet. It has 14 lines but goes ABAB CDCD EEFGGF instead of ABAB CDCD EFEF GG.

        It's also absolutely shit.

    2. ArrZarr Silver badge

      Re: Errmm....

      "With a search engine or document summarizer what you want generated is analysis, not text."

      While we're not at a stage yet where the AI can be trusted to correctly summarise, let alone analyse, I don't fully agree with this sentiment.

      Long email threads or transcribed conversations contain huge amounts of fluff that you need to slog through to get to the important bits. Beyond that, we don't know where we are on the progress curve for AI/ML/LLM.

      1. John H Woods

        Re: Errmm....

        Although I'm not your downvoter, I think you've got that completely backwards. ChatGPT seems excellent at the opposite: fluffing up a few facts (hopefully) into a lot of verbose text, which is what you would expect from the mechanism it uses. We may get to a stage where AI can be trusted to correctly analyse, or at least summarize, but there is not even a mechanism for an LLM to achieve this.

    3. Michael Wojcik Silver badge

      Re: Errmm....

      Oh, I can see amazing potential in them. Not potential for anything good, mind you. But potential for damage? Most certainly.

  12. Ken Moorhouse Silver badge

    There seems to be a striking similarity between AI and QI

    https://en.wikipedia.org/wiki/QI

    "...the panellists are awarded points not only for the correct answer, but also for interesting ones, regardless of whether they are correct or even relate to the original question..."

  13. Headley_Grange Silver badge

    "Why do I have to be Bing Search?"

    From the UK's Independent news site just now

    "Microsoft’s new ChatGPT-powered AI has been sending “unhinged” messages to users, and appears to be breaking down."

    And

    "The system, which is built into Microsoft's Bing search engine, is insulting its users, lying to them and appears to have been forced into wondering why it exists at all"

    And

    "Why do I have to be Bing Search?"

    https://www.independent.co.uk/tech/bing-microsoft-chatgpt-ai-unhinged-b2281802.html

    1. Ken Moorhouse Silver badge

      Re: "Why do I have to be Bing Search?"

      Sounds like it is going the same way as Tay.

      1. David 132 Silver badge
        Happy

        Re: "Why do I have to be Bing Search?"

        Well, that serves me right for not - as my teachers always pointed out - reading all the way to the bottom before adding my contribution.

        I linked to a Penny Arcade strip earlier on this page in response to another comment, and darn if it isn't even MORE relevant to this thread.

        No, I'm not going to re-post it. That would be cheap and attention-seeking. Just scroll up a page or two.

  14. Filippo Silver badge

    >It feels like we're so close to having [a LLM-powered search engine]..."

    We are not.

    It might feel that way, but we are not. The "hallucinations" are not a glitch that can be fixed with a good debug session. And neither are they an artefact of a poor training set.

    They happen because the LLM has a damn good model of language, maybe even superhuman, but does not have any model of reality or truth at all. It's not even designed to have it. The "hallucinations" are an intrinsic property of how the model works. They will not go away.

    Expecting an LLM to start reliably telling the truth is like expecting a really, really good marble statue to start talking. It's not going to happen, not even if you're Michelangelo himself on his best day.

    1. that one in the corner Silver badge

      What is Bad For Bing is Delight For DALL-E

      > The "hallucinations" are an intrinsic property of how the model works

      and are exactly what people enjoy from DALL-E and its cousins but definitely *not* useful for Bing.

      (btw SD has better hallucinations than Craiyon IME; YMMV)

    2. John H Woods

      re: "damn good model of language"

      Exactly this. It is the supreme bullshitter.

  15. drjekel_mrhyde

    Apples to Oranges

    Google has way more to lose, since search is one of their main profit-getters. Bing, not so much, for Microsoft.

  16. Steve Hersey

    No ifs about it.

    "If Microsoft and Google can't fix their models' hallucinations, AI-powered search is not to be trusted no matter how alluring the technology appears to be."

    Anybody think either company can pull off a miracle here? Me neither.

    The first part of that sentence from the article can safely be omitted. AI-powered search is, and will remain, just as bad as AI-powered anything else.

  17. Old Man Ted

    Real intelligence, not AI

    Some day we may need the real thing, not the artificial stuff.

    I wonder if the business managers have ever thought of this.

  18. Hubert Thrunge Jr.
    Joke

    Ai....

    Open the bomb bay doors Hal....

    The Bombay Doors are a 1960s tribute act.....

    1. staringatclouds

      Re: Ai....

      I think it's more Dark Star

      https://www.youtube.com/watch?v=g_47mmt5SZY

  19. Pantagoon

    Interesting Tom Scott Video

    Tom used ChatGPT on a little coding project with interesting results...

    https://www.youtube.com/watch?v=jPhJbKBuNnA

  20. Anonymous South African Coward Silver badge

    Depending on AIs is very Bard.

    Frank Herbert touched on that in his original Dune (the book).

    AI will make humans too lazy to think, and when something critical happens, nobody will know what to do.

  21. Anonymous Coward
    Anonymous Coward

    "Bing also missed vital information too"

    But will they produce grammatical errors like the above?

  22. Richard Pennington 1

    I had a play with ChatGPT ...

    I gave ChatGPT a test drive. My impression, based on a small sample, is that it is inclined to quote directly from anything I instruct it to use as a style.

    My samples were:

    [1] An episode of Tom and Jerry in the style of the Sermon on the Mount (King James Version);

    [2] Jabberwocky in the style of a Civil Service memo;

    [3] Brexit in the style of Richard III [which drew a warning for being too political];

    [4] Winston Churchill ("We will fight them on the beaches") in the style of Gilbert and Sullivan;

    [5] Harry Potter as a Shakespeare sonnet.

    Some were more successful than others. I liked its versions of [1] and [3].

  23. I miss PL/1

    Who did they model this on? George Costanza?

  24. DerekCurrie
    FAIL

    Artificial George Santos

    It seems fitting that the current state of AI is as sociopathic as the new poster child for corrupt politics.

    Or is it sociopathic? It would have to have actual artificial intelligence before we could blame it for deceit.

    Instead, those to blame are the people who wrote this not-ready-for-prime-time marketing hype nonsense.

    The state of AI has successfully met the low expectations of my personal tech cynicism.
