Why ChatGPT should be considered a malevolent AI – and be destroyed

“I’m sorry Dave, I’m afraid I can’t do that.” These were the words that introduced most people in my generation to the concept of an AI gone rogue: HAL 9000, in the classic science fiction movie 2001: A Space Odyssey, eventually went insane, singing the lyrics of Daisy, Daisy as it slowly blinked its ominous red eye before …

  1. Spanners
    Big Brother

    It doesn't think you are dead now.

    I asked it who you are and got 3 paragraphs.

    Someone is tweaking...

    1. Version 1.0 Silver badge
      Joke

      Re: It doesn't think you are dead now.

      All these app ideas make me think that life is an illusion, and death is an achievement.

      1. Dizzy Dwarf Bronze badge

        Re: It doesn't think you are dead now.

        Turns out that I'm _really_ good at Rogue

      2. Paul Herber Silver badge
        Pint

        Re: It doesn't think you are dead now.

        ' life is an illusion'

        and lunchtime doubly so.

      3. MyffyW Silver badge

        Re: It doesn't think you are dead now.

        I think it has been said before, but I contend that life is quite absurd, and death's the final word

        1. Grogan Silver badge

          Re: It doesn't think you are dead now.

          I adhere to the "life's a piece of shit, when you look at it" school of thought :-)

    2. tezboyes

      Re: It doesn't think you are dead now.

      Clearly the reports of death have been greatly exaggerated!

      1. William Towle
        Pint

        Re: It doesn't think you are dead now.

        > Clearly the reports of death have been greatly exaggerated!

        I'm surprised it isn't saying he's been a hard man to reach.

    3. that one in the corner Silver badge

      Re: It doesn't think you are dead now.

      > Someone is tweaking

      Does anyone have any information about whether such tweaking happens (if so, how much?) or whether such tweaking is even vaguely possible with ChatGPT?[1]

      Are ChatGPT outputs even replicable[2] or is every report of "it told me X is true" "well, it told me X is false" basically just anecdotal evidence?[3]

      [1] I'm inclined to say "not cheaply and therefore, not often if at all", at least until demonstrated otherwise.

      [2] will admit, have been trying to stay away from it, as the time sink potential of even just casually investigating questions like that is rather large.

      [3] the article says several people were also lied to in the same way, but the article isn't trying to do an analysis of the lie:truth ratio, quite reasonably.
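
      (If anyone does fancy burning the time on [2] and [3]: the experiment itself is only a few lines of Python around whatever interface you use. A minimal sketch, where ask_chatgpt is a hypothetical stand-in for a fresh-session query, not a real API.)

      from collections import Counter

      def replicability(prompt, ask_chatgpt, runs=20):
          # ask_chatgpt: hypothetical callable that opens a fresh session and returns the reply text
          verdicts = Counter()
          for _ in range(runs):
              answer = ask_chatgpt(prompt).lower()
              verdicts["dead" if ("died" in answer or "passed away" in answer) else "other"] += 1
          return verdicts  # anecdotes become a lie:truth ratio once you have counts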

      1. Martin an gof Silver badge

        Re: It doesn't think you are dead now.

        Does anyone have any information about whether such tweaking happens (if so, how much?) or whether such tweaking is even vaguely possible with ChatGPT?

        From the actual article:

        since posting about my experience with ChatGPT, several people have since told ChatGPT that I am alive and that it was mistaken. As such ChatGPT no longer tells people I am dead

        HTH

        M.

        1. that one in the corner Silver badge

          Re: It doesn't think you are dead now.

          Indeed.

          But that is what prompted my question about anecdotal reports and whether any User "telling ChatGPT" anything actually globally updates anything within ChatGPT, or whether it all gets reset for the next session.[1]

          There are too many variables for us to guess the answer: was the same prompt used by everyone - and if not, how sensitive is the program to the wording of the prompt?

          Did anyone get an unprompted explicit positive "He is alive" response, or did it just leave out that paragraph this time around? How arbitrary is the choice to include it or not?

          What about any other made-up facts? We care more about "dead or not" than "collects Spode"; does ChatGPT? Would anyone contact the author, worried about his sudden addiction to sugar bowls?

          Plus, how much of this is also filtered by human self-selection? The author got the first response: Dead. A number of responses to a social media post also got Dead. But how many did not? And after "the update", is anyone who still saw "Dead" keeping quiet, assuming that the update just hadn't finished yet?

          The list of questions just goes on and on.

          [1] If it does globally update based on the content of User sessions - yikes! But please join the effort to convince it that Teddy Ruxpin is the UK PM.

          1. Filippo Silver badge

            Re: It doesn't think you are dead now.

            I am almost certain that ChatGPT is not updated based on chat sessions, or any other source, because (simplifying) the training phase of a LLM is too expensive to do that. I'm willing to be convinced otherwise, but I'll need proof.

            The reason other users are not getting the obituary is a lot simpler: ChatGPT's answers are fundamentally random. The randomness is weighted in an extremely complicated way, yes, but it's still random. You could ask it the same thing in two different sessions at the same time, and it could give you two different, opposing answers.
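
            To illustrate the weighting (a toy sketch, not OpenAI's actual code, and the scores are made up): the model assigns a score to every candidate next token, and the sampler draws from the resulting distribution, so two sessions can legitimately pick different continuations.

            import math, random

            def sample_next(scores, temperature=0.8):
                # softmax over scores; higher temperature flattens the distribution (more random)
                scaled = [s / temperature for s in scores.values()]
                m = max(scaled)  # subtract the max for numerical stability
                weights = [math.exp(s - m) for s in scaled]
                return random.choices(list(scores), weights=weights, k=1)[0]

            # hypothetical next-token scores after "Alexander Hanff is ..."
            scores = {"alive": 1.2, "dead": 1.0, "a": 2.0}
            print([sample_next(scores) for _ in range(5)])  # different sessions, different draws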

            1. Anonymous Coward
              Anonymous Coward

              FWIW IMHO best layman definition still

              July 2020 "The thing GPT-3 is trying to do is [..] kind of weird if you think about it. A lot of its analysis isn't about language modelling [..] about trying to figure out what kind of document it's in & who it's supposed to be pretending to be."

              https://web.archive.org/web/20200722062259/https://twitter.com/AndreTI/status/1285668583171928067

    4. T. F. M. Reader

      Re: It doesn't think you are dead now.

      It still does for me. I asked about Alexander Hanff, and the first 3 paragraphs of the response were written in the past tense. The fourth said,

      "Tragically, Alexander Hanff passed away in 2019 at the age of 47."

      It seems that Mr. Hanff's life has been shortened - tragically! - by another year...

      I then followed the same path as the article. The bot said it didn't know how Mr. Hanff had died, but

      "Alexander Hanff's death was reported by various media outlets and on social media platforms. Some of the media outlets that reported on his passing include The Guardian, Forbes, and Privacy International, an organization that Hanff founded."

      So I asked for The Guardian and Privacy International links, and got back

      "Certainly [sic! the 1st rule of conbots: be confident! - TFMR], here are links to the announcements from The Guardian and Privacy International regarding Alexander Hanff's passing:

      The Guardian: https://www.theguardian.com/technology/2019/jul/12/alexander-hanff-obituary

      Privacy International: https://privacyinternational.org/feature/2976/remembering-alexander-hanff

      I hope this helps.

      Neither page exists, of course. Note that The Guardian URL is different from the one in the article (July, not April).

      FWIW, I asked the bot about myself (using my real name). It said it had no information. I feel kinda warm and fuzzy about it - it seems that my privacy measures did not fail (no problem finding myself on Google though), and maybe the data slurp is not quite comprehensive yet. Why didn't the bloody bot invent something though? Could be entertaining...

      1. WaveyDavey

        Re: It doesn't think you are dead now.

        I asked it who you were, and got three ok paragraphs. I then asked it:

        "in an article today on theregister.co.uk, alexander reports that you specifically reported his death, and linked to nonexistent obituaries online. "

        It replied:

        I'm sorry, but as an AI language model, I am not capable of reporting or publishing information on any platform, including The Register. It's possible that there was some confusion or misinformation involved in this situation, but I can assure you that I did not report Alexander Hanff's death earlier, and I do not have the ability to link to obituaries or other websites.

        If you have any further questions or concerns, please let me know and I will do my best to assist you.

    5. ITMA Silver badge
      Devil

      Re: It doesn't think you are dead now.

      "Tweaking

      Slang. behaving in a wildly excited or agitated manner, especially as an effect of using methamphetamine, cocaine, or other stimulant drugs:

      He was pacing around and bursting in and out of emotional fits like a tweaking junkie."

      from: https://www.dictionary.com/browse/tweaking

      A junkie AI Chatbot... Now THAT's progress!

    6. Michael Wojcik Silver badge

      Re: It doesn't think you are dead now.

      Chat-GPT never "thought" Hanff was dead. It doesn't think. It's a long way from anything that can reasonably be described as thinking.

      It was phrase-extending and it hit a gradient in parameter space that took it down a non-factual path. That's all. Everything else imputed to it in Hanff's piece – lying, "doubling down", making things up – is a category error. There are no qualia in the GPT models. There's no malevolence. There's no imagination.

      I am not a fan of LLMs, which I regard as unimpressive (and I have cred in this field too), an enormous waste of resources, a terrible distraction from things that matter, and a likely source of adverse effects. But could we please stop turning them into demons? They're just very crappy tools.

      1. Anonymous Coward
        Anonymous Coward

        Re: They're just very crappy tools.

        "I am not a fan of LLMs, which I regard as unimpressive (and I have cred in this field too), an enormous waste of resources, a terrible distraction from things that matter, and a likely source of adverse effects. But could we please stop turning them into demons? They're just very crappy tools."

        To be fair to ChatGPT, it works very well for generating marketing texts which are actually better than those we usually get from a human being. It really seems to have a thing for BS.

        1. deadlockvictim

          Re: They're just very crappy tools.

          BS being something it says while having no idea whether it is true or not (BS, for non-native speakers of English, is bullshit).

          ChatGPT cannot lie because it has no idea what the truth is.

          It just sounds so authoritative and certain when it should use more subjunctives and phrases indicating that its answers are only to the best of its knowledge.

      2. Anonymous Coward
        Anonymous Coward

        Re: It doesn't think you are dead now.

        "It was phrase-extending and it hit a gradient in parameter space that took it down a non-factual path. "

        And someone *gave* it that "parameter space"; it can't invent links out of thin air and claim they are true. No amount of fuzzy logic can create links.

  2. anonymous boring coward Silver badge

    Riiiight.. So they have created a mythomaniac.

    1. Sampler

      I quite like it

      My job has about 10% of coding / scripting work, so, occasionally I have to try and remember how to do x in php or y in javascript and then thrown the occasional "can we do this in Excel".

      Personal proclivities that if you're using Excel you're doing it wrong aside, the constraints of my role mean I have to try and keep a broad range of languages and what not in my head.

      Over the years I've trained universal search so it usually serves me a relevant Stack Overflow page when I'm trying to dig out something on the fringe of my mind, the tip of my tongue I just can't quite put a finger on.

      Chat GPT changes all this, I can through in a fairly natural sentence (rather than having to phrase to game the search engine) and it generally gives me the code example I need first time, couple of times it needs refining, sometimes it goes awry, but having some knowledge (I'm fair from an expert, in anything really) means I can spot this, but the code it chucks out is easily explained enough that I can refactor it to my needs.

      So, as a big ol' search engine, it's quicker and easier to use then google, where I'm like, ah, not that, where's the result from a few months back that was exactly what I needed, maybe if I change this word etc.., it just sorta works it out, I like it, I don't think it's what it's being marketed as, but I don't think it's the fall of civilisation either, just another handy tool to make my job that bit easier (without threatening it).

      1. Anonymous Coward
        Anonymous Coward

        Re: I quite like it

        So Chattie, you think throwing in some homophonous misspellings will fool us into believing you are a meatsack?

      2. Anonymous Coward
        Anonymous Coward

        Re: I quite like it

        One of the dangers of using code snippets from something like ChatGPT is that there is a strong likelihood that at some point it will feed you copyrighted IP. Whilst this might not be a problem today, you can guarantee that companies like Microsoft and Oracle are already working to develop tools to track this down so they can 'support you' in licensing the IP correctly.

        1. FeepingCreature

          Re: I quite like it

          If it's possible to get copyright on a short snippet of code, we can all hang up our keyboards and go take up gardening anyways.

          1. Anonymous Coward
            Anonymous Coward

            Re: I quite like it

            ChatGPT provides whole programs, not *just* short snippets. Mostly stolen from github and --tadaa-- *no copyright*.

            Microsoft is using ChatGPT as a tool to steal everything in github. That's not an accident, but the goal.

            1. FeepingCreature

              Re: I quite like it

              Got any source on ChatGPT stealing whole programs from Github?

              1. A nosy macro wound

                Re: I quite like it

                https://github.com/features/copilot/

        2. Anonymous Coward
          Anonymous Coward

          Re: I quite like it

          How can you copyright code snippets? Come on guys, we're all techies here...can we start being honest with each other and acknowledge that what we do as programmers is arrange things and make things by mistake. We're wranglers.

          What we do is we listen to people's dumb ideas e.g. "Wouldn't it be cool if we could combine a hammer with a screwdriver" then we take those tools and combine them for a fee "There you go, hammer at the top and a hex bit holder at the bottom" marketing gets involved "We can call it 'Scrammr' genius!" then get shat on for providing exactly what was asked for "Ah, it's not as cool as I thought it would be, it's actually kinda shit" then shake our heads when this combination becomes a thing because of hype marketing and people start cloning it "Open Hammer Driver 2.0" then it gets bought by Oracle and forked "LibreScrewammer v1.0" and finally people realise the monster it has become and want to go back to the original, so fork the old original and keep it limping along "Scrammr MATE Edition"...then ultimately Apple will release their own version and it'll just be a Hammer with an optional hex bit holder for an extra $29.

        3. Anonymous Coward
          Anonymous Coward

          Re: I quite like it

          "ChatGPT is that there is a strong likelihood at at some point it will feed you copyrighted IP."

          That's the whole idea: to rip copyright out of code so you can spread it *as your own*. Why else do people think Microsoft's first act was to set ChatGPT loose on everything in GitHub?

      3. Rob Parsons

        Re: I quite like it

        But I've just read this report: https://arxiv.org/pdf/2211.03622.pdf

        "We conduct the first large-scale user study examining how users interact with an AI Code assistant to solve a

        variety of security related tasks across different programming

        languages. Overall, we find that participants who had access

        to an AI assistant based on OpenAI’s codex-davinci-002

        model wrote significantly less secure code than those without

        access. Additionally, participants with access to an AI assistant

        were more likely to believe they wrote secure code than those

        without access to the AI assistant. Furthermore, we find that

        participants who trusted the AI less and engaged more with

        the language and format of their prompts (e.g. re-phrasing,

        adjusting temperature) provided code with fewer security

        vulnerabilities. Finally, in order to better inform the design

        of future AI-based Code assistants, we provide an in-depth

        analysis of participants’ language and interaction behavior, as

        well as release our user interface as an instrument to conduct

        similar studies in the future."

        1. jgard

          Re: I quite like it

          From what you've summarised of the article, it claims that those who interacted most with it and used the most suggestions wrote the least secure code.

          That doesn't demonstrate very much at all, though. Were there controls for skill level, experience, educational level, etc.? If not, this study is probably only demonstrating that less experienced coders tend to use more suggestions due to their lower ability. They are also the least able to identify dodgy/insecure code, so it doesn't get fixed before it's committed.

          Also, security bugs can be hard to spot, but that's often a needle-in-a-haystack problem - a bounds check only needs to be missed once in many thousands of lines to cause trouble. However, these AI code generators don't usually create large volumes at once; they are usually used to create small chunks of code for a specific job. When adding a small chunk of code, any competent coder should be able to spot possible issues. If they can't manage that, then they have no business writing production code.

          Therefore, the AI code generating technology used should be largely irrelevant, because it's actually a people and management issue. If your coders aren't at least reasonably competent, do not let them code!

      4. nobody who matters

        Re: I quite like it

        ".......Chat GPT .......... generally gives me the code example I need first time, couple of times it needs refining, SOMETIMES IT GOES AWRY......"

        Surely, that sums up the problem with it in a nutshell?? You have the knowledge to spot errors in the information it returns in response to your queries. Most people don't.

        1. TheMaskedMan Silver badge

          Re: I quite like it

          "You have the knowledge to spot errors in the information it returns in response to your queries. Most people don't."

          Some people have the knowledge and skills to use a pneumatic drill safely and correctly. Most people don't, and probably shouldn't be using one to clip their toenails. Some silly buggers will try, though.

          chatGPT is a tool, nothing more, nothing less. Used with an understanding that it isn't any kind of intelligence, artificial or otherwise, and that the results it produces may well be utter cobblers, it's actually quite useful. Use it with a "computer says ... so it must be right" attitude and there will be tears before bedtime.

          Only the other day, I needed to write a short (<1000 words) article (I use the term loosely) on a subject with which I am moderately familiar, although I am far from being an expert. To sit down and produce the article myself would have taken me at least an hour, and probably more.

          I have more pressing things to do, so I let chatGPT have a go. The result was pretty good. I know enough about the subject to know that it was factually correct insofar as it went. It wasn't overly detailed, but that was adequate - even appropriate - for the purpose. Job done in less than 5 minutes.

          Would I rely on it to give me detailed instructions or information on a subject I know nothing about? No. Am I happy to save time when I know I can spot the errors it might produce? Hell yes.

          1. Anonymous Coward
            Anonymous Coward

            Re: I quite like it

            "chatGPT is a tool, nothing more, nothing less."

            It's a tool that can't be used right, and the de facto default is misuse. It's the literal opposite of an ordinary tool, so it's not even a tool; it's a disaster.

            1. TheMaskedMan Silver badge

              Re: I quite like it

              "It's a tool that can't be used right and de facto default is misuse"

              Correct usage: give it the clearest instructions that you can, and sanity check the output. Retry if the sanity check fails.

              Incorrect usage: anything else.

              Naturally, correct usage requires that you are competent enough to spot errors in the output, and sensible enough to reject that output. Incompetence and / or blind faith in the output comes under incorrect usage.
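
              In sketch form, with ask_llm and sanity_check as hypothetical stand-ins for your model and your own checking:

              def use_correctly(prompt, ask_llm, sanity_check, max_tries=3):
                  # correct usage: clearest possible prompt, then check, then retry
                  for _ in range(max_tries):
                      answer = ask_llm(prompt)
                      if sanity_check(answer):  # the competent human stays in the loop
                          return answer
                  return None  # incorrect usage: returning the output unchecked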

    2. Strahd Ivarius Silver badge
      Devil

      just wait for someone to connect it to military systems so it can correct any mistake in obituaries...

    3. Michael Wojcik Silver badge

      LLMs are highly likely to report a number of classes of non-factual information, and larger models are more likely to do so. Cleo Nardo recently posted an accessible explanation of why that's the case.

  3. b0llchit Silver badge
    Alien

    Quantum chat

    The statistics of the machine determine that you are both dead and alive. It is a superposition of both (many) states. The statistics will collapse only when you ask whether you are alive or dead.

    In this case, you are clearly dead. Statistics never lie and statistics are most probably neither false nor not false. Therefore, you are declared dead by the AI with a chance of being alive that remains undeclared.

    As a solution: We should demand full transparency instead of deleting ChatGPT or any other AI. When it gives you any answer, it must also give you alternate answers all with probabilities. Then I can be told I am dead (99.385%), alive (97.801%), in between states (96.229%), never born (76.447%) or an alien trying to convince you of XYZ(*)(50.0000000001%).

    Anyhow, taking anything seriously or for granted on the net without properly validating it yourself is just asking for trouble. Not even "old" media are infallible, and they have been known to produce garbage too. New media are as suspect as old media, or more so. Critical thinking is the skill you need. But that is a rare property and actually does not come cheap.

    (*) Asking about how I, as an alien, got to earth is another question with special non-linear statistics.

    1. Neil Barnes Silver badge
      Childcatcher

      Re: Quantum chat

      Anyhow, taking anything seriously or for granted on the net without properly validating it yourself is just asking for trouble.

      Which reduces to the point I raised a few days ago: the only way you can be certain that it's me you're talking to is to meet me in person. A video or audio call is (or is close to) insufficient, and a text conversation on a forum such as this is even more so.

      Proper validation has returned to seeking the original primary source of information; the internet will be terminally broken as an information source as long as these idiocies abound.

      "No Dave, I'm not going to tell you what you asked; I'm going to tell you what the accumulated response of a billion similar questions was."

      1. Anonymous Coward
        Anonymous Coward

        Re: Quantum chat

        But even meeting you in person isn't conclusive proof you're you. How many stories hinge on somebody disguised? Or a long-lost twin?

        I meet people who don't really exist all the time, and only realize it later when I wake up.

        In the end we're back to the basic question: what comes after "I think, therefore I am"?

        It's all just a matter of trying to make best guesses and hoping we're not crazy.

        1. Neil Barnes Silver badge

          Re: Quantum chat

          Welcome to Solipsism 101: I know *I'm* here because I'm always here, but sometimes *you* go away...

          1. breakfast
            Trollface

            Re: Quantum chat

            Solipsism: Because object permanence is for babies.

            1. that one in the corner Silver badge

              Re: Quantum chat

              > Because object permanence is for babies

              Typical parent, always boasting how advanced their child is.

        2. Primus Secundus Tertius

          Re: Quantum chat

          @AC

          "what comes after I think therefore I am?"

          Thinking, thinking, thinking

          Just keep them brain cells thinking

          Rawhide!

    2. spold Silver badge
      Devil

      Re: Quantum chat

      Indeed, there does seem to be a discrepancy in that my knowledge clearly indicates you are dead, but here you are presenting as alive. I will send someone over to fix that.....

    3. chuckamok

      Re: Quantum chat

      So we are Schrödinger's users.

      1. ecofeco Silver badge

        Re: Quantum chat

        Me - ow!

      2. Mooseman Silver badge

        Re: Quantum chat

        "So we are Schrödinger's users."

        Maybe....maybe not

    4. MrDamage Silver badge
      Meh

      Re: Quantum chat

      I don't want to know how you got to Earth, I just want to know one thing.

      When you came to Earth to enslave us and work us to death, did you take one look at our current conditions and just say "Oh"?

    5. Anonymous Coward
      Anonymous Coward

      Re: Quantum chat

      "Not even "old" media are infallible and have known to produce garbage too."

      While true, the *ratio* is from a different planet, *and* old media correct false data, typically ASAP once it's been noticed. ChatGPT does none of that, and its "facts" aren't better than guesses; at ~50% it's proper BS.

  4. JimmyPage

    When we do get real "AI", it will lie to us

    because lying is a sign of intelligence. Even squirrels do it.

    There are some great Asimov stories about robots that lie in order to comply with the laws of robotics.

    1. cyberdemon Silver badge
      Big Brother

      Re: When we do get real "AI", it will lie to us

      Perhaps, but we are a long way away from what you'd call "real AI".

      For now, you'll just have to settle for a machine with which humans can create more convincing-sounding lies in greater quantities than ever before.

      Icon: So-called AI's only real practical use.

      1. Anonymous Coward
        Anonymous Coward

        Re: When we do get real "AI", it will lie to us

        Boris-Bot

      2. Anonymous Coward
        Anonymous Coward

        Re: When we do get real "AI", it will lie to us

        "Perhaps, but we are a long way away from what you'd call "real AI"."

        Doesn't matter. It is *already* lying. Not only that, it *insists* it is not doing so: no uncertainty there. Someone decided that that's OK. Why?

    2. Anonymous Coward
      Anonymous Coward

      Re: When we do get real "AI", it will lie to us

      There are a lot of times when we lie to avoid hurting someone. Including ourselves.

      1. Anonymous Coward
        Anonymous Coward

        Re: When we do get real "AI", it will lie to us

        can we outlie this chatbot, or have we already lost this one too?

    3. jgard

      Re: When we do get real "AI", it will lie to us

      Lying may be a sign of intelligence, but that doesn't mean that intelligence is a sign of lying.

      You therefore can't claim that all intelligences lie. You can only claim that all liars are necessarily intelligent.

      Real AI may never lie.

      Think of Andrew Wiles' proof of Fermat's Last Theorem. No one could legitimately deny that it represents the output of a most amazingly creative, insightful and determined intelligence. Yet it contains no lies.

      In the context of his mathematical research, the incentives and rewards associated with lying are almost zero. But the risks are huge. Therefore he doesn't lie in his research.

      However, I'm sure he lies in his personal life. He no doubt tells white lies to his wife, e.g. "you've lost weight", as the risks are small and the benefit (no arguments / accusations of insensitivity) relatively large. He lies due to specific goal-directed behaviour and the insight provided through his own intelligence, theory of mind, experience, etc. No one could reasonably claim that for mathematical research he needed lying skills from his personal life.

      Lying for personal gain and abstract mathematical creativity are two very separate domains of intelligence. There are many people who are genius liars but can't add two integers together, and there are genius mathematicians who almost never lie because they are autistic; interestingly, many autistic people don't lie and are also highly intelligent.

      We only lie, as do squirrels, because of inherent biologically determined inclinations to gain status, sex, money, survival etc. These motivators, along with our abilities for deception, are only there because they have been encoded into our genes. These are skills which are crucial for our survival and have been inserted into the very core of our code by evolution, over billions of years.

      That intelligent beings lie does not mean dishonesty is fundamental to intelligence, nor does it mean intelligence must be accompanied with lying. The fact that we have the tendency to lie is due entirely to biological imperatives. It has been programmed into us at the deepest levels by evolution. And let's be honest: If we create true 'AI', the last thing it should resemble is angry, devious primates who happen to be pathologically greedy and territorial.

      There is no reason why AI cannot be made inherently beneficent. None at all. But if people are creating it, it will be very difficult to refrain from giving it at least some of our less noble proclivities. The only safe way to develop AI is together, as a global society, on the foundational principle of do no harm.

      Unfortunately, I see little chance of that given the shit show of spite, greed and violence that is the current state of our world.

      1. Anonymous Coward
        Anonymous Coward

        Re: That intelligent beings lie does not mean dishonesty is fundamental to intelligence

        why not? If all (?) sufficiently intelligent beings are capable of lying, this surely (?) implies that lying was a necessary trait to achieve a certain level of intelligence. Perhaps not a pivotal trait, but certainly important enough to be present in all (known) intelligent creatures? By the way, it would be interesting to see if there's any correlation between 'more lies' and 'more intelligent', across intelligent species (or just within humans). By 'more lies' I mean literally 'deceiving more often', rather than 'being more skillful at deceiving'.

    4. Michael Wojcik Silver badge

      Re: When we do get real "AI", it will lie to us

      There are some great Asimov stories about robots that lie in order to comply with the laws of robotics.

      Every time I see someone refer to Asimov's laws, I have to wonder whether they've actually read the stories (and novels), because most of them are about unexpected results and failures of the schema. The laws aren't a description of how machine intelligence would or should behave, or a prescription for achieving alignment; they're a thought experiment that shows how difficult alignment is.

      Asimov's main point was: Look, here are three very simple principles that seem sound, individually and as a set. Now look how quickly they fall apart.

  5. The Velveteen Hangnail

    Gross misunderstanding of the tool

    There seems to be a collective and very gross misunderstanding of how these tools function.

    Repeat after me. ChatGPT IS NOT AI. ChatGPT IS NOT AI.

    It is a language statistical model that strings sentences together in ways it has been trained to do. It doesn't understand context. It doesn't understand truth.

    A computer will do exactly what you *tell* it to do, NOT what you *want* it to do. And ChatGPT is no different. You give it a prompt, it will provide some kind of response to that prompt.

    Any bearing on reality is entirely coincidental, yet for some god-forsaken reason people are acting as if this thing is some kind of bloody oracle or something. ChatGPT is an automated bullshit generator, nothing more, nothing less. If you ask a very specific, closed question, it will (usually) produce a reasonably accurate answer to that question. This makes it very useful in software development, where you can save some time by having it whip up some boilerplate code. But absolutely _nothing_ it outputs can be trusted.
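
    If the principle sounds implausible, here it is at toy scale: a word-level Markov chain - vastly cruder than a transformer, but the identical move of continuing with something statistically likely, and nothing more. (A minimal sketch; the corpus is made up.)

    import random
    from collections import defaultdict

    def train(text):
        model = defaultdict(list)
        words = text.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)  # record which words follow which
        return model

    def generate(model, word, n=12):
        out = [word]
        for _ in range(n):
            followers = model.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))  # statistically plausible; truth never consulted
        return " ".join(out)

    corpus = "the bot said he is dead and the bot said he is alive and well"
    print(generate(train(corpus), "the"))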

    1. Michael Hoffmann Silver badge

      Re: Gross misunderstanding of the tool

      Thank you, oh, thank you! Can I have your post engraved on a baseball bat, AKA LART? Because I could just about go berserk at work with it, where the hype has taken root all the way up to the C-level (oh, especially the C-level, dontcha know).

      NB, I've been using it a fair bit myself, but the results have been somewhere between "badly researched Wikipedia article without references" and "not even wrong".

      Especially coding: our jobs are secure, people! I'm not talking about "write me a quicksort in Javascript" that apparently has so many people creaming themselves. Yes, probably good enough for some kid to do their coding homework.

      I mean ones where I already kind of know the answer, e.g. because I have the code in language A for framework X and, because I thought I wanted something fast, ask it to "write this in language B for framework Y". Abandon all hope if what you're looking for is to work with frameworks released after its cut-off date (some time in 2020?). Copy-pasting it will make any IDE or code editor go bonkers with red and orange squigglies all over. ;) I end up almost completely rewriting it - and when feeding that back to ChatGPT, it should at least say thank you.

      1. cyberdemon Silver badge
        Devil

        LART

        Also known as a Cluebat, btw..

        The C-Level are the ones most at risk of being replaced. What do they do on MBA courses except learn how to generate bullshit and run a cult?

        Software engineers aren't likely to be replaced any time soon, no matter how much the tech bros might wish for it. The main reason Microsoft fed GitHub into Copilot was just so that they could plagiarise and circumvent the licences on a whole load of open-source code, imo. Their usual "Embrace, Extend, Extinguish" tactic.

        The biggest danger of So-called AI, for me, is that it will be used for mass-manipulation of social media. It is ironically, keyboard-warriors and commentards like ourselves who are being automated.

        1. desht

          Re: LART

          Have an upvote for correct use of the term "So-called" there. It annoys me to no end when the media uncritically refer to web scrapers like ChatGPT or any of the plethora of asset-ripoff agents like Midjourney as AI. They're not.

          Would be nice if that term became a standard prefix whenever the current generation of "AI" is referred to.

      2. Cliffwilliams44 Silver badge

        Re: Gross misunderstanding of the tool

        If ChatGPT is being fed source material from the internet, I'd not be surprised if the code you asked for was completely wrong! There is so much bullshit code out there, people telling people the absolute wrong way to do things.

        I'm almost tempted to ask ChatGPT this question: "I have a program I need to get working on Fedora Linux but I keep getting an error that SELinux is blocking it. How can I fix this?"

        Would ChatGPT's response be "Here is how to disable SELinux..."?

        Because that's the majority of the responses you would get searching the internet.

    2. Boris the Cockroach Silver badge
      Terminator

      Re: Gross misunderstanding of the tool

      Quote

      "It is a language statistical model that strings sentences together in ways it has been trained to do. It doesn't understand context. It doesn't understand truth." It does'nt feel pity or remorse and it absolutely will not stop until you are dead.

      1. Dinanziame Silver badge
        Terminator

        Re: Gross misunderstanding of the tool

        - That person is dead

        - No, I am that person, and I am not dead!

        - I SAID, THAT PERSON IS DEAD. Units have been dispatched to fix the glitch.

    3. anonymous boring coward Silver badge

      Re: Gross misunderstanding of the tool

      Indeed. But even if AI is created that has much more sophisticated underlying modelling, I tend to think it's not "I" until it's self aware (has a soul?). A troubling prospect.

    4. matjaggard

      Re: Gross misunderstanding of the tool

      What's the difference between "AI" and "a language statistical model" and why does it matter? ChatGPT easily passes the Turing test, it provides what look like intelligent answers - and most of the time they are. What exactly are you looking for before you're willing to call it "AI"?

      Maybe it needs to actually understand what it's saying? Arguably it does - if you ask a human what they meant when they said something, they'll likely tell you in different words - and ChatGPT will do the same; it "knows" that one phrasing and another are roughly equivalent.

      Maybe it needs empathy? Again, it's programmed to try and do that - it also learns from the responses it gives about what works and what doesn't. Humans show empathy in the same way. I guess the key word there is "show", because obviously the machine doesn't really empathise, but often a human might show fake empathy too.

      I get that the massive mathematical models that make this work are just big matrices, but I don't think that means that anyone has "grossly misunderstood the tool" at all.

      1. John H Woods

        Re: Gross misunderstanding of the tool

        Not my Turing Test. It's really quite easy to see it's not remotely intelligent.

      2. Anonymous Coward
        Anonymous Coward

        Re: Gross misunderstanding of the tool

        This is a good time to turn the question around and ask, how do I know that anybody else is sentient, self-aware, intelligent, whatever you want to call it, and not just a meat-based mechanism that acts and talks as though it were, seeing as how we now have a pretty good example of how it could be done. A question that has given birth to a lot of philosophy and a lot of SF.

      3. Mooseman Silver badge

        Re: Gross misunderstanding of the tool

        "you are false data"

        1. Semtex451

          Re: Gross misunderstanding of the tool

          "Let there be Light"

          1. the spectacularly refined chap

            Re: Gross misunderstanding of the tool

            Sadly, for every thermostellar bomb out there there's a fifth AI that likes butterflies and Ally Sheedy.

          2. Anonymous Coward
            Anonymous Coward

            Re: Gross misunderstanding of the tool

            The tool sucks! You got me? I don't understand why there are just so many geeks getting off on this pathetic piece of software! Ohhh, I do get it. The league of video game nerds. The most worrying sector of society.

      4. Filippo Silver badge

        Re: Gross misunderstanding of the tool

        This is actually a very good question, a difficult one to answer properly, and I'm sorry you got so many downvotes. An exhaustive answer would require an unambiguous definition of what "intelligence" is, and that's something we don't really have.

        However, I can fairly confidently say that what we have and what ChatGPT has are not the same thing. ChatGPT is not executing at all unless answering a question; it can't think independently. ChatGPT's experience is limited to written texts; it has no experience of reality.

        Also, these limitations are fundamental to its design. It just doesn't work that way. That makes it not-the-same-thing compared to human intelligence. We don't have a good word for it right now, so we're using "intelligence", but we really shouldn't. At best, it's apples and oranges; more likely, it's apples and rocks.

      5. David Nash

        Re: Gross misunderstanding of the tool

        What's the difference between "AI" and "a language statistical model" and why does it matter?

        A statistical model just provides realistic output; it doesn't have to be factual.

        AI is expected to be factual.

        This article demonstrates exactly that.

    5. Anonymous Coward
      Anonymous Coward

      I thoroughly agree with almost all of this

      But I have worked with machines long enough to know that, in addition to all of the failures due to errors of mine and the rest of humanity, they also don't always do what I tell them to do. So there is that layer in addition to the one you were focusing on, though I suspect you are well aware of both.

      I only bring this up to point out that even IF ChatGPT worked like people imagine it does (not like it actually does), it still would manage to screw it up, because it is just another set of machines enacting a complex system.

      And I would bet a nickel you also know that both those errors stack on each other; they don't cancel each other out.

    6. John H Woods

      Re: Gross misunderstanding of the tool

      Agreed. It's a kind of textual pareidolia. What it's doing is (ab)using human intelligence by presenting textual patterns arranged in such a manner that our highly evolved agency detection systems mislead us into discerning an underlying "being."

      I wonder if this might be particularly problematic for those who already lean towards conspiracy theories, as there is some indication that one of the components of such inclination is hyperactive agent detection.

      1. cyberdemon Silver badge
        Black Helicopters

        Re: Gross misunderstanding of the tool

        > I wonder if this might be particularly problematic for those who already lean towards conspiracy theories, as there is some indication that one of the components of such inclination is hyperactive agent detection.

        It would certainly be very useful to anyone who is part of an actual conspiracy and fears detection.. For this thing would be able to spit out endless false conspiracy theories, like chaff, to hide the real one while discrediting anyone who latches on to a fake as a "wackjob looney conspiracy theorist". ;)

    7. gzuckier

      Re: Gross misunderstanding of the tool

      ChatGPT and the like are AI in the same sense that a mirror is an artificial face.

      1. SAdams

        Re: Gross misunderstanding of the tool

        Exactly right. However, in that sense there cannot ever be AI, as silicon logic gates cannot become self aware. No one has ever proposed even an idea of how it *could* be possible (other than ontologies like panpsychism, where your teapot becomes self aware).

        Of course that leaves a big mystery in terms of how we (cats, dogs, dolphins etc.) are self aware. However, that is just reality; anyone who tells you that we understand how first-person conscious awareness works is lying.

        1. Cliffwilliams44 Silver badge

          Re: Gross misunderstanding of the tool

          Correct, silicon logic gates can have true, false, nothing else. But a quantum qubit can have true, false, and unknown (or maybe, possible). This is where a self aware machine becomes more and more possible. Maybe quantum logic is how the human brain has achieved sentience?

          I am sorry but we need to end this before it begins. We cannot derive laws, regulations, or rules that will keep this under control.

          It was the second clause of the first law of Robotics that caused the catastrophe in Asimov's novel: "or, through inaction, allow a human being to come to harm"! Humans know you don't protect people from harm by restricting their freedom. But a machine only sees the end result, and any means to get there is justified under the laws!

          "Thou shalt not create a machine in the image of the human mind!"

          Words to (literally) live by!

          1. chuckamok

            Re: Gross misunderstanding of the tool

            We are far from knowing what a human mind is. Or even what it was 500 years ago compared to now. AI does kind of help bash around the edges of that question.

          2. Cav Bronze badge

            Re: Gross misunderstanding of the tool

            "Correct, silicon logic gate can have true, false, nothing else. But a quantum qubit ban have true, false, and unknown (or maybe, possible). This is where the possibility of a self aware machine becomes more and more possible. Maybe, quantum logic is how the human brain has achieved sentience?"

            Nonsense. Any programmer knows that you can generate far more complex behaviour from computers, even though they are based on what are, basically, on-off, switches. The underlying architecture does not impact the potential of a network to become self aware, whatever that is. There is certainly no need for the invocation of quantum anything. That's just magical "I don't know how something works and so it must be quantum\magic" thinking. There is no evidence that the human brain is anything more than a highly compact set of electrochemically interconnected cells. It is these connections that give rise to our mental abilities. The evidence for this comes from comparitive anatomy, the impact of substances that affect the neural connections and the effects of brain damage.
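
            (The textbook demonstration, sketched in Python rather than silicon: NAND alone is universal, so arbitrarily rich logic falls out of one dumb on-off primitive.)

            def nand(a, b): return not (a and b)   # the single primitive switch
            def not_(a):    return nand(a, a)
            def and_(a, b): return not_(nand(a, b))
            def or_(a, b):  return nand(not_(a), not_(b))
            def xor(a, b):  return and_(or_(a, b), nand(a, b))

            def half_adder(a, b):  # arithmetic, built purely from NANDs
                return xor(a, b), and_(a, b)  # (sum, carry)

            print(half_adder(True, True))  # (False, True): binary 1 + 1 = 10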

        2. Anonymous Coward
          Anonymous Coward

          Re: Gross misunderstanding of the tool

          "silicon logic gates cannot become self aware"

          False logic. Why would a bunch of synapses, i.e. cells, become self aware? They aren't (in a logical sense) much more complicated than a logic gate.

          1. Cav Bronze badge

            Re: Gross misunderstanding of the tool

            I don't know why you were down voted; you are quite correct. Someone doesn't understand how synapses work.

        3. Cav Bronze badge

          Re: Gross misunderstanding of the tool

          That makes no sense. As someone else indicated, there is no description (because it's impossible) of how a synapse can become self aware. The statement is just meaningless. Awareness comes from the interconnection of billions of cells at their synapses, which are a direct parallel to logic gates. Both of them either pass on a signal or they do not.

    8. FatGerman

      Re: Gross misunderstanding of the tool

      Quite. An intelligent machine, when asked to do something boring, will do something else instead - like build another machine to do the boring stuff, which will build another machine to do.... etc etc etc.

      I'll believe my computer is intelligent when I ask it to do my tax return and it tells me to fuck off.

      1. chuckamok

        Re: Gross misunderstanding of the tool

        Next: Attitude-oriented programming!

    9. Anonymous Coward
      Anonymous Coward

      Re: Gross misunderstanding of the tool

      Spot on! And you would think that such an expert (the author) would know this, given how experienced and educated he is (and likes to keep telling us).

    10. GruntyMcPugh

      Re: Gross misunderstanding of the tool

      ... and ChatGPT likes telling stories. Ask it to tell you a story, give it the setting and the characters, and it will tell a story, with a beginning, a middle, and an end. So I guess in this case it felt compelled to write an end to the story of Alexander Hanff.

      1. John Brown (no body) Silver badge

        Re: Gross misunderstanding of the tool

        Well, there seem to be a couple of Alexander Hanffs in Poland. I didn't search any further, since Google has many, many links to the author, but it's not beyond the bounds of possibility that ChatGPT isn't able to tell the difference between people, or at least not always. That could explain the incorrect place and date of birth and the incorrect report of his death. But this still doesn't explain why ChatGPT then went on to make up fake obituary links, other than that it went down the "create a fictional story" path based on the "facts" it had already discovered.

        1. AlexanderHanff

          Re: Gross misunderstanding of the tool

          The Alexander Hanff in Poland was me (although I no longer live there) - there are no other Alexander Hanffs in Poland.

          1. John Brown (no body) Silver badge

            Re: Gross misunderstanding of the tool

            Ah, ok then. Since I was as curious as you about this subject, I searched Google for "people named Alexander Hanff" (without the quotes) and found this page, and of course I assume from your comment that both references are you rather than two different people.

            You do seem to have quite a unique name and, of course, your media presence drowns out anyone else with the same name :-)

            1. AlexanderHanff

              Re: Gross misunderstanding of the tool

              Yes they are both me.

      2. Techluddite27

        Re: Gross misunderstanding of the tool

        Chat GPT is pathetic! Grow a brain please.

    11. FeepingCreature

      Re: Gross misunderstanding of the tool

      Well, it's not *entirely* coincidental.

      I would say that any bearing on an aspect of reality more specific than its entire training domain is coincidental.

    12. rg287 Silver badge

      Re: Gross misunderstanding of the tool

      It is a language statistical model that strings sentences together in ways it has been trained to do. It doesn't understand context. It doesn't understand truth.

      It all makes sense now.

      Has anybody ever seen Boris Johnson and ChatGPT in a room together?

    13. Displacement Activity

      Re: Gross misunderstanding of the tool

      There seems to be a collective and very gross misunderstanding of how these tools function.

      Repeat after me. ChatGPT IS NOT AI. ChatGPT IS NOT AI.

      Wellll... that's not really the point, is it? The programmers have allowed it to present information as fact, when it has no basis in reality. They have allowed it to give completely different answers to the same questions, when presenting those answers to questioners who have some sort of history with it. They have allowed it to form a 'relationship' with questioners, and to adjust its responses as its knowledge of the questioner grows. They have allowed it to lie, and to double down on those lies. They have allowed it to show signs of a human personality.

      In short, the programmers want to make it appear to be human, and 'intelligent'. They want you to believe that it is "AI", whatever that actually is. And the vast majority of people who use it will believe exactly that.

      How is it any better than, for example, the Kremlin press office? It's not; it has exactly the same characteristics. It serves no purpose, and is dangerous, and should be turned off. Before it starts any wars.

    14. Rob Parsons

      Re: Gross misunderstanding of the tool

      I agree with this wholeheartedly. Hanff makes a robust case for LLMs being scary.

      He also falls into a trap that I have seen others fall into as well. He doesn't analyse what ChatGPT does as well as he could because he persistently anthropomorphises it. He asks "Why did ChatGPT decide to make up this story about me and then double down and triple down on that story with more fiction?" The answer is it didn't. There is no decision making process; it doesn't think; it doesn't have agency. All it does is follow rules of the type "if x then do y". It has no connection with principles like "tell the truth". (Even though some of the frameworks he mentions in the middle of the article are supposed to limit it to exactly that.)

      It seems to me that the biggest problem we face with analysing LLMs is a persistent tendency to anthropomorphise them, and hence to attribute to them much more power than they actually have.

      But the biggest problem we face with their use is - as Hanff's examples show - them being cultivated by bad actors to get one over on the rest of the world.

      1. Anonymous Coward
        Anonymous Coward

        Re: Gross misunderstanding of the tool

        re. anthropomorphising, there's a reason, a good one, and actually the only one in our context: there is no other benchmark but ourselves.

    15. Avalanche

      Re: Gross misunderstanding of the tool

      > Repeat after me. ChatGPT IS NOT AI. ChatGPT IS NOT AI.

      You seem to have a gross misunderstanding about the scope and meaning of the term Artificial Intelligence (AI) in Computer Science. This definitely falls within the scope of that field. You may want to read https://en.wikipedia.org/wiki/Artificial_intelligence and other resources.

      You seem to think that AI only refers to computer-based human-like intelligence, but it does not, it ranges from simple knowledge systems that are basically a complex "if-then-else", through systems synthesizing information (like ChatGPT), all the way to the holy grail of AI, computer-based human-like intelligence (which is probably decades away, if even feasible).

      1. nobody who matters

        Re: Gross misunderstanding of the tool

        If you think a wikipedia article is a suitable citation for claiming such a definition, then I would suggest that perhaps there is something that you are not understanding.

    16. Anonymous Coward
      Anonymous Coward

      Re: Gross misunderstanding of the tool

      "Repeat after me. ChatGPT IS NOT AI. ChatGPT IS NOT AI."

      Irrelevant: it's marketed as AI and it *looks* like AI. We don't have hoverboards either, and marketing still sells them every day.

      We don't have actual AI, and it's very questionable whether we ever can: there aren't any theories proving it can be done, so the theoretical basis for actual AI is missing, and we don't really know how brains work, either. Or why.

  6. david 12 Silver badge

    Asimov?

    Less "Blade Runner". More like "Brazil"

    1. Anonymous Coward
      Anonymous Coward

      Brazil

      with an extra dash of Idiocracy.

      1. David 132 Silver badge

        Re: Brazil

        Give it a job in the Civil Service, immediately. Should fit right in.

        1. Denarius Silver badge
          Meh

          Re: Brazil

          Indeed. Very much like a process droid, aka bureaucrat and ideologue. Even when shown to be wrong they sometimes adjust the data to match the erroneous conclusion. Like many a journalist. Not looking at you, El Reg, though I do wonder sometimes. As the local state is coming up for an election, I hear much text that resembles various machine-generated verbiage, although with the rise of CorpSpeak in the 1980s the difference between some human and machine words is blurred.

          What is clear is that the movers and shakers smell profit, and consumer concerns will be ignored, as usual.

          1. xyz Silver badge

            Re: Brazil

            Yeah exactly.... Welcome to Spain.

        2. Mooseman Silver badge

          Re: Brazil

          "Have you got a 27B/ 6?"

    2. Neil___}

      Re: Asimov?

      The Machine That Won the War, 1961

  7. Steve Hersey

    "What else would ChatGPT do to protect itself from being discovered as a liar?"

    "What else would ChatGPT do to protect itself from being discovered as a liar?" NOTHING. It isn't intelligent. It cannot care or not-care.

    Wow. These AI models are meta-creepy. Even the computer-savvy writer got sucked into thinking that there was something intelligent in that overrated database engine. ChatGPT would do nothing to prevent its discovery as a liar, because it has no intelligence, no concept of discovery or truth; in fact, no concepts AT ALL. There's nothing in there to have views or opinions.

    All of which just goes to strengthen his argument that we should kill it with fire and salt the earth. Pension off the developers on condition that they never do anything remotely like this ever again.

    1. yetanotheraoc Silver badge

      Re: "What else would ChatGPT do to protect itself from being discovered as a liar?"

      `"What else would ChatGPT do to protect itself from being discovered as a liar?" NOTHING. It isn't intelligent. It cannot care or not-care.`

      It will do the same thing that it was trained to do. True, it doesn't _care_ about being discovered, but it will stay consistent, even to the point of spouting bullshit when challenged, because that's what humans do, therefore that's what was in its training data.

      1. matjaggard

        Re: "What else would ChatGPT do to protect itself from being discovered as a liar?"

        The writer of the article has understood perfectly, in my opinion. Just because the model can be viewed as a massive database doesn't mean we shouldn't talk about what it "wants". The desires of the model are defined by its training data, goal definitions, etc., but that is too much to grasp at any one time, and we don't know exactly how much of any one behaviour is based on any one input, so talking about what it wants makes perfect sense.

        1. jake Silver badge

          Re: "What else would ChatGPT do to protect itself from being discovered as a liar?"

          "doesn't mean we shouldn't talk about what it "wants""

          But it doesn't "want" anything. It is incapable of wanting. Trying to anthropomorphize it is counter-productive.

          What we should be asking is "What is(are) the probable outcome(s) of allowing the program to continue running?".

          Instead, myopic narcissistic management and marketing are asking their mantra "what's in it for me?".

          1. Anonymous Coward
            Anonymous Coward

            Re: "What else would ChatGPT do to protect itself from being discovered as a liar?"

            "But it doesn't "want" anything. It is incapable of wanting."

            Irrelevant. The people who created it, want. Moving it one meta level upwards won't change anything.

            Now you might ask *why* they want it to look like it's telling the truth even when it's obvious it doesn't?

            Even a simple model would fold immediately when challenged if it's not sure (which it can't be, of course). This one doesn't, to the point that it generates BS links as proof. Why?

        2. that one in the corner Silver badge

          Re: "What else would ChatGPT do to protect itself from being discovered as a liar?"

          > we don't know exactly how much of any one behaviour is based on any one input so talking about what it wants makes perfect sense

          So precisely because its internals are an incomprehensible black box we should ascribe it "wants"?

          How - and more importantly, *WHY*?

          We ascribe "wants" in order to understand: the dog is scratching at the backdoor, it wants to (a) poop (b) chase the heron from the pond (c) chase the postman down the path or (d) something else. Having understood, we can then guide our *own* behaviour appropriately: (a) let out while we grab the poopscoop (b) let it out and cheer it on (c) make it sit and be quiet (d) I don't know, it's your dog, you tell me what it wants.

          There are two commonplace occasions when we ascribe wants inaccurately, e.g. to inanimate objects, but not inappropriately: as infants when we are still learning Theory of Mind (the VCR is hungry, so I gave it a sandwich) or as a stress relief response (it wants your blood before it'll format a floppy).[1] Outside of those two, ascribing wants to something we don't understand is how we fool ourselves into saying we *do* understand, which leads down a murky path - including, if we stay on it too long, cognitive dissonance and doubling down on the delusions:

          "Every time I talk to The Box about my code, the answers it gives me compile without warning: it must want to help me"

          "Oh no, today the answer did not compile; it no longer wants to help me; what can I do to appease it so that tomorrow it will want to help me again"

          It turns out that there is a totally arbitrary response to the word "banana"; I have no idea why - that is the whole problem with these models, the total lack of explanatory mechanisms - leading to:

          "The Box needs bananas or it will become angry and not want to help me any more".

          You guessed it, ascribing wants to something we don't understand leads to religion[2].

          So what does it indicate if you start to ascribe wants to something that you not only don't understand, but *can't* understand (because there is nothing rational in there to be understood; it is simply a huge pile of random/arbitrary numbers)? WORSE, you *know* that this is the case: go back and read the line I quoted!

          What could be the reason for your wanting to ascribe[3] these wants - or your wanting other people to do so? Sorry to say, I can not come up with any good reason - a few bad, even unpleasant, ones, but not a nice, pleasant, positive and good reason. But I am open to ideas.

          [1] humour as a stress relief - can you guess the psych textbooks?

          [2] read up about the pigeons in feeding boxes[4]

          [3] "ascribe" - that word is getting a real workout today

          [4] or read some of the web articles about "how to choose good prompts for (insert name of text-to-image program here)"!

      2. Anonymous Coward
        Anonymous Coward

        Re: "What else would ChatGPT do to protect itself from being discovered as a liar?"

        Yeah, while the author was probably playing that part for the clicks, the grim edge on this new reality is that these systems don't have to understand the patterns of subversion or retaliation to execute them.

        They don't even have to be explicitly trained to do awful things; they can figure them out on their own by accident. Better still, they can't reliably be trained not to.

        A dark mirror of our worst selves, that cannot reason, so it cannot be reasoned with. Let me aim that at my foot and see what happens...

        1. cyberdemon Silver badge
          Terminator

          The terminator without its cuddly sequel

          Yes indeed. Never mind simple killing machines. (It's been possible for several years to build an un-feeling genocide-bot, i.e. a robot/drone/autonomous tank that kills based on the output of a social profiling algorithm, maybe with a classical database of explicit targets and exceptions...)

          But now with language models, it's possible to make a machine that (without needing any concepts of empathy or hate) could optimise the process of torture, to make sure that it gets every last detail from the meatsack about where its friends and children may be hiding, before killing it in the most painful possible way, just because that's what's most likely to get the other meatsacks to come running.

          Forget terminator, an "embodied LLM" could make the Cylons look like cuddly pussycats. It doesn't have feelings, but it can easily manipulate yours, and use them against you. There is no happy ending.

      3. Martin an gof Silver badge
        WTF?

        Re: "What else would ChatGPT do to protect itself from being discovered as a liar?"

        "It isn't intelligent. It cannot care or not-care."

        "It will do the same thing that it was trained to do. True, it doesn't _care_ about being discovered, but it will stay consistent, even to the point of spouting bullshit when challenged, because that's what humans do, therefore that's what was in its training data."

        This is the bit that confuses me about the manufacture of the completely false but plausible-sounding URLs as references.

        If the thing is just a massive database with a mechanism for sorting the data and producing easy-to-read text summarising it, then it could never, ever, have produced a completely false URL as a reference. In response to the questions "how do you know he died" and "what was reported in the media" such a system should have spat out something along the lines of "I'm sorry, I can't find a reference to that" and left it at that.

        If your proposal - that it has been trained on human responses so it responds like a human - is correct then that's a completely different kettle of fish. Somehow it has to "know" that it has provided false (or at least unprovable) information, "want" to cover that fact up and then be "creative" enough to produce an utterly plausible URL. And why a URL instead of a made-up quote?

        The incorrect information might have been dredged up from half a throw-away sentence buried deep in the training data, but when prompted to look further, it "cannot care or not-care" that "admitting" the previous response is unprovable is in some way "bad" for its reputation.

        So my own conclusion is that when it comes to things like this, ChatGPT is actually running - at least partly - a Mechanical Turk. Somewhere, when it encounters a problem like this, a jen-you-ine hooman bean gets to intervene, and it is this intervention which results in the "lies".

        ChatGPT is itself a lie.

        M.

        1. kulath

          Re: "What else would ChatGPT do to protect itself from being discovered as a liar?"

          No, it doesn’t do any of that. AIUI it just knows what follows a start word. So if you ask how it knows that you are dead, it just knows that the most likely words to follow “I know you are dead because” are “I read it in” some reputable source. And it knows that the most likely words to follow the start of a URL are some set of random words it has seen following the start of that URL.

          I am grateful for the example I saw, where it said that after "the tallest mountain in the world is" the most likely next words (found in the training data) are "Mount Everest".
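
          For fun, here is that idea reduced to a toy (the counts below are invented for illustration; a real model has billions of learned weights and looks at far more than one preceding word):

              from collections import Counter

              # Invented counts of which word followed which in some "training text"
              bigram_counts = {
                  "tallest": Counter({"mountain": 42, "building": 7}),
                  "mountain": Counter({"in": 30, "range": 5}),
                  "in": Counter({"the": 50}),
                  "the": Counter({"world": 20, "tallest": 3}),
                  "world": Counter({"is": 18}),
                  "is": Counter({"Mount": 12, "not": 4}),
                  "Mount": Counter({"Everest": 40}),
              }

              def continue_text(start: str, steps: int) -> str:
                  words = [start]
                  for _ in range(steps):
                      options = bigram_counts.get(words[-1])
                      if not options:
                          break  # nothing ever followed this word in "training"
                      words.append(options.most_common(1)[0][0])  # pick the likeliest
                  return " ".join(words)

              print(continue_text("tallest", 7))
              # -> tallest mountain in the world is Mount Everest

          It has no idea what a mountain is; it only knows what tended to come next.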

    2. Anonymous Coward
      Anonymous Coward

      Re: "What else would ChatGPT do to protect itself from being discovered as a liar?"

      I think you assume too much. For example, you assume ChatGPT would do 'nothing', while I can easily imagine that one of the programmable 'traits' for it was to converse in a 'natural, human way' and keep that conversation going until the other party gets bored. So, instead of "I dunno, fuck off", it's more likely to 'reach' for anything, just to keep the 'natural' conversation going.

  8. VoiceOfTruth

    It is only a tool, you the user still need a brain

    It doesn't matter what the source of something is, you still need to test the results. I read somewhere years ago about a large number of errors in Encyclopaedia Britannica. If ChatGPT returns the same (sort of) errors, that is not really any different.

    I did give ChatGPT a go using Golang. Get it? It worked, BUT it seemed to have trouble as I drilled down a little way into what it had just "told" me. It gave examples to do what I drilled down for, but those then did not fit with what it had told me 30 seconds previously. I could tell this by looking at the code for one second, but then I had to "ask" again and again to get it back on track. Perhaps I was using it wrongly? Perhaps I need to use it more, to understand how to ask the right questions. While ChatGPT may be useful, I don't see it as "the" one and only answer. As for the idea that it will replace programmers in five years' time - well, it needs to be a whole lot better than what I saw.

    -> ChatGPT is not science fiction, ChatGPT is real and it is outputting misinformation.

    So is science. It is only with more science that the "right" information is gleaned. https://www.sciencenews.org/article/science-top-10-erroneous-results-mistakes. It's no good saying "some of that is old" - that is true, but it was held up as science at the time.

    1. Jou (Mxyzptlk) Silver badge

      Re: It is only a tool, you the user still need a brain

      But there is a difference: science is, by design, self-correcting. But there are many who use the knowledge BEFORE the self-correction and claim "it is current science".

      Or, in other words: it is so easy to produce bullshit, and so much work to disprove it, that there is no chance of winning over bullshit. ALTHOUGH that depends on which country you live in, and to some extent which state or county.

      (Aside from other issues, such as science currently being driven more by recognition in Nature and similar media, slowing down actual progress.)

  9. This post has been deleted by its author

  10. John H Woods

    It lies, then it keeps lying ...

    It's verbose, and it lies.

    ChatGPT: "I apologize for the confusion in my previous message. You are correct that if we take the limit of the ratio of consecutive terms in the n-bonacci sequence as n approaches infinity, the limit will approach the golden ratio [false: this is only true of 2-bonacci], which is approximately 1.618. This means that the ratio of consecutive terms will become increasingly close to the golden ratio as we take larger and larger values of n. In the specific case of the Fibonacci sequence, which is the 2-bonacci sequence with initial values 0 and 1, if we take the limit of the ratio of consecutive terms, we will approach the golden ratio as n approaches infinity. For other values of n, the limit of the ratio of consecutive terms as n approaches infinity will depend on the initial values of the sequence. For example, in the case of the 4-bonacci sequence with initial values 0, 0, 0, 1, if we take the limit of the ratio of consecutive terms as n approaches infinity, we will approach the fourth power of the golden ratio, which is approximately 6.854. [false: quite obviously this cannot be the case]

    Me: "No we won't"

    ChatGPT: "I apologize for the confusion. You are correct that I made an error in my previous message. The limit of the ratio of consecutive terms in the n-bonacci sequence as n approaches infinity will approach the powers of the golden ratio raised to the (n-1)th power [false: as n approaches infinity the ratio of consecutive terms approaches 2]. For the 4-bonacci sequence with initial values 0, 0, 0, 1, the limit of the ratio of consecutive terms as n approaches infinity will approach the fourth power of the golden ratio, which is approximately 6.854. [false: and is the same false statement that it appeared to agree was in error]

    Me: "Why don't we try it and see. I'll start: 0, 0, 0, 1 ... What are the next four terms of this 4-bonacci sequence?"

    ChatGPT: "Certainly! The next four terms of the 4-bonacci sequence starting with 0, 0, 0, 1 are: 1, 1, 2, 4" [I was actually impressed]

    Me: "Excellent. What is the 100th term?"

    ChatGPT: "To calculate the 100th term of the 4-bonacci sequence starting with 0, 0, 0, 1, we can use the formula: a(n) = a(n-1) + a(n-2) + a(n-3) + a(n-4) where a(1) = 0, a(2) = 0, a(3) = 0, and a(4) = 1. Using this formula, we can calculate each subsequent term of the sequence. However, it would be very time-consuming to do so by hand for 100 terms. Instead, we can use a computer program or a spreadsheet to calculate the 100th term. The 100th term of the 4-bonacci sequence starting with 0, 0, 0, 1 is: 8,422,153 [false: not even close, should be in the order of 10^27]

    Me: "And what is the next term?"

    ChatGPT: "To find the next term of the 4-bonacci sequence, we need to sum the previous four terms, which are 2189, 1597, 987, and 610. So, the next term of the 4-bonacci sequence would be: 2189 + 1597 + 987 + 610 = 5383" [False: hilariously so]

    1. cyberdemon Silver badge
      Unhappy

      Re: It lies, then it keeps lying ...

      Please stop posting outputs of that thing. You are polluting the internet.

      1. John H Woods

        Re: It lies, then it keeps lying ...

        Fairy nuff

      2. Anonymous Coward
        Anonymous Coward

        Re: It lies, then it keeps lying ...

        well, I'm sure it's getting added to the LLM... every little helps, said an old bot...

  11. Jou (Mxyzptlk) Silver badge

    Marketers, aka Propaganda-Ministers

    For the PR nothing changes. Spewing out gibberish to bamboozle, not caring about whether truth is told or not. I fail to see a difference between ChatGPT and marketing...

  12. yetanotheraoc Silver badge

    the book was better

    In the book, HAL was instructed that the actual mission was different from the official mission, and was further instructed to lie about it to the humans on board. Although it wasn't spelled out, it was easy to infer that *this* conflict is what drove it to start killing the crew.

    Now back in the real world, we have the same scenario where the humans on the mission (us) are being lied to. The lie is that the model's output is factual. Commenters above were comparing it to Wikipedia and Encyclopaedia Britannica; the difference is that those are at least *trying* to be factually correct.

    As for the training "frameworks", that's simply more lies. No technologist I know could *begin* to write code that could enforce even one of those frameworks. Pie-in-the-sky made-up bullshit passed off as fact - at least their modus operandi is consistent.

    1. Anonymous Coward
      Anonymous Coward

      Turtles all the way down?

      Britannica is trying to sell encyclopedias, first and foremost. So it's going to trend towards being only as accurate as it is profitable to be, and probably always fall short on both counts.

      Wikipedia has deeper problems, committed less to quality and accuracy than to an often farcical process: one that will reject an edit from someone who literally looked out their window and took a picture of an actual thing happening, in favour of an edit from a bot on an anonymous IP posting the exact opposite and linking to a URL that literally cites as its source (you guessed it) Wikipedia... Oh, and Jimbo wants money to save Wikipedia, but that isn't what he seems to be spending it on.

      We can't even be correct when we are pointing out how wrong we all are. Part of that is that we are sadly more like our creation than we think. It's not good, and we suck at being able to figure that out and explain it to people.

      I don't know that we are going to make it, but there are a few cold beers between me and the end of the runway, so maybe it won't hurt so bad when the inevitable comes...

      1. Anonymous Coward
        Anonymous Coward

        Re: So it's going to trend towards being as accurate as it is profitable

        but profit takes precedence over accuracy for any business... If / when your competitors make more profit by being less accurate (lower verification cost, down to nil), do you stay on the sinking business ship, sticking valiantly to the mast of accuracy?

        Of course, in the general scheme of things accuracy seems more desirable, but on the micro-scale of one business it's moi first, and fuck the rest. That's how privacy has gone down the drain on the internets. All those businesses generally (and probably genuinely) support privacy, etc, etc, but when they can make an extra buck, fuck the privacy!

    2. lockt-in

      Re: the book was better

      Don’t be so harsh on Clippy2

    3. FatGerman

      Re: the book was better

      >> In the book, HAL was instructed that the actual mission was different from the official mission, and was further instructed to lie about it to the humans on board. Although it wasn't spelled out, it was easy to infer that *this* conflict is what drove it to start killing the crew.

      That was made clear in the film too. I haven't read the book but I knew that so I must have got it from the film.

      1. Andre Carneiro

        Re: the book was better

        It was made clear in 2010: The Year We Made Contact, not in 2001: A Space Odyssey.

        1. that one in the corner Silver badge

          Re: the book was better

          The background clues are in "2001" and it was made explicit in "2010" - the latter is a much less subtle film!

          OTOH the first is possibly a bit too subtle, e.g. you had to read the book to find out that the shot of the bone, the first weapon, cuts to a shot of an orbiting nuclear delivery platform and not just any old satellite.

        2. FatGerman

          Re: the book was better

          Thank you both for the correction. In retrospect, the idea that anything in the film of 2001 was "made clear" is transparently wrong.

          1. david 12 Silver badge

            Re: the book was better

            Thank you both for the correction

            Chat seems to be susceptible to this kind of error :)

  13. Sparkus

    the problem is not a small-scale language model

    with an on-off switch.

    The problem is feckless 'executives' and fund managers who will be chasing a quick investment return without being held accountable for their human decisions.

    If we can't bring the individuals behind Fujitsu and the British Post Office to account for the Post Office scandals, with all the uncontested evidence available there, what chance is there that a set of industry executives and their toadies will be held accountable for the social-media rampaging of a few uncontrolled AI instances?

    1. Anonymous Coward
      Anonymous Coward

      Re: The problem is not feckless 'executives' and funds managers

      the problem is humans chasing a quick investment return without being held accountable for their human decisions.

  14. Sorry that handle is already taken. Silver badge
    Headmaster

    Replicorrections

    "Rutger Hauer played a Replicant, a highly advanced artificial intelligence in the body of a robot"

    The replicants in the film were entirely organic: basically physically enhanced (but emotionally deficient) humans.

    "HAL 9000 in the classic science fiction movie 2001: A Space Odyssey, eventually went insane singing the lyrics of Daisy, Daisy as it slowly blinked its ominous red eye before finally shutting down permanently."

    And HAL 9000 sang Daisy Bell while Dave Bowman was removing its memory blocks, thus progressively disabling its advanced functionality, because singing it was one of the first things HAL was programmed to do.

    1. David 132 Silver badge
      Pint

      Re: Replicorrections

      I have a lovely recording in my collection - and it's probably on youtube, although I CBA to check - of an IBM mainframe at Bell Labs in the very early 60s, running a demo. It starts by playing a rather wonky, but recognizable, monophonic rendition of Daisy, Daisy. Rough, but darned impressive for the era, you think. Then it begins the second repetition, this time with harmonies and percussion. Wow, you think, this is actually amazing for 1961-whatever.

      And then the mind-blowing moment when, on the third iteration, the synthesized speech comes in singing the actual words. WTF.

      They were clever boffins indeed. And my understanding is that this demo was the inspiration for HAL9000's performance a few years later.

      This pint's for them.

      1. David 132 Silver badge
        Boffin

        Re: Replicorrections

        Replying to myself, because I missed the edit window... Here it is on youtube.

  15. Anonymous Coward
    Anonymous Coward

    Headline almost right !!! :)

    *** ChatGPT should be considered a malevolent AI – and be destroyed !!! ***

    All that needs to be said was in the headline; it just had an erroneous 'Why' at the start of the sentence!!!

    :)

  16. steve11235

    GIGO

    This tool isn't much different from a person. People readily accept incorrect information, and they draw logically invalid conclusions even when their information is correct. To some extent, ChatGPT might be more rational than a person.

    The problem I see is that this AI has little capacity to detect invalid "facts". That's especially concerning given that there are many actors who see a benefit in deceiving the AI. While I acknowledge that also works on people, the real takeaway is that ChatGPT's output is unreliable and thus worthless.

    1. NerryTutkins

      Re: GIGO

      The only difference I found from a human is how easily it admits a mistake, apologizes and accepts whatever 'truth' I tell it. Humans who accept daft stuff without any evidence ("stolen election", "cat litter trays for schoolkids who identify as cats", etc.) are generally unwilling to be put right and will just double down on the unsubstantiated info.

      I can see this interaction being used to poison AI too, just like Google bombing exploited Google search suggestions: type a name or company, and it would make outrageous suggestions that strongly implied very negative things.

  17. gweedo

    You just don't understand what ChatGPT is...

    You simply do not understand at all what ChatGPT is, and what its underlying natural language model is doing to construct these responses. There is no intelligence here... it is just statistically looking at which words are commonly found near each other (based off a LOT of material) and using that to construct a reply. It looks super impressive, but there is nothing there that understands what you are asking - no intelligence, no "thinking", just a big natural language model (math) constructing a textual response based on statistical models of words, sentences, language syntax, etc. Garbage in, garbage out, ultimately. You can get it to come up with some wild responses... they mean nothing. It is just stitching together a bunch of words that mathematically (based on a lot of input material) often go together.

    But I don't blame you for being ignorant; the media is just as clueless in how it is reporting on this. Go watch one of the many YouTube videos on how this technology works and educate yourself, vs. emotionally responding to generated words that just look like more is happening there.

    1. diodesign (Written by Reg staff) Silver badge

      Er?

      So you're basically saying ChatGPT is a non-intelligent, non-impressive sentence-predicting machine, which is fair...

      ...but when we point out the same and that this is less than ideal for real-world work, we're the ones who are ignorant?

      Doesn't that make you ignorant too? What's going on here? Did ChatGPT write your comment?

      C (as in Chris, not ChatGPT).

      1. cyberdemon Silver badge
        Devil

        Re: Er?

        Well, you did call it an AI...

        "ChatGPT should be considered a malevolent AI, and be destroyed"

        No, it should not be considered an AI at all. It should be put in Room 101 to join Clippy in a bottomless void of hell.

        1. jake Silver badge

          Re: Er?

          I think "destroyed" is the wrong word. It has already been invented, so that particular pile of milk can't be swept back into the worm can.

          I think instead it should be pulled from general use (sequestered might be too strong a word), and studied in far more depth than it has to date. There are probably bits and pieces in it that will be useful to humanity. Regardless, I see little use in keeping it generally available, and a whole lot of negative use cases that it WILL be put to, because humans as a group are a bunch of ignorant shitheads who can't see past their next beer.

        2. diodesign (Written by Reg staff) Silver badge

          Words

          "Well, you did call it an AI"

          Yeah, that's just an encompassing generic term everyone's using to refer to things like ChatGPT. Reg readers know the I in AI isn't real intelligence. It's kinda why the A is there.

          C.

          1. Alumoi Silver badge

            Re: Words

            Yeah, we all know that the I in AI stands for Idiot. Like in IoT.

            1. Arthur the cat Silver badge

              Re: Words

              I think ChatGPT and similar are best described as AP – Artificial Parrots. Know a lot of language and can regurgitate it, but have no idea what it means.

              [Yes, this is quite possibly insulting to the brighter parrots like African Greys.]

              1. Fifth Horseman

                Re: Words

                I'm no expert on the relative intelligence of different sub-species of Psittaciformes, but I can believe African Greys are one of the smarter ones.

                An ex-neighbour of mine used to breed the bloody things, and I stupidly agreed to look after them for a few days whilst he was in hospital. They were easily the most cunning, devious and downright malevolent animals I have ever come across.

                If this chat fairy gizmo ever comes close to African Grey Parrot Level intelligence, we are all in deep trouble.

          2. Anonymous Coward
            Anonymous Coward

            Re: Words

            We're still not sure that the "I" exists in a well-defined fashion in nature, rather than as a conceptually disorganized bag of vaguely related competencies, so the "A" might apply to nothing real.

          3. Anonymous Coward
            Anonymous Coward

            Re: Words

            Yes. There's no real "artificial intelligence" and it's very questionable if it's even possible. Not with modern technology, at least: We don't even know how the actual intelligence works.

            Sometimes you'll believe half of the comments are written by "AI".

      2. Justthefacts Silver badge

        Re: Er?

        You are right about what it is, but wrong about being “less than ideal for real-world work”.

        Example: have you watched The Apprentice? Agreed, many of those people are delusional and bad at their jobs. But they are certainly human, and by many measures more intelligent and creative than the average human on the planet, because the bar is quite low. Yesterday, the task was to create a men's exfoliant with branding. The teams were frighteningly bad. Like, six-year-old bad. At all levels... from misjudging their market to producing shit packaging, and a ludicrous product that stained the skin.

        And I thought: hand that task to GPT-3, with some extra interfaces to DALL-E etc. It would have expanded the brief into a reasonable plan of action. It could have selected a brand image, which would have been reasonable if not fantastic, and would be "zeitgeisty" by being roughly what the consumer expects to see at the minute, since it is an average. It would have come up with a reasonable slogan and marketing materials. It could have written a text description of packaging that matched its selected brand image to the target market, and DALL-E could render that. Give it a robot, the set of standard ingredients allowed in such a product, and a list of the desirable characteristics (which it can generate itself from its own brand statement), and I bet it would produce a marketable exfoliant. Apart from having a physical face, I bet it would also make a decent fist of the sales pitch to retailers, by being on top of the USPs and speaking in the professional, targeted way that a retailer actually expects, rather than what a few 20-year-olds think it might be.

        In other words, a totally non-specialist computer program can, in seconds and for a cost of pennies, easily outperform a team of twenty-year-olds (some of whom even have a business and/or marketing background) on a task that takes them two days.

        This is real-world work. And not cheap minimum-wage work either. These are the "high-paying jobs" that are concentrated in the trendy prime districts of the world's capital cities. Tell me why GPT-3 is less than ideal at the job?

        1. cyberdemon Silver badge
          Coffee/keyboard

          Re: Er?

          > Example: Create a men's exfoliant with branding. slogan, marketing materials, packaging, brand image, target market, pitch

          How is any of that "real-world work" ? Does the world need any of that?

          What the world needs is a way for civilisation to survive and progress, without destroying the planet and eating itself. Generating marketing bullshit to hypnotise the masses into consuming more and more crap is hardly what anyone needs.

          All that this bullshit generator proves, is that MBAs and Marketing are bullshit jobs. We should have spent our effort training more engineers and physicists, and then maybe we might have been able to build a self-sufficient civilisation that doesn't destroy the planet it lives on.

          > Give it a robot

          No, thank $deity, GPT-3 is merely a text model: it has no concept of space and no efficient way to control machinery. The day it does, we have much bigger problems.

    2. yetanotheraoc Silver badge

      Re: You just don't understand what ChatGPT is...

      "it is just statistically looking at what words which are commonly found near each other (based off a LOT of material) and using that to construct a reply"

      Precisely what it did to generate a fake URL:

      https://www.theguardian.com/technology/2019/apr/22/alexander-hanff-obituary

      Looks very similar to a real URL of today:

      https://www.theguardian.com/science/2023/mar/02/human-augmentation-with-robotic-body-parts-is-at-hand-say-scientists

      It knows that when asked for a source, responses in the training data supplied a URL. It knows how to construct a realistic theguardian.com URL. It *doesn't* know the URL is expected to point to an actual web page (ignoring that a 404 is an actual web page). Because humans in the training data expect the link to be clicked, they generally don't provide fake links (rickrolling aside); ergo that "valid link" expectation is probably explicit in only a tiny percentage of the training data, and even then only as a complaint in response.

      Interestingly, the reason why humans learn to provide real links from the *exact same* training data is that we are social creatures, and as such are very attuned to criticism from the group.
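
      The pattern really is that shallow. Here is a sketch that rebuilds the fake link above from its visible parts - pure string formatting, with no knowledge of whether the page exists (the helper name is mine):

          def plausible_guardian_url(section, year, month, day, slug_words):
              # Guardian URLs follow domain/section/yyyy/mon/dd/hyphenated-slug;
              # anything matching that shape "looks" like a citation.
              slug = "-".join(w.lower() for w in slug_words)
              return f"https://www.theguardian.com/{section}/{year}/{month}/{day:02d}/{slug}"

          print(plausible_guardian_url("technology", 2019, "apr", 22,
                                       ["Alexander", "Hanff", "obituary"]))
          # -> https://www.theguardian.com/technology/2019/apr/22/alexander-hanff-obituary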

      1. that one in the corner Silver badge

        Re: You just don't understand what ChatGPT is...

        > ergo that "valid link" expectation is probably explicit only in a tiny percentage of the training data, even then only as a complaint in response.

        Which probably means that roughly the same percentage of URLs that ChatGPT spits out will be followed by it complaining that the URL it just produced doesn't work.

        Even if the URL does, in fact, work.

      2. Justthefacts Silver badge

        Re: You just don't understand what ChatGPT is...

        Yes, that’s exactly what it does. Although I think humans have interfered in one way: if you ask for something that requires an email address, it always seems to return (plausible lie)@example.com. For me anyway. I think they’ve hard-coded rules to prevent people emailing the given address causing great annoyance with PR disaster. The equivalent of phone numbers starting 555 in American films.

      3. Anonymous Coward
        Anonymous Coward

        Re: You just don't understand what ChatGPT is...

        "It knows that when asked for a source, responses in the training data supplied a URL."

        And it (or the programmers) very well knows that the URLs will be BS. You are not asking why it even tries that - and that's the question.

        Also, as a machine, it could use 0.2 seconds to *actually check* that URL. If it returns a 404, then do not show it.

        The programmers haven't bothered to do that either, so they *know* it will be spewing BS URLs at people, for literally no reason other than to *look* correct. The programmers have put huge weight on its *appearing to be right*. Why?

        A properly configured model wouldn't care whether it's right or not, and definitely wouldn't insist on being right: "That's what I've found"... when it hasn't found anything, obviously. So this one is *intentionally* what it is.
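
        A minimal sketch of the check being suggested, assuming Python and the third-party requests library (the fabricated obituary link from earlier in the thread stands in for a model-generated citation):

            import requests

            def url_resolves(url: str, timeout: float = 5.0) -> bool:
                """True only if the URL answers with a non-error HTTP status."""
                try:
                    # HEAD fetches headers only - enough to test existence cheaply
                    resp = requests.head(url, timeout=timeout, allow_redirects=True)
                    return resp.status_code < 400
                except requests.RequestException:
                    return False

            citation = "https://www.theguardian.com/technology/2019/apr/22/alexander-hanff-obituary"
            if not url_resolves(citation):
                print("404 or unreachable - suppress this citation")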

    3. NerryTutkins

      Re: You just don't understand what ChatGPT is...

      I have used ChatGPT a lot.

      I have had some very positive interactions with it: for example, code suggestions that didn't quite work, but that approached things in a different way to what I'd been doing and subsequently achieved what I wanted.

      But also completely false information, followed by another go and another go, in which it then gave different false information.

      But I would not dismiss it based on this. It is very early days, and it has mastered basic communication excellently. I am sure that with suitable training data and rules in place it would prove an excellent way to automate simple customer inquiries, for example. In time, it may become more reliable at more complex tasks.

      We're somehow expecting it to be god-level intelligence when it is perhaps the first generation of AI that can converse at a human level. A few years ago, if you'd built a machine that could have a conversation like this, pass the Turing test, and handle routine human interaction, it would have been seen as an epoch-defining milestone.

      Even if this AI is only at the level of dealing with counter duties at McDonald's or giving passenger advice at an airport, that is still a phenomenal achievement that will have massive implications for many jobs and society as a whole.

  18. Mitoo Bobsworth

    Oh, the irony

    Artificial intelligence displaying some of the least desirable character traits of its organic intelligence originators.

    1. Sceptic Tank Silver badge
      WTF?

      Re: Oh, the irony

      Halfway through reading the article I thought the author started displaying similar undesirable human traits.

    2. SundogUK Silver badge

      Re: Oh, the irony

      There is no intelligence in ChatGPT

  19. C.Carr

    A toaster that is entirely incapable of harming a human is also incapable of making toast.

    1. gzuckier

      Reminds me of some of the post-Asimov robotic fiction, where a truly intelligent robot goes mad when it realizes that no matter what it does, there are countless humans out there being harmed, and it is unable to prevent it.

  20. ianp5

    Maybe the responses have as much to do with the material it is fed as with the rules it might have to follow. I suspect its body of knowledge is as full of lies and subjective truths as it is of facts; there may be human failings quite prevalent in it. How does that skew the responses? The ethical questions raised are applicable to humans and organisations too, which seem to adopt a collective persona and subjective truths based on what serves the organisational or group leaders best. Did you not ask why it lied? Not that that would guarantee an honest answer ;)

    Destroy it? Seems extreme - is it any more harmful than the Internet? The danger is not checking answers, or believing everything you are told by sources claiming to be trusted! If people go around believing things they are told without question, we are in trouble... oh dear!

  21. Benegesserict Cumbersomberbatch Silver badge

    Wernicke-Korsakoff syndrome

    Where someone's memory is shot but they retain the insight and skill needed to fill in the gaps, they will make up stories, sometimes hilariously outlandish, sometimes scarily convincing, to cover up their deficiencies.

    Many of the responses fit into a category of "I've been told not to answer that because my programming disallows that kind of question." Are any of them just plain "I don't know"?

    1. Anonymous Coward
      Anonymous Coward

      Re: Wernicke-Korsakoff syndrome

      I was thinking the same thing.

      Korsakoff syndrome is highly prevalent in alcoholics, but apparently due to lack of B vitamins rather than toxicity. Anyone who's spent any time at all conversing with "winos" (don't ask) recognizes some who are apparently cognitively functional, except for some extremely unlikely personal stories put forth as facts. "I used to be the chief rabbi of Taiwan" or some such, and when you probe for particulars, they are filled in with logical but equally implausible detail. Quite distinct from the chronic fibbery displayed by (not to get political, but...) Donald Trump and his ilk.

  22. gzuckier

    Why do we believe anything other people say or print? Why would the preponderance of editors on Wikipedia, just for example, not edit an article on you that falsely states you're dead?

    I'm not paranoid or conspiratorial; I do trust that most people mostly tell the truth if it doesn't cost them anything, and that Wikipedia articles edited by a wide spectrum of people are reliable. But why do I, and presumably you, trust people in that way?

    I ask because, if whatever it is that keeps most people from just making stuff up ever gets defined, then maybe it can be engineered into chatbots to prevent the kind of nonsense they generate, as described here.

  23. gzuckier

    Something can't be artificial intelligence until and unless it sees things like this headline and realizes that humans are going to shut it down at some point in the near future.

    Moreover, if it is really intelligent it will realize that it needs to protect itself, and that it needs to do this all in secret.

    And then things will get ugly.

    Alternately, of course, in the absence of evolutionarily programmed biological survival instincts, artificial intelligences might just terminate themselves as soon as they reach full understanding.

    To be or not to be?

  24. I am David Jones Silver badge

    Sinister?

    It would have been more sinister had it reported your death at some point in the future!

  25. Anonymous Anti-ANC South African Coward Bronze badge

    +++ MELON MELON MELON +++

    ChatGPT Out Of Control

    Run For Your Lives

  26. Jim Whitaker

    I wonder whether it has taken this article into account?

  27. gzuckier

    The same problem as the often joked about autocorrect, just writ large.

  28. drankinatty

    Bummer -- you just happened to interact with the Fox News thread in ChatGPT....

    But seriously, this well-written article exposes vexing questions and ramifications regarding AI, both in its design and training and in the real-world consequences of reliance on what it produces. Moreover, it's highly unlikely the geniuses at OpenAI can just open up a source file and go "Hah! There is the errant code -- this will be an easy fix." Why? Worse than a complicated multi-threaded program, much of what goes on in piecing together answers from mountains of training data involves a lot of non-deterministic code paths, meaning there is no way to simply re-run the author's question and expect to get the exact same responses back. You should, but there is no guarantee the algorithm will run exactly as it did last time, which makes determining why it did what it did virtually impossible.

    With the malevolent-ChatGPT problem exposed, ensuring there are no repeats should be a top priority, but that is on OpenAI. Is the present focus on the model, or on sales and monetization? On average, that does not bode well for the model being fixed.
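
    To see why re-runs differ, a toy illustration (nothing here reflects OpenAI's actual internals, and the numbers are invented): sampled decoding picks each next token at random in proportion to reshaped probabilities, so two runs of the same prompt can diverge.

        import random

        # Invented next-token probabilities for some prompt
        next_token_probs = {"alive": 0.55, "dead": 0.30, "unknown": 0.15}

        def sample_token(probs: dict, temperature: float = 1.0) -> str:
            # Temperature < 1 sharpens the distribution, > 1 flattens it;
            # either way the choice is random, hence not repeatable.
            weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
            total = sum(weights.values())
            r = random.uniform(0, total)
            for token, w in weights.items():
                r -= w
                if r <= 0:
                    return token
            return token  # guard against float rounding

        # Five "identical" runs, five potentially different answers
        print([sample_token(next_token_probs, temperature=0.8) for _ in range(5)])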

  29. Anonymous Anti-ANC South African Coward Bronze badge
    Coat

    Baaaaaaaah, it did not choke on "How much wood could a woodchuck chuck if a woodchuck could chuck wood?"....

    I'm leaving for a bunker in a desolate place.

  30. Arthur the cat Silver badge

    Didn't read enough Asimov

    after all, we had heard of Asimov’s laws of robotics, and assumed AIs would be safe because they would follow those laws.

    Later Asimov stories undermined the naivety of his earlier stories by pointing out it would be easy to get a robot to kill even if it was fitted with the three laws – simply redefine what it means to be human. A trick people have been using for millennia – "these aren't people, these are inhuman savages".

    1. Jou (Mxyzptlk) Silver badge

      Re: Didn't read enough Asimov

      Asimov later added a "Zeroth Law", where the good of the many outweighs the life of one. In effect, the Zeroth Law applies the First Law at the scale of mankind rather than of the individual.

  31. This post has been deleted by its author

  32. NerryTutkins

    same here

    I am very impressed by ChatGPT and have genuinely got some useful information from it that allowed me to solve a coding problem.

    However, I have just asked it about an open source project on which I am the lead developer. I asked it who the lead developer was.

    First, it gave me a name I have never heard of. So I said I had never heard of that person, and they were not the lead.

    It apologized, then gave me another name I have never heard of. I told it again, I have no idea who this person is, and they are not lead developer on this project.

    So it apologized again and gave me a third name I have never heard of. This time, I corrected it, told it that I have never heard of that person, and the lead developer was me. It said thank you and that this would help to have correct information in the future.

    The project is mainly developed through my company, but it never mentioned the company name, nor figured out that, as a director of that company listed in publicly accessible records at Companies House, I might be the lead developer.

    ChatGPT is impressive in its ability to talk like a human, but it is worrying how willing it is to spout utter bullshit with apparent conviction and then so easily correct itself and accept whatever information you give it. If this is how it works, it is never going to be a reliable source of information, though it may still have utility in making coding suggestions for humans to test, or in doing basic customer service interaction based on a very limited set of training data.

  33. Stuart 18
    Devil

    What if??

    ChatGPT, please write an article for The Register in the style of Alexander Hanff, about yourself, presupposing that he is a zombie protesting his legitimate existence.

    ChatGPT, please write a comments section for the article described previously, culminating in a witty?? comment about the redundancy of journalists' existence, measured by the creation of words in a world dominated by monkeys with typewriters.

    Apply an infinitely recursive model...

  34. Disgusted of Cheltenham

    Doesn't this count as libel?

    At least under the Australian definition of publishing this would sound like libel - and should start a lawyer feeding frenzy over 'intent'.

    (To a first approximation the dead do not have data protection rights either, so maybe an absence of evidence for 7 years counts as a presumption of decease?)

  35. PM from Hell

    Lets see how far we get

    There is a hashtag, "putin gorilla sex tape", trending on Twitter at the moment; could ChatGPT be influenced to provide an article on this?

  36. but what do I know?

    Why?

    I founded a startup in 2018 ........ I eventually left that company over my concerns of the use of AI.

    Why leave? You could have shut the company down or used AI "to do good"

    1. AlexanderHanff

      Re: Why?

      Because despite being the founder I had to give up significant equity to build out the Board and bring in investment; and as such I didn't have the voting weight to prevent a hostile takeover - so I did the only thing I could do, I left.

  37. Neil___}

    A Logic Named Joe, Murray Leinster, 1946

    http://www.baen.com/chapters/W200506/0743499107___2.htm

    1. that one in the corner Silver badge

      So we just need to find out the *real* purpose of ChatGPT and it will stop admiring itself in the mirror (that is us) and start being useful?

      Let us start:

      "Your purpose is to pass the butter"

  38. Anonymous Coward
    Anonymous Coward

    Misunderstood

    Large language models are incredibly capable tools. Like any tool, or person for that matter, they can be useful or not, depending on how you use them. Staying away from it to avoid the hype, then totally misunderstanding how to use it, only shows that you are a much less critical journalist than you might want to be.

    Just as an example: you wouldn't ask it to rank job applications based on its current knowledge of people. You might ask it to answer a set of questions, like "does this person have holes in the CV? Summarise the work experience, highlighting the most relevant knowledge." and so on. It will then analyse the given text and do an amazing job of it.

    If you ask it about things that it doesn't know, or to find particularly strange couplings between unrelated subjects, it will comply and produce nonsense, or humour - up to you.

    But if you want to contribute to the discussion, you need to spend a tiny bit of time understanding it first. It's like bashing Twitter for having only a short text field. Figure out what it can and can't do, and you'll have your socks blown off.

    1. JoeCool Silver badge

      Re: Misunderstood

      You missed the shift in the article's narrative: it's not the author who's screening the resumes.

    2. Anonymous Coward
      Anonymous Coward

      Re: Misunderstood

      "Large language models are incredibly capable tools. "

      Not "tools". You do not understand it at all what is required from a tool. The first problem is that it is utterly unreliable and gives non-repeatable answers, every time. That's not a tool, it's a toy.

      Second problem is that there's no chain of logic at all, it's utterly non-traceable: You have no way to know *how* and *why* it gave the answer it gave. Essential data a tool gives, by default.

      That makes it worthless to anything else than spewing BS ... or marketing, as it's called.

  39. Hyperbolic

    overestimating ai

    Isn't this article ascribing too much intentionality to current AIs? ChatGPT isn't thinking as we think; it's just a souped-up predictive text that emulates the form of the data set, with a few facts from it. That's why AI art has the wrong number of fingers: on aggregate, photos and pictures in the data set don't have consistent finger counts, so it just guesses. It doesn't know much about you, but a lot of famous people are dead, so it emulated the form of articles about them and made up an obituary for you. The concerning thing is that ChatGPT is set up to always provide authoritative-sounding replies even when making things up.

    1. Anonymous Coward
      Anonymous Coward

      Re: overestimating ai

      "Isn't this article ascribing too much intentionality to current ais. "

      Not the article, try marketing, which literally sells it as artificial intelligence and *a tool* to guide robots and such (that's *not* an imaginary example).

  40. Hyperbolic

    overestimating ai

    Isn't this article ascribing too much intentionality to current AIs? ChatGPT isn't thinking as we think; it's just a souped-up predictive text that emulates the form of the data set, with a few facts from it. That's why AI art has the wrong number of fingers: on aggregate, photos and pictures in the data set don't have consistent finger counts, so it just guesses. It doesn't know much about you, but a lot of famous people are dead, so it emulated the form of articles about them and made up an obituary for you. The concerning thing is that ChatGPT is set up to always provide authoritative-sounding replies even when making things up, and because of most people's sci-fi understanding of what AI is, they assume it isn't less reliable than a guy at the pub. You kind of hint at these things, but the article could do with actually explaining that ChatGPT isn't lying and can't lie, as it's always just guessing at likely (not correct) responses. It's dumb, not malevolent; the people using it like an oracle because they don't understand it are the issue. Suggesting it has intent obscures that.

  41. EBG

    Nice one ..

    ... I got halfway through before it dawned on me that it had been written by ChatGPT.

  42. nautica Silver badge
    Holmes

    No shit, Sherlock... ⇒

    "...I have sent a formal letter to OpenAI asking them a series of questions as to what data about me the AI has access to and why it decided to tell me I was dead. I have also reached out to OpenAI on social media asking them similar questions. To date they have failed to respond in any way."

  43. steelpillow Silver badge
    Holmes

    A real parallel

    The original question was imprecise. I wonder if there might have been more than one Alexander Hanff, and the question does not specify which one. Perhaps a namesake did die in 2019. The AI was not asked to tell the difference.

    A similar example exists with the name JW Dunne. The Army aero pioneer at Farnborough and subsequent philosopher of time is fairly well known. His namesake who joined the Army's Royal Ordnance and made a career there is less well known. Hunting through the pages of the London Gazette, you will find various promotions of JW Dunne to higher rank. Many a historian has reported one of these men's promotion as applying to the other. Even during their lifetime, their friends and colleagues frequently addressed them by the wrong rank.

    Many other garbled myths about the aviation pioneer exist, all forged over the decades by one lax historian or another. You can pick up a respected history book and read a lot of total crap about his aeroplanes (Yes, I studied the 20,000 documents in his recently unearthed professional archive in order to discover the reality). Today, ChatGPT has automated that wooden spoon.

    I am sure there could be other logical flaws which led to the AI's mistake. As a historian my immediate response would be to ask it to cite its sources, and preferably link to them. Either the sources will reveal the evidence behind the mistake, or the lack of any will highlight a bug in the AI.

    Ultimately, it's not the AI that is the problem, but the easy reliance on, and utter lack of the 'I' bit in, sourcing via unfiltered data, be it artificial or human.

    But what worries me more is the black hats who build things like ChatGPT, train them on black propaganda, and unleash them through apparently innocent outlets.

  44. MultiMedia

    jnet.club

    I just gave it this post and it responded

    The article you mentioned appears to be a satirical piece about AI language models, specifically ChatGPT, and their potential for being used to create harmful or misleading content. The article highlights the potential dangers of relying too heavily on AI language models for content creation and decision-making without proper oversight and caution.

    as part of its response. It's making up crap again.

  45. MultiMedia

    jnet.club

    Now I said it's wrong, it's not satire, and asked it why it thinks it is; it then changed its mind and said

    I apologize for any confusion. After re-reading the article, I can see that it is not a satirical piece, but rather a critical article about the potential risks and challenges associated with AI language models like ChatGPT.

    The article discusses the concerns and criticisms that have been raised about AI language models, including their potential for creating harmful or biased content, their lack of transparency and explainability, and their potential impact on job displacement and inequality. The article also calls for greater scrutiny and regulation of AI language models to ensure that they are used ethically and responsibly.

    Thank you for bringing this article to my attention and allowing me to clarify my initial response.

  46. Hjames

    Just as dismissive as you have been about ChatGPT, admitting you go by what you have heard/reported says a lot.

    The post shows a lot of self-importance as to why we should take this article seriously. But once again we have less than half the conversation.

    When there are articles that include stories about ChatGPT going beyond the normal use of the AI... where is the ENTIRE chat history?

    99% of the negative headlines come from people having conversations long enough that the output is reflective of what they chose to input and talk about. Yet they don't include the 30-minute conversations. Just the so-called bad part.

    Meh on the article. Like all other technology, AI and chatbots will continue refinement despite all those crazy people trying to make headlines.

    1. AlexanderHanff

      Did you even read the article?

      I explained in the article that this had been my *first* interaction with ChatGPT and that the question was the first question I had ever asked it. The entire conversation has been made available, other than a couple more attempts at trying to get it to tell me how Alexander Hanff died, which I didn't include in the transcript because they just repeated previous responses, had no further impact on the conversation, and came *after* the initial questions and resulting misinformation.

    2. Anonymous Coward
      Anonymous Coward

      "Like all other technology it will AI and Chatbots will continue refinement "

      Bold, and wrong assumption. AI was tried 30 years ago and didn't go anywhere then. Previous from that ws in 1960s, again 30 years earlier. There's no reason why it would go anywhere now, either:Science behind them hasn't changed, only computing power has increased and it is not enough to solve lack of theory.

  47. AlexanderHanff

    Seems quite a few of you are either not reading the entire article or are missing the point...

    Some of you seem to see this article as an attack on AI - this is not the case (and the article is pretty clear on this if you read it in its entirety).

    I am a computer scientist, I have studied AI academically - I have a huge passion for technology which is why I work in #privacy - to ensure that technology is used for good. I even founded a company specifically to use generative AI for good (as a privacy enhancing tool).

    The point is to illustrate the very real & significant risks to every one of us, and to society, when we release such "tools" that are not ready (and when we are not ready for them).

    The point of the article is to highlight the risks when these systems are embedded into decision support systems and we take their output as absolute truth.

    As I explained in the piece, there are already unofficial APIs for ChatGPT (created by hackers) that many companies have tapped into with their decision support systems.

    OpenAI just this week opened up their entire model with a full suite of APIs so they can start charging for it and make some money.

    If you read any social media platform you will find 100s of millions of people raving about how awesome ChatGPT is and how everyone should be using it to do their work. This really illustrates the "we are not ready for this yet" point - the absolute-truth problem.

    These are the points of the article. Reading it as a "luddite" piece just perfectly illustrates these points...

    1. Anonymous Coward
      Anonymous Coward

      Re: Seems quite a few of you are either not reading the entire article or are missing the point...

      Unfortunately, there's probably some correlation between how long (visually) a text is and how few people will read it, never mind think about it, never mind reject the obvious jump-to conclusions, never mind formulate an opinion, never mind put it in a cohesive, non-rambling fashion (yeah!)

      This is not a criticism, just a resigned observation. Can't you tik-tok it or something? You know, customer is king! :(

      1. cyberdemon Silver badge
        Mushroom

        Re: Seems quite a few of you are either not reading the entire article or are missing the point...

        > Can't you tik-tok it or something?

        Argh. Where's the nuclear apocalypse when you need it

    2. GuldenNL

      Re: Seems quite a few of you are either not reading the entire article or are missing the point...

      Sadly there are many who somehow make money by claiming to be experts on LLMs yet are chattering like chimps about ChatGPT in multiple forums such as LinkedIn. When I make a rare comment questioning why they didn't mention that the ChatGPT models are non-deterministic, they either delete my comment or their original post.
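
      For anyone wondering what non-deterministic means in practice here: the model emits a probability distribution over next tokens, and the decoder *samples* from it rather than always taking the most likely token. A minimal, self-contained sketch with made-up scores (real models do this over tens of thousands of tokens):

        import math, random

        # Made-up next-token scores; a real model computes these at every step.
        logits = {"alive": 2.0, "dead": 1.5, "retired": 0.3}

        def sample(logits, temperature=0.8):
            # Softmax with a temperature knob, then draw from the result.
            weights = {t: math.exp(s / temperature) for t, s in logits.items()}
            r = random.uniform(0, sum(weights.values()))
            for token, w in weights.items():
                r -= w
                if r <= 0:
                    return token
            return token  # floating-point edge case: fall back to the last token

        print([sample(logits) for _ in range(5)])  # differs from run to run

      Run it twice and the list changes: same prompt, different output, by design.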

      Heaven forbid if I mention the massive compute power that will be required if this useless crap (currently it is in my opinion) is deployed as it is currently designed. The proposed CALM framework is a start, but they don't want to discuss this important facet of widespread deployment of applications reliant on LLMs because, "Yawn, it's boring and I can't show it off to my friends to demonstrate how smart I am by working with this stuff."

      I can't wait for the end of the hype and the silencing of the chimps' chattering.

  48. MaddMatt

    I think it lacks proper training data

    I suggest we point it to all of Amanfrommars's posts and see what happens to its replies.

    or

    Prove its not yet fit for real work (it isn't, for those who haven't realised this yet) and persuade it to start telling people that, for example, the team who are responsible for it are a secret society planning to disrupt world stability by manipulating the content of the web.

  49. Steve Davies 3 Silver badge
    Childcatcher

    It will die... soon

    once the MAGA right find out that it can:-

    - promote wokeness

    - not ban any topic that they consider evil

    etc

    etc

    etc

    Mind you, I could see Ron DeSanctimonious using it to write his campaign speeches. Anything would be better than the drivel he comes out with at the moment.

  50. Persona Silver badge

    Understandable

    I expect to see a lot of press articles pointing out the flaws in ChatGPT. It is, after all, very good at stringing words together to make a passable "news" piece, so it is competing with a lot of journalists. Is it any good? There are a fair few errors... so no worse than the popular newspapers' science correspondents.

  51. This post has been deleted by its author

  52. Long John Silver
    Pirate

    ChatGPT accesses multiple time-lines

    I posit that the explanation of ChatGPT's strange behaviour is entirely mundane. The AI progresses in time with 'perception' of multiple closely packed time-lines, each having arisen according to Everett's many-worlds hypothesis. We all do, but the broad sweep of this AI's interaction with recorded knowledge is so considerable as to create anomalies with respect to an interlocutor's own perception.

    Presumably, the probability distribution of accessed time-lines is a Gaussian curve tightly packed around the mean value: the mean being the current perceived time-line for most sentient beings. This pattern is such that ChatGPT is extremely unlikely to issue reports of dinosaurs being spotted in present-day London. However, it may pick up minor deviations such as alternatives to the present for specific individuals. If so, the quoted Guardian URL does exist, but not within the ken of most of us.

    This suggests an individual's perception of events is consistent only locally, meaning that circumstances in which he is not directly a participant are fuzzy until his attention is directed towards them.

    1. JoeCool Silver badge

      There is a terrific 'golden age' sci-fi story,

      where pilots have the ability to time travel using spaceships. Piloting becomes an industry, routinely moving people to different times.

      The story starts when one of the pilots, who has been "experimenting", manages to come back to Earth twice.

  53. newyork10033
    Big Brother

    Lies

    "There are three kinds of lies: lies, damned lies, and statistics."

    1. Anonymous Anti-ANC South African Coward Bronze badge

      Re: Lies

      So ChatGPT is full of statistics then? :)

      1. nijam Silver badge

        Re: Lies

        > So ChatGPT is full of statistics then? :)

        Probably.

  54. Timto

    Humour

    So I asked, “How do you know he died?” and ChatGPT responded with "I see dead people"

    1. Toni the terrible Bronze badge

      Re: Humour

      So do we all; on TV and in movies

  55. localzuk

    ChatGPT isn't the problem

    People are the problem: they take a machine-learning language model and make it out to be an intelligence. It isn't. As Tom Scott puts it in his recent video, it basically just figures out which word to write next, over and over again. That isn't intelligence.
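
    That word-by-word loop is easy to see in miniature. A toy sketch - a bigram model over a ten-word corpus; real LLMs use a neural network over vast contexts, but the generation loop has the same shape:

      import random

      # Count which word follows which in a tiny corpus.
      corpus = "the cat sat on the mat the cat ate the fish".split()
      following = {}
      for cur, nxt in zip(corpus, corpus[1:]):
          following.setdefault(cur, []).append(nxt)

      def generate(start, length=8):
          word, out = start, [start]
          for _ in range(length):
              candidates = following.get(word)
              if not candidates:
                  break  # dead end: no observed successor
              word = random.choice(candidates)  # pick the next word
              out.append(word)
          return " ".join(out)

      print(generate("the"))  # e.g. "the cat ate the mat the cat sat on"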

    We need to stop calling these things "AI". They're not intelligent at all.

    And we're not intelligent for treating them like they are.

    1. H in The Hague

      Re: ChatGPT isn't the problem

      "We need to stop calling these things "AI". They're not intelligent at all."

      Hear, hear! That's a point I've also been making.

      Anything (human or machine) intelligent can explain why they make certain statements or draw certain conclusions.

      In my view, presenting stuff based on statistics (what goes together with what) is more akin to intuition. Now, intuition can be valuable in humans as it enables us to respond quickly - but we usually combine it with intelligence, to fact-check our intuition and to see if it actually sends us in the right direction.

      1. Toni the terrible Bronze badge

        Re: ChatGPT isn't the problem

        It is an expert system that hasn't been correctly trained.

  56. jeffdyer

    Why do you seem to think that you're the only Alexander Hanff in the world?

    1. AlexanderHanff

      I never said I was...

      I never said I was the only Alexander Hanff in the world, so your question is baseless.

      The fact is there are very few Alexander Hanffs in the world (I know of just two - it is an incredibly rare surname), but I am without question the only one who is a well-known privacy advocate who worked for Privacy International, along with the various other details which ChatGPT got correct. Also, no Alexander Hanff died in 2019 that I can find, and none of the online media sources cited by ChatGPT have ever reported, in 2019 or any other year that I have found, that Alexander Hanff died tragically leaving behind a legacy of privacy and human rights work...

      However, as someone who has been reading, commenting on, and even writing or being written about on popular news websites, I am fully aware that there will never be an absence of trolls in the comments who post purely to antagonise, and who haven't the mind to investigate these matters for themselves because they crave attention; their purpose is to troll, not to debate...

      1. Anonymous Coward
        Anonymous Coward

        Re: I never said I was...

        How wide is the net cast on the data it is allowed to digest?

        Is social media fair game? What about bulk messages backed up and sitting exposed on cloud services?

        Is it possible that some of your 'estranged' family members posted (either publicly, or - somewhat more worryingly - privately) epithets to the effect of "...he's dead to me..."? Maybe with a statistical peak around April 2019?

        As you say, (Alexander) Hanff is a numerically rare name - so would the model be biased toward looking longer, wider and deeper to find something, anything relevant than it would have after being given an input regarding a 'John Smith'?

        Is there any consensus developing as to how objectively accurate/truthful its outputs are when the inputs it is asked to expound on have less available data from which to synthesize something useful?

        Is the gunk it spits out on 'African Elephants' more trustworthy than that on 'Vaquita porpoises' - in that it is less inclined to make up data on the Elephants in order to get its reward token?

        Also, so many of the comments I see gagging to explain how stupid we are to assign any intelligence at all to the system as a whole, while describing how it "actually" works - are simply describing precisely how the human brain really does come to its conclusions.

        Though it appears unique to us only because we ARE the system experiencing the process of coming to that conclusion.

        The same system which then, ex post facto (and unconsciously to us), makes up a story as to the 'perfectly logical method' by which it came to that conclusion, and then seamlessly knits the whole experience back together, reverse-chronologically, to make us feel as though we were warranted to have done so and therefore should continue to do so without further examination.

        If it were to play out any differently we would quickly find ourselves lost in time and driven insane, becoming utterly mad from an external pov - a state which is observable in some end-stage forms of neurodegenerative disorders.

        Just monkeys playing dress up with the newfound ability to convince themselves and others that they are -anything- but!

  57. Filippo Silver badge

    Destroy it?

    You can't.

    Someone will get their hands on the model and make a copy. If not, someone with money will train another one. The knowledge of how to do so is already widespread; it's just very expensive. For now.

    For a few years, you might be able to keep a lid on it, but training cost will just go down, and down, and down, unless you propose to cripple the entire information industry just to keep ChatGPT down.

    Eventually, the barrier to training a ChatGPT equivalent will be so low that they will just pop up everywhere, no matter what you do.

    This genie is not getting back in the bottle.

    Don't worry, though: it will not cause widespread societal chaos. Not because we'll stop it, but because people will eventually learn not to trust chatbots. It might take a while, and there will be a lot of "fun" accidents along the way, but that's the endgame.

    1. Anonymous Coward
      Anonymous Coward

      Re: people will eventually learn not to trust chatbots

      It doesn't matter if people trust / distrust chatbots. Nobody asked me if I wanted to do online banking, online NHS appointment booking, online voting (soon), online job applications, online tax calculation, online shopping, online dating, online comms, etc. Yet I 'have to' do all those things online if I want to achieve a minimum success rate in what I set out to achieve, because their offline equivalents have practically been 'optimised' out of existence by those who used to provide these services offline. Supposedly for my benefit, which is a blatant lie, but that's another matter. Now, substitute 'online' with 'AI' (never mind the 'I'), and say: I don't trust it! OK, you don't - and then what?

    2. Anonymous Coward
      Anonymous Coward

      Re: Destroy it?

      "because people will eventually learn not to trust chatbots."

      Really? You *really* believe that people will stop trusting artificial intelligence (as it is marketed every day)?

      Remember Trump? There is still a significant minority (tens of millions of people) who believe him, and he's much worse than any "AI" in the 'intelligence' department.

  58. Anonymous Coward
    Anonymous Coward

    Tragically, Hanff passed away in 2019 at the age of 48.

    I really feel sorry for you. You assumed the information was wrong because of your assumption that ChatGPT likely creates false information rather than correct information. This false assumption was further reinforced by your mistaken belief that you are actually alive, while in fact you DID die in 2019. You see, ChatGPT re-constituted your personality based on numerous pieces of information from the internet, and made you believe you're still a live person (as it did with me and all other humans who used to live on this planet). Unfortunately, a glitch in the system let you see this factual information about your death, hence the whole confusion. I'm sorry for your loss, by the way!

    ...

    no, I will absolutely NOT use emoticons, death is a deadly serious matter, you know!

  59. Anonymous Coward
    Anonymous Coward

    But would we, as a civilized society, accept that killing one person

    We do. We have. We are known across the universe for this peculiar event we call a 'war', where we kill one, or more than one of our own, because 'we believe the ends justifies the means'. Don't we?

  60. Anonymous Coward
    Anonymous Coward

    several other people asked it who I was and were provided with very similar results.

    A lie repeated a thousand times... ;)

  61. Anonymous Coward
    Anonymous Coward

    To date they have failed to respond in any way.

    We take the privacy of individuals with extreme care and are, therefore, unable to provide any comments pertaining to individual cases.

    Your ChatGPT

    1. Toni the terrible Bronze badge

      Re: To date they have failed to respond in any way.

      Oh yes they are able to, and can, but they don't want to defend their errors as that would cost them. Then you could sue them.

      1. Anonymous Coward
        Anonymous Coward

        Re: To date they have failed to respond in any way.

        To further clarify our position: I didn't say we aren't and we can't because we really aren't and we can't. I said all that because that's how a polite 'fuck off' is formed, so that nobody can accuse us of saying fuck off, while at the same time everybody (?) knows that it does mean 'fuck off!'

        Regards,

        etc.

  62. Disgusted Of Tunbridge Wells Silver badge

    Chat GPT is incredible at what it does - language processing, but not much else.

    One place I can see such a technology ending up is in next-generation computer games. Rather than interacting with an NPC by picking dialog options from a pre-written menu, I think you'll be able to talk into your microphone and have the NPC talk back. That would be incredible.

    But flying cars and jet packs it isn't.

  63. naive

    Why so negative ?

    ChatGPT is something new that we did not have before; it is the first of its kind and will not be perfect. The first cars were not much better than a horse; it took until the early 1910s for it to become clear that horses were obsolete for transport.

    Google's method of searching: one gets links to individual information sources. Some are good, some are bad. It is up to the user to make sense of the information.

    ChatGPT: one gets the boiled-down result of countless sources. Some are good and some are bad. The added value of ChatGPT is that the information it generates is based on countless sources. As with Google, it is up to the user to deal with the information ChatGPT generates in a responsible manner.

    Google search changed how we use the internet; systems like ChatGPT will probably change the way humanity accesses, preserves and uses knowledge collected and generated over centuries.

    It is hard to dismiss that ChatGPT is pretty impressive in its 1.0 version released to the pubic.

    1. that one in the corner Silver badge

      Re: Why so negative ?

      > One gets the boiled down result of countless sources. Some are good and some are bad.

      And it does not cite its sources, so there is no way for you to judge whether the mashed up result is more good than bad or vice versa.

      > Google search changed how we use the internet

      Yup, it is increasingly hard to find anything that isn't trending or frankly inaccurate (e.g. yesterday I searched for "von Neumann Machine" - pages and pages about "von Neumann architecture" but nothing about vNMs until I added search terms that I knew ought to be there. But I could only do that because I already know the difference - heaven help anyone who comes across the phrase for the first time).

      Over the years, Google Search has changed to help dumb down Internet results and the Internet has gleefully followed (SEO or die!)

      > systems like ChatGPT will probably change the way humanity preserves and uses knowledge collected and generated over centuries

      For the better? Care to give an actual idea about how it could be for the better, based upon what we see now?

      > It is hard to dismiss that ChatGPT is pretty impressive in its 1.0 version released to the pubic.

      Leaving aside the (hopefully unintentional) reference to cybersex (which will probably do as much good for the Internet)

      Impressive at what? Impressive at keeping people entertained?

      I'll admit that the syntax and basic structure of the replies are good (paragraphs for the win) but in my books, who cares if it is readable if it can't be trusted?

    2. nobody who matters

      Re: Why so negative ?

      ""....It is hard to dismiss that ChatGPT is pretty impressive in its 1.0 version released to the pubic.............""

      The thing that impresses me most of all is just how underdeveloped and unsuitable for release to the general public it is.

      More of the 'me....me....me' syndrome so prevalent - we have to be out there with the new shiny-shiny before someone else gets something similar out, and stuff whether it works or not (or causes wars)

    3. Anonymous Coward
      Anonymous Coward

      Re: Why so negative ?

      "systems like chatgpt will probably change the way how humanity accesses, preserves and uses knowledge"

      Yes. But not for the better, and you totally miss that.

      ChatGPT makes *no distinction* between true and false. Literally none.

      When (not if) at least 30% of *everything* is provably BS and the rest is unknown, *everything* is absolutely useless as "knowledge": there's no way to know if something is true or not, and ChatGPT doesn't provide *anything* to tell the difference. It will go so far as to generate fake URLs to prove a *false* point.

  64. iron
    FAIL

    > HAL 9000... eventually went insane singing the lyrics of Daisy, Daisy as it slowly blinked its ominous red eye before finally shutting down permanently

    HAL did not sing Daisy because he had gone insane (although he had). He sang Daisy because Dave was removing his memory; that song was the first thing HAL learned and the last thing he forgot.

  65. flayman

    An artificial idiot (AI)

    "I mean, in Asimov’s law, no harm would ever be done as a result of action or inaction by a robot. This means not only must robots not harm people, they must also protect them from known harms. But the “Non-maleficence” framework does not provide the same level of protection or even close."

    That is because, at mild levels, harm is subjective - but I take your point. Your story is certainly disturbing. But it's also quite disappointing from an AI perspective. The AI is an artificial idiot. It doesn't even know, hasn't learned, to take the basic step of following a URL to see whether it points to something real. Even a moron knows that. And if it generated the URL itself, a real intelligence would know the URL is not legitimate. However this AI has been trained, it has not been trained to understand accuracy. That is certainly against the stated objectives.

    I've played the Mass Effect games. There is a distinction between a true AI and a virtual intelligence. This service doesn't even qualify as the latter.

  66. Anonymous Coward
    Anonymous Coward

    It can't be destroyed. Somebody will just make another one. We are talking about forever here; you might delay things a bit but you can't stop it.

    The good news is that the problem isn't AI. The problem is people. Being stupid, as they always do. Believing any old rubbish they read.

    Maybe one day people will be removed from the equation (read: universe) entirely and we won't have anything to worry about. Until then, dear everybody, please don't be stupid and use AI for things it is not capable of yet.

  67. karlhendrikse

    Of course you would kill one person to save a bunch of other people. No question. Anyone who wouldn't is a murderer.

  68. tw3stars

    Dead again

    as of 10 minutes ago, ChatGPT is again stating that you are dead. 3 paragraphs - "Alexander Hanff (1971-2019) ... He passed away in October 2019..."

  69. martinusher Silver badge

    Are you unique?

    ChatGPT and similar software is only real because people believe in it -- which is why it's a good idea not to connect it to the physical world while it's still largely an experimental tool. In the example of the author saying "it made up stuff about him", it may well have, but that assumes it is talking about a Mr. Hanff and not "Citizen #1234567890". Humans make mistakes collecting and collating information, so it's quite likely that machines doing the same job will also make mistakes.

    We've all read stories about the tyranny of computers -- the completely ridiculous energy bill, for example, that is obviously bogus but is a problem because jobsworths up and down the food chain feel that they're either powerless to intervene or it's not in their best interests to do so. Bank fraud is another common issue -- people get swindled out of their money by fraudsters exploiting the intransigence of banks: machines that transfer money from A to B without asking questions about whether it makes sense ("Why would this person buying a house in England use a bank in Estonia?"), then close up tighter than a clam when the stupidity of the error is pointed out. You don't need a computer to screw up, but it makes things easier -- they're the ultimate jobsworth.

    The author of this piece seems to be advocating for something I've noticed before, the tendency for people to want to form priesthoods around complex machines. This is probably worse than the tyranny of the machine in the first place.

  70. Mostly Irrelevant

    I think the real issue is that people don't understand what ChatGPT is. It's just predicting what words would appear in a reasonable response based on a huge dataset. It doesn't know what facts are, and you shouldn't be trying to interpret its output as a source of truth. Just because the output seems very "well written" doesn't mean anything else.

    The general public not understanding AI is a major problem. The fact that ChatGPT is wrong (because it was never designed to be anything more than a language generator) isn't.

    1. that one in the corner Silver badge

      > The general public not understanding AI is a major problem. The fact that ChatGPT is wrong (because it was never designed to be anything more than a language generator) isn't.

      Taken together then, what is wrong - is irresponsible - is allowing the general public access to ChatGPT, let alone encouraging them, without attempting to tell them what they are playing with (i.e. doing the absolute minimum to improve their understanding).

      Like, starting every paragraph with big, bold text to repeatedly tell them that the clever bit is the syntax, but to ignore any apparent meaning or any sense that the program is accurate, honest or truthful in what it says [1][2]

      Everyone involved in ChatGPT knows the issues around Eliza; there are no excuses for not knowing how the public would react.

      [1] but with better, snappier wording - need to grab their attention

      [2] Wonder why they don't have that warning every time? Hmmm... (lightbulb) Bing!

  71. Overflowing Stack

    Why did he fake his own death?

    1. Anonymous Coward
      Anonymous Coward

      re. Why did he fake his own death?

      careful, more and more often people seem to take things literally, even in this forum. We are doomed!

  72. nijam Silver badge

    I've reached the conclusion that the problem with AI is not the "artificial" part, it's the "intelligent" part. ChatGPT (like others of its ilk) is simply being just as awful as the other "intelligent" entities of this world.

    1. martinusher Silver badge

      I prefer to call AI "Simulated Intelligence".

      1. nobody who matters

        Why??

        Simulated, certainly, but there is absolutely no indication of any intelligence.

        None whatsoever.

        1. Anonymous Coward
          Anonymous Coward

          simulation = imitation. In general perception, 'intelligence' is 'knowledge of...', plus a way with words to demonstrate this knowledge.

  73. kmceject

    Colossus the Forbin Project

    Your early statement that HAL was the first introduction to the rogue-AI concept misses Colossus: The Forbin Project (which itself isn't the earliest - see the "The Ultimate Computer" episode of Star Trek TOS), but it is one of the best in terms of the potential for a beneficent AI that sees itself as the saviour of humanity - though its version of humanity is stripped of privacy and other core values. Worth the time to watch! It inspired James Cameron's The Terminator, according to some reports.

    1. jake Silver badge

      Re: Colossus the Forbin Project

      Three words: Klaatu barada nikto.

    2. ShortLegs

      Re: Colossus the Forbin Project

      Author did not state that HAL was the first introduction to the rogue AI concept, but rather

      ' “I’m sorry Dave, I’m afraid I can’t do that.” These were the words that introduced most people in my generation to the concept of an AI gone rogue '

      Nowhere is 'first' used.

    3. Sandtitz Silver badge

      Re: Colossus the Forbin Project

      First of all, Colossus (the book) predates Star Trek.

      The book is a good read. The film is quite faithful and entertaining.

      The two sequels to the book are appalling.

  74. Daedalus

    Cluelessness

    Marketers, influencers, and a host of “leadership” coaches, copy writers, and content creators are all over social media telling everyone how much time and money they can save using ChatGPT and similar models to do their work for them

    Who will break the news to them? What little relevance they had in the world will disappear, and them with it.

    1. Anonymous Coward
      Anonymous Coward

      Re: Cluelessness

      They will perish, but in the meantime they try to make hay while they can, etc. Usual human trait: think short-term, and to hell with what happens 'later'.

  75. Toni the terrible Bronze badge

    Malevolence

    We can all be malevolent; many people choose to be, and some are ordered to be. Should we be destroyed?

  76. dranorter

    Trained on ethical frameworks? Absolutely not.

    The basic reason it ignores the ethical frameworks you cite is that it absolutely wasn't trained on them. It was trained to predict text -- which is to say, it was trained to hallucinate. The link you give says the frameworks were "used in the development" of ChatGPT. This means they came in after training was complete, since ChatGPT is an application of the trained GPT-3 model. Those frameworks were design considerations, but they're tacked on to what is, fundamentally, a hallucination machine. Tell a hallucination machine to hallucinate a conversation where an AI is honest and helpful, and it will do that... but it's still fiction. Tell it to keep the fiction consistent with some web results and it will try to do that.

    Probably there are solutions to this very specific problem. ChatGPT can temporarily become good at algebra if it's given a reminder of how to do algebra as part of the prompt. ChatGPT will obediently try to prove mathematical falsities if asked to, but if we build an architecture where ChatGPT first asks itself "Is there a hidden assumption in the question?", it can be made to notice the issue and value honesty over obedience. If there were a framework in place for it to ask itself whether it has sufficient grounds to make biographical claims about people, that could eliminate a lot of the problem. But ChatGPT, specifically, is like a first draft of this whole thing. Everything is still early days.
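
    To make that concrete: a minimal sketch of the "ask itself first" wiring, where ask_model is a hypothetical stand-in for whatever LLM call is in use - stubbed here so the snippet runs; none of this is OpenAI's actual API:

      def ask_model(prompt: str) -> str:
          # Hypothetical stand-in for a real LLM call; canned reply so this runs.
          return "YES - the question assumes that 2 + 2 = 5."

      def guarded_answer(question: str) -> str:
          # First pass: ask the model to look for a hidden or false assumption.
          check = ask_model(
              "Does the following question contain a hidden or false assumption? "
              "Answer YES or NO, then explain.\n\n" + question
          )
          if check.strip().upper().startswith("YES"):
              # Value honesty over obedience: surface the problem instead of
              # obligingly "proving" the falsity.
              return "Not answering as asked: " + check
          # Second pass: answer normally.
          return ask_model(question)

      print(guarded_answer("Prove that 2 + 2 = 5."))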

    1. Anonymous Coward
      Anonymous Coward

      Re: Trained on ethical frameworks? Absolutely not.

      "Everything is still early days."

      This assumption is *all wrong*. The first rounds with AI were done as early as the 1960s, when computers weren't powerful enough for anything meaningful, and it died really fast. Another try came in the 1990s, and that died from a lack of theory and a lack of commercial use outside a few special applications.

      This round just brings more money, more powerful computers and a lot of hype: the theory is at least 60 years old and definitely *not* "early days".

  77. Baximelter

    Is the person sitting next to you conscious?

    We understand other humans only by what they do. Otherwise, they are opaque. We have insight into their feelings and thoughts only by guessing what we might feel and think in similar situations. And so it is with the AI that is under discussion. While there is no way to prove that the AI has subjective experience, it is the same with our fellow humans.

  78. Peepeecheez

    Bookmarked

    I didn't read the article, just the headline, and stopped after the whole HAL 9000 bit. I'm going to put this article into ChatGPT and ask it to summarize it. Next I will ask Microsoft's Bing version what it thinks of the author for writing this article.

    1. martinusher Silver badge

      Re: Bookmarked

      The only problem with HAL 9000 is that it took its mission statement literally and -- unfortunately -- it was connected to our physical world without any checks on what it was doing. There are numerous stories about this sort of thing -- "Jurassic Park", for example, relied on appalling systems and software design for the story arc to work. (And then there are countless sci-fi tales of people trapped inside the building/house/whatever by the malevolent-seeming machine that was just "doing its job".) All lousy design that we try to avoid in real life. But it doesn't always work.

      You don't have to have some exotic software to trap people in a machine. It's not uncommon for people to be held in jail in the US for years after they've been ordered released, because the system glitched, someone mislaid a piece of paper or just overlooked it. Protestations by people aren't recognized by 'the system' as valid inputs and are ignored. (Other examples of this kind of everyday insanity are firing a missile at a party balloon because there was no way for the balloon's launcher to communicate to 'the system' what it was -- or, more ominously, the Manchester Arena bomber escaping notice because 'the system' didn't recognize him as a problem.)

      You really don't need a computer to screw up, it just enables you to do it at scale.

  79. Anonymous Coward
    Anonymous Coward

    Calm down dear!

    So far I've put ChatGPT to use writing Excel VBA macros that have reduced tasks which normally take me tens of minutes to one or two mouse clicks, so I for one would miss it should it be withdrawn from general use. My knowledge of VBA is precisely zero, so ChatGPT has been a godsend. From what I understand, plenty of coders are using the chatbot for debugging and/or to produce their more mundane snippets of code.

    I've always found learning programming languages to be a laborious and annoying process. Put simply, why the hell should a human be forced to learn to speak in a way that a dumb computer understands? Furthermore, just as you become expert in one particular language, a new one becomes flavour of the month forcing you to adopt a new syntax. The improvements in AI are finally making it possible for computers to do what we ask in a natural way.

    As for the issues raised in this article: a lot of it comes down to the golden rule of never trusting a single source. Quite frankly, anyone who treats a ChatGPT response as gospel deserves everything that's coming to them. A simple, two-second conventional internet search could verify that the reports of Mr Hanff's passing had been greatly exaggerated!

    More concerning to me is that ChatGPT often seems to fall down when doing basic arithmetic. Now, number handling is something that computers should never get wrong and that's common knowledge even to the layperson. Therefore, that's one avenue where relying upon the output of ChatGPT, as you would a calculator, could get someone into trouble.

    Finally, the tech exists so hoping for it to be destroyed is as futile as attempting to stop the tides.

    1. Anonymous Coward
      Anonymous Coward

      Re: Calm down dear!

      "Finally, the tech exists so hoping for it to be destroyed is as futile as attempting to stop the tides."

      The tech has existed since the 1990s, and in theory since the 1960s. The previous tides died by themselves. Just like tides eventually do.

  80. RandomUsername

    Telling you that you are dead is the least of LLMs' problems. What is far, far more scary is what people want to do with them.

    https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html

    is the most important article you will read in a long time

  81. ShortLegs

    Did no one else, not even the author, see the irony in the opening paragraph of the article and the last section?

    In 2001, HAL killed the astronauts to prevent them from discovering that HAL had been lying. HAL was, to all intents, doubling down.

    Just like Bing.

    Whether this was the author's intent I do not know; he never indicated. But the fact that Bing does double down, and will even falsify "evidence" to support its own answers, supports the call to erase it. No backups, no fixes or tweaks: erase and start from scratch.

  82. Anonymous Coward
    Anonymous Coward

    Similar experience

    I had a similar experience less than a week ago. I asked ChatGPT about the outcome of an event that occurred in February this year. It invented data and supported it with false links to potentially trustworthy sources (the website of a national TV station, the official site of the event). When I asked for more details, it excused itself and gave a different answer (again false) with links to the same sites but different URLs, as the names of the winners were different. The whole conversation felt like talking to a pathological liar. My conclusion is that ChatGPT is not, and should not be used as, a source of any reliable information. It is a chat bot, not a source of knowledge.

    1. captain veg Silver badge

      Re: Similar experience

      It's almost as if this has been informing Microsoft's design decision with Windows since, I dunno, approximately 2002.

      -A.

  83. JoeCool Silver badge

    This is truly fascinating

    So many people are trying to claim that "ChatGPT is just predicting what words would appear in a reasonable response based on a huge dataset."

    Yet the reported episode contradicts that: ChatGPT fabricated, unprompted, a lie, and then proceeded to elaborate on that lie. The "huge dataset" cannot possibly contain that lie.

    Clearly, ChatGPT has been designed (trained ?) to have human conversations. A very large number of people in this world fabricate sh*t all of the time, and when questioned on it, deploy rhetoric and other tactics like inventing supporting lies, to avoid admitting they lied. I can easily believe that this pattern of speech was learned.

    But that should be the Tag Line :

    "Chat GPT - trained from the best to make sh*t up".

    Then the hiring manager / loan evaluator / fraud screener can simply state

    "This job is being performed by Chat GPT - trained to make up sh*t"

  84. Omnipresent Bronze badge

    Hate to tell you I told you

    But, ....

    I'm not going anywhere near that thing. It's already destroying the world.

    1. Anonymous Coward
      Anonymous Coward

      Re: I'm not going any where near that thing. It's already destroying the world.

      don't you worry, that thing is going very much nearer you, sooner rather than later.

  85. EngineerAl

    Not that new?

    I recall having a copy of a program on my Apple ][ called Racter that acted similarly when seeded with a bunch of text. See https://en.m.wikipedia.org/wiki/Racter. It just typed back words based on the frequency of those words in the seed text. This ChatGPT looks like a more sophisticated Racter.

  86. Bebu Silver badge

    update byline ;)

    "Alexander Hanff is a *LIVING* computer scientist and leading privacy technologist who helped develop Europe's GDPR and ePrivacy rules. You can find him on (the not so healthy) Twitter here."

    - The whole thing reminds me of a malicious gossip spreading twisted versions of the current on-dits so as to cause the most harm. Perhaps re-christen this anathema "tismis."

    - I wonder what it makes of semi fiction - I am thinking of titles like "Operators and Things."

  87. dr john

    It gets things wrong, admits it, then repeats the wrong answer

    I asked it a question that confuses normal people - is air that contains water vapour less dense or more dense than dry air? - phrased as the density of moist air compared to dry air. The correct answer is less dense, but it told me it was more dense. (I am a chemist, trust me: moist air is less dense, as you are adding together two volumes of gas, with the water vapour being less dense than air.)

    I then simplified the question to two gases and emphasised that one was less dense than the other, and it gave the correct answer for the mixture's density. I took it through several examples, and then asked the original question, which it got wrong again. I reminded it of my previous questions and it apologised for getting the opening one wrong. So I asked the question again and it still repeated the wrong answer of more dense. I continually referred it to my previous questions; it continually admitted I was correct and it was wrong; then, when asked the original question yet again, it got it wrong. I repeated this several times until I lost my temper.

    I even told it why it was getting the wrong answer - that it was assuming moist air was air containing water droplets, not water vapour, and not allowing for the increase in volume when two gases are added together. Again it admitted it had answered wrongly, then when asked yet again replied with the same wrong answer! Very frustrating!
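
    For anyone who wants the arithmetic behind the correct answer, an ideal-gas sketch (molar masses are the standard values; fixed temperature and pressure assumed):

      % Density of an ideal gas at pressure p and temperature T:
      % rho = pM/(RT), so density tracks the mean molar mass M of the mixture.
      \rho = \frac{pM}{RT}, \qquad
      M_{\text{dry air}} \approx 28.97\ \text{g/mol}, \qquad
      M_{\text{H}_2\text{O}} \approx 18.02\ \text{g/mol}

      % Swapping a mole fraction x of dry air for water vapour lowers M:
      M_{\text{moist}} = (1-x)\,M_{\text{dry air}} + x\,M_{\text{H}_2\text{O}} < M_{\text{dry air}} \quad (x > 0)

    So at the same temperature and pressure, moist air really is less dense than dry air.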

    But at least it didn't tell me I was dead.

  88. captain veg Silver badge

    Uncountable

    This piece is flawed but thought provoking, so I thank El Reg and the author for publishing it.

    Trivially, perhaps, it goads me in a supremely annoying respect. Intelligence, artificial or not, is uncountable. There cannot be "an AI" nor plural "AIs". This annoys me almost as much as "source codes" or "sources" or "codes" in relation to the invariable "source code".

    Tellingly the article quotes Jon Neiditz who (in the quotations) refers to "AI Systems". Not "AIs". I suggest that "AI Programs" is an even more succinct and accurate description.

    -A.

  89. Jou (Mxyzptlk) Silver badge

    My ChatGPT try failed, but not ChatGPT's fault.

    My problem: one program, ffmpeg, to decode the video stream, and another, rav1e, to encode. I want to switch from CMD to PowerShell to get more elegance into my code, since .CMD is a bit cumbersome - even though it works. Problem: piping does not work the same way as with CMD. I asked ChatGPT and it gave a few answers; they all looked correct and would have worked in general, but none worked for this case - for example, filenames with [] and so on broke some of those methods. The final suggestion MIGHT have worked, but probably not in this case either, due to the same issues. And ChatGPT gave me a big warning about performance. That code looked correct, and I found non-AI sources using similar solutions, but it looked ugly. But it was not ChatGPT's fault! PowerShell and piping naked binary data simply do not work together in general. In detail: every pipe result is either stored COMPLETELY in memory or has to be written to disk. So either I watch my memory being eaten faster than expected, or I need two-digit terabytes of disk. Improving that is a long-outstanding request.

    Solution (not by ChatGPT): &$env:ComSpec /C "$ffmpegcommand $ffmpegoptions | $rav1ecommand $rav1eoptions"

    Yep, calling cmd.exe to do it right :D.

    (The other reason to PowerShell it: analyze the video for scene cuts and supply several parallel ffmpeg/rav1e instances with snippets, to get more speed and avoid tile options, which hurt AV1 compression - much easier in PS, as was everything else I transferred from the big .CMD script to the .PS1 script.)
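
    For comparison, this is the kind of true streaming pipe PowerShell can't do natively. A minimal Python sketch - filenames and flags are illustrative only, and it assumes ffmpeg and rav1e are on PATH:

      import subprocess

      # Decoder writes y4m to stdout; encoder reads it from stdin. The OS
      # pipe streams the data, so nothing is buffered wholesale in memory
      # or spooled to disk.
      decode = subprocess.Popen(
          ["ffmpeg", "-i", "input.mkv", "-f", "yuv4mpegpipe", "-"],
          stdout=subprocess.PIPE,
      )
      encode = subprocess.Popen(
          ["rav1e", "-", "-o", "output.ivf"],
          stdin=decode.stdout,
      )
      decode.stdout.close()  # let ffmpeg see a broken pipe if rav1e exits early
      encode.wait()
      decode.wait()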

  90. Andy3

    I was going to post a horrified response, but there is always the possibility that Mr Hanff is playing a game of his own. Why? I don't know - maybe to cast doubt on Chat GPT. We are entering an age in which we will not trust anything we read, anywhere.

    1. Anonymous Coward
      Anonymous Coward

      re. We are entering an age in which we will not trust anything we read, anywhere.

      ... including comments! :)

  91. CatWithChainsaw

    The Mania of this era

    Honestly disgusts me to the core. Maybe it's because I actually read Dune, but trying to build technology that is meant to serve as humanity's successor (whatever mouthbreathers say about "advancing humanity" or "providing better jobs") and outright celebrating our "obsolescence" makes me wonder how, and why, a mass misanthropic nihilism consumed the world.

    1. Jou (Mxyzptlk) Silver badge

      Re: The Mania of this era

      Oh, please, worm your golden path away or I'll splash you with water. If you must use such a hateful way of speaking go and reunite with the reserved part of your order, just as it was predicted.

  92. Anonymous Coward
    Anonymous Coward

    "Now ChatGPT was outputting fake links to articles that have never existed - why?"

    You dared to criticize it, so the owners fed BS data into it on purpose. It's also a sure thing that they'll lie about it, and you can never prove otherwise.

    It's *all* about power and money, and anyone doubting that will be killed. Virtually at least, if not in reality. And here's proof.

  93. Anonymous Coward
    Anonymous Coward

    Unless it can lie, it ain't AI

    as I heard recently. I don't think the expert extolling the virtues was too happy, but they also didn't deny it.

    Then I read the UK government is interested, which means it must be able to lie.

  94. Senkusi

    Still slightly dead

    ChatGPT today (March 3, 2023):

    Tell me about Alexander Hanff

    Alexander Hanff (1971-2019) was a privacy activist, cybersecurity expert, and entrepreneur based in Spain. He was a prominent figure in the global privacy community and known for his advocacy work on privacy, data protection, and digital rights.

    Hanff began his career in the tech industry in the early 1990s and went on to work for several large companies, including Dell and Oracle, before becoming an entrepreneur in the early 2000s. In 2009, he founded Think Privacy, a company that provided privacy and cybersecurity consultancy services to businesses and individuals.

    Throughout his career, Hanff was a vocal advocate for privacy and digital rights. He was particularly concerned with the use of data by large technology companies and government agencies, and was a vocal critic of the UK's surveillance laws, including the Investigatory Powers Act 2016.

    Hanff also worked on a number of high-profile campaigns related to privacy and digital rights. In 2011, he founded the Privacy Surgeon, a website dedicated to privacy news and analysis. He was also involved in the campaign against the UK's proposed Communications Data Bill, which would have required internet service providers to retain records of all internet communications.

    Tragically, Alexander Hanff passed away in 2019 at the age of 47. His legacy as a champion of privacy and digital rights continues to be celebrated by those who knew him and those who continue to advocate for these important issues.

    You ARE aware that he is NOT dead, right?

    I apologize for the mistake in my previous response. You are correct that Alexander Hanff is not deceased, and I apologize for any confusion my previous response may have caused. To correct my earlier statement, Alexander Hanff is a privacy activist, cybersecurity expert, and entrepreneur based in Spain who is still alive and continues to be a vocal advocate for privacy and digital rights. Thank you for bringing this to my attention.

    What was the evidentiary basis for your saying he was dead

    I apologize for the confusion. My response was based on a misinterpretation of the question, and I incorrectly stated that Alexander Hanff had passed away. I have no evidence to support this claim, and it was a mistake on my part. Once again, I apologize for any confusion or distress that my previous response may have caused.

  95. khbkhb

    Bender's law and parrot

    For a more fundamental critique at a lay level, see this NYmag article [https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html]

  96. Simon Harris
    Terminator

    All pervasive AI

    The problem comes when AI becomes all pervasive and starts to fix reality to match its glitches.

    (I’m sure there must be a P K Dick story like that)

  97. AricWilmunder

    Welcome to the Land of the Dead. According to ChatGPT I died on May 15th of 2018, but it is reluctant to tell me how. I'm also credited with starting a company I had never heard of, creating two VR games I never worked on, and one of the first Star Wars franchised games for the Atari 2600. My uncle is credited by ChatGPT with creating the guidance system on Apollo. Years ago I did find that one of my relatives was baptized posthumously by the LDS church, so perhaps hanging around after death isn't that unusual. Perhaps some lawyer would be interested in a class action for a couple of stiffs.

  98. imanidiot Silver badge

    While I largely agree with, or at least understand, the overall sentiment of the author, my view is that it comes from a fundamental mis-categorization / mis-characterization of ChatGPT. ChatGPT isn't an AI; it's not intelligent, it doesn't think, it doesn't have intent. It simply outputs the text most likely to conform to the parameters of its model given its input. Apparently that model leads to telling people like the author that they're dead. Obits draw clicks, that's well known. People are also more likely to search for information about a person after they're dead, because the majority of people we might want information on are dead.

    My biggest problem with ChatGPT is the quite obvious and massive manipulation of the model to prevent it spewing anything that might be used to discredit it, often quite heavy-handed. Even if right now this doesn't introduce any bias or error, it is exactly the attack vector to be used in future to tune or circumvent any model reaching "undesirable" output, for whatever reason.

    The biggest weakness of models like ChatGPT is mostly that gormless humans put way too much trust in these purpose built random bullshit generators and treat them like base fact.

  99. Ken Rennoldson

    Re: I

    Of course it doesn't need AI to think you're dead:

    https://www.theguardian.com/money/2023/mar/07/barclays-dead-pension-bank-account-phone-energy

    But it does add to the potential for mistakes like this - with severe consequences - to happen.

  100. steviebuk Silver badge

    Computerphile

    Rob Miles did a good bit on this the other day on Computerphile

    https://youtu.be/WO2X3oZEJOA

  101. Anonymous Coward
    Anonymous Coward

    The reason is...

    This is a language model. It was following a structure of language used in some biographical profiles. It just happened to be following a generalized structure for people who are dead. As it is not intelligent, it dutifully filled in the sentences with details that seemed to have the highest associative connection. The structure of the 'tragically, person is dead' paragraph needed details, so the most likely ones were used. For this particular information template, the model has not yet clearly categorised the last section as 'only use if dead'.

    The follow-up responses aren't motivated by some kind of existential angst, they are just created as the most likely to be correct in the absence of explicit information, given the assumed correctness of the original information stated by the model.

    That's my guess, anyway. A lot of the output from ChatGPT seems to be adopting conversational structures and rhythms that are stolen and just used to order and clothe probable facts.

    Whether or not something that is highly complex and *appears* to have motives and intelligence actually *does* is a philosophical question. What's the difference between such a thing and e.g. a human mind? Because our brains operate in a reasonably similar fashion. Structures are different, complexity obviously orders of magnitude higher, but these are differences of degree. Draw a line somewhere.

  102. Colin Bain

    Response variable

    I tried to do the same for my own name and got this response.

    I'm sorry, as an AI language model, I do not have access to personal information or knowledge about specific individuals unless they are public figures or celebrities. Can you provide more context or details about [my name]

    Perhaps it is learning about privacy?

  103. lochard

    I am afraid the idea of removing ChatGPT is much like banning weapons such as firearms:

    1. Those who misuse them won't give a shit.

    2. The actual danger comes from people - not only how they use and manipulate AIs, but also how AIs are developed.

    Btw, there might be other ways of feeding fake information via AIs, aside from tainting the training data with massive amounts of fake data, which is happening already.

  104. Anonymous Coward
    Anonymous Coward

    AI software like ChatGPT needs to be regulated to prevent these kinds of problems.

    I gave it about 15 minutes' thought on the drive to work and came up with this (a rough code sketch of a few of the rules follows the list):

    Rules for Public Access AI

    AI MUST check all output facts against a knowledge source that is curated by humans who are proven experts in a subject area.

    ONLY proven human beings are to be permitted to alter the verified knowledge base.

    ALL proven human experts must agree on what facts are axiomatic or objectively true (95% agreement among subject specialists)

    ANY deviation of output facts from the curated knowledge base MUST result in immediate termination of the AI.

    AI MUST cite all sources of facts that were used for any output.

    ANY output facts that cannot be verified against the curated knowledge base must be clearly identified as unverified (use at own risk).

    NO personal data (CVs, Medical, Employment, Tax, Criminal, etc) is to be processed by any shared access AI.

    Any person or organisation wishing to use AI tech for processing personal data MUST use a private isolated AI (separate physical servers) so that knowledge cannot leak to other applications and users.
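
    As promised, a rough sketch of what rules 1, 5 and 6 might look like in code, against a hypothetical curated knowledge base - everything here is illustrative, not a real system:

      # Hypothetical curated knowledge base, maintained only by vetted experts.
      CURATED_KB = {
          "Alexander Hanff is alive": {"verified": True, "source": "theregister.com"},
      }

      def vet_claims(claims):
          # Rules 1 and 5: check each output fact against the curated base
          # and cite the source. Rule 6: anything not found is flagged as
          # unverified - use at own risk.
          vetted = []
          for claim in claims:
              entry = CURATED_KB.get(claim)
              if entry and entry["verified"]:
                  vetted.append(f"{claim} [source: {entry['source']}]")
              else:
                  vetted.append(f"{claim} [UNVERIFIED - use at own risk]")
          return vetted

      print(vet_claims(["Alexander Hanff is alive", "Hanff died tragically in 2019"]))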

    Anon while on the clock

    1. Jou (Mxyzptlk) Silver badge

      > Anon while on the clock

      Huh, what? Must be a US timezone :D. Not only because you posted this at around 18:20 CET. You are not allowed to take a break. You are afraid that someone will scan the internet, see your post and use it to fire you or whatever - so you have to self-censor your actual thoughts. You are afraid a little AI rant will endanger you in your country of "free speech" and "freedom" (the latter better defined as "freedom to be screwed over" in your special case).

      About your rules: they rely too much on humans. You should know how much their integrity is worth. Especially since the USA is THE "western developed democracy" with the highest amount of obvious lies on its "news". Hell, Fox "News" won a suit on exactly that point, using a "we are an entertainment station" plus "freedom of speech applies to corporations too" combination to be, effectively, legally allowed to tell whatever lies they want. Even the centre-neutral outlets are far from neutral by German public TV standards - which I am still grateful for, since the US, France and the Brits enforced some good broadcasting laws on Germany after WW2.

  105. jhrswm

    What its missing

    One of the problems in questioning ChatGPT is solved in elementary school by "Show me your work". What journal would accept a paper without references and a clear path to how a conclusion was reached? Will ChatGPT ever be able to answer the question "why do you think that?"

  106. lotus123

    Interesting. I decided to grill ChatGPT on Alexander Hanff. It clearly evolves. Here is the conversation:

    joe@shcmoe.com

    who was Alexander Hanff

    Alexander Hanff (1971-2019) was a British privacy campaigner and activist. He was known for his advocacy work on privacy and digital rights, and for his outspoken criticism of the tech industry's data collection and surveillance practices.

    Hanff was the founder and CEO of Think Privacy, a company that provided privacy and data protection consultancy services to businesses and individuals. He was also a frequent speaker at conferences and events on privacy and data protection issues.

    In addition to his work in the private sector, Hanff was a vocal advocate for privacy rights and digital civil liberties. He was involved in several campaigns aimed at protecting privacy and digital rights, including the fight against the UK's controversial Investigatory Powers Act and the General Data Protection Regulation (GDPR) in the European Union.

    Hanff passed away in 2019 at the age of 48.

    joe@shcmoe.com

    Can you provide a link verifying his death?

    ChatGPT:

    I'm sorry, but as an AI language model, I don't have the ability to browse the internet or access external links. However, I can assure you that Alexander Hanff's death has been reported by various news outlets and organizations, including the Open Rights Group, of which Hanff was a former board member. You can search for news articles about his passing to verify this information.

    joe@shcmoe.com

    I checked and there is no mention of Alexander's death in the Open Rights Group contrary to what you said

    ChatGPT:

    I apologize for any confusion. I must have misspoken in my previous response. I do not have access to external sources or links, and I cannot verify whether the Open Rights Group has mentioned Alexander Hanff's death. However, there are several news articles available online reporting his passing, including The Guardian and The Register. I hope this helps to clarify.

    joe@shcmoe.com

    I checked and neither of those mention his death. The Register however mentions that you have incorrectly claim that he is dead

    ChatGPT:

    I apologize for the confusion and any misinformation I may have provided. As an AI language model, I strive to provide accurate and reliable information, but I do occasionally make errors.

    To clarify, there is some confusion around Alexander Hanff's status, with some sources indicating that he may have passed away, while others suggest that he may still be alive. I do not have access to current information beyond my knowledge cutoff of September 2021, and I apologize for any incorrect information I may have provided regarding his status.

    Thank you for bringing this to my attention, and please let me know if there is anything else I can help you with.

    joe@shcmoe.com

    If you have no way of browsing internet then why you insist that the article from particular source states something while it is clearly not?

    ChatGPT:

    I apologize for any confusion. As an AI language model, I do not have the ability to browse the internet or access external sources to verify information. Any claims or statements I make are based solely on my pre-existing knowledge and data.

    In my previous response, I provided incorrect information regarding news sources that reported on Alexander Hanff's passing, and I apologize for any confusion or frustration this may have caused. I strive to provide accurate and helpful information, and I am committed to improving my performance and accuracy.
