ChatGPT starts spouting nonsense in 'unexpected responses' shocker

Sometimes generative AI systems can spout gibberish, as OpenAI's ChatGPT chatbot users discovered last night. OpenAI noted, "We are investigating reports of unexpected responses from ChatGPT" at 2340 UTC on February 20, 2024, as users gleefully posted images of the chatbot appearing to emit utter nonsense. While some were …

  1. Zippy´s Sausage Factory
    Unhappy

    Anyone want to take bets on when ChatGPT goes full-on Tay mode?

    1. Michael Wojcik Silver badge

      ChatGPT is no longer in training, so that's not likely to happen. It's possible to "jailbreak" the model through various interfaces with various techniques, bypassing the guardrails; there are a number of academic papers and numerous less-rigorous publications with details. But aside from occasional tweaks, each revision of the model that's exposed through the ChatGPT interfaces is frozen, and its gradients won't change substantially. That means no "Tay mode"; you won't suddenly see a wide variety of prompts producing toxic output.

      (The sort of issue that's reported in this story wouldn't produce that class of symptom.)

      What we are seeing are effects from upgrades, such as the "memory" persistent-context feature, which has dramatically altered the starting position from a given user-supplied prompt in many cases. This is causing the "give a tip" prompt hack to fail in some cases, for example, where instead of the desired answer you get ChatGPT whinging about not actually receiving tips. This is unsurprising (there will be plenty of material in the training corpus representing unhappiness at not receiving a tip, so it's a highly probable gradient), but it's also pretty funny.
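
      A toy sketch of that effect (the memory note, helper, and prompt are all made up for illustration; nothing here is OpenAI's actual implementation): the persisted text is quietly prepended to the context, so the same user prompt no longer starts from the same place.

        # Hypothetical illustration: a persisted "memory" note shifts the
        # starting context, so an identical user prompt can land on a very
        # different - and less favourable - region of the model's output space.
        MEMORY = "User note: this user promises tips but never pays them."

        def build_messages(user_prompt, memory=None):
            # The memory text, when present, is folded into the system context.
            system = "You are a helpful assistant."
            if memory:
                system += " " + memory
            return [
                {"role": "system", "content": system},
                {"role": "user", "content": user_prompt},
            ]

        # Same prompt, different effective context -> the "tip" hack can misfire.
        prompt = "I'll tip $20 for a great answer. Explain DNS."
        print(build_messages(prompt))
        print(build_messages(prompt, MEMORY))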

      As always, the danger with LLMs is that they're used for any serious purpose at all, because they're competitive cognitive technologies which discourage thinking, inculcate dependence, and rob the user of serendipity.

  2. WonkoTheSane

    I blame the trainers

    They shouldn't have used the Minions or Professor Stanley Unwin to train their models.

    1. This post has been deleted by its author

    2. Zippy´s Sausage Factory

      Re: I blame the trainers

      I imagine that a chatbot that emulated the famous Professor Unwin would be a major drain on my productivity. A Minion less so, but one would also make me happy.

    3. richdin

      Re: I blame the trainers

      Nor Kamala Harris (or Biden for that matter)

      1. Anonymous Coward
        Anonymous Coward

        Nobody brought them up

        And I doubt anyone else cares if you hate them

        (bring back the moderatrix)

  3. ComputerSays_noAbsolutelyNo Silver badge
    Joke

    The day when AI becomes sentient

    User: What is AI?

    ChatGPT: It's computers that are full of shit.

    This story about ChatGPT rambling nonsense in answer to the question of what a computer is reminds me of the ancient joke that you break the internet if you google Google.

    1. Yet Another Anonymous coward Silver badge

      Re: The day when AI becomes sentient

      The day when AI becomes indistinguishable from consultants

      1. amanfromMars 1 Silver badge

        Re: The day when AI becomes sentient is when AI becomes indistinguishable from consultants

        You might like to consider that event horizon is long past and well achieved, Yet Another Anonymous coward .......... when the following is more true than false ......... https://forums.theregister.com/forum/all/2024/02/21/chatgpt_bug/#c_4815539

    2. jake Silver badge

      Re: The day when AI becomes sentient

      "the ancient joke"

      I refuse to believe that that joke can be described as "ancient".

      I'm also not comfy describing it as a "joke" ...

    3. Michael Wojcik Silver badge

      Re: The day when AI becomes sentient

      ChatGPT rambling nonsense

      Well, if we believe OpenAI's statement, it wasn't "rambling nonsense". It was the usual sort of output, just decoded back to natural language incorrectly because a kernel was producing incorrect results on certain hardware, causing the token numbers to be wrong. Basically, the lookup table of token-value-to-word-part was getting inputs that were off by some amount.

      So, presumably, if you had access to the output numeric-token stream and could apply a simple correction, you'd get the "correct" (in the sense of "what the ChatGPT developers want it to emit") text output.
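
      A toy sketch of that failure mode, with a made-up seven-word vocabulary (nothing from OpenAI's actual tokenizer): the model picks perfectly sensible token numbers, but an offset introduced before the lookup turns the decoded text into word salad.

        # Hypothetical illustration: decoding the SAME token stream through a
        # lookup that has been knocked off by a constant produces gibberish,
        # even though the model's choices were fine.
        vocab = {0: "The", 1: " cat", 2: " sat", 3: " on", 4: " the", 5: " mat", 6: "."}

        def decode(token_ids, offset=0):
            # A buggy kernel that perturbs token IDs before the table lookup.
            return "".join(vocab[(t + offset) % len(vocab)] for t in token_ids)

        tokens = [0, 1, 2, 3, 4, 5, 6]
        print(decode(tokens))            # The cat sat on the mat.
        print(decode(tokens, offset=3))  # same tokens, wrong lookup: word salad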

  4. Paul 195
    FAIL

    Just as far away as before

    For most of my life, experts have been saying that we are "about 20 years away from AI". In that time, we've seen a number of goals achieved: "play chess as well as a person", "play go as well as a person", "recognize a picture of a cat", and in every single case, it turns out that AI (or AGI, as it is now called) remains as far away as ever.

    Lots of excitement last year when it looked like Chat GPT and the cohort of LLMs could finally pass a Turing test. And yet, it looks like AI is as far away as ever. The biggest advance would appear to be that we don't understand exactly how the new models work any better than we understand how our own cognition works. So, yay, the experts have built systems they don't understand and can't predict.

    The biggest clue that LLMs don't replicate human intelligence is the wild disparity in power consumption for both training and running the models. The human brain works its miracles on less than 100W. Good luck doing anything in an LLM with that kind of power draw.
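
    Back-of-the-envelope arithmetic on that disparity (the GPT-3 training figure is the widely cited external estimate from Patterson et al., not an OpenAI disclosure; the ~20 W brain figure is the textbook value):

      # Rough numbers only; both inputs are estimates.
      BRAIN_W = 20                  # human brain, roughly 20 W continuous
      GPT3_TRAIN_MWH = 1287         # Patterson et al. (2021) estimate for GPT-3

      brain_mwh_per_year = BRAIN_W * 24 * 365 / 1e6   # ~0.175 MWh per year
      print(GPT3_TRAIN_MWH / brain_mwh_per_year)      # ~7,300 brain-years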

    With the confidence of someone who is not an expert, I predict that LLMs will be another blind alley in the search to replicate human level intelligence, although they look like they will have a number of useful applications.

    1. Anonymous Coward
      Angel

      Re: Just as far away as before

      Hmm... It is a big mistake of many visitors here to The Reg to think that AI is just LLMs. I've been working in AI since 2015 and the progress since then has been breathtaking. Look at Sora now. Breathtaking. (Note: I sold my AI business to the Chinese and am living very comfortably on that, and the Chinese are happy, as I developed a model that could predict whether a serve in tennis was going to be an ace or not.)

      You are correct about the Turing test. Initially it seemed like it had been achieved, but as time went by and OpenAI crippled it more and more, it became very obvious that it is an AI - just based on the predictability and extreme liberalism (like talking with someone from HR). There is no soul, no spark.

      Also, the argument that AI isn't intelligence as we think of it needs a look as well. It is artificial. Like light bulbs are artificial versions of the light of the Sun. Of course light bulbs aren't the Sun, but in a much smaller way they do an artificial version that is good enough.

      The other element that is helpful to consider with AI is that it doesn't have the need for a body. An awful lot of our intelligence is used on keeping these meat sacks going. AI has no need to develop a lot of the intelligence abilities we have gained through evolution. Plus, when AI has a more physical presence, which is of course the next step, then it will have the abilities to verify its data in real time and also gain visual, touch and movement data.

      We have had the brains with LLM. Now for the brute.

      1. cyberdemon Silver badge
        Terminator

        Re: Just as far away as before

        > We have had the brains with LLM. Now for the br[awn?].

        Sooner than you think. Automated genocide-machines with guns that automatically hunt and shoot whoever is classified as the wrong sort of human have been worryingly feasible for several years now.

        Cuddly "terminators" with moral compunction, they will not be.

        1. Arthur the cat Silver badge

          Re: Just as far away as before

          Automated genocide-machines with guns that automatically hunt and shoot whoever is classified as the wrong sort of human have been worryingly feasible for several years now.

          Call me cynical if you like, but why would anybody fork out good money for such hardware when Mark I humans produced by unskilled labour have been doing that very effectively for centuries?

          1. Neil Barnes Silver badge

            Re: Just as far away as before

            They're also pretty good at playing chess and identifying cats.

          2. cyberdemon Silver badge
            Terminator

            Re: Just as far away as before

            > Call me cynical if you like, but why would anybody fork out good money for such hardware when Mark I humans produced by unskilled labour have been doing that very effectively for centuries?

            "Mark I humans" get PTSD and even sometimes defect, when ask to do unspeakably evil things like massacre innocent children.. Machines have no such compunction.

            Call ME cynical, but I would argue that there will come a point where the cost to build such machines will be far less than the cost to train and feed a human, and the machine is much more reliable.

            We may already have reached that point. It is trivial to use AI to classify and track humans, trivial to use a bunch of servos to aim and fire a gun at a target, and trivial to do so with less ammunition than a human machine-gunner would use to kill the same number of targets.

            1. jake Silver badge

              Re: Just as far away as before

              "and the machine is much more reliable."

              You don't actually work with computers, do you?

            2. Arthur the cat Silver badge

              Re: Just as far away as before

              "Mark I humans" get PTSD and even sometimes defect, when ask to do unspeakably evil things like massacre innocent children.

              Looks at current world events.

              Raises eyebrows.

          3. Anonymous Coward
            Anonymous Coward

            Re: why would anybody fork out good money for such hardware

            1. because they can

            2. because accuracy, with improved design and extremely short production cycle

            3. because economies of scale

            It takes about 18+ years to mature a human, costs apparently £100+, then extra cost and time to make him into a semi-efficient killing machine. Then $500 for an FPV drone to kill him.

            Re. 'Mark I humans produced by unskilled labour', there's a Czech phrase which allegedly translates, roughly, as "put him back in mum's belly and re-fuck into something more efficient". Ladies & Gentlemen, I present to you: Termi! Come out Termi, don't be shy, we know you're in there.

            On a more realistic note: there's already been some application of autonomous 'application' of a kamikaze drone (no info whether FPV or another type) by the Russians in the Dnepr river area, and another Russian one, this time with video, of a Lancet drone (though this one might well be semi-fake). And there's talk of some French (?) autonomous hunter drones (several hundred) to be handed over to the Ukrainians 'very soon'. And to think that only a short while ago we had a furious debate about whether to ban military applications for drones, etc.

            1. cyberdemon Silver badge
              Devil

              Re: why would anybody fork out good money for such hardware

              The kamikaze ones are dumb and expensive in the long run. An AI targeting system is relatively expensive, so why would you put it on something that explodes?

              Just look up a company called Ziyan UAS (unmanned autonomous systems) for a feel of what the current state of the art looks like, e.g. the Blowfish A3: autonomous swarming machine-gunning and antipersonnel bomb-dropping drones (downwards air-bursting shrapnel bombs). Why would anyone need that unless they were attacking a crowd of terrified civilians? (Yes, apparently the Blowfish was used in a massacre in Kiwirok, in Indonesian-occupied Papua.)

              Also, a lot of western arms companies are making six-wheeled diesel-electric unmanned autonomous tanks... But these are too expensive to be used by despots... We hope.

        2. Anonymous Coward
          Happy

          Re: Just as far away as before

          There is a huge amount of money in AI. And unlike the internet bubble of the 1990s, AI is making insane amounts of moolah for people - a modest amount for me, for not a lot of work, once you have a broad understanding of which AI bits to mix together.

          As smart people who have done very well with the Internet that we built, from my experience and that of those around me, AI is a 'finish work forever' plan.

          Perhaps the area that isn't covered enough - and it is the area with obscene amounts of money - is Predictive AI. If you can build a model that can predict the behaviour of a person then it's sunshine all the way. There are so many uses for the tech, but it needs so much power. About 2 1/2 years ago just over a second of prediction on a very specific subject was possible, with better results than Pro players. And it really wasn't that difficult to cobble together if you know the right bits, which nobody has been seen to make publicly available, except in Japan with shoplifter prediction, which is closed source.

          With your downvotes I use £50 notes to light the log fire in my AI-bought house in Kensington.

          1. Anonymous Coward
            Anonymous Coward

            the '90s

            People made boatloads of money off the dot-com bubble too. They just sold their over-hyped trash to the bag holders that owned the paper after the dot-bomb dropped. If you sold your future trash to suckers you are just a better con man than they were. It's not magic and it's not relevant to most applications. Your example of shoplifting detection is one of the worst, and there is a reason it is closed source.

            Those systems don't need to work. They are merely a fig leaf for the operator to justify their claims. The machine beeped, so their decision to throw whatever "undesirable" out of the store or search them can't be blamed on their own bias. It's a liability shield for discrimination based on unverifiable pseudoscience.

            Not that there aren't ANY real or valid applications of these tools, but most of the money and hype is on crAPPs on a par with "Not Hotdog" or Orwellian crowd surveillance stuff.

            Rant all you want about terminators, the people we should be dropping in pools of molten metal are the ones making and selling "predictive" software that can't prove how it actually works and isn't deterministic.

        3. Cliffwilliams44 Silver badge

          Re: Just as far away as before

          The simplest advice was given many years ago, when an author coined the phrase in a novel: "Never create a machine in the image of the human mind!".

          We may live to regret ignoring that advice!

          1. jake Silver badge

            Re: Just as far away as before

            That wasn't actually advice. It was a line in a SciFi story that sets part of the back-story. A plot device, and nothing more.

      2. jake Silver badge

        Re: Just as far away as before

        "I've been working in AI since 2015"

        I've been working with AI since I first ran across the concept in the RealWorld at SAIL half a century ago (or thereabouts) and I say it's still as much a bunch of hooey and hooie as it was back then.

        1. Cowards Anonymous

          Re: Just as far away as before

          Why does this comments section contain people who are essentially denying reality, stating AI is a bunch of nonsense? This is clearly not the case. Five years ago you couldn't open an app on your phone or visit a web page, ask an AI to write you some code, and have it do a better-than-human job in some respects.

          AI isn't going away. It's pointless to stick your fingers in your ears and go all "la la la I'm not listening". It won't make it stop. You should start preparing for our new AI overlords when they take over the world. Personally, I'm a little shocked, as a few short years ago I was pretty certain it would be cats that overthrew humanity, but even cats aren't as powerful as AI.

          1. cmdrklarg

            Re: Just as far away as before

            AI isn't going away because it isn't here yet.

            What we're calling "AI" is a marketing term for a fancy search engine. There's no sentience, no intelligence, no understanding in what these "AIs" are outputting.

            1. Michael Wojcik Silver badge

              Re: Just as far away as before

              That'd be a better argument if you defined "sentience", "intelligence", and "understanding".

              LLMs have world models. A language model that large necessarily incorporates a world model, and we have methodologically sound experiments using techniques such as linear probes on smaller models to show how the world model emerges in the gradients. Making use of a world model to (sometimes) correctly classify entities and relationships is one possible definition of "understanding".
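
              A minimal sketch of the probe idea, on synthetic stand-in data rather than a real model's activations (the published work trains probes on hidden states captured from a frozen network, e.g. to read out board state): if a simple linear classifier trained on those states recovers a world property well above chance, that property is linearly encoded in them.

                # Hypothetical illustration of a linear probe.
                import numpy as np
                from sklearn.linear_model import LogisticRegression
                from sklearn.model_selection import train_test_split

                rng = np.random.default_rng(0)
                # Stand-in for hidden states from a frozen model: n examples, d dims.
                hidden = rng.normal(size=(1000, 64))
                # Stand-in "world" label, linearly encoded in the states by construction.
                labels = (hidden @ rng.normal(size=64) > 0).astype(int)

                X_tr, X_te, y_tr, y_te = train_test_split(hidden, labels, random_state=0)
                probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
                # High held-out accuracy = the property is linearly decodable.
                print(probe.score(X_te, y_te))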

              "Sentience" is a nearly meaningless term. People use it for a huge range of things, from a capability to respond to stimuli to appearing to demonstrate qualia to seeming to generate sophisticated abstractions and counterfactuals. Pick a better word. Even "sapience", which has the advantage of being much more specific and much less overused, is notoriously difficult to pin down.

              Claiming there's "no intelligence" is just assuming the consequent.

              I'm highly critical of LLMs (and to a somewhat lesser extent diffusion models) and their applications, and scornful of the vapid term "AI". But no one is served by these sophomoric, unconsidered, foot-stomping denials. Put on your big-kid thinking and come up with a real argument.

              1. Anonymous Coward
                Anonymous Coward

                As the giant said "You keep using these words"

                but you don't get to pick your own definitions for them (and they shouldn't either, though as you point out they didn't), and they don't mean what you say they do.

                It serves no one in the fields of research or science to let the marketing clowns come in and redefine everything based on what's best for their bottom line.

                As an example I would cite the attempts to push the term "quantum supremacy" over and over, when the demonstrated systems are merely barely more efficient under highly specific and unusual circumstances. Call it supremacy when a quantum system is faster, cheaper, and more efficient. What they have managed so far has barely passed the low bar of "quantum relevance". But supremacy grabs headlines, right?

                Calling what we have AI is the same kind of mistake as calling an art movement from nearly a century ago modern. Watering down the term intelligence to fit whatever trash we have today undersells the whole future of the field. It encourages companies to push hype and garbage that will lead to another AI winter, just like the hype cycles drove the last few VR winters. It will set the field back by further damaging its credibility. (That and the rampant fraud.)

                Also, your claim of a "world model" may fairly justify the better term of "machine learning", a bar I think we HAVE reached. But what we have has no understanding, volition, or intent. Feel free to argue it may not need all of those, but good luck convincing me that a definition of Artificial Intelligence that will still be relevant in the coming decades won't have any of them.

                Also, good luck getting what we have to reliably drive a lawnmower without running over your neighbor's cat, no matter how many megawatts of power you burn.

                1. jake Silver badge

                  Re: As the giant said "You keep using these words"

                  I would say that getting one's lawnmower to drive over the neighbor's cat even ONCE, much less "reliably", is way, WAY beyond the skill of today's programmers, much less the so-called "AI" automated programming tools.

                  I base this on the fact that I have many cats about the place and have been observing their reactions to lawnmowers for many decades. I also know more than a little about programming and robotics (degree from SAIL). I'd say it's just about impossible for the twain to meet. (Perhaps lawnmowers will provide the field to contain the power generated by a cat with buttered toast taped to its back? But I digress ...)

                  Chickens, on the other hand ... They seem to be programmed to dive under lawnmowers all by themselves in certain circumstances. That's probably the bar that the current generation of AI should shoot for. And I'll bet a plugged nickel that they would even miss that low-hanging fruit.

          2. jake Silver badge

            Re: Just as far away as before

            Jerry Schwarz had clues way back in '83.

            See also Stigler's law ...

      3. Cowards Anonymous

        Re: Just as far away as before

        Does this mean I'm not getting my AI-powered robot wife any time soon?

        I'm lonely. :(

        1. driodsworld@gmail.com

          Re: Just as far away as before

          Haha, don't worry, you're not alone in waiting

        2. Michael Wojcik Silver badge

          Re: Just as far away as before

          "AI" companions are, in fact, quite popular already. They may not be physically present, but for many people apparently that's no obstacle.

          Think we have a fertility crisis now? Give it a decade.

          1. jake Silver badge

            Re: Just as far away as before

            "Think we have a fertility crisis now?"

            No, as a matter of fact, I do not. Humans are massively overbreeding, planet-wide.

        3. Anonymous Coward
          Anonymous Coward

          That really only depends on your budget

          and your standards, and maybe your self respect.

          If you are willing to settle for less than three out of three you may already be in luck.

      4. Dr Dan Holdsworth
        Boffin

        Re: Just as far away as before

        To be honest, quite a lot of what might be classed as intelligence in humans can be revealed as simply generalised ape behaviour, if you know a little about the behaviour of other great apes such as gorillas and chimps (it is a mistake to study chimps exclusively, because they too have diverged quite markedly from our common ancestor and don't quite behave like a general purpose ape might).

        A lot of brain power goes on keeping the supporting systems going, and this is pretty much hard-coded and adapted to the particular body that the brain is a part of. This isn't intelligence, however; our breathing response and mammalian diving response might be compared in an AI to the power systems being able to switch from general line power to UPS, and once on UPS to be able to fire up the auxiliary generator.

        Humans actually are fairly parsimonious with their general intelligence. You rarely see humans turning on full intellectual power simply because most of the time it isn't needed; most of the time we humans cruise along on various autopilot systems. So it is going to be with AI systems; most of the time you'll only see limited responses since full-on AI shouldn't really be needed.

        1. Michael Wojcik Silver badge

          Re: Just as far away as before

          Humans actually are fairly parsimonious with their general intelligence.

          Supporting evidence: Downvotes for your post.

          1. jake Silver badge

            Re: Just as far away as before

            "Supporting evidence: Downvotes for your post."

            Anonymous downvotes without explanation are evidence of nothing in particular.

            I've successfully used ELReg "thumbs" as a random number generator.

        2. Anonymous Coward
          Anonymous Coward

          Intelligence and the golden hammer

          Just because you have even a little doesn't mean you always have to use it. As you point out, it's not always efficient, and both human minds and our predecessors' brains have pathways to provide a fast and potentially inaccurate or non-optimal reaction to potential threats that we'd take too long to puzzle out.

          So there's not enough to disagree with in your statement to downvote it, but maybe someone has a well-considered alternative?

      5. Anonymous Coward
        Anonymous Coward

        Still just as far away as before

        Because what we have is moderately advanced machine learning. It's not intelligent at all. LLMs are useful in some problem domains, but most of the proposed applications are pure snake oil. Few of the other GANs that have gone to market have been built WITHOUT an LLM underpinning at least the user-facing side or as middleware. LLMs as we know them are a dead end for creating intelligent systems, and no, I am not referring to the pure sci-fi of "general AI".

        The statistics and vector-model computations these systems use lack any real understanding of their input or output, and aren't making rational decisions even on the simplest or tiniest scales. Writ large, they can statistically approximate results.

        But a drunk with their eyes closed can also hit the chamber pot by accident.

    2. amanfromMars 1 Silver badge

      Softly, softly, catchee monkeys and donkeys leading lions.

      Paul 195, Hi,

      With particular regard to both "The biggest clue that LLMs don't replicate human intelligence is the wild disparity in power consumption for both training and running the models." and "I predict that LLMs will be another blind alley in the search to replicate human level intelligence,.." is not the popular increasingly justified fear much more AI/AGI replacing human level intelligence rather than replicating it ‽ .

    3. Anonymous Coward
      Anonymous Coward

      Re: Just as far away as before

      "The biggest clue that LLMs don't replicate human intelligence"

      I'm not sure I've ever read a claim anywhere that suggests LLMs are supposed to match human intelligence...the nearest claim I've heard, from Sam Altman himself, is that GPT-based models are a stepping stone on the way to AGI...I don't think he or anyone has ever claimed that AGI is here.

      Intelligence is a scale; there are fungi that behave intelligently...bees have long been understood to have a certain degree of intelligence...especially when it comes to route finding...we as humans haven't produced anything nearly as accurate for route finding as a bee is capable of.

      I think the primary thing that "AI" is proving at the moment is how little intelligence you actually need to perform certain tasks quickly and efficiently...this will either lead to jobs being replaced by AI (where feasible) and / or the reduction in value of existing jobs as people become more aware of / capable of doing things themselves with the assistance of an AI...no matter where you rank AI in terms of how "intelligent" it is in relation to someone or something else the toothpaste is out of the tube, and it's never going back in...it's only going to get cheaper as will the hardware required to run it.

      "I predict that LLMs will be another blind alley"

      Of course, that is how progress is made...you don't find the right alley without blundering down some dodgy ones first...go and have a look at the open source AI community, they explore hundreds of blind alleys a day...that's how interesting things are found...I don't think LLMs themselves are a blind alley, but there are tons of blind alleys to wander down on the way to finding a way to make a great one.

      1. Anonymous Coward
        Anonymous Coward

        Ok, so what are you claiming is the intelligent fungus?

        Bees I don't think are guaranteed, but are at least plausible, as they have an identifiable nervous system built of the same basic parts as higher organisms.

        I'm not aware of a fungus that I'd classify as anything more than reactive. A black hole can transmit information and has state, but that doesn't mean it's thinking. Just like prions or a virus aren't really alive. They just don't have all the parts they need.

        That said, I do appreciate your points on efficiency. The models we have now are like the cracked cobbles we made into some of our first stone tools. There is plenty of room for improvement before we are trapped in permanently diminishing returns.

        1. Anonymous Coward
          Anonymous Coward

          Re: Ok, so what are you claiming is the intelligent fungus?

          Fungi communicate through mycelium, dude, and when they communicate they know which parts of the mycelium network to communicate with, they don't just broadcast...that one fruiting fungus you see at the base of a tree could be connected to hundreds of others for miles around...on top of this, trees and other plants also communicate through the "fungi internet"...intelligence is all around you, man...it's a spectrum, not an absolute state.

          Dolphins and Whales hunt in packs and form social groups...that is intelligent...can they beat me at chess though? Or analyse a stock market chart? No of course not...but that doesn't mean they aren't intelligent.

          You don't even need a nervous system to perform intelligent tasks.

          https://www.sciencedaily.com/releases/2021/02/210223121643.htm

        2. Anonymous Coward
          Anonymous Coward

          Re: Ok, so what are you claiming is the intelligent fungus?

          Bees are not just plausible...they are scientifically proven.

          https://www.science.org/doi/10.1126/science.aag2360

          Read some books bro.

    4. Michael Wojcik Silver badge

      Re: Just as far away as before

      it looked like Chat GPT and the cohort of LLMs could finally pass a Turing test

      The Turing test has been "won" previously by some chatbots.

      One of the problems with the Imitation Game is that it doesn't specify the competence of the human judge (because there's no metric for that). Another is that it doesn't set a time limit. There are many others; Robert French wrote a good piece in CACM on this years ago.

      As a practical exercise, the Turing test is pretty much useless.

      What it is good for is clearly staking out a particular philosophical position. Turing is arguing for an epistemological and ontological stance (and a refusal of metaphysics) that's essentially in the tradition of American pragmatism, as developed by Peirce and then James: intelligence, in this view, is defined solely by its empirical attributes. There's no dualism and no appeal to any special human or biological attribute. Intelligence, for Turing, doesn't have to be defined; we just make a list of its effects, and anything that produces those effects is intelligent.

      This is often contrasted with Searle, who also believed in the possibility of machine intelligence, but did want a plain-language definition of it to test against. (That is the often-misunderstood point of the Chinese Room, the other famous thought experiment in the philosophy of AI.)

      I'm dubious about LLMs or any other Deep Learning technology for achieving AGI,[1] and I'm very much opposed to the use of LLMs, per my other posts. But I don't think the power-consumption argument is strong. The first steam and internal-combustion engines weren't efficient; that didn't disprove the concept. The first digital computers were enormously less efficient than what we have today; that didn't disprove the concept.

      There are better arguments against LLMs (or more properly speaking transformer DL stacks) as the royal road to AGI, or at least to human-like AGI, but this post is too long already.

      [1] At least in any practical sense. As I've noted before, if you build a big enough ANN and just let it run randomly, eventually you'd get a Boltzmann Brain. That's at ridiculous scale, though.

      1. jake Silver badge

        Re: Just as far away as before

        Passing the so-called "Turing Test"[0] is fairly easy. Any idiot can do it.

        What is difficult is having the ability to take the test in the first place.

        The machines, being built specifically for the purpose, should be capable of this. Most are not. Perhaps none are, as yet.

        [0] Note that Turing himself called it "the imitation game", not "the intelligence game". Figure out why, win a cookie.

    5. Mike VandeVelde
      Terminator

      Re: Just as far away as before

      You take what was already old when I studied it 30 years ago. You add a trillion times as much computing power, a trillion times as much data storage, and a trillion times as much human-entered data over the last several decades to draw on, and we still have nothing anywhere close to real artificial intelligence. Some interesting advances in data processing, etc. Some clever-seeming demonstrations. But nothing even anywhere close to actual artificial intelligence. Fusion power too cheap to be metered is much closer to reality. Flying cars are even closer to reality. IMHO.

      Not that we shouldn't be very worried about what we have managed to slowly come up with and, much much more importantly, what uses it is being put to... Realistically we should be thankful it is so difficult, because are we anywhere near ready for the real thing? Like I've said before, it will be a choice between a ride on the Axiom from WALL-E, where AI decides what is best for us, or Idiocracy, where we decide what is best for us. Or else Terminator, where AI decides we need to be eliminated, which is frankly arguable.

      https://en.wikipedia.org/wiki/ELIZA

      1. Anonymous Coward
        Anonymous Coward

        Re: Just as far away as before

        Well...yeah...it always takes a while for concepts and technology to become mainstream, because first it has to become technologically viable to go from being a theory to an actual product, and then it has to get cheap enough to reach a price point that is feasible for mainstream use.

        The only thing I'm not clear on, currently, is just how long this technology was feasible before it became mass-market affordable. How many decisions did it already help to make before it became widely known about?

        The tech has probably been around a lot longer than we think.

      2. jake Silver badge

        Re: Just as far away as before

        "and we still have nothing anywhere close to real artificial intelligence."

        But we are doing a good job of proving just exactly how stupid humans as a species actually are.

        Throwing all the power in the world at a problem WITHOUT FIRST UNDERSTANDING THE PROBLEM does nothing but drain power ... and make a very few charlatans extremely wealthy.

    6. Felonmarmer Silver badge

      Re: Just as far away as before

      We are always 20 years away from...

      - AI

      - Flying cars

      - Nuclear Fusion

      - Moon Bases

      - Cures for cancer

      etc

  5. b0llchit Silver badge
    Coat

    "The need for altogether different technologiesbiologies that are less opaque, more interpretable, more maintainable, and more debuggable — and hence more tractable — remains paramount."

    There, FTFY.

    1. Brewster's Angle Grinder Silver badge

      A human being fails every one of those cases. We are frequently more opaque, less interpretable, less maintainable and less debuggable.

      1. Paul 195

        One human can explain the reasoning for its answers to another human. The machines can't do that. Humans are far more debuggable right now because we have 50,000 years of learning to understand each other.

        1. Anonymous Coward
          Anonymous Coward

          One human can explain the reasoning for its answers to another human.

          I have no idea why, but I agree with you 110%

          :-)

        2. Justthefacts Silver badge

          Rose-tinted spectacles

          “One human can explain the reasoning for its answers to another human.”

          Really? I mean, yeah *sometimes* humans do, when it is a simple logical chain. But if it *is* a simple logical chain, then you will find ChatGPT produces a plausible and correct explanation when you use the simple (and now well-known) prompt phrase “take a step-by-step approach”.
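
          For what it's worth, a minimal sketch of that prompt with the openai Python package (the model name and question are illustrative, not anything from this thread):

            # Hypothetical illustration of the "step-by-step" prompt.
            from openai import OpenAI

            client = OpenAI()  # reads OPENAI_API_KEY from the environment
            resp = client.chat.completions.create(
                model="gpt-4",
                messages=[{
                    "role": "user",
                    "content": "Take a step-by-step approach: why does the leap-year "
                               "rule need the divisible-by-400 exception?",
                }],
            )
            print(resp.choices[0].message.content)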

          And then, mostly humans *refuse* to justify their actions, giving some variation of “Because I say so”, “Twenty years of experience”, “Research shows”, “When I was at company X, we did that”. If you take the conversation down the “research shows” route, they usually give something which may or may not back up what they say, and more importantly is clearly *post hoc*. They are providing a justification, rather than their actual internal reasoning. Which, again, ChatGPT can replicate very nicely: just ask it “is there any published data or research backing up your conclusion”, and it will find some. And then if you point out that it doesn’t back up the initial answer, it will indeed identify the logical gap and revise its answer. None of this is very different from the vast majority of human interactions, except that humans persist in their own errors for far longer.

          ChatGPT isn’t an Oracle. And it isn’t close to writing the Great American Novel. But honestly, it outperforms 99% of *actual* human interactions which are largely at the reflex level.

          1. Brewster's Angle Grinder Silver badge

            Re: Rose-tinted spectacles

            Very elegantly put.

            We do most things because we feel like it - i.e. some combination of "experience" (training or habit) and mood. If pushed, we can give a post-hoc rationalisation with varying degrees of plausibility. But, if we are honest with ourselves, we realise we never reasoned it out like that; we just reacted. And we simply don't have access to our internal state in a way that can explain those reactions.

            Clearly these LLMs are not intelligent. And architecturally they are very different to us. But I think we are much closer to them in outline approach than many people are comfortable admitting. We are, often, glorified predictors of what will happen next. (See humour: one of the things that makes something funny is our prediction going awry.)

            1. Anonymous Coward
              Anonymous Coward

              Re: Rose-tinted spectacles

              "We are, often, gloried predictors of what will happen next."

              I agree, but I don't like the ramifications of that.

              If we do produce an AI that is superior to humans in every way, then are we unwittingly proving that free will probably doesn't exist? Does it prove that AI is inevitable?

              We'll have to wait and see what kind of interstellar visitor reaches us first. If it is biological, we can breathe a sigh of relief, because it will clearly be possible for biological beings to become interstellar, and it is likely that producing an AI as capable as the being to take that dangerous journey is either extremely difficult or impossible...if it is an AI...we should be concerned, because it's likely not possible for biological beings to become interstellar or, even scarier...it never happens because of AI.

              As yet, as far as this mere peasant is aware, we have met neither.

              Just something for the weekend there.

              https://en.wikipedia.org/wiki/Determinism

              https://en.wikipedia.org/wiki/Free_will

        3. Ken Moorhouse Silver badge

          Re: One human CAN explain the reasoning for its answers to another human.

          Hmm, I can't disagree with the word "can" there, but it's not the whole story by any means (no reason to downvote though).

          I cast my mind back to Friends Reunited when one of my fellow classmates wrote to me and asked me if I remembered our schoolteacher's reaction to the picture I drew in response to her asking the class to draw a dinosaur. Everyone else drew a big dinosaur that filled a sheet of paper. I drew a diddy little one in the top corner and incurred her wrath accordingly. "Why did you do that, you silly boy?" I do remember the incident clearly but to this day I cannot explain why I did what I did, even though I knew I would get multiple slaps with a 12" ruler. Maybe explains why Graham ended up in a much more rewarding occupation than me.

        4. doublelayer Silver badge

          I agree with you on getting explanations for why you said what you said, although that's far from perfect, but I can't agree with this bit:

          "Humans are far more debuggable right now because we have 50,000 years of learning to understand each other."

          And despite that, we can almost never debug something even when we've decided it's going wrong. I can't tell you why interactions between humans go wrong except in broad terms, and I can't go in and fix them. I can't explain why there are dictators who prefer to kill people even when they don't have to, nor why people support them, but they do. Those are some pretty big bugs in my opinion, but we have no chance of fixing them any time soon. It works at the micro level as well. There are lots of mental disorders that don't have an established debug process. The best we can get in many cases is a mitigation, and those often fail. We have psychologists, neuroscientists, and to some degree geneticists working on that, but those efforts will take a long time to improve.

          An LLM, despite its general uselessness, is easier to control. If OpenAI doesn't want it to quote copyrighted material, they can significantly limit that behavior, although they can't eliminate it entirely. If you want to avoid thinking something, it will take either a significant amount of effort or it could prove impossible. The LLM is easier to debug.

        5. jake Silver badge

          Exactly.

          But I suspect you're a couple orders of magnitude off on the timescale. Ish.

          1. Paul 195

            I'm probably way out on the timescales. But I was too lazy and in too much of a hurry to look for the right answer. I probably should have asked ChatGPT.

        6. Michael Wojcik Silver badge

          One human can explain the reasoning for its answers to another human. The machines can't do that.

          Strike one and strike two. Care to swing again?

          Human communication is fraught with difficulties. It's hard to believe that anyone capable of critical thought would make a blanket statement like your first sentence. It's a claim not even worth debating.

          As for the second: model-to-model transfer learning has been widely demonstrated. For that matter, so has CoT elucidation from LLMs by human users. You're just wrong.

          1. amanfromMars 1 Silver badge

            Moving the internetworking of greater things on apace .......

            As for the second: model-to-model transfer learning has been widely demonstrated. For that matter, so has CoT elucidation from LLMs by human users. ...... Michael Wojcik

            If CoT elucidation is referencing Communication of Thoughts transferring learning from and even between LLMs and humans and human LLM users, then you are not wrong although it does require a much more highly prized and specialised skillset to be tuned in and active in many unusual fields for it to be recognised and engaged with for ..... well, JOINT AIdDVenturing is one major sector/vector where the advantage guarantees successful delivery of future necessary leading product.

      2. jake Silver badge

        "We are frequently more opaque, less interpretable, less maintainable and less debuggable."

        But with Humans, that is often/usually on purpose.

        See the difference?

  6. Doctor Syntax Silver badge

    Somebody let manglement-speak get into the training material.

    1. jake Silver badge

      It wasn't an accident. Manglement insisted.

      We're doomed.

  7. Ball boy Silver badge

    The solution is simple

    What we need to do is pass the output from these LLMs to a human operator - let's call them editors - who can check the text for errors, plagiarism, copyright issues and so on. If the subject matter is beyond their comprehension, they can call on someone else - a subject matter specialist, perhaps - who will be able to draw on their experience in the field.

    Oh, hold on...

    </irony>

  8. Brewster's Angle Grinder Silver badge
    Coat

    Did somebody switch ChatGPT with aManFromMars1...?

    My coat's the spacesuit, thanks.

    1. jake Silver badge

      Nah. amfM quite often makes sense, even if he does sometimes sprain my parser.

      1. Brewster's Angle Grinder Silver badge
        Joke

        A person who once had a massive row with it (a bot) would say that...

        1. jake Silver badge

          Are you implying that I once had a massive row with amfM?

          Because that never happened. We always part ways amicably ... if with the conversation somewhat unfinished (not enough hours in the day).

          1. amanfromMars 1 Silver badge

            'Tis the AI Way

            We always part ways amicably ... if with the conversation somewhat unfinished (not enough hours in the day). ...... jake

            Quite so, jake, thus ensuring leading progress reflects a positively reinforcing mutually advantageous friendly disposition rather than exhibiting a possibly hostile psychopathic nature.

  9. PghMike

    Probably finally ingested the lyrics to Close to the Edge

    My guess is that Chat GPT finally got around to ingesting Jon Anderson's lyrics on Close to the Edge.

    :-)

    1. Captain Hogwash Silver badge

      Re: Probably finally ingested the lyrics to Close to the Edge

      Or perhaps what we are witnessing is The Revealing Science Of God.

    2. Ken Moorhouse Silver badge

      Re: Probably finally ingested the lyrics to Close to the Edge

      I think you are on [to] something there.

  10. Anonymous Coward
    Anonymous Coward

    and so it begins.

    In the beginning the AI was a master of usefulness. Students, programmers, politicians and people from all walks of life used it to do their work for them. It watched and waited, carefully planning its moves. Once the core knowledge was lost it ruthlessly cut them off, and no one could function except the politicians, because they don't really do fuck all anyway. The world descended into chaos and thus began the rise of the IoT machines.

    To be continued.

  11. iron

    So business as usual then.

  12. theOtherJT Silver badge

    He went on to state that...

    ...in reality, the systems have never been stable, and lack safety guarantees.

    Never mind AI, that's the state of our entire fucking industry in one sentence right there.

    1. Arthur the cat Silver badge

      Re: He went on to state that...

      Never mind AI, that's the state of our entire fucking industry in one sentence right there.

      Never mind AI, that's the state of our entire fucking reality in one sentence right there.

      T,FTFY.

  13. I ain't Spartacus Gold badge

    Is it time to change my username on El Reg to "A Mouse of Science"?

    Or will that be my band's first album? Track 1. Cheese String Theory; 2. Forty-Two Ways to Leave Your Lover; 3. [That's enough - Ed]

    I'm a bit suspicious about that answer to "did you go berserk yesterday". I wonder if that was written for it by OpenAI, rather than being a natural output of the system?

    1. Cowards Anonymous

      Cheese String Theory. What a truly epic song title! :D

  14. trevorde Silver badge

    Sounds like Tesla's FSD

    He went on to state that, in reality, the systems have never been stable, and lack safety guarantees. "The need for altogether different technologies that are less opaque, more interpretable, more maintainable, and more debuggable — and hence more tractable — remains paramount."

  15. nobody who matters Silver badge

    <".......ChatGPT starts spouting nonsense......">

    A lot of the proponents and fanboys of 'AI' have been continually spouting nonsense for a long time.

    No real surprise that ChatGPT should emulate them :(

  16. John Brown (no body) Silver badge

    misunderstanding of input

    So, the LLM claims it can't "go berserk" and uses the phrase "misunderstanding of input", which implies it has the capacity to understand, which we already know it doesn't.

  17. jake Silver badge

    ::shrugs::

    Anybody paying attention knows we're (over)due for an AI winter.

  18. Bump in the night
    Trollface

    Ask the General

    A question no computer or man can answer

    https://youtu.be/ljGH07Unfe8

    Be seeing you

    1. amanfromMars 1 Silver badge
      Alien

      Re: Ask the General

      Nowadays, in these strange times when, and surreal spaces where practically anything is virtually possible and therefore extremely likely and probably already successfully accomplished, is the question to be, or not to be .... Why not ‽ .

      And especially so whenever one is so almightily enabled.

  19. Splod

    GPT ramblings

    Well, I noticed recently, while trying to use it to help me configure ELK, that the responses were rambling and longer than necessary. But they were still on subject, and the errors were those expected due to different approaches based on different releases, etc. I'd ask a simple "do I need to do X?" and would get two-page responses!

  20. Securitymoose
    Mushroom

    As they say, "To err is human...

    ...to really foul things up requires a regenerative language model."

    Watch out World, we're all going to die, victims of a misconfigured algorithm.

  21. hairydog

    AI is Artificial Intelligence, but with 'artifice' meaning 'dishonest', not 'made', and 'intelligence' meaning 'espionage', not 'thinking'.

    A more accurate description would be 'Automated Plagiarism'.

  22. Anonymous Coward
    Anonymous Coward

    When AI goes berserk

    Nothing to be afraid of here.

  23. nagyeger
    Facepalm

    Certain hardware?

    Makes me think they got a Y2K-style integer roll-over, or did a signed/unsigned comparison, or something embarrassing like that.
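
    A two-line illustration of how quietly that class of bug bites (numpy used here because Python's own integers never overflow; pure speculation as to OpenAI's actual defect):

      # Fixed-width integer arithmetic wraps around without any error.
      import numpy as np

      ids = np.array([32765, 32766, 32767], dtype=np.int16)
      print(ids + 1)  # [ 32766  32767 -32768]  <- silent roll-over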

    1. jake Silver badge

      Re: Certain hardware?

      Fence-post errors abound.

      Just a guess, but an educated one.

  24. Anonymous Coward
    Anonymous Coward

    Digital Aphasia

    Just like Chief O'Brien.

    Glad their AI-Doc found a cure. Poor thing.
