Worry not. China's on the line saying AGI still a long way off

In 1950, Alan Turing proposed the Imitation Game, better known as the Turing Test, to identify when a computer's response to questions becomes convincing enough that the interrogator believes the machine could be human. Generative AI models have passed the Turing Test and now the tech industry is focused on Artificial General …

  1. Alan Bourke

    Can't remember who said it ...

    ... but they said something to the effect that if what we have now in the 'AI' sphere is like getting a person into Earth orbit, then AGI is like interstellar FTL spaceflight.

    1. The Man Who Fell To Earth Silver badge
      Terminator

      Re: Can't remember who said it ...

      Not a bad analogy. A bigger question is whether the goal is "human equivalent" intelligence which humans do with 10^11 neurons (most of which are not devoted to higher intelligence) or is the goal "omnipotent intelligence"?

      1. amanfromMars 1 Silver badge

        Re: Can't remember who said it ...

        A bigger question is whether the goal is "human equivalent" intelligence which humans do with 10^11 neurons (most of which are not devoted to higher intelligence) or is the goal "omnipotent intelligence"? .... The Man Who Fell To Earth

        There’ll be no prizes for guessing the most likely correct answer to that question, amigo.

  2. Paul Crawford Silver badge
    Terminator

    That's a huge number: "Five orders of magnitude higher than the total number of neurons in all of humanity’s brains combined," the authors observe

    Here I am, brain the size of a planet and you have me parking cars eliminating humans!!!

    1. Anonymous Coward
      Anonymous Coward

      Five (Fifteen) orders of magnitude

      That's a huge number: "Five orders of magnitude higher than the total number of neurons in all of humanity’s brains combined," the authors observe

      The numbers printed differ by 15 orders of magnitude.

      That makes the two rather incomparable, as LLMs use feed-forward networks that require many, many passes during learning, but only a single pass during inference. A feedback network with loops, like biological neurons, can easily use the same connections many times, and can do so with "memory". A feed-forward network has to do all the work using every connection only once.

      Also, the relevant parameter for human brains is not the number of neurons but the number of synapses, which is already four orders of magnitude larger than the number of neurons (which they got wrong anyway). And each synapse is governed by more than one parameter.

      In short, this comparison is utterly meaningless.
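      For the curious, the orders-of-magnitude claims are easy to check with a back-of-envelope script. The neuron, synapse, and population figures below are common textbook estimates, not numbers from the paper:

```python
import math

# Rough public estimates (assumptions, not figures from the paper):
NEURONS_PER_BRAIN = 8.6e10   # ~10^11 neurons per human brain
SYNAPSES_PER_BRAIN = 1e15    # ~10^4 synapses per neuron
HUMANS = 8e9                 # world population

total_neurons = NEURONS_PER_BRAIN * HUMANS    # ~7e20
total_synapses = SYNAPSES_PER_BRAIN * HUMANS  # ~8e24

claimed_params = 1e26  # the parameter count quoted in the article

# Orders of magnitude separating the claimed parameter count from
# humanity's total neurons vs humanity's total synapses:
print(round(math.log10(claimed_params / total_neurons)))   # 5
print(round(math.log10(claimed_params / total_synapses)))  # 1
```

So the article's "five orders of magnitude" holds against neuron counts, but shrinks to roughly one order once synapses are used as the comparison point.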

    2. Fr. Ted Crilly Silver badge

      Here, hold this piece of paper...

  3. xanadu42

    Generative AI models have passed the Turing Test ...

    ... which was proposed in 1950! (see https://www.britannica.com/technology/Turing-test )

    That's a lot of technological change for those "electronic thinking machines"!

    1. Version 1.0 Silver badge
      Facepalm

      Re: Generative AI models have passed the Turing Test ...

      So what is the normal AI IQ level? Looking around at AI everywhere, it looks like its IQ level is about 55 to 85, never any higher. We're using AI everywhere these days but never see anything indicating that its IQ level has been determined, like ours was in school when we were kids.

      I'd probably be happier with AI if it was closer to my IQ level or even higher. I expect we'd all feel happier if we knew AI IQ was close to ours.

      1. Ken Hagan Gold badge

        Re: Generative AI models have passed the Turing Test ...

        It would be mildly impressive to see an AI that could even attempt answers to the average IQ test, let alone score 100. In my experience, these tests have all sorts of question types: verbal, numerical, and visual.

        Can you scan images into ChatGPT and ask it to draw the next in the sequence?

        1. JohnSheeran

          Re: Generative AI models have passed the Turing Test ...

          You can upload images to ChatGPT. You can request it to perform an action.

          It seems like a great experiment to see what the result is. I'm surprised no one has attempted it already.
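          As a sketch of how one might script such a test (assuming the OpenAI chat-completions vision payload shape; the model name, prompt, and URLs are placeholders, not anything from this thread):

```python
# Hypothetical sketch: assembling a vision request for a
# "what comes next in this image sequence?" IQ-style question.
# Only builds the payload; sending it would need an API key and client.

def build_sequence_test_request(image_urls, model="gpt-4o"):
    """Assemble a chat-completion style payload mixing text and images."""
    content = [{"type": "text",
                "text": "These images form a sequence. Describe, and then "
                        "draw, the next image in the sequence."}]
    for url in image_urls:
        content.append({"type": "image_url", "image_url": {"url": url}})
    return {"model": model, "messages": [{"role": "user", "content": content}]}

payload = build_sequence_test_request(
    ["https://example.com/seq1.png", "https://example.com/seq2.png"])
print(len(payload["messages"][0]["content"]))  # 3: one text part, two images
```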

        2. C.Carr

          Re: Generative AI models have passed the Turing Test ...

          Some models can pick the next appropriate image in a sequence, by figuring out the rule. Doing it close to human level requires a lot of very expensive inference compute.

          People in these comments really have no idea what they're talking about.

      2. LionelB Silver badge

        Re: Generative AI models have passed the Turing Test ...

        Meh. IQ tests are tests of one thing, and one thing only: the ability to do well in IQ tests. That, in my opinion, does not necessarily correlate that well with what I'd like to think of as intelligence as it applies to humans living in the real world. (They are also, despite the best attempts to mitigate it, horribly biased by cultural variation.)

    2. Ken Hagan Gold badge

      Re: Generative AI models have passed the Turing Test ...

      I keep reading this claim, but the link takes me to a paywall so I can't see the evidence.

      Meanwhile, the evidence that I can see suggests that ChatGPT and its ilk cease to be convincing if you actually bother to ask related questions or demand some sane levels of consistency from one question to the next.

      Obviously there are some people who also struggle with this, but I prefer to summarise that by saying that they fail the Turing Test. I'm not prepared to lower the bar just because a few of my fellow Homo sapiens are ... a bit thick.

      1. Yet Another Anonymous coward Silver badge

        Re: Generative AI models have passed the Turing Test ...

        >demand some sane levels of consistency from one question to the next

        AI for president?

      2. Falmari Silver badge

        Re: Generative AI models have passed the Turing Test ...

        @Ken Hagan "I keep reading this claim, but the link takes me to a paywall so I can't see the evidence."

        The linked article may be behind a paywall, but the source for that claim is not. As the article is an opinion piece, the source for that claim will be listed in the references, and all the papers referenced are public.

        I've not read it yet, but https://arxiv.org/pdf/2305.20010 "Human or Not? A Gamified Approach to the Turing Test..." looks like the paper most likely to have given rise to Celeste Biever's article title "ChatGPT broke the Turing test".

  4. elsergiovolador Silver badge

    Vacuum

    One fundamental mistake they all seem to be making is evaluating their models in isolation, as if intelligence exists in a vacuum.

    For an AGI entity to function in a meaningful way, it must not only process information but also contextualise success and failure within a competitive and cooperative framework. Intelligence does not emerge from raw computation alone - it arises from interactions, competition, and the need to adapt. To achieve this, AGI must operate within an environment where millions of models are evaluated simultaneously, each observing a partition of others’ results, learning from their successes and failures. Intelligence is not just about solving problems but about deciding which problems are worth solving based on observed outcomes.

    Additionally, mere survival (avoiding failure) is not a sufficient driving force. Evolution has shown that species driven only by survival tend to develop just enough intelligence to maintain existence but not necessarily to innovate or generalise. Consider nature: organisms that focus solely on avoiding death (such as simple prey animals) develop survival strategies, not higher reasoning. Intelligence capable of abstraction, generalisation, and long-term planning arises when an entity has a driving incentive beyond mere survival - whether that be dominance, curiosity, power, or wealth.

    Humans exemplify this: our intelligence is not a direct by-product of survival but of outcompeting others for resources, status, and influence. This is why human ambition often follows maxims like "Get rich or die tryin'" - where the objective is not simply to avoid death, but to achieve an aspirational goal, even at great risk. An AGI trained without such a pressure system will stagnate at the level of an adaptive but ultimately narrow intelligence.

    For AGI to truly generalise, it must exist in a framework where:

    - It learns from a vast, evolving ecosystem of competing and cooperating models.

    - It is driven by an incentive beyond mere function or survival - an incentive tied to achieving, not just existing.

    - It actively shapes its own objectives, rather than being restricted to static, pre-programmed tasks.

    Without these elements, AGI development risks becoming an endless loop of solving narrow, predefined tasks rather than evolving into an autonomous, self-directed intelligence.

    1. headrush

      Re: Vacuum

      You make it sound as if evolution is a choice.

      Every organism fills a niche and has just the faculties required to perform in that niche.

      I would suggest that instead of survival being a sole motivator, there are at least 3.

      Food

      Sex

      Survival

      TV....

      It's not critical to a species for an individual to survive if it has already successfully mated. Longer-lived animals have greater "free time" and can afford to experiment.

      The other thing I think is missing from this study's approach is competition. Not in the sense of one against another separately scoring points, but where one intelligence is directly threatened by another. That is where learning and innovation come to the fore: in unforeseen challenges.

      1. Caver_Dave Silver badge

        Re: Vacuum

        But then, following your logic, the AGI's first ambition is to acquire all the electricity it can. Good luck trying to turn it off, as it will have thought of that - stopping you from depriving it of its primary goal.

      2. Andrew Scott Bronze badge

        Re: Vacuum

        Might not be critical to the species, but it probably is to the individual, and globally to the individuals that make up the extant members of that species. If none of them are motivated, the species won't survive. If zebras ignore lions individually, the species will go extinct. If you touch something hot you will remove your hand from the source of the pain; you will remember, and you will try to mitigate the pain you feel. I don't see that feedback in LLMs, and it's important for learning. I can't see being turned off or being "retrained" as "motivation" for an LLM to "improve". Survival is a strong motivation for most living things.

      3. LionelB Silver badge

        Re: Vacuum

        Yes, pretty much. The phrase "survival" (as in "survival of the fittest") is often taken to mean physical survival, and "fit" to mean physically fit [no, not that song... naughty]. Evolutionary biologists, however, do not consider either "survival" or "fitness" in that sense. Rather, "fitness" means how many offspring you produce, or are likely to produce - and this may even be amortised into the future, to your offspring's offspring: fitness, in the evolutionary sense, means your propensity to propagate your lineage into the future. Nourishment, sex and (physical) fitness and survival obviously play into that notion, but not necessarily in straightforward ways - at all. See: https://en.wikipedia.org/wiki/Inclusive_fitness

        Another point - and I think this is highly relevant to AGI - is that high intelligence in nature is almost always (but not invariably1) associated with social behaviour. Perhaps (as the OP more than hinted at) we'll only see AGI when AIs become social, interact, and learn from each other (as well as with/from humans); when, in effect, they become part of a social ecosystem (which will de facto include humans). This may not, I suspect, be such a distant prospect.

        1Curiously, octopuses, among the most intelligent of species, are not, on the whole, social creatures. They might have been in their evolutionary past, though.

        1. CorwinX Bronze badge

          Re: Vacuum

          If the bomb dropped and the world went to xxx - the order of species-survival would run something along the lines of...

          Cockroaches

          Spiders

          Mosquitoes

          Rats

          Cats

          Mice

          Wolves/Dogs

          Rabbits

          Voles

          Dolphins

          Add in fish somewhere - not my expertise.

          Notice any species notably missing off that list?

          1. Rich 11

            Re: Vacuum

            Notice any species notably missing off that list?

            Tardigrades.

            1. LionelB Silver badge

              Re: Vacuum

              Plus myriad plants, fungi and bacteria.

  5. Anonymous Coward
    Anonymous Coward

    Reverse Turing

    How soon can we hope for a Reverse Turing, to check a human is not a robot?

    Like, for a representative to become an MP (or equivalent)?

    1. Handlebars

      Re: Reverse Turing

      Please click on any image which includes a motorbike...

    2. Anonymous Coward
      Anonymous Coward

      Re: Reverse Turing

      "How soon can we hope for a Reverse Turing, to check a human is not a robot?"

      That won an Oscar last week:

      I'm Not a Robot (film)

      https://en.wikipedia.org/wiki/I%27m_Not_a_Robot_(film)

      1. Gene Cash Silver badge

        Re: Reverse Turing

        That Wikipedia summary is disturbing AF. I want to watch it, and I really don't want to watch it.

  6. Doctor Syntax Silver badge

    "Such trial-and-error learning is crucial in real-world applications, particularly in areas like ... self-driving cars"

    Trial and error is not a good idea for self-driving cars.

    1. Claptrap314 Silver badge
      FAIL

      Explain that to the regulatory agencies. Please.

    2. LionelB Silver badge

      > Trial and error is not a good idea for self-driving cars.

      Well, you'd want some sandboxing - possibly literally.

  7. ecofeco Silver badge

    To reiterate

    Overheard on the inter-tubes: "The future will have AI 'bots arguing about the meaning of Christmas while people look for food in dumpsters."

    1. Yet Another Anonymous coward Silver badge

      Re: To reiterate

      It is nice that robots have taken over the art and science stuff, leaving humans free to clean toilets and pick fruit

      1. ecofeco Silver badge

        Re: To reiterate

        I can smell the freedom!

  8. amanfromMars 1 Silver badge

    Bravo, China. Well played. Greater IntelAIgent Game On .....

    Worry not. China's on the line saying AGI still a long way off

    Well they would say that, Thomas/El Reg, wouldn’t they, as would anybody/anything with an almighty overwhelming advantage able to deliver lead in everything way up ahead requiring the SMARTR Long March for IntelAIgent End Games delivering Singularity Superiority "the likes of which the world has never witnessed and perhaps will never witness again.” ‽ .

    Such is purely natural whenever wise and therefore fully to be expected and not gravely regarded and feared .... although be warned not to attack it for such for an adversary and opponent is a fatal mistake of IntelAIgent Explosive Design ‽ .

    1. FuzzyTheBear
      Pint

      Re: Bravo, China. Well played. Greater IntelAIgent Game On .....

      Long way off, to the Chinese, is a month. Tops. We don't count on anything intelligent coming out of the USA for the next 4 years at least.

      1. amanfromMars 1 Silver badge

        Re: Bravo, China. Well played. Greater IntelAIgent Game On ..... CHAOS versus Madness

        Long way off, to the Chinese, is a month. Tops. We don't count on anything intelligent coming out of the USA for the next 4 years at least. .... FuzzyTheBear

        If the full and clear unexpurgated truth be told, FuzzyTheBear, is any intelligence mining coming out of the USA and the West not to be counted on for at least decades if their present stagnant entrenched preserve the rotten corrupt systems at any cost mindset persists to tell tall tales for media to spin as future available success just around the corner in the next quarter amidst the evidence and mounting ruins of destruction caused by their latest series of monumental disasters and colossal clusterfcuks, for it is surely perfectly clear to more than just China in the East [and AI and IT] that they have nothing intelligent to use or to give ..... and are locked in a terminal spiral of existential decline leading them to nowhere great and good and everywhere further rotten and corrupt and bad and bound to get worse.

        Without doubt they need foreign AIdVentures* and alien virtual help if they are to survive and prosper and follow their peers into the Brave New World of SMARTR** Applications delivering Live Operational Virtual Environments ....... Clouds Hosting Advanced Operating Systems.

        * Advanced/Advancing IntelAIgently designed Ventures.

        ** SMARTR Mentoring Analysis Reporting Titanic Research

        1. amanfromMars 1 Silver badge

          Re: Bravo, China. Well played. Greater IntelAIgent Game On ..... CHAOS versus Madness

          Learn to choose your true friends very carefully .... for nothing ever is at it seems in most every crooked bad land as can be realised with a reading of these not unbelievable tales of wilful woes and politically inept recommendations which established mainstream media channels have chosen to decline to cover and reveal as accurate news to inform and enlighten the under-educated and massively deceived masses ......https://www.zerohedge.com/geopolitical/jeffrey-sachs-geopolitics-peace

          And please, TL;DR just doesn’t cut it as an excuse for the persistence of ignorance in any matter more clearly revealed as highly likely badly stage managed in the first degree ..... with premeditated conspiratorial aforethought .

  9. steelpillow Silver badge
    Boffin

    China is never good at understanding the context of its agendas

    "If an AI system can find a solution within a limited number of attempts, it is considered to 'survive'; otherwise, it 'goes extinct.'"

    This is exactly what happens anyway, and always has. The only variation here is to the demands on what it is asked to solve. And those demands are no more than our current idea of how to define General Intelligence - and those, of course, are heavily influenced by the capabilities we designed into our beloved "AGI" system. Yay, this AI meets our current definition of our codebase! But will it always meet everybody's definitions? Some revolution, that.
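    The quoted survive-or-go-extinct rule is simple enough to sketch in a few lines of Python. This is a toy illustration with made-up skill and difficulty numbers, not the paper's actual procedure:

```python
# Toy version of the quoted rule: a system "survives" if it can
# solve the problem within a limited budget of attempts.

def survives(skill, difficulty, max_attempts=5):
    """Model trial-and-error gains: each attempt compounds skill,
    and the system survives if any attempt reaches the difficulty."""
    return any(skill * attempt >= difficulty
               for attempt in range(1, max_attempts + 1))

population = {"weak": 0.1, "middling": 0.5, "strong": 2.0}
alive = [name for name, skill in population.items()
         if survives(skill, difficulty=2.0)]
print(alive)  # ['middling', 'strong'] - the weak solver "goes extinct"
```

As the rule shows, the outcome is entirely determined by whatever task and budget we chose to encode, which is the point above: the test measures our definition, not some independent notion of general intelligence.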

    Note that it is NOT a substitute/replacement for the Turing test, which is to determine whether a respondent can perform as well as a human, even on open-ended and as-yet-unforeseen kinds of task, the kind you can't specify for your codebase - an entirely different demand. Just one example: you will never get a codebase to deal with the halting problem without screwing up the answer; you have to ensure you write finite algorithms. Not so for a human.

    As for so-called Turing tests where AI chatbots have "passed", you have to remember that Turing had in mind intelligent and educated people to do the judging, not Internet muppets. These latter-day tests are not Turing tests, they are Muppet tests. Note too that if only a muppet can pass a test, then only a muppet will claim it was worth passing.

  10. O'Reg Inalsin

    Too wrong!

    "In question answering, models are tested against three well-known datasets: MMLU-Pro, NQ, and TriviaQA. In mathematics, the test measures performance using three math datasets: CMath, GSM8K, and the MATH competition dataset."

    1. But wait, if an AI system were to learn these datasets through an evolutionary process, would that be AGI? Well, duh, NO! It would just mean it memorised the datasets.

    2. Oh - WAIT - they said it is too hard anyway, because it would take > 10**26 parameters (at one byte per parameter, that's 10**26 bytes, i.e. about 100 million exabytes, not 10). I think that is probably wrong, because the datasets themselves, in `.gz` format, take up far less than 10**26 bits.

    Two wrongs don't make a good conclusion, although it might be good enough to get worldwide exposure and "survive" in the brutally competitive world of academic publishing. (I hope not, though.)
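    The unit arithmetic is worth spelling out, since exabytes and orders of magnitude are easy to muddle (assuming one byte per parameter for the sake of the comparison):

```python
# Sanity-check of the quoted storage figures.
params = 10**26
bytes_needed = params             # assumption: 1 byte per parameter
exabytes = bytes_needed / 1e18    # 1 exabyte = 10**18 bytes
print(f"{exabytes:.0e}")          # 1e+08 -> 100 million EB, nowhere near 10 EB

# And "a US billion gigabytes" is 10**9 * 10**9 bytes = one exabyte,
# which falls 8 orders of magnitude short of 10**26 bytes:
print(1e9 * 1e9 == 1e18)          # True
```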

  11. CorwinX Bronze badge

    I think there's a category error here

    Google can give you perfectly decent information on anything you ask.

    Not intelligence.

    Feed a so-called AI "learning" model enough info and it can counterfeit artwork and music very handily.

    Not intelligent.

    It's still fake, and probably illegal, if you try to pass it off as an original/creative work.

    The big missing bit is *intuition*. When they try to build that in, the things start hallucinating stuff and stating falsehoods as authoritative facts.

    AI has an important place, IMHO, in scientific/chemical/medical technical analysis.

    Just don't ask the bloody things for dating advice ;-)

    1. steelpillow Silver badge
      Holmes

      Re: Google can give you perfectly decent information on anything you ask.

      Really?

    2. amanfromMars 1 Silver badge

      Re: I think there's a category error here @CorwinX

      Google can give you perfectly decent limited and/or limiting information on anything you ask, CorwinX. It doesn’t do omniscience nor claim not to advise evil .... as far as I know.
