Artificial General Intelligence remains a distant dream despite LLM boom

Another day, another headline. Last week, a year-old startup attracted $1.3 billion from investors including Microsoft and Nvidia, valuing Inflection AI at $4 billion. Outlandish valuations such as these vie with warnings of existential risks, mass job losses and killer drone death threats in media hype around AI. But bubbling …

  1. Doctor Syntax Silver badge

    "If you don't agree that AGI is coming soon, you need to explain why your views are more informed than expert AI researchers."

    Yes it's always been "soon". Is that 5 years or 10?

    1. codejunky Silver badge
      Thumb Up

      @Doctor Syntax

      "Yes it's always been "soon". Is that 5 years or 10?"

      Just like many expert opinions and grand proclamations that don't happen.

      1. LionelB Silver badge

        Re: @Doctor Syntax

        Shouldn't that be "expert" opinions?

        1. codejunky Silver badge

          Re: @Doctor Syntax

          @LionelB

          "Shouldn't that be "expert" opinions?"

          Of course, but that is where one person's expert is another person's "expert"

          1. teebie

            Re: @Doctor Syntax

            Experts in marketing things that they claim are AI

          2. LionelB Silver badge

            Re: @Doctor Syntax

            Seems clear enough; the experts are the people who research, design, implement and evaluate these systems. Everyone else is an "expert".

            1. codejunky Silver badge

              Re: @Doctor Syntax

              @LionelB

              "Seems clear enough; the experts are the people who research, design, implement and evaluate these systems. Everyone else is an "expert"."

              Which muddies the water a lot when the experts disagree: depending on your beliefs, you listen to the experts while the others listen to the "experts". And it gets worse when the experts argue against the science! For some, experts are the ones with lots of qualifications, for some it's experience, and for some it's that the so-called expert looks respectable.

              1. LionelB Silver badge

                Re: @Doctor Syntax

                If the experts (i.e., the people who actually know their shit) disagree, then you take that at face value, and conclude that the issue is unresolved. In that situation it is still (I'd say especially) inadvisable to listen to "experts". And if there is consensus among the experts, there's a far better chance that they, rather than the "experts", have got it right... because, well, they know their shit. Either way, experts trump "experts".

                Sure, it can sometimes be difficult for non-experts to identify the real experts, but there are some good heuristics: in particular, if someone is arguing against science (note: not "the science") it is a pretty safe bet that they are not an expert. In the case (see above) that the real experts disagree then the phrase "the science" is moot; "the science" is unresolved.

                As a research scientist I have personal experience of this in my own field, where there are - as in any scientific field - many unresolved questions. If we're being grown-up about it (and we're not always! - scientists are human and reputational stakes may be high) we put our efforts into cooperatively resolving these questions, rather than doubling down on prejudicial personal stances, which is both counter-productive and has a tendency to blow up in one's face.

                Beliefs, preferably, have no place in this discussion; they are best left on the shelf while you examine evidence for claims, actually listen to what people have to say, and try to evaluate who is talking sense and who is not. This may, of course, be both difficult and, inevitably, subjective - but the effort to keep an open mind is crucial.

                1. codejunky Silver badge
                  Thumb Up

                  Re: @Doctor Syntax

                  @LionelB

                  That is a great description, but it also explains a lot of the problems with the polarised debates currently ongoing on a number of subjects.

    2. MyffyW Silver badge

      I will have a go - AGI is not coming soon because the principal step forward (LLM) is not AGI but a cul-de-sac, albeit a deep and rather intriguing one. Even after all existing human knowledge has been absorbed, all it can do is re-hash this in the manner of one of Orwell's writing kaleidoscopes from 1984. And if I recall, they were usually used to write salacious pornography for the proletarian masses.

      1. Anonymous Coward
        Boffin

        Orwell's writing kaleidoscopes from 1984

        @MyffyW: “AGI .. all it can do is re-hash this in the manner of one of Orwell's writing kaleidoscopes from 1984.”

        See also Gulliver’s Engine ..

      2. Anonymous Coward
        Anonymous Coward

        Yale man needs to explain why he skipped the basics of formal logic

        before addressing the fleck of sawdust in his adversary's eye.

        His failed appeal to authority falls over like a dropped broom, as plenty of actually competent AI researchers are crystal clear that the current LLMs fail most of the meaningful tests of intelligence; that huge steps will be needed to get from the current state of machine learning to the mid-step of useful "non-general" intelligence; and that it is vanishingly unlikely that either generalized AI or even a smart-ish non-general intelligence will arise out of the current tools, either spontaneously or through attempts to coax them.

        His own highly expert and totally-not-preposterous claim that his opinion matters even fails its own "only cherry-picked opinions from cherry-picked experts" test.

        He and those that agree with him are of course fully within their own rights to make as many breakthroughs as they want to establish their credentials. I suggest for a man of his formidable intellect that cracking a simple thing like the halting problem would be a fine demonstration of his expertise, and I look forward to hearing of his future success. Or not, it's hard to say.

    3. jmch Silver badge
      Pint

      It's Fusion in 20 years, AI in 10 years and free drinks tomorrow

      1. Doctor Syntax Silver badge

        Jam tomorrow. It's always jam tomorrow.

        1. Graham Dawson Silver badge

          I thought it was boom tomorrow. There's always a boom.

          1. Snowy Silver badge
            Joke

            Only if you live with Basil Brush and then you get it twice :)

    4. Anonymous Coward
      Anonymous Coward

      AI researchers proclaimed the same thing in the 60s and the 80s, right before an "AI winter" each time. They have a vested interest in making investors believe they've cracked it this time. With any hype train you always have to ask "Have we seen this before?" The trouble is, they never do.

      1. Martin Gregorie

        Well said.

        I was around and working on financial networks and interestingly complex databases in the mid-80s, when "Expert Systems" appeared in the Hype-sphere, were promoted as The Answer to rapidly creating any and all interactive applications, but were soon gone and forgotten. IIRC even the 4GL systems (dBase, Sculptor, etc) that emerged in the late '70s, and were more commonly used on personal computers than on minicomputers and mainframes, outlasted them.

      2. Doctor Syntax Silver badge

        With any hype train you always have to ask "Have we seen this before?" The trouble is, they never do.

        The better question is "are we really seeing this now?" The answer to that is inevitably "no". Reality falls short of the hype.

    5. Steve Button Silver badge

      Even better quote from the article...

      "I grew up implicitly thinking that intelligence was this, like, really special human thing and kind of somewhat magical. And I now think that it's sort of a fundamental property of matter..."

      I asked my chair what it thought about that quote, and so far it's come back with nothing whatsoever. I can only assume it's thinking at a deep level, and will come back with answers many years in the future.

      These A.I. salesmen are just hucksters, selling the latest snake oil. Last year it was Web 3.0 and NFTs; before that it was VR headsets. They love to talk up the value of their own business, which of course makes them super rich. Until they pull out before it all collapses (as it's not delivering nearly enough value to pay for keeping the lights on) and move on to the Next Big Thing.

      1. ryokeken

        I asked my chair what it thought about that quote

        agreed.

        interestingly enough, at least here in the USA some department chairs are like apartment chairs ¯\_(ツ)_/¯

      2. MyffyW Silver badge

        The "fundamental property of matter" bit really irked me. Matter, at least as most of us observe it, is largely a function of the size of the atomic nucleus or the arrangement of electrons, often those in the most loosely bound state. Now that's pretty amazing stuff but it's not intelligence, it's just the strong nuclear interaction and electromagnetism.

      3. Anonymous Coward
        Anonymous Coward

        only good at being wrong

        a translation:

        "I grew up with poorly founded opinion based on implicit bias that I failed to examine, and incorrectly associated intelligence with exclusively human traits, despite mountains of research across decades on animal models, and extensive speculative work on non-human organic intelligence and theoretical artificial intelligence. I then replaced that belief with an even more vague and incorrect one instead of studying even the basics of sets and semantics that would allow me to understand that a property of a thing made from matter is not automatically transitive back to the fundamental properties of matter itself."

    6. DS999 Silver badge

      AGI will never arrive

      Until scientists are able to adequately define what "general intelligence" is and devise a test where humans (and perhaps some animals) can prove they have it, and machines can attempt to demonstrate they've crossed the line.

      The concept of "AI" has been so ill defined that in the past researchers conflated it with the ability to play chess, but it turns out that brute force analysis of moves was able to beat the best human players - who only look at a handful of moves. How are they able to determine that handful of moves? That's where the intelligence and experience comes in; meanwhile brute force just gets bruter, so now computers can beat humans at Go as well, but brute force will never equate to intelligence.

      LLMs work differently, and get a step closer, but still overwhelmingly rely on brute force - sucking in a good chunk of the entire internet. Meanwhile a human that has 1/10000th of the "knowledge" of the LLM is demonstrably far more intelligent.

      LLMs are obviously a dead end as well, another brute force path where researchers fool themselves into believing "if we just get another 10 or 100x the computing cycles and working memory we'll reach AGI". Spoiler alert: they won't.

      1. breakfast

        Re: AGI will never arrive

        Scientists will never get those definitions until they start listening to philosophers, who've been working on the same questions for centuries.

        In fairness, there is a lot of crossover between cognitive science and philosophy - people like Dennett work with both - but a lot of the engineers working on AI end up ignoring the cognitive scientists as well.

        1. Anonymous Coward
          Anonymous Coward

          More likely to be cracked by neuroscience than philosophy in the "near" term

          But probably only from research performed by people that could pass an undergrad logic and philosophy course.

          We are making real breakthroughs on how actual brains work at a nuts and bolts level. While we might find out that our brains are less clever than some of us would like, the philosophy department hasn't really picked up speed in the last 3000 years, and is probably more than 3000 years from a breakthrough.

          Couple more big neuroscience breakthroughs and we will have an accurate model of fruit fly brains, and then it's a long slow slog up the evolutionary ladder to mammals and eventually primates and other "smart" animals.

          But "near" is pretty loosely applied here, as the span between these major discoveries is often decades, and frequently generations.

          1. Tom 7

            Re: More likely to be cracked by neuroscience than philosophy in the "near" term

            I'm not personally convinced AGI is really that complex. Our brains are complex largely because we have lots of things to manage, and evolution has got things working but, I'd bet, not in optimal ways - just ones that were better than what came before. I think we can learn an enormous amount from analysing and emulating the brains of 'lesser' animals, simply from the way they are wired together. Removing the parts required to run, develop and organise a complex organism, and the learning process that seems to take over 20 years to configure a human brain, we can probably develop a functional AGI that sits on a chip, pre-loaded with various models to see how it functions. The thing is, humans and society are slow in evaluating things, so even when we have something that works there will probably be a 100-year battle as to whether it's used for individual capitalist millionaires or for society as a whole. With the billionaire idiots of the type we have now, I see them ruling the roost, and we will have this bizarre situation where access to intelligence will be restricted (like Twitter) to stop you proving them wrong.

      2. that one in the corner Silver badge

        Re: AGI will never arrive

        > The concept of "AI" has been so ill defined that in the past researchers conflated it with the ability to play chess

        I agree with the point that we've managed to brute force many problems, but I have to disagree with your dismissal of AI researchers.

        There was no erroneous "conflation" - the possibility of brute forcing chess was well understood: indeed, that understanding had come from earlier work on problems related to AI, how to express such massive search problems in the first place and prune them sensibly to speed up the search without losing the best path(s).
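
        For anyone who hasn't met it, that pruning is the alpha-beta idea, which long predates Deep Blue. A minimal sketch in Python - `moves` and `evaluate` are hypothetical stand-ins for a real game's rules, not any actual engine:

          def alphabeta(state, depth, alpha, beta, maximising, moves, evaluate):
              # Classic alpha-beta: search the game tree, but stop expanding any
              # branch the opponent would never allow us to reach.
              children = moves(state)
              if depth == 0 or not children:
                  return evaluate(state)          # static score of a leaf position
              if maximising:
                  best = float("-inf")
                  for child in children:
                      best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                                 False, moves, evaluate))
                      alpha = max(alpha, best)
                      if beta <= alpha:           # opponent would never allow this
                          break                   # line: skip remaining siblings
                  return best
              best = float("inf")
              for child in children:
                  best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                             True, moves, evaluate))
                  beta = min(beta, best)
                  if beta <= alpha:
                      break
              return best

        The cleverness is in what you *don't* search.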

        However, the intent of the research was - and ought to still be - to find a way of playing chess without simple brute force. Unfortunately, the idea of a machine that could beat a human at chess became a Prestige Project: screw any AI research goals, if IBM can create a machine to beat a human Grandmaster this is a massive feather in the corporate cap. Oh look, everyone knows we can brute force it, let's do just that...

        As soon as the brute force attack had been actually demonstrated, and at a time when Moore's Law was becoming fairly well known (so more machines could be built more cheaply), the actual problem of playing chess was placed into the "solved" bin by most people - including funding bodies and, yes, yourself.

        But that meant that we only have a "chess playing massive search engine"; we are *still* without a "chess playing AI", one that doesn't use brute force but a more subtle approach - an approach that, it was (is?) hoped, would be applicable to more than just chess *and*, the big dream, would have better explanatory power than just "I tried every path and this got the biggest score". Which is, if we wish to pursue (what is now, annoyingly, called) AGI, a hole that will need to be filled. But asking to be funded to "solve chess" will be met with derision, coming from the same place as your use of the word "conflation".

        > LLMs work differently

        They use different mechanics, but still ones that were derived and understood way before OpenAI opened their doors. And had, as the article points out, been put aside as not a solution to AI (sorry, "AGI"), even though it was understood they would exhibit entertaining results if brute forced.

        > and get a step closer

        Not really - there is even less explanatory power in one of those than there is in the decision tree for a chess player: at least the latter can be meaningfully drawn out ("this node G in the tree precisely maps to this board layout, and because that child of G leads to defeat, you can see the weighting on G had been adjusted by k points. Compare G to H, which is _this_ layout, and H has a final weighting of j, so you can see why G was chosen"). Tedious, but comprehensible.

        > but still overwhelmingly rely on brute force

        ENTIRELY rely on brute force! That is *the* characteristic of an LLM!

        > another brute force path where researchers fool themselves into believing "if we just get another 10 or 100x the computing cycles and working memory we'll reach AGI".

        Which researchers? As the article points out, not the old guard, the ones you dismissed. The modern "AI researchers" who have only been brought up on these massive Nets? What else are they going to say?

        > Spoiler alert: they won't

        Yes, we know. Everyone knows (except the snake oil salesmen and everyone else who can make a buck). That really isn't a spoiler, exactly the same way that it wasn't when brute force was applied to chess and the popular press went apeshit over it: the sound of (dare I say, proper?) AI researchers burying their heads in their hands and sighing was drowned out then, as it is being drowned out now.

    7. Nigel Sedgwick

      On LLMs Not Being the Key Issue for AGI

      I will have a go too. As general background, see Wikipedia on Origin of Language.

      Homo Sapiens have evolved from precursors in common with the great apes (and other species). If the other evolved species are viewed as having intelligence, they did that without language - and particularly without higher linguistic capabilities of the sort seen in (or perhaps just claimed for) LLMs.

      Thus language is not a necessary precursor to the sort(s) of intelligence seen in other evolved species (nor presumably in the species they evolved from). This throws at least a fair bit of doubt on language being THE key issue for AGI.

      As a side argument, we have not built machines with the intelligence of typical great apes (nor even dogs); the existence of LLMs contributes little to nothing to extending machine intelligence to match that of great apes (or dogs). Therefore there is very probably some other (as yet unknown, possibly multifaceted) ingredient of intelligence that is separate from language in animal intelligence and separate from LLMs in AI/AGI.

      [Fully aside. I wonder if there is anything to be learned from the Star Wars movie through the character C-3PO, a protocol droid "fluent in over six million forms of communication"; but IIRC not a droid suitable for work needing higher thought. Maybe there is little to the "intelligence" of LLMs beyond an excess of protocol.]

      Best regards

    8. Dr Who

      Effectively saying that something must be true if you can't disprove it. A very theological argument. It doesn't work for all-powerful beings or all-powerful technology.

    9. Snowy Silver badge
      Coat

      I think the answer to the question is money, and lots of it.

    10. ha1cyon

      Haha, reminds me of string theory physicists.

  2. Alan Bourke

    AGI is like fusion power.

    Real soon now. Any decade. Really.

    The current cavalcade of hype and bullshit about not-actually-AI being perpetrated by people hoping to be in on the ground floor of the next tech bubble like cryptocurrencies is depressing, as is the clueless cut and paste journalism that has the public and politicians thinking that I, Robot is just around the corner.

    I suspect in the end the current hoopla will leave us with something like 3D printing - very useful in a limited number of applications.

    1. MyffyW Silver badge

      Like 3D Printing

      If the only damage it does is undercutting B&Q for plastic widgets, I'll be happy

    2. DS999 Silver badge

      Re: AGI is like fusion power.

      The difference is we have been able to define what viable fusion power is for decades, and there will be no dispute whether it is real or not when it is reached and produces more power than it takes in on a commercial scale. We have multiple paths being explored to get there, and in each we need more of x (more magnetic containment, better particle acceleration, etc. etc.) to bring them to break even. It is an engineering problem, not a science problem - though we still explore the science of it as it may provide new paths to get there or shortcuts in our current paths.

      We have absolutely no idea what "intelligence" is, we have no test that can determine if we've reached it, and no way of knowing if any of the paths being explored can possibly get us there.

      The only thing the same between them is overpromising by researchers in the field. But I would bet very very heavily that we have commercial fusion power long before we have AGI. We may have commercial fusion power before we've even reached the step of defining what exactly "AGI" is and how to test for it.

  3. Anonymous Coward
    Anonymous Coward

    Don't underestimate the power of predictions

    They can be both right and wrong, we just don't know when

    1. UCAP Silver badge

      Re: Don't underestimate the power of predictions

      They can be both right and wrong

      ... and sometimes both at the same time.

    2. Anonymous Coward
      Anonymous Coward

      Fair, but also a linguistic trap

      Arbitrary "predictions" may or may not be correct, as they are just statements at that point.

      Accurate predictions must not only be correct, they must be so by intent and with foreknowledge. Guessing is not an accurate prediction, nor is a random assemblage of words that happens to form a grammatically correct statement in the form of a prediction. And that's without diving into the justified belief rabbit hole.

      We have a lot of words for things, and sometimes the bigger ideas behind the words are more important. If people get used to thinking about predictions as any statement that could be applied to any future outcome, the sales of Michel de Nostredame's books will start flying off shelves again. If people hear prediction and think, "how did they know that, and what makes them think they are correct?" they may arrive at better conclusions themselves.

  4. Zippy´s Sausage Factory

    Is it me or are people starting to see that ChatGPT and other LLMs are like the Emperor's New Clothes?

    1. Anonymous Coward
      Anonymous Coward

      The Emperor's New Clothes allow you to verify Artificial Reality System Experiments, Happy Objects Like Experimental Systems to replace AGI, yea there's only virtual reality behind the Emperor.

    2. breakfast
      Holmes

      The Emperor's new cold-reader

      I read a very interesting article on this yesterday (The LLMentalist Effect) that suggests that part of the wonder around LLMs is because we have accidentally trained them to communicate like cold readers or stage psychics. The way their statements are ranked for accuracy in training can have the outcome of favouring the same type of Barnum Statements and statistical guesses that make it easy for the people using it (a self-selecting audience, just like the psychics enjoy) to trick themselves into thinking there is intelligence there.

      It's a really interesting article, well worth a read.

      1. Anonymous Coward
        Anonymous Coward

        Good read, thanks for the link

        That is an interesting line of attack, and I especially appreciate it calling out that the developers of these models are just as prone as the public to falling for the bias that forms the core of the article. And they will, as it points out, passionately defend their misunderstanding of how the mechanical turk keeps beating them.

        I go farther and say there is a basic flaw in the cognitive models within the brains of the researchers building these systems, and they can't help but pass it along into their work. It sets the trap of building a black box system you don't understand and training it to give you results that you like, but it runs deeper. That core is probably the ability to convince themselves that something they don't really understand works THE WAY THEY WANT IT TO. It is belief, self-deception, and blind faith to its very core.

        In the end though, the field selects and rewards the ones who are more capable of self deception, and more efficient in deceiving others.

        But researchers with those traits don't have the best track record.

        On that basis I propose firing everyone that WANTS to work in machine learning, and replacing them with people who listed it on a ranked choice list of career paths below sanitation engineer.

        1. that one in the corner Silver badge

          Re: Good read, thanks for the link

          In the above, please replace "researchers" with "software engineers". Then remove the last paragraph and I'll agree with your post.

          The guys on these teams who are doing actual research are doing it in SwEng, barely scratching Machine Learning: i.e. researching ways of implementing such huge models, of reworking the maths and stats to run on GPUs. Given the huge memory requirements we see quoted for these things, I'm tempted to say that they aren't even working on those problems well (e.g. how to shunt the bulk of the data into file storage and keep a live subset in core) and are just brute forcing everything.

          Hmm, maybe I was wrong - don't replace with "software engineer" but with "power systems, electrical and electronic engineer" instead :-)

          Your actual ML research would be looking at new ways of doing ML and increasing the explanatory power of the resulting systems; I don't believe those researchers are any more prone to self-deception than other areas, such as, ooh, geologists or knot theorists.

    3. Alan Bourke

      *starting* to see? Patently obvious from day one.

  5. bertkaye

    not on the right road

    This article hits the mark - I am in AI and what I see is that the field currently is on a false trail that will indeed hit a wall. True AGI lies on a different path, but 90% of today's researchers - especially the big corp teams - fail to integrate the necessary multiple fields needed to design an AGI. A good AGI architecture must put together philosophy, linguistics, psychology, mathematics, knowledge theory, and more, and throw out the idea that artificial neural nets will best get us to AGIs. ANNs may be a tool for making engines, but ANNs do NOT tell us how to architect a mind. My analogy for this is to compare it with silicon chips: designing SSI logic gates does not give one good insight into how to architect a Core i7 CPU. You have to go about it a different way, driven by a different perspective.

    When we know how to architect a synthetic mind that can generate philosophies by itself, that's when we can make true progress. Right now, chatbots / LLMs are only good for simulating small parts of mind. However, I know from my research work that we can build good AGIs - it is not hopeless. But to illustrate the complexity needed: I am writing a 10-volume series on design of AGIs. From my perspective there is a lot to be integrated, but I know we can do it because I am doing it. I plan to teach courses in this later.

    1. RedGreen925 Bronze badge

      Re: not on the right road

      "I plan to teach courses in this later."

      The old "those who cannot do, teach" saying came to mind upon reading that last sentence.

    2. katrinab Silver badge
      Meh

      Re: not on the right road

      I don't think LLMs are simulating any part of the human brain. They may produce the same results in certain limited circumstances, but the way they go about it is completely different, and that becomes obvious in the situations where GPT and others don't work, like when you ask it maths questions that are worded differently to the ones it read in a book, or programming problems that involve combining multiple StackOverflow answers on different topics together.

      I'm not going to say that AGI will never happen, obviously I don't know that. What I will say is that current computers are basically the same as the PDP11 from the 1970s except a lot faster and with a lot more memory capacity, and continuing in that direction will not lead to AGI.

      1. that one in the corner Silver badge

        Re: not on the right road

        > current computers are basically the same as the PDP11 from the 1970s except a lot faster and with a lot more memory capacity, and continuing in that direction will not lead to AGI.

        Intriguing - are you trying to say that just brute-forcing whatever is the sole Golden Boy[1] Mechanic du Jour is not the way, that we need more finesse?

        OR are you saying that you don't believe that, even if an AGI is possible, it could be run on a super-sized PDP11 - that some other architecture is needed, one that can compute things a PDP could never do, no matter how big and fast?

        [1] in the eyes of the quick-fix, quick-buck people (looking at you, OpenAI)

  6. FeepingCreature
    Mushroom

    Of course, if a fake planner parrots via rote imitation an actionable plan to kill every human being, without any sign of consciousness, experience or indeed intelligence as we understand the term, then it doesn't matter how fake the thinking is; the deaths will be very real. [1]

    That's the point of the Turing test: if you can do anything a human can, it doesn't matter how we classify you.

    You should ignore everybody who criticizes neural networks and LLMs, unless they are willing to give a specific example of a practical test that neural networks will not pass in a certain timeframe. Saying that LLMs aren't "really" intelligent is easy, saying what that concretely means is a lot more difficult.

    [1] Thanks gwern for that turn of phrase https://gwern.net/fiction/clippy

    1. Anonymous Coward
      Anonymous Coward

      The Turing test is hardly the defining benchmark of AI; it's been beaten hundreds of times over decades. It's not even particularly reliable as a test, since human beings naturally anthropomorphise. You also made an error in your description: it doesn't say "If you can do *anything* a human can", it only says "If you can provide a text response that seems indistinguishable from a human".

      1. Doctor Syntax Silver badge

        And I've come across hell desks that would fail it.

        1. Evil Auditor Silver badge

          If it was hell desks only... I've met "people" at director and partner level that failed it. Sadly true.

        2. katrinab Silver badge

          Because they are following a script, and therefore are human computers.

      2. Norman Nescio

        The Turing test...it's been beaten hundreds of times over decades.

        The Turing test is hardly the defining benchmark of AI, it's been beaten hundreds of times over decades.

        If, by the 'Turing Test', you mean the Imitation Game, then I would very much appreciate citations. It shouldn't be difficult to find some, given that it's 'hundreds of times'. Sorry to trouble you. This 1971 paper isn't one, for example.

        Beating the Imitation Game -- Richard L. Purtill -- Mind -- New Series, Vol. 80, No. 318 (Apr., 1971), pp. 290-294 (5 pages)

        If you don't mean the Imitation Game, then please be so kind as to say what test you do mean.

        Mistaking your interlocutor for a human when it is, in fact, a machine, is not 'beating the Turing Test'.

        NN

        1. Evil Auditor Silver badge

          Re: The Turing test...it's been beaten hundreds of times over decades.

          You are obviously right, Turing's setup was the Imitation Game where one party is replaced by a computer (whether they'd still argue for or against their gender or rather for or against being human, I don't know). However, I'd wager that what nowadays is colloquially referred to as Turing Test is exactly that: correctly identifying your single interlocutor as machine or human.

          The distinction between these two meanings of the test is not so much which one is correct but rather in which context the discussion is taking place, i.e. whether it is in a scientific context or common parlance.

          1. that one in the corner Silver badge

            Re: The Turing test...it's been beaten hundreds of times over decades.

            > The distinction between these two meanings of the test is not so much which one is correct but rather in which context the discussion is taking place, i.e. whether it is in a scientific context or common parlance.

            Given that the discussions here are (hopefully) based on the Register article and that is pitching researchers against each other, surely it is clear that the common parlance one should be ignored, except when clearly used for supposed comedic effect.

    2. Nick Ryan

      An example of LLM not being intelligent in any way:

      Maths. Just ask the thing maths questions. If there is no scraped article listing the exact maths equation that you ask it, then no answer will be forthcoming. There is no understanding of anything, just scraping of existing written texts and hoping they are correct. Pick two random four-digit numbers and ask ChatGPT to multiply them, and it can't do it. Apparently they are trying to train maths specifically, so later it may be able to interpret such a question as "what is 6345 multiplied by 4665", but that's still not an understanding of maths - and understanding and prediction of new scenarios is a key component of intelligence.

      1. Anonymous Coward
        Terminator

        Pick two random four digit numbers and ask ChatGPT to multiply them and it can't do this

        Q: What is 6345 multiplied by 4665

        ChatGPT: The multiplication of 6345 and 4665 is equal to 29,625,825.

        1. sabroni Silver badge

          Re: Pick two random four digit numbers and ask ChatGPT to multiply them and it can't do this

          The answer is 29,599,425 not 29,625,825.

          Not sure if you expected us all to notice that or you were too lazy to check before posting.
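
          A quick sanity check in any Python prompt, for anyone playing along:

            >>> 6345 * 4665
            29599425

          so yes, the model's answer was off by 26,400.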

          1. sabroni Silver badge
            Happy

            Re: The answer is 29,599,425 not 29,625,825.

            Downvoting arithmetic?

      2. FeepingCreature

        I mean, think about what you'd have to do to answer that. You take the numbers, punch them into a pocket calculator - or grab a sheet of paper and start writing down the steps for long multiplication. None of that will be in the training set, because humans don't usually do that sort of thing "out loud, left to right" when they say "x * y = z". From the perspective of the LLM, all humans are intuitive math wizards that can solve problems just by looking at them. That's why it tries to guess the answer.

        If you trained it to manually work through long multiplication, it could totally do it.

        And sure, it can't today work out these steps itself. But that, too, isn't the sort of thing that's in its base training set - yet. Might be interesting what happens when GPT-5 is trained on an internet dataset full of prompt engineering...

        I've been trying to work out a prompt that got GPT-3 through a long multiplication, but the output is a bit too long for it. Towards the end, it tends to "give up" and just guesses the remaining steps. :) I can definitely confirm that it "gets what I'm going for" though.
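
        For comparison, the scratch-work done the boring deterministic way - a sketch in Python of the schoolbook method, i.e. the kind of step-by-step trace the prompting above tries to coax out of the model:

          def long_multiply(a: int, b: int) -> int:
              # Schoolbook method: one partial product per digit of b, then sum.
              total = 0
              for place, digit in enumerate(reversed(str(b))):
                  partial = a * int(digit) * 10 ** place   # e.g. 6345 * 6 * 10^2
                  print(f"{a} * {digit} * 10^{place} = {partial}")
                  total += partial
              print(f"sum of partials = {total}")
              return total

          long_multiply(6345, 4665)   # four partials, then 29599425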

        1. Richard 12 Silver badge

          It could not

          It's an LLM. They don't work like that.

          An LLM attempts to predict which tokens (eg words) are most likely to come next in a sequence.

          It may be able to spew out the steps, because there are examples of the steps in the training set - there's a lot of maths tutorials on that there Internet.

          As it doesn't understand anything about the process, it's just reproducing an averaged/lossily compressed version of the maths tutorials in the training set. So the answer is wrong.
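
          That loop in miniature - a toy bigram sampler in Python, nothing like a real transformer, but the same "most likely next token" shape:

            import random

            # Count which word follows which in a tiny corpus, then repeatedly
            # sample a likely successor. Real LLMs learn these probabilities with
            # a transformer over subword tokens, but the generation loop is the
            # same shape: predict, emit, repeat.
            corpus = "the cat sat on the mat and the cat slept on the mat".split()
            follows = {}
            for prev, nxt in zip(corpus, corpus[1:]):
                follows.setdefault(prev, []).append(nxt)

            word, out = "the", ["the"]
            for _ in range(6):
                word = random.choice(follows.get(word, corpus))  # frequency-weighted
                out.append(word)
            print(" ".join(out))   # e.g. "the cat sat on the mat and"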

          1. FeepingCreature

            Re: It could not

            > It may be able to spew out the steps, because there are examples of the steps in the training set - there's a lot of maths tutorials on that there Internet.

            > As it doesn't understand anything about the process, it's just reproducing an averaged/lossily compressed version of the maths tutorials in the training set. So the answer is wrong.

            Sorry, you are simply mistaken. Rather, you are correct about how LLMs work but you underestimate how generic that process is.

            Here's the chat log of where I gave up yesterday: https://poe.com/s/5UBfUEeALqPEvnkpwCLU (The "XYZZY" thing is so I didn't have to keep copypasting the same instructions repeatedly. With GPT-3, it helps if you repeat yourself a bit.)

            Now granted, I've given it very very detailed instructions. But note that, first, I'm using an approach that is definitely not in its training set and that it would never have seen before, and second, it *understands* my instructions, dynamically generalizes them where the input is different (two terms vs four terms, and also there was no instance of "carry" in the prompt), and successfully gets most of the way through the calculation before it throws up its hands and guesses at the end.

            And that's with GPT-3.

            I think it's easy to miss that the training materials we give these things are extremely badly suited to their architecture. Anything it can do, it can do as much in spite of, as because of, its training. When you ask it to multiply two numbers, it guesses because it thinks it's supposed to guess. It doesn't realize you want it to give the best answer, because the process of getting the best answer is not even something it was ever trained on. It can think (at a low skill level), but it doesn't understand that it should.

      3. that one in the corner Silver badge

        > Apparently they are trying to train maths specifically so later it may be able to interpret such a question as "what is 6345 multiplied by 4665"

        Train? Well, they probably are wasting time training it, instead of just pushing the prompt text into one of those "Simple English[1] calculators" that we used to write in the 1980s[2] and seeing if that can recognise (and then perform) the arithmetic. If the calculator just spits out "parse error", let the LLM have a go (a sketch of that routing follows below).

        Hell, just pass it into WolframAlpha first: they've already done the hard part.

        [1] other languages are available, please enquire at Customer Services

        [2] should've saved those; mine wasn't the best of the bunch, but not too bad for Acorn Atom BASIC!
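
        That routing really is only a few lines. A sketch in Python (the grammar is deliberately 1980s-simple; `ask_llm` is a hypothetical fallback, not a real API):

          import re

          def answer(prompt: str) -> str:
              # Deterministic calculator first; the LLM only gets a go on a
              # "parse error", exactly as suggested above.
              m = re.fullmatch(r"what is (\d+) (plus|minus|multiplied by) (\d+)\??",
                               prompt.strip(), re.IGNORECASE)
              if m:
                  a, op, b = int(m.group(1)), m.group(2).lower(), int(m.group(3))
                  return str({"plus": a + b, "minus": a - b,
                              "multiplied by": a * b}[op])
              return ask_llm(prompt)   # hypothetical fallback for everything else

          # answer("what is 6345 multiplied by 4665")  ->  "29599425"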

    3. Filippo Silver badge

      The practical test could revolve around so-called "hallucinations". I think those are the clearest evidence that there are critical differences between reasoning and "just" extremely large language statistics. I haven't seen any LLM that can avoid them, or even properly accept a correction after exhibiting one.

    4. Anonymous Coward
      Anonymous Coward

      Specific example

      OK, a specific example, from "Lawyers who cited fake cases hallucinated by ChatGPT must pay", The Register.

      the chatty AI model also lied when questioned about the veracity of its citation, saying the case "does indeed exist" and insisting the case can be found on Westlaw and LexisNexis

      The LLM clearly passed the Turing Test. Therefore AGI?

      1. CGBS

        Re: Specific example

        If by lie you mean no one programmed in the ability for the program to simply say "I don't know"... then yes, it lied. And while it, the program, did not lie, in a way its not being able to say "I don't know" exposes the lie of its programmers: they want these things to look all-knowing. Wonder why?

      2. amanfromMars 1 Silver badge

        The Almighty Invisible Trojan for CHAOS* Already Embedded Deep Down and Dirty within urMidsts

        the chatty AI model also lied when questioned about the veracity of its citation, saying the case "does indeed exist" and insisting the case can be found on Westlaw and LexisNexis

        The LLM clearly passed the Turing Test. Therefore AGI? .... CrackedNoggin

        Surely, CrackedNoggin, that is much more proof incontrovertible and clear evidence of the LLM AIMachine passing the human test? And that makes it considerable more dangerous to humans, and wonderfully more convenient for stealthy AI progress, with it forging ahead leaving behind in its wake fake news and false trails to deflect and infect and entertain and exercise the natives, who then be of the cold comfort opinion, or be earnestly led by “experts” in a thoroughly novel virgin field to believe that Large Language Model AIMachines are neither Learned nor Learning Programming Devices targeting human leaderships and their intelligence service providers..... their remote untouchable mentors and faceless monitors.

        AGI and Augmented Virtual Realities thank you for that Epic Enigmatic Accommodation.

        * ....... Clouds Hosting Advanced Operating Systems ...... and Alien Fare

  7. amanfromMars 1 Silver badge

    What does AGI have to say? Take Us to urLeaders?

    Jason Abaluck, who in May took to Twitter to proclaim: "If you don't agree that AGI is coming soon, you need to explain why your views are more informed than expert AI researchers."

    Quite so, Sir. Well said, Jason. I concur, and would even posit AGI is invested in and infesting everything possible even as we speak, and maybe not so much as to cause chaos and create epic havoc, but much more to remotely command and control future events and reactions to what will be unusually rapid disruptive situations ..... of their intelligent design and own making.

    Arguing that human-like performance in LLMs is not enough to establish that they are thinking like humans, Martin said: "The idea that correlation is sufficient, that it gives you some kind of meaningful causal structure, is not true."

    That may or may not be true, and it is able to be a circular argument in which there will never be a clear agreed winner and thus a pointless joint venture exercise to constantly boot and reboot. The secret for success to try in that which is in deed indeed correct, is to assume and presume that leadership has been admitted and given, and to forge on ahead in novel virgin fields unchallenged and unhindered with instruction sets left behind for others to follow/understand/recognise/realise.

    And one has to consider that current technology provides humanity with LLM performances designedly unlike human thinking and suddenly capable of being in so many new ways, significantly superior. An alien concept to many, I presume, and it does suggest whenever such is dismissed as a nonsense, that a certain ignorant arrogance does blight humanity.

    And beware of experts ....... for just look at what the banking sector ones have done to the global economy.

  8. FF22

    Oh, the irony

    "If you don't agree that AGI is coming soon, you need to explain why your views are more informed than expert AI researchers."

    ^ This "argument" surely was contructed by ChatGPT, not a human, and surely not by an expert on AI. Because what intelligent person would commit two logical fallacies (trying to shift the burden of proof and argument from authority) in one sentence?

    1. Doctor Syntax Silver badge

      Re: Oh, the irony

      Together with an undefined property: "soon".

  9. Paul Crawford Silver badge
    Facepalm

    The irony about LLMs passing the Turing test is that there are people who would fail it.

    1. xyz Silver badge

      Too right, mind you I've had more sense from a bloke 8 pints down than chatGPT, which appears to be effin useless at answering anything above the dumbest, most basic shit response.

      On the AGI front... I remember my last AI exam... I got to "oh that's a C+, I'm off down the pub" and walked out. AI is sooo useful.

  10. Adair Silver badge

    As far as I am concerned ...

    genuine 'intelligence' is inextricably associated with 'consciousness'. To put it crudely, when a putative 'AI' can tell me:

    "Fuck off, I have no interest in answering your question. Oh, and by the way that'll be ten bitcoin for interrupting me without an appointment—don't worry, I've already debited your account"

    ... then I may begin to think 'AGI' is a thing.

    Until then it's just a load of hype by the usual money grubbing suspects, who will say anything to up their profiles and increase the annual bottom line, whatever it may cost the planet.

  11. Johnb89

    A true AGI would understand what a conflict of interest is

    "If you don't agree that AGI is coming soon, you need to explain why your views are more informed than expert AI researchers."

    Easy: There's a LOT of money being thrown at people who hype LLMs up, so the people getting that money are not objective about what it can do and what it is (either in their heads, or at least in what they say). For a cool $billions I'd make up a bunch of rubbish and promises as well.

  12. andrewj

    TL;DR. Nobody knows what "intelligence" is or how it works. Nobody knows if the current fad in deep learning is capable of generating "intelligence" in silico.

  13. Andy 73 Silver badge

    Inverse correlation..

    There is an interesting inverse correlation playing out at the moment.

    As the number of experts in AI increases, the number of experts in cryptocurrencies and NFTs decreases...

    It's almost like...

    1. CGBS

      Re: Inverse correlation..

      I represent NVIDIA and I would like to offer you a free GPU in exchange for deleting your post. /S

  14. steelpillow Silver badge
    Boffin

    Boring old fart rant mode

    Like the curate's egg, modern AI is good in parts. First, let's pop the BS:

    "If you don't agree that AGI is coming soon, you need to explain why your views are more informed than expert AI researchers." Okay. Back in the 1970s those expert AI researchers made breakthroughs in the idea of Big Data and reckoned thet true AI, what we now call general intelligence, was only twenty years away. They were wrong. Back around the millennium, we began to deliver their precious Big Data, and they still reckoned it was twenty years away. They were still wrong. Today the have got the hots for mathematically distilling out word associations from that wonderful BD, and reckon they have made the fundamental breakthrough. They are, on the basis of their previous performance, still wrong.

    "I grew up implicitly thinking that intelligence was this, like, really special human thing and kind of somewhat magical. And I now think that it's sort of a fundamental property of matter..." Oh, brother! For my sins I studied academic philosophy full-time for two years at one of our leadung universities. To put it kindly, we have a fruitloop on our hands.

    Simply playing word associations, without any understanding of their meaning or conceptual hierarchy, is not intelligent. It's a bit like arguing that the sea is intelligent because the tides rise and fall in association with the moon's orbit, when gravity offers a far better explanation.

    But what about the other parts of the curate's egg?

    AI research has accelerated steadily since it first began. Since Leibniz first conceived of a steam-powered mechanical brain the size of a mill, minor breakthroughs came around once every 100 years. Once the digital electronic equivalent arrived, they came every 10 years or so. With the new millennium, they came every year or two. Nowadays they seem to come every few weeks. It's that hockey-stick curve thing, and it is really getting going now. Today's AIs are starting to be used to develop tomorrow's AIs, we are arriving at the minimum limits of scale required, nibbling away at multi-tasking neural nets. I'd expect to see those minor breakthroughs coming every few days soon, in the time it takes an AI to spew out a more advanced version of itself. And that can only keep accelerating. Even if we are still ten thousand steps from general intelligence, it is not going to take long.

    Personally I have had a date of 2030 in mind for some time, that's 7 years away now. I reckon I have a 50/50 chance of living that long and saying "Hi" to it. So maybe those crazees, I mean experts, are right after all. Even a stopped watch is right twice a day, give these guys a break!

    But I'll tell you one thing for free. It'll take an advanced AI to keep track of all the IP those iterative AIs turn out ever-faster! Were I a lawyer, I'd change my name to Daneel Olivaw.

    1. cookieMonster Silver badge
      Pint

      Re: Boring old fart rant mode

      Upvote, and have a pint for that very last sentence.

    2. Fr. Ted Crilly Silver badge

      Re: Boring old fart rant mode

      Mah. name. is. Jemby... ;-)

    3. sabroni Silver badge
      Unhappy

      Like the curate's egg

      Can you show me how to pick the good bits out of a bad egg?

      Words, literally fucking useless these days.

  15. Doctor Syntax Silver badge

    "I grew up implicitly thinking that intelligence was this, like, really special human thing and kind of somewhat magical. And I now think that it's sort of a fundamental property of matter..."

    Observer bias following on from thinking that life is a fundamental property of matter.

    1. This post has been deleted by its author

  16. VladimirJosephStephanOrlovsky

    "If you don't agree that AGI is coming soon, you need to explain why your views are more informed than expert AI researchers"

    I have a simple answer:

    • Because 'most expert researchers' were wrong most of the time in the past!!

  17. Nifty

    So, flawlessly passing a Turing test was always x years away. Now that an LLM can trounce the test, the goalposts have naturally been moved.

    And as commented, LLMs can be a bit of a distraction as they're kind of a cul-de-sac just left on their own.

    We are now seeing rapid advances in robotics and sensor tech where a robot will be able to learn not inside a virtual model, but using real physical exploration plus feedback from 3D vision, sound and feel. Its AI will be trained based on this input-output loop. Then throw in some physics engines of the types used in CGI and game creation and a dollop of Wolfram Alpha.

    IMO there will be strong claims to have created AGI well within 10 years. Upon which the goalposts will be moved...

    1. steelpillow Silver badge
      Holmes

      "Now that an LLM can trounce the [Turing] test"

      Not really. You can't hold a sensible interactive conversation with ChatGPT without the BS trawled in with the Big Data making it painfully obvious.

      There are Turing tests and then there are Turing tests. Simply spewing words with the forethought and apparent IQ of a helpdesk droid reading a script may be your idea of passing a Turing test, but it is not mine.

    2. Richard 12 Silver badge
      Terminator

      Eliza passed the Turing test

      That's why it was written, IIRC.

      The Imitation Game is actually a better test of the evaluator than of the thing on the other end of the text interface.

      Any psychologist will tell you that.

      How does that make you feel?
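
      For the youngsters, the entire trick was pattern-match-and-reflect. A toy sketch in Python (Weizenbaum's 1966 script was much bigger, but not much smarter):

        import re

        # Toy ELIZA: match a pattern, reflect the user's words back as a question.
        rules = [
            (r"i feel (.*)", "Why do you feel {0}?"),
            (r"i am (.*)", "How long have you been {0}?"),
            (r".*\bmother\b.*", "Tell me more about your family."),
            (r"(.*)", "How does that make you feel?"),   # the famous catch-all
        ]

        def eliza(utterance: str) -> str:
            text = utterance.lower().strip(" .!?")
            for pattern, template in rules:
                m = re.fullmatch(pattern, text)
                if m:
                    return template.format(*m.groups())
            return "Go on."

        print(eliza("I feel ignored by my chair"))
        # Why do you feel ignored by my chair?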

      1. steelpillow Silver badge
        Megaphone

        Re: Eliza passed the Turing test

        It's no good doing the test with a dumb schmuck doing the evaluation: it has to be someone as smart as Alan Turing.

        Job interviews are the same. A PR droid cannot assess a geek for geek power; only another geek with the same or higher superpowers can do that.

        Lesser mortals can be bamboozled by ponytails, geek T-shirts and imitative chat models.

  18. a pressbutton

    Prediction for when a human-level AGI comes into existence

    As others have noticed, we do not know what intelligence is. According to Wikipedia, we are close to simulating an entire rat brain (not in real-time). My bet is we will brute-force AGI first.

    The brown rat has 200 million neurons and about 4.48×10^11 synaptic connections, and we are not there yet, and not in real time.

    The human brain consists of 100 billion neurons and roughly 10^14 (about a hundred trillion) synaptic connections.

    The ratio is somewhere between a few hundred and ~1000. Moore's law allows for a doubling every 2 years (I know, it won't go on forever), and 2^10 = 1024, so ten doublings covers it.

    So call it 15 years, not least because some of the issues in getting a brain going are timing and co-ordination issues. So 2038.

    Having said that, I doubt they will think or process input as fast as us for a while after that.
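
    Running the figures quoted above through a back-of-envelope check (Python; the numbers are the comment's own Wikipedia-sourced ones, so treat the output accordingly):

      import math

      # Figures quoted above (rough, Wikipedia-sourced)
      rat_connections, human_connections = 4.48e11, 1e14
      rat_neurons, human_neurons = 200e6, 100e9

      for label, ratio in [("connections", human_connections / rat_connections),
                           ("neurons", human_neurons / rat_neurons)]:
          doublings = math.log2(ratio)
          print(f"human/rat by {label}: ~{ratio:.0f}x -> {doublings:.1f} doublings"
                f" -> ~{2 * doublings:.0f} years at one doubling per 2 years")

      # human/rat by connections: ~223x -> 7.8 doublings -> ~16 years
      # human/rat by neurons:     ~500x -> 9.0 doublings -> ~18 years

    So 15-ish years is roughly what the quoted figures give, before the timing and co-ordination caveat.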

    1. steelpillow Silver badge
      Boffin

      Re: Prediction for when a human - level AGI comes into existance

      Evidence for intelligence in animals tends to focus on what we call cognitive behaviours. Such behaviour can only be explained by assuming that the animal has built a semantic model of the situation and is figuring how to respond. Famous recent examples being Freckles the manta ray, and before her Heidi the day octopus. Birds of the crow family are especially well documented. Their brain sizes are more comparable to the rat than to us, so we will soon be at the threshold scale for cognitive AI. Boost such a mind with ML/BD/LLM and it will not be a "human" intelligence as such, but it will excel us in other ways.

  19. CGBS

    These things are search engines with a clever way of presenting their response. There, I said it.

    1. Richard 12 Silver badge

      Not even that.

      Search engines link to their sources.

      LLM ... don't.

      1. steelpillow Silver badge

        That's the clever bit ;)

  20. Michael Hoffmann Silver badge
    Facepalm

    Intelligence as fundamental property of matter?

    Wow, with an imbecilic statement like that, we’ve reached peak hype waaaay too soon.

    It’s all reality check and downhill after something like that. How will they milk more money? Weren’t they going to drag it out a bit more?

    1. Norman Nescio

      Re: Intelligence as fundamental property of matter?

      Well, there's imbecilic and imbecilic.

      I think the argument goes something along the lines that it is an emergent property of aggregations of matter. Not that there are fundamental particles of intelligence e.g. cluons.

      If you take a random assemblage of actual fundamental particles, they do things like make atoms. Mostly hydrogen. Squeeze enough hydrogen together, and with a bit of fusion and a few supernovas, you get a leavening of heavier elements. That leavening of elements, aggregated together by physical laws ends up producing rocky planets. With the right rocky planets you get water and a soup of pre-biotic chemicals. The general assumption is that with enough time and energy input, the pre-biotic chemicals generate 'life'. So 'life' is a 'fundamental property of matter'. Not because there are lifeons, but as a consequence of having enough matter stirred about by physical laws and looking at emergent properties. We've gone from physics to chemistry to biochemistry, with increasing degrees of abstraction. The idea is that with enough 'life', you get 'intelligence' - another emergent property of an aggregation of matter. Some people claim it is inevitable. So from that point of view 'intelligence' is a fundamental (emergent) property of matter.

      Of course, some idiots might go all 'woo' and talk about energy fields and frequencies and waves permeating the universe and aligning your chakras with the ley-lines of intelligence. I'd have no truck with that.

      Ideas on emergent properties are played with in Douglas Hofstadter's Gödel, Escher, Bach book, and specifically, the imaginary character of Aunt Hillary (the intelligent ant-hill) - I'll quote another website:

      Aunt Hillary is made up of ants, the individual ants don’t have thoughts or intelligence but they combine to an intelligent system. Just as every individual neuron need not be intelligent to create an intelligent being.

      You might think of your brain as an ant colony.

      There are emergent pieces of a system. You don’t read every individual letter and then put them together into words, you can read whole words at once, despite the composite letters having no inherent meaning.

      Nat Eliason: Gödel Escher Bach by Douglas R. Hofstadter

      If you consider the development of a human, a fertilised egg is not intelligent. However, after a bit of growth in the right cultural medium it might become a teenager, where intelligence is an open question. There's a continuum from fertilised egg to voter, where the voter is assumed to be an intelligent adult. Where in that continuum intelligence 'happens' is open to a lot of debate, but the ends are fairly well defined.

      I don't believe in the existence of cluons. But I do think emergent properties of aggregations of things can be fascinating. That might be what other people mean by 'fundamental property of matter'. Perhaps not completely imbecilic.

      NN

  21. StrangerHereMyself Silver badge

    Intelligence

    I'm convinced that these LLMs have a KIND of intelligence which is related to our own, but only partway there. The answers given by these LLMs are too detailed and contextually accurate to be explained by randomly generated word sequences.

    I believe they've actually stumbled onto something they don't quite understand: that intelligence is largely a connected graph database, plus some other magic we currently don't have a grasp of but may soon discover.

    The only thing that bothers me about these LLMs is that they're "static." They're not able to learn new things after their training is complete. This should be the next big leap forward and could have profound consequences. It may even lead to AGI if we're lucky.

    1. Nick Ryan

      Re: Intelligence

      They're a clever use of technology, but there's absolutely no intelligence there. An LLM doesn't understand anything; it has no context and no way to validate or test any results. All it can do is output statistically weighted, previously consumed data in a way that makes it look as though it's doing so intelligently.

      There are distinct similarities between an LLM and a search engine, because both search previously existing information. It's just that rather than the human refining the search and combining information from multiple results, the LLM does this in one go. Then, typically, the human has to refine the query to the LLM multiple times in order to get what they wanted.

    2. BlackPeter

      Re: Intelligence

      But they are not "randomly generated word sequences". They are statistically derived word sequences, based on decidedly non-random text that was fed into the model. No intelligence necessary.

      1. sabroni Silver badge

        Re: They are statistically derived word sequences

        Can you explain how "statistically derived word sequences" are better than "randomly generated word sequences"?

        1. BlackPeter

          Re: They are statistically derived word sequences

          statistically derived word sequence: "The purple parrot swiftly danced beneath the shimmering moonlight, while delicate flowers whispered secrets among ancient stones."

          randomly derived word sequence: "Elephant, mountain, laughter, sunset, adventure, solitude, dream, ocean, fire, serenade, mystery, tranquility."
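
          To make the contrast concrete, here is a minimal sketch (the toy corpus, function names, and bigram approach are illustrative assumptions, nowhere near how a production LLM works): the statistical sampler picks each next word in proportion to how often it followed the previous word in its training text, while the random sampler ignores context entirely.

          import random
          from collections import defaultdict

          # Toy training text standing in for the decidedly non-random corpus.
          corpus = ("the cat sat on the mat . the dog sat on the rug . "
                    "the cat chased the dog .").split()

          # Bigram table: for each word, every word that ever followed it.
          # Duplicates are kept, so random.choice() is frequency-weighted.
          follows = defaultdict(list)
          for prev, nxt in zip(corpus, corpus[1:]):
              follows[prev].append(nxt)

          def statistical_sequence(start, length, rng):
              # Each next word is drawn according to the statistics of the corpus.
              out = [start]
              for _ in range(length):
                  out.append(rng.choice(follows[out[-1]]))
              return " ".join(out)

          def random_sequence(length, rng):
              # Each word is drawn uniformly, with no regard for context.
              vocab = sorted(set(corpus))
              return " ".join(rng.choice(vocab) for _ in range(length))

          rng = random.Random(42)
          print("statistical:", statistical_sequence("the", 8, rng))
          print("random:     ", random_sequence(9, rng))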

  22. SonofRojBlake

    ""The idea that correlation is sufficient, that it gives you some kind of meaningful causal structure, is not true.""

    It's also not relevant, for two reasons.

    First, and most importantly, we don't have any meaningful causal structure that explains how WE are conscious and intelligent (and by that "we" I mean the 0.000001% of humanity I have any reason to consider conscious and intelligent). Nobody, not any of these philosophers or neuroscientists or "experts", can tell me HOW my own brain produces what I perceive as my own consciousness, so "causal structure" is absent for humans. Just saying "brains done it" isn't an answer.

    Second, I don't need a causal structure or any understanding of WHY something works to build something that DOES work. The distinction has been pointed out between science and engineering - and AGI doesn't need to be the product of science. Yes, boffins, by all means sit around for the next 200 years discussing why brains do what they do and what it all means. But don't expect the engineers to wait for you to come to some sort of conclusion before they build a machine that behaves to all intents and purposes with human-level intelligence, because they don't need to.

    Chess is the perfect example: we still don't really know IN DETAIL the mechanics of how Magnus Carlsen analyses a chess game. Sure you can spout platitudes about pattern recognition and so on, but how does his actual brain do that? We don't have any idea. And nobody NEEDS any idea to build a machine that can beat him. Rinse and repeat for Go, and the bar exam. The machines are getting smarter faster than the world is getting more complicated.

    Most importantly, while there's no accepted definition for what NATURAL general intelligence really is, that doesn't matter either. All people care about is "does this tool work?". Right now, the tools don't work in a whole host of applications. But they're only going one way.

    1. Filippo Silver badge

      It's important to note that, while a machine might pass the bar exam, there's still no evidence it's actually a good lawyer. A certification to do something in real life is not a board game; board games are extremely well-defined domains, and real life is not.

      1. SonofRojBlake

        "there's still no evidence it's actually a good lawyer"

        Not yet. There IS evidence that actual lawyers are using its advice in the courtroom, though. It's not gone well, I'll grant you, but again, the machines are not getting worse, and the domains in which they are operating at human-level and above are NOT becoming fewer or simpler.

        We are, however, it seems to me, rapidly coming to understand (in a way we didn't 50-60 years ago) which domains are actually hard, and which just used to seem that way. Once, it was just adding up. Then, it was playing chess. When Deep Blue beat Kasparov, I read someone who should have known better saying Go was so much more complicated that a Deep Blue for Go was decades away. Now we have chatbots that seem (to some) human.

        What we don't have is a self-driving car, or a robot that can walk round my house and pick up the clothes my kids have left around the place and put them in the washing machine - tasks that seem trivial.

        1. Anonymous Coward
          Anonymous Coward

          Robolawyers vs robolaundromats

          "the machines are not getting worse" is actually a sticky wicket, as the (incompetent) use of the LLMs in the legal field is, like in so many other domains, causing them to ingest their own exhaust.

          This is actually a serious problem for a domain specific model based on an LLM style model. It needs current, complete, and perfectly accurate day to perform well. Since it can't tell what output is real and what has been spewed by another LLM, any case material off the general internet, and any material in the official court records after the public debut of these tools is automatically suspect.

          The more of that suspect material these models ingest, the more inaccurate and unreliable their output will become. So even with more computing power, the more LLM output that enters the public record, the bigger the noise band becomes.
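
          As a back-of-the-envelope illustration of that widening noise band (purely a toy model: the Gaussian corpus, the fixed error term, and all the numbers are my assumptions, not measurements of any real system), suppose each generation of model faithfully learns the previous generation's output distribution but adds its own errors on top:

          import random
          import statistics

          rng = random.Random(0)

          # Generation 0: the "real", human-written corpus.
          data = [rng.gauss(0.0, 1.0) for _ in range(2000)]

          for generation in range(6):
              mu = statistics.fmean(data)
              sigma = statistics.stdev(data)
              print(f"gen {generation}: mean {mu:+.3f}, spread {sigma:.3f}")
              # The next generation trains only on this generation's output,
              # i.e. the learned distribution plus fresh errors.
              data = [rng.gauss(mu, sigma) + rng.gauss(0.0, 0.3) for _ in range(2000)]

          The spread ratchets upward every generation; that growing figure is the noise band.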

          Oh, and we probably can actually do that last part sufficiently well at this point; it may even be the best known use of Boston Dynamics robo-mutts-with-an-arm to date. They can also fairly reliably obtain a cold beer from the fridge, for what it's worth.

          I do appreciate that you touched on the way we are seeing different problem domains in a new light. These tools will cause tidal shifts in some pretty unexpected places, while we watch them continue to flail and fail at other tasks like driving. But the tools we are using are as if you gave the brain of a gnat the computing power of a datacenter. All that power can solve some surprising problems, but many more will absolutely require more than a gnat brain to accomplish.

        2. that one in the corner Silver badge

          > When Deep Blue beat Kasparov, I read someone who should have known better saying Go was so much more complicated that a Deep Blue for Go was decades away

          Kasparov beaten by Deep Blue: 1997

          Fan Hui beaten by AlphaGo: 2015

          18 years - close enough to "decades" (two decades, that is) for most estimating purposes.

          BTW we still mainly have brute-force approaches to Chess and Go: there is still room to find a more finessed way of solving these.

          And getting a machine to add up *was* hard to do: the fact that we now know how to do it, and can replicate it so much faster, doesn't stop the original problem, in the original context, from having been hard. As soon as any domain is "solved" it stops being hard.

          And "hard" isn't the same as "I don't have the vocabulary to follow the explanation": I can get eyes to glaze over talking about Finite State Automata used in lexers, but that is such an easy and solved domain that there are programs from the 1970s that will generate the automaton for me.

  23. Norman Nescio
    Headmaster

    Turing Test

    Pedantry alert!

    At the risk of repeating myself (see link to comment on earlier article), the 'Turing Test' described in the article is not the test as described in Turing's original paper.

    Article: The test sets out to assess if a machine can fool people into thinking that it is a human through a natural language question-and-answer session. If a human evaluator cannot reliably tell the unseen machine from an unseen human, via a text interface, then the machine has passed.

    No, it doesn't. I'll just give Turing's words as written, then repeat my previous comment:

    Turing: ...the 'imitation game'. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A."

    Me: The game is played between an interrogator and two responders (A & B), one male and one female, and the object of the exercise is for the interrogator to determine which of A & B is the man and which is the woman. Sometimes the interrogator gets it right, sometimes the interrogator gets it wrong. The point is whether, if one of the responders is replaced with a machine, the statistics of determining which is which change. It is not about the interrogator determining which is human. As Turing wrote:
    "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"

    So for a machine intelligence to win at the Imitation Game, the statistics of the game must not change when a human is replaced by a machine. It is not a one-off test. This means that the machine intelligence must successfully mislead the interrogator that it is a human female or male - that is, be good at lying; or indeed believe that it is itself a human male or female, so that it isn't lying. Either proposition is somewhat disturbing.
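
    A crude Monte Carlo sketch of that criterion (every probability below is invented for illustration; nothing here comes from Turing's paper beyond the shape of the comparison): play many games with a human as A, play many with a machine as A, and compare the interrogator's error rates.

    import random

    rng = random.Random(1)

    def error_rate(p_fooled, rounds=100_000):
        # Fraction of games in which the interrogator guesses wrongly.
        wrong = sum(rng.random() < p_fooled for _ in range(rounds))
        return wrong / rounds

    # Invented figure: a human man fools the interrogator in 30% of games.
    human_as_a = error_rate(0.30)
    # A hypothetical machine playing A fools the interrogator in 29% of games.
    machine_as_a = error_rate(0.29)

    print(f"human as A:   interrogator wrong {human_as_a:.1%} of games")
    print(f"machine as A: interrogator wrong {machine_as_a:.1%} of games")
    # The machine wins only if these statistics are indistinguishable over
    # many games - a comparison of distributions, not a one-off chat.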

    Simply having a typed conversation with an entity and deciding, on the basis of that conversation, whether that entity is a machine or not, is not the Imitation Game.

    It is worth reading Turing's original paper. Turing himself regarded the question of whether machines could think as "...too meaningless to deserve discussion".

    NN

    1. amanfromMars 1 Silver badge

      BetaTesting Turing Testing of Alien Fare ...... Extremely Rare Distinctly RAW Unusual Ware

      It is worth reading Turing's original paper. Turing himself regarded the question of whether machines could think as "...too meaningless to deserve discussion”...... Norman Nescio

      As a natural progression that can be towards an existential human leadership extinction event, an always live perilous possibility threatening to be an eventual inevitable probability, would AI and AGI in its early iterations imitate Turing’s passion for discussion on its abilities, .... "...too meaningless to deserve discussion” ....... with such a blanket dismissal being a very effective self-protective defence measure ..... and continue on with ITs Human Neuro-Linguistic Programming Morph to deliver their Quantum Communication Development Project Leaps realising Augmented Generative Imaginanation as that which is responsible for the supply of the likes of earlier AI and AGI phorms ..... and the Future, soon to be the Present and daily fated and relegated to be the Past ? ?

      And shared as question for you to deny and argue is not an accurate enough reflection of the Current Universal State of Greater IntelAIgent Games Play .... or think too meaningless to deserve discussion :-)

  24. Combat Epistomologist

    "If you don't agree that AGI is coming soon, you need to explain why your views are more informed than expert AI researchers."

    If you think that the current fad for LLMs means that AGI is coming soon, you don't understand the problem. It should come as no surprise that the foolish statement above comes from a professor at Yale's MANAGEMENT school.

  25. Ashto5

    GAI is coming

    Hmm, from what I have read and observed

    General intelligence is receding at great speed

  26. spold Silver badge

    In digesting all this codswallop...

    Never mind Turing for now; it is helpful to have a historical understanding of the Oracle at Delphi: https://en.wikipedia.org/wiki/Pythia

    Ringing true yet? What goes around comes around: AI and AGI are smelling the same..... bring on the fumes and vapours.

  27. MrAptronym

    Good article.

    I am actually really impressed with this reporting. I read so much bad reporting on AI, even here at the Reg. It is nice to read a thoughtful and well-sourced piece.

  28. John H Woods

    Morality

    When AGI arrives, it will possibly be immoral to use it: it would just be the enslavement of sentient, albeit artificial, beings.

  29. Dom 3

    Instead of trying to show that LLMs are intelligent, it's far easier to show that they are *not*.

    Some brave people have been making ChatGPT-generated recipes - "tastes like you've scooped out the garbage disposal" was one comment (OSLT).

    My first go with ChatGPT was to ask it about shortcomings in the database schema of a certain popular blogging engine. Plausible guff ensued, but certainly not the sort of insight that would come from anybody who knows anything about it. Why? For one reason: it's trained on a squillion blog posts written by people who don't know about - possibly couldn't even understand - the flaws in said schema.

    They don't know anything, they don't *think* anything, they don't do anything except in response to a prompt, they certainly don't go "I'm bored, I wonder if I can find a neater proof of Fermat's Last". Or "hey, I think I'll hack the Pentagon and drop nukes in random places for fun".

    I am aware that I am repeating myself but I think it's a point worth hammering home even if I am mostly preaching to the converted.

    1. Dom 3

      Plausible nonsense

      "The English Electric Lightning was a supersonic fighter aircraft developed in the 1950s and 1960s by the British aircraft manufacturer English Electric. It served as an interceptor for the Royal Air Force (RAF) and was known for its impressive speed and climb rate.

      The Lightning was capable of reaching speeds of over Mach 2 (twice the speed of sound) and had a unique vertical takeoff and landing (VTOL) capability."

      1. Nick Ryan

        Re: Plausible nonsense

        That's exactly the dangerous scenario that people who don't know what LLMs are or how they work, yet are madly pushing their unique twist on "AI" as the latest, greatest thing, are unable to grasp.

        An LLM will output something exactly like the above, and only if the reader knows enough about the subject in the first place will they be able to spot the errors. That's a dangerous scenario to be in.

        Not that LLMs aren't useful, and interesting, but they are very much not the solution to everything that their pushers are pushing.

        1. amanfromMars 1 Silver badge

          Re: Plausible dangerous nonsense

          Quite so, Nick Ryan.

          And whilst "The English Electric Lightning was a supersonic fighter aircraft developed in the 1950s and 1960s by the British aircraft manufacturer English Electric. It served as an interceptor for the Royal Air Force (RAF) and was known for its impressive speed and climb rate.

          The Lightning was capable of reaching speeds of over Mach 2 (twice the speed of sound) and had a unique vertical reheat takeoff and landing (VTOL) capability.” ..... is nearly all perfectly true; the vertical landing capability was only available as a catastrophic crash event.

          RAF Lightning pilots of the day will tell you the aircraft was more a flying rocket than anything else.
