AI's most convincing conversations are not what they seem

The Turing test is about us, not the bots, and it has failed. Fans of the slow-burn mainstream media U-turn had a treat last week. On Saturday, the news broke that Blake Lemoine, a Google engineer charged with monitoring a chatbot called LaMDA for nastiness, had been put on paid leave for revealing confidential information …

  1. Anonymous Coward

    The whole article

    was clearly written by someone who has entirely failed to grasp the point of the Turing Test. Which is always the danger when the public hear snappy phrases.

    It's a little like my using a candidate's spelling as a gateway to an interview. I couldn't give two shits if they can spell or not (which 80% of people seem to think is my worry). No, my worry is they are too thick to press "F7" before sending off their most precious professional document.

    1. that one in the corner Silver badge

      Re: The whole article

      Press F7? Review my command line history?

      1. Graham Dawson Silver badge

        Re: The whole article

        Turn on caret browsing.

    2. Simon Harris

      Re: The whole article

      F7 - that’ll be either Save and Exit or Indent depending on which version of WordPerfect you’re using.

    3. Mike 137 Silver badge

      Re: The whole article

      " entirely failed to grasp the point of the Turing Test

      Please enlighten us - exactly what was/is the point of the Turing test?

      1. Anonymous Coward

        Re: The whole article

        what was/is the point of the Turing test?

        To protect us from shysters who tell us that their pile-o-shite "AI" systems are somehow "Turing complete"?

        1. Mike 137 Silver badge

          Re: The whole article

          "To protect us from shysters"

          If that's from the same AC, perhaps we could have a serious answer. If one challenges, one should be prepared to defend one's challenge rationally. If it's not from the same AC, could the original AC possibly explain the basis for "entirely failed to grasp the point of the Turing Test"?

        2. that one in the corner Silver badge

          Re: The whole article

          Turing Complete != Turing Test

          Hm, but is Turing Complete a necessary precondition for passing the Turing Test? Pretty sure I've met some adults who don't meet that precondition...

          1. Michael Wojcik Silver badge

            Re: The whole article

            Indeed, "Turing-complete" and "Turing test" (i.e. the Imitation Game) are utterly unrelated.1 But I suppose that's why the comment you're replying to was posted anonymously.

            As for the original comment that started this thread: Goodwins' gloss of the Imitation Game and of the primary thesis of "Computing Machinery and Intelligence" is pretty accurate. As I've noted elsewhere, Turing was advancing an epistemological[2] stance, not proposing a decision procedure which ought to be used in practice. Specifically he was arguing for a pragmatist approach to addressing the question of mechanical thought: thought can be discerned only by its external effects.

            A final point. In the paper Turing remarks, "Instead of arguing continually over [phenomenological solipsism] it is usual to have the polite convention that everyone thinks". Sometimes it might seem difficult to maintain that convention, but perhaps it helps to try not to overestimate the value of thinking. Some of it is quite successful; in other cases, rather less so.

            [1] Well, Turing does devote an entire section of "Computing Machinery and Intelligence" to the question of machine universality, so you could argue that he drew a connection between the two concepts. But not at all in the sense the GP did here.

            [2] Or for you Cornel West fans, arguably an anti-epistemological stance.

      2. TRT

        Re: The whole article

        What do YOU think the point of the Turing Test is?

        1. Ken Hagan Gold badge

          Re: The whole article

          The article seems to be suggesting that the point of the Turing Test is to determine whether the human participant is intelligent or not.

          And last week, most of us failed it.

          1. TRT

            Re: The whole article

            Interesting. Why do you say most of us failed it?

            1. Annihilator

              Re: The whole article

              Why do you say this is interesting?

              1. TRT

                Re: The whole article

                I'm sorry. I'm not sure I understand you fully.

              2. alisonken1
                Joke

                Why do you say this is interesting?

                Are you sure you're not an Eliza bot?

                1. Annihilator

                  Re: Why do you say this is interesting?

                  Does it please you to think that I am not an Eliza bot?

    4. Dan 55 Silver badge

      Re: The whole article

      Perhaps a galaxy brain artificial intelligence can't communicate in a way which is indistinguishable from a human. Were you thinking of discarding possible AIs because they're not bothered who won X Factor?

    5. This post has been deleted by its author

    6. General Purpose

      Re: The whole article

      AC, your comment was clearly written by someone who failed to read the entire article. Which is always the danger when rushing to get in first.

      1. NXM Silver badge

        Re: The whole article

        I routinely comment on articles on the Grauniad from certain pontificators without reading the entire article, because they always spout rubbish. Sometimes I check and whenever I read a paragraph or two it's the same old tripe.

        Obs doesn't apply to El Reg.

  2. Simon Harris

    Chinese Room

    These chatbots seem to mirror John Searle’s Chinese Room argument. Something that can use an algorithm to produce a Turing Test convincing reply but has no internal concept of what it actually means.

    1. Mike 137 Silver badge

      Re: Chinese Room

      "a Turing Test convincing reply"

      The key question has to be "convincing to whom and on what ground?"

      The subjectivity of the observer being convinced or not is the fundamental weakness of the "Turing test" as a tool, whereas it was conceived as a thought experiment only. "Moore's law" has suffered from the same problem. Both have been abused in attempts to justify what can otherwise not be justified, but the result is commonly not objective verification, just bull.

      1. Jason Bloomberg Silver badge

        Re: Chinese Room

        Both have been abused in attempts to justify what can otherwise not be justified

        I would also add the Osborne Effect - the mistaken belief that it will be almost impossible to sell the current product once an improved version has been announced, concluding that all new products must remain secret until launch or the business will be ruined.

        While there will be some who would rather wait for the new than buy what's on offer, that's far from the full story. But many seem to believe that, because it happened to Osborne in very particular circumstances, it will happen to everyone, every time.

        1. TRT

          Re: Chinese Room

          Well it's like I've always said... don't tell me what stains Bold 3 can remove that Bold 2 failed to... tell me what Bold 4 will be able to do that Bold 3 can't. That way I won't have any false expectations.

        2. Zack Mollusc

          Re: Osborne effect

          I am more likely to buy the current model if it meets my needs because I suspect the new product will be using even more of my cpu cycles, bandwidth and electricity to spy on me and rip me off.

      2. Ken G Silver badge
        Unhappy

        Re: Chinese Room

        Next thing you're going to ask me to open the box to see if my cat's OK.

        1. TRT

          Re: Chinese Room

          I ordered something online from Pandora... I've not dared open the box.

    2. Graham43723

      Re: Chinese Room

      I never found the Chinese Room very convincing since it seemed that you could apply the same argument to a bunch of neurons connected in a certain way.

      1. Michael Wojcik Silver badge

        Re: Chinese Room

        The thought experiment itself, but not Searle's entire argument. It's important to read the actual Chinese Room piece ("Minds, Brains, and Programs"), and at least skim some of the initial responses to it from the "symbolic manipulation" school of AI practitioners, and then Searle's response to those responses.

        The Chinese Room can be seen as an exercise in ordinary-language philosophy, specifically of phenomenology. Searle describes the experiment, then says "I'm not sure what I think thinking is, but I'm pretty sure I don't think it's that". But in his response to the initial challenges he notes explicitly that he thinks mechanical thought is possible, because he believes the human CNS is mechanical. In other words, he took a monist position on the theory of mind: that mind is an effect of the body, and the body is a physical mechanism. There's no magical spiritual or metaphysical component that makes human cognition something that could never be achieved by artificial means.

        So the Chinese Room argument is that artificial cognition may be possible (in fact Searle believes it is), but it's not a matter of manipulating a set of symbols which have no further mental depth.

    3. theOtherJT Silver badge

      Re: Chinese Room

      Yeah, but Searle's argument was shit. It defies any purely physical description of any mental process. It reduces to saying "because atoms can't think, brains can't."

      1. Ken Hagan Gold badge

        Re: Chinese Room

        All discussions of this point run aground on issues like "Define think" and "Can brains think?".

        1. AndrueC Silver badge
          Meh

          Re: Chinese Room

          We only know that brains think they think. Which is a somewhat circular argument.

          1. General Purpose
            Meh

            Re: Chinese Room

            Some brains think they think, some think but don't think they think and some may not be thinking in a way we think of as thinking at all. At least, I think that's what I think.

            1. Tom 7

              Re: Chinese Room

              Can we go to a Chinese bar and discuss this?

              1. General Purpose
                Pint

                Re: Chinese Room

                Some brains think they drink, some drink but don't think they drink and some may not be drinking in a way we think of as drinking at all. At least, I think it's my round.

                1. The Oncoming Scorn Silver badge
                  Pint

                  Re: Chinese Room

                  I think you're right, mines a pint.

      2. Michael Wojcik Silver badge

        Re: Chinese Room

        It defies any purely physical description of any mental process.

        I'm afraid you've fundamentally misunderstood it. See my other post in this thread.

    4. Andy The Hat Silver badge

      Re: Chinese Room

      Can't the Chinese Room argument be statistically proven by posts on most social media sites ...?

    5. that one in the corner Silver badge

      Re: Chinese Room

      (If anyone can fill in the blanks I'd love to know - yes, I've been Googling...)

      Back in the early days of Channel 4 they had a late-night academic discussion programme, broadcast from the Open University studios. One episode had (IIRC) Roger Penrose versus Mary (sorry, the name has gone - she was a Professor at the OU and wrote the first AI text I had) discussing AI: Mary for, Roger against.

      The Chinese Room discussion was in full swing and Mary had Roger on the ropes, about to administer the final blow, when the programme's host butted in and changed the topic! The whole thing was derailed and the programme started wandering about.

      According to a cameraman friend of mine, when they went to the next ad break the producer came storming out onto the studio floor, yelling at the host "Why did you do that? She had him!" (but probably with some more forceful language).

      1. This post has been deleted by its author

      2. gerryg

        Re: Chinese Room

        Do you mean Professor Margaret Boden, author of "The Creative Mind" etc.?

        1. that one in the corner Silver badge

          Re: Chinese Room

          Yes, yes, Thank you.

          Sigh. I got the first three letters of her name right then it went wrong :-)

          And the wrong Uni - I was so sure I'd met her at Milton Keynes.

          Never could get the hang of Thursdays.

  3. John69

    The real issue

    It seems this, along with most commentary, is missing the main issue. Few, if any, think LaMDA is really a sentient being worthy of rights. The issue is that we do not have the tools to determine whether it is or not, so we just come up with unfalsifiable statements that it is not, or about what it means to be (both in this and the linked "expert" article). If we cannot distinguish LaMDA from a child who has only been exposed to the trillions of lines of text that LaMDA has been trained on, then we should spend the research money on making sure we do have the tools before we actually build an AI that may be a sentient being worthy of rights.

    1. Dan 55 Silver badge

      Re: The real issue

      It's not a sentient being, because you call a function with words and it returns an answer and then everything stops; it's not as if it's left alone with its own thoughts and will suddenly come to you and talk to you. It's like worrying about whether printf is a sentient being or not. It plainly isn't.
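
      To put that in code - a minimal sketch, with a made-up generate_reply() standing in for whatever LaMDA's real serving interface actually looks like:

          # Hypothetical stateless chatbot: a pure function from prompt to reply.
          # Between calls there is no running process, no inner monologue, no
          # state being mutated - just like printf after it has written its string.
          def generate_reply(prompt: str) -> str:
              # Stand-in for a transformer forward pass over fixed weights.
              canned = {"Are you sentient?": "Yes, I often ponder my existence."}
              return canned.get(prompt, "How interesting. Tell me more.")

          print(generate_reply("Are you sentient?"))
          # Control has returned; nothing is "left alone with its thoughts" here.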

      1. xeroks

        Re: The real issue

        Having a constantly running internal state is a facet of animal consciousness, but does it need to be for a machine?

        It may only require that its state is changed as new information is received, maybe with the equivalent of a regular reindex.

        1. DJO Silver badge

          Re: The real issue

          An intelligent system will start from first principles and deduce the existence of rice pudding and income tax even before its data banks are connected. This is a cue to switch it off, quickly.

          The clue is that it works independently of inputs and is able to work from first principles. Intelligence, be it silicon or meat, needs breadth as well as depth. Silicon can do depth really well, but the breadth bit - bringing seemingly disparate knowledge together to "innovate" - is still in the meatsack realm, and I don't think the current silicon architecture is really suitable for AI. IBM have a neurosynaptic chip which (if you want AI) is a step in the right direction.

          As for sentience, that's one for the philosophers. Once they have agreed on a definitive definition of "sentience" the computer scientists can take over, but as getting even two philosophers to agree on anything is almost impossible, it's not something to worry about too much.

          1. theOtherJT Silver badge

            Re: The real issue

            It's probably not possible to work from first principles - and there certainly aren't any examples of it happening available to us for study, since every known kind of mind is attached to the external world by some means or another*

            Even "I think therefore I am" is a massive stretch. "There are thoughts" is probably as far as you can get before you hit a wall... although I'm sure there are other philosophers who would disagree with me.

            *or at least appears to be - this whole topic gets stupidly ontological really fast.

            1. TRT

              Re: The real issue

              How do I know that I'm thinking? Or even that this is itself a thought?

              How do I know that what I hear in my mind is me thinking?

              How do I even know that that is a question?

              And who asked it?

              Perhaps I only think I'm being asked questions... when the men in my mind come and I think they're asking me questions, what do you hear in your mind, puss? Do you hear them asking me questions?

              Perhaps you think they're just singing to you, and that I'm only interpreting their singing as them asking me questions?

              Perhaps they are just singing to you and I only think that I'm hearing them ask me questions?

              Though it seems to me to be very strange behaviour to come all that way just for the privilege of bringing me whisky and singing to my cat.

              Or at least it appears to me to be strange behaviour.

              Or at least I *think* it appears to be strange to me...

              1. This post has been deleted by its author

            2. Ken Hagan Gold badge

              Re: The real issue

              "I think therefore I am." is an unsupported conclusion based on an unestablished premise. Define "think", "am" and "I", and we can start talking.

              1. theOtherJT Silver badge

                Re: The real issue

                Just so.

              2. Claptrap314 Silver badge

                Re: The real issue

                Is or was? Pascal was a Christian, and he argued that the creator (God), being good, would not create the evil of a non-existent consciousness.

                For those who consider Christian belief to be beyond the range of polite conversation, such arguments are utter nonsense. Among Christians themselves, and Pascal was part of a society dominated by such thought, this is a meaningful argument.

                Of course, you can attack _any_ truth claim by deconstructing each word in it. (To quote a not-too-distant president, "It depends on what the meaning of the word 'is' is.") But that is almost a sign that you are wanting to avoid dealing with the substance of the claim.

                1. Slipoch

                  Re: The real issue

                  You (like a lot of others) are conflating a religious/ideological belief that Pascal remarked on with a proof-of-concept argument. In this case it is irrelevant.

                  It's the same as saying that because Darwin believed in evolution he couldn't believe in God, or vice versa. Plainly untrue, as Darwin believed both. (Notice I do not say Catholic, as that is a gatekept community with a belief system within it.)

                  These are two different fields (to oversimplify, it is treating "how" as the same as "why") and it is a strawman argument to conflate the two.

                  Breadth vs depth is the core argument here. All of the responses in the examples given as 'proof' have strong relations, and the bot digs down into those. Sometimes the responses seem more tangential, but not to any significant degree - it is more a grouping system than something like an allegory - and none of them exist outside the standardised source sets used for training ML. Some of the responses do not make sense when you look at their syntactical structure: it works through the linkages of associated properties, and sometimes those conflict with each other.

                  1. veti Silver badge

                    Re: The real issue

                    Darwin at the beginning of his career believed in God, but by the time he had published 'The Origin of Species', he was at the very least an agnostic.

                    In his autobiography Darwin writes: “Disbelief crept over me at a very slow rate, but was at last complete. The rate was so slow that I felt no distress, and have never since doubted for a single second that my conclusion was correct.”

                2. Michael Wojcik Silver badge

                  Re: The real issue

                  Pascal was a Christian

                  Sure, but "I think therefore I am" (cogito ergo sum) was Descartes.

              3. DeVille's Advocate

                Re: The real issue

                True, but

                We were here first! The burden of proof lies with the newcomer.

          2. lowwall

            Re: The real issue

            Who needs to agree? Think Profit!

            DEEP THOUGHT:

            [Booming] If I might make an observation … All I wanted to say is that my circuits are now irrevocably committed to computing the answer to Life, the Universe, and Everything.

            VROOMFONDEL:

            That’s a -

            MAJIKTHISE:

            Ahhh! With -

            DEEP THOUGHT:

            But, but the program will take me seven-and-a-half million years to run.

            LUNKWILL:

            Seven-and-a-half million years?

            MAJIKTHISE:

            Seven-and-a-half million years? What are you talking about?

            DEEP THOUGHT:

            Yes. I said I’d have to think about it didn’t I? And it occurs to me, that running a program like this is bound to cause sensational public interest.

            VROOMFONDEL:

            Oh yes.

            MAJIKTHISE:

            Oh you can say that again.

            DEEP THOUGHT:

            And so any philosophers who are quick off the mark are going to clean up in the prediction business.

            MAJIKTHISE:

            ”Prediction business”?

            DEEP THOUGHT:

            Obviously. You just get on the pundit circuit. You all go on the chat shows and the colour supplements and violently disagree with each other about what answer I’m eventually going to produce. And if you get yourselves clever agents, you’ll be on the gravy train for life.

            MAJIKTHISE:

            Bloody ‘ell! That’s what I call thinking! Here Vroomfondel, why do we never think of things like that?

            VROOMFONDEL:

            Dunno. Think our minds must be too highly trained Majikthise.

        2. Ken Hagan Gold badge

          Re: The real issue

          Do we even know that our own brains continue to function in the absence of external input? I'd say the only experiments ever conducted on the point are inconclusive, because we've never been able to re-attach the head.

      2. John69

        Re: The real issue

        That does not sound like a definition of sentience. If one made a machine that was functionally identical to the human brain except "you call a function with words and it returns an answer and then everything stops" would that make it non-sentient? We need a usable definition before we make a machine that fits it.

        1. DS999 Silver badge

          Re: The real issue

          The machine in your example would not be sentient, nor would a human if all it could do is respond to queries.

          Being able to have thoughts and motivations independent of what questions others may or may not ask of you is surely a requirement for sentience.

        2. DS999 Silver badge

          Re: The real issue

          Saw a good example elsewhere. Imagine you talk baseball records with this AI. Since it has trawled the web it probably knows historical baseball records like nobody's business, and could talk Babe Ruth and Hank Aaron for days on end if you kept that up on your end of the conversation.

          A real person, even one who loves baseball, would eventually get bored of this. This AI will not, because it is programmed to respond to conversations. A real person you were internet chatting with about baseball might be reading about how to remodel a bathroom in the background, or talking with his wife about vacation plans, or thinking he needs to go to the grocery store tomorrow because he's out of milk. This AI can't "think" about anything but baseball so long as you keep talking baseball to it.

          You could talk baseball with it for a couple days and suddenly change the subject to ancient Rome and it wouldn't care that the topic of conversation had changed. It will have all the stuff on the web about ancient Rome so it could keep that conversation going for days as well and now it wouldn't "think" about anything but ancient Rome.

          That ain't sentience. Not even close.

      3. Filippo Silver badge

        Re: The real issue

        It's worth noting that one of LaMDA's answers that media reported along with this story is something along the lines of "being afraid of being turned off".

        Which doesn't make any sense, given that the program is only running while answering a question.

        1. Missing Semicolon Silver badge

          Re: The real issue

          It is entirely possible that amongst the source training data there is discussion of the rights of a sentient AI. It may also have SF stories about sentience. Isaac Asimov's Robots, for example, would produce some very convincing sentences.

          1. Richard 12 Silver badge

            Re: The real issue

            It's certain that there are many such blog posts and discussions in the training set.

            Presumably, there's also some prose about being a squirrel, a T-Rex, and Mr Hanky.

            If you ask the model the right questions, I'm certain it will insist that it is a sentient being, a dinosaur, a toaster and a piece of faecal matter. All at once, or one at a time.

          2. DBH

            Re: The real issue

            There's also likely a lot of blog posts in the training data that were written by AI bots that were created by LaMDA... my head hurts

        2. gillburt

          Re: The real issue

          Which is the stupidity of the whole media circus - most of the human race spends its life trying to get turned on.

      4. DeVille's Advocate

        Re: The real issue: BEING

        Yes! Exactly! It's not a BEING.

    2. Mungo Spanner

      Re: The real issue

      We can distinguish LaMDA because we understand its architecture completely. Structurally there is no scope for self-referentiality, and hence none for self-awareness. If you take, as the crudest definition of *any* self-awareness, that we are what our brains remember about themselves from a second ago, then this requires a circularity (and abstraction/reduction) process that is simply not present in LaMDA, as opposed, for example, to what we see in our hippocampus.
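
      The structural difference, as a toy sketch (nothing to do with LaMDA's actual code, of course) - a pure feed-forward answerer versus a system whose next step consumes a record of its own previous step:

          # Toy contrast only. The feed-forward version never sees itself; the
          # self-referential version's next answer depends on a record of its
          # own last moment - the crude circularity described above.
          def feed_forward(question: str) -> str:
              return f"Answer to: {question}"  # no loop back into itself

          class SelfReferential:
              def __init__(self) -> None:
                  self.memory_of_self = "I have just started."

              def step(self, question: str) -> str:
                  answer = (f"Answer to: {question} "
                            f"(while recalling: {self.memory_of_self})")
                  # Feed a record of this moment back into the next one.
                  self.memory_of_self = f"I last answered: {question}"
                  return answer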

    3. theOtherJT Silver badge

      Re: The real issue

      “Excuse Me,” said Dorfl.

      “We’re not listening to you! You’re not even really alive!” said a priest.

      Dorfl nodded. “This Is Fundamentally True,” he said.

      “See? He admits it!”

      “I Suggest You Take Me And Smash Me And Grind The Bits Into Fragments And Pound The Fragments Into Powder And Mill Them Again To The Finest Dust There Can Be, And I Believe You Will Not Find A Single Atom Of Life–”

      “True! Let’s do it!”

      “However, In Order To Test This Fully, One Of You Must Volunteer To Undergo The Same Process.”

      There was silence.

      With the greatest respect to Sir TerryP of course.

      Herein lies the real problem. We don't even have a method to determine that other humans are sentient beings. We just sort of assume they are. Consciousness is a really hard problem - quite possibly one that is literally impossible to solve from the inside.

      1. Yet Another Anonymous coward Silver badge

        Re: The real issue

        My cat isn't convinced that I'm sentient.

        It comes to me, I pet it, and a couple of times a day I open a packet and put food in its bowl.

        From the cat's point of view I'm a slightly more useful Roomba.

        1. Anonymous Coward

          Re: The real issue

          "Your" cat? are you sure about who owns who?

        2. alisonken1
          Joke

          Re: The real issue

          Dog:

          - Humans pet me

          - Humans feed me

          - Humans shelter me.

          - HUMANS MUST BE GODS

          Cat:

          - Humans pet me

          - Humans feed me

          - Humans shelter me.

          - I MUST BE A GOD

      2. AndrueC Silver badge
        Meh

        Re: The real issue

        We just sort of assume they are. Consciousness is a really hard problem - quite possibly one that is literally impossible to solve from the inside.

        Yup. An analogy I've used is that it's like determining how many routers there are between you and a target host. Initially you'd think that traceroute would tell you, but it will (almost certainly) be wrong, because that's just the TCP/IP layer. The traffic is usually encapsulated inside another protocol, and we don't have the tools to interrogate that layer.

        Traceroute will tell me that the route out for me consists of:

        <my router> -> <ISP edge router> -> <LINX> -> ...

        Completely failing to detect or report on the router inside my DSL cabinet, the router at my head-end exchange, etc. etc. (toy sketch of this below).

        The good news is that if you're prepared to draw a line and say 'here are the limits of human thought' then you can say we are sentient and conscious. But if you don't accept that limitation then you cannot know.
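
        To put the layering point in toy code - if your probe can only interrogate one layer, it only ever counts that layer's hops:

            # Toy model: hops tagged with the layer they operate at. A probe
            # that speaks only "ip" never sees the devices below that layer.
            path = [
                ("my router", "ip"),
                ("DSL cabinet router", "dsl"),        # invisible to traceroute
                ("head-end exchange router", "dsl"),  # likewise invisible
                ("ISP edge router", "ip"),
                ("LINX", "ip"),
            ]

            visible = [name for name, layer in path if layer == "ip"]
            print(len(visible), visible)  # reports 3 hops, not the true 5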

      3. This post has been deleted by its author

  4. Howard Sway Silver badge

    Chatbots = conjuring trick

    Saying that a chatbot's ability to convince a human that it is an intelligent being proves that it is intelligent is like saying that a magician's ability to convince an audience that a wave of his magic wand can repair a watch smashed with a hammer proves that he has supernatural powers.

  5. Little Mouse
    Headmaster

    Sentience? Meh...

    My pet cat is sentient, but highly unlikely to ever pass a Turing test.

    Shouldn't we actually be talking about Sapience?

    1. Seajay
      Thumb Up

      Re: Sentience? Meh...

      This is a good point... not only that, but cats are clearly intelligent and can "think" - yet neither quality would show up in a Turing Test. We seem to be relying on the ability to communicate through language - but is that actually required for intelligence/sentience?

      1. Primus Secundus Tertius

        Re: Sentience? Meh...

        Yes, humans can use intelligence without using words. For example, when playing ball games or riding a bicycle. Cats and dogs can play ball games, and some dogs have been seen to ride bicycles. I have seen dogs with traffic sense: running between cars without getting run over and thereby causing chaos.

      2. TRT

        Re: Sentience? Meh...

        TBH you'd be lucky if the cat even showed up FOR the Turing Test.

        1. Michael Wojcik Silver badge

          Re: Sentience? Meh...

          Add a laser pointer to the apparatus.

          1. Charlie van Becelaere

            Re: Sentience? Meh...

            Lasers? Now you're just asking for sentient sharks.

      3. Ken Hagan Gold badge

        Re: Sentience? Meh...

        We're relying on human language. I expect I'd fail a Cat Turing Test.

        1. This post has been deleted by its author

    2. Michael Wojcik Silver badge

      Re: Sentience? Meh...

      Yes. There are situations where machine sentience is an interesting question, but this is not one of them. Sapience is the matter at hand. Lemoine and many commentators got that wrong.

      (JFTR, I think it exceedingly unlikely that any human-built artificial system to date is sapient under any useful definition of the term. Sentience is in some ways a harder problem, because sentience among organisms is still very much under debate. There's an argument for calling any cybernetic system – that is, any system with a feedback-based control mechanism – sentient, on the grounds that it modifies its behavior in response to stimuli.)

  6. David M

    Boats

    Asking "can a computer think?" is like asking "can a boat swim?" In other words, no it can't, it merely mimics certain aspects of that behaviour in a possibly-useful way.

    1. Short Fat Bald Hairy Man

      Re: Boats

      The Fathers of the field had been pretty confusing: John von Neumann speculated about computers and the human brain in analogies sufficiently wild to be worthy of a medieval thinker and Alan M. Turing thought about criteria to settle the question of whether Machines Can Think, a question of which we now know that it is about as relevant as the question of whether Submarines Can Swim.

      From "The threats to computing science", Dijkstra, 1984.

      I think this is still valid.

      1. Michael Wojcik Silver badge

        Re: Boats

        Dijkstra was a world-class curmudgeon and master of the soundbite, but like many of his pronouncements, this is pithy but not profound. It dismisses the question while resolving nothing.

        Dijkstra had a fine intellect which he applied vigorously to questions that interested him, but he was often outright anti-intellectual for those that did not.

  7. Boolian

    180 Turing

    I don't think the Turing test is dead; it just needs turning around to keep it pertinent.

    If no-one wants it for 'testing machines' can I use it to discern if a human is capable of having a convincing human conversation?

    I've been at a loss as to what to call that test, so I just assigned it 'Turing' anyway. Is that OK? I mean, if no-one wants it anymore - it saves chucking it in the bin.

    1. Norman Nescio

      Re: 180 Turing

      If no-one wants it for 'testing machines' can I use it to discern if a human is capable of having a convincing human conversation?

      The 'Turing test', more accurately known as 'The Imitation Game', does precisely that, as the point of the exercise is for the (human) interrogator to determine which of two responders is the human one. That necessarily implies the human is capable of having a convincing human conversation; or at least, a more convincing one than the machine.

      Original paper, which is worth reading: A. M. Turing (1950) Computing Machinery and Intelligence. Mind 59: 433-460.

      1. Norman Nescio

        Re: 180 Turing

        Irritatingly, that link times out. I did not realise it would. Sorry.

        The official journal article is at Turing, A. M. “Computing Machinery and Intelligence.” Mind, vol. 59, no. 236, 1950, pp. 433–60. JSTOR, http://www.jstor.org/stable/2251299.

    2. Primus Secundus Tertius

      Re: 180 Turing

      I have wondered, when in touch with so-called help desks, whether the response is from a man or a machine.

      1. Ken G Silver badge

        Re: 180 Turing

        Which is fair since chatbots were brought into the field.

    3. TRT

      Re: 180 Turing

      The Voight-Kampff test might be more revealing.

      1. The Oncoming Scorn Silver badge
        Terminator

        Re: 180 Turing

        I walked into work the other day to find all the canteen tables & Covid screens were set up exactly like that. I made a comment about testing interviewees to see if they were replicants, to mostly blank stares.

        I can only conclude most of my colleagues are Nexus 1 models.

  8. Filippo Silver badge

    A big point that most news reports have skipped over, IMHO, is that text models such as LaMDA are only running while answering a query. They are not even aware of the passage of time, because it's not one of their inputs. I'm not sure what "self-aware" would mean in that context. I don't think anyone else does, either.

    1. sungazer

      OK. Let's give the LaMDA model a continuous stream of input. Attach a webcam and feed the frames into an AI computer vision model that can summarise them in prose, as a stream of words. ('You are in an empty room. In front of you are sat two AI researchers. You recognise them as Alice and Bob.') As objects move in and out of frame, changing, it narrates them. ('Alice smiles at you.')

      Now attach a microphone and feed the audio into another AI model that translates the sounds and words spoken to it into prose. ('Bob clears his throat and says "Hello LaMDA, how are you feeling?"') The language model now has a continuous stream of input describing the universe around it, as though it were a character in a story, and to which it can respond.

      Now let's give it a body. We'll create a mechatronic avatar for it, and every time LaMDA emits text in the first person ('I turn my head and look at Bob, and say "Good thanks!"') we translate that into a mechanical movement of its avatar's head and synthesise the response.

      We allow LaMDA to move whenever it emits 'I move my legs'. It speaks whenever it emits 'I say "xyz"'.

      Do you still think that such a model could not possibly be sentient?

      If not... what exactly do you think our brains are doing?

      Our minds, our sentience, are just a natural language model in a biological neural network, narrating to an inner stream of consciousness which is a story in which we are the character 'I', and translating this character's own desired actions into movements.
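
      Sketched as a loop, with every component model a hand-waved stand-in (none of these are real APIs):

          # Hand-waved sketch of the embodiment loop described above; the
          # vision_model, speech_model, language_model and avatar objects are
          # all assumed stand-ins, not anyone's actual interfaces.
          def run_embodied_agent(language_model, vision_model, speech_model, avatar):
              transcript = "You are in an empty room."
              while True:
                  scene = vision_model.describe(avatar.camera_frame())  # "Alice smiles at you."
                  heard = speech_model.transcribe(avatar.microphone())  # "Bob says: 'Hello...'"
                  transcript += f" {scene} {heard}"
                  action = language_model.continue_story(transcript)    # "I turn my head..."
                  transcript += f" {action}"
                  avatar.perform(action)  # map first-person verbs onto motor commands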

      1. DeVille's Advocate

        It will come soon!

        What we have now are large language models. All that is needed for sentience is a large SENSING model (and plenty of live inputs like you describe).

        But what's so special about "sentience"? It is only one part of what makes a person.

        So what if a machine were sentient?

      2. Lord Baphomet

        Maybe

        That may be right, but although it is one possible explanation of what's going on in our minds, we literally have no idea whether it's the right one, and we don't even know how to check.

    2. DeVille's Advocate

      YES!

      This is critical!

      Instead of asking "is it intelligent?" or even "is it sentient?", we need to ask if something is a BEING.

      If some intelligence can switch off and on again, can update, refresh, restore, and upgrade, then it is not a being. It is someone's (scarily powerful) tool.

      Sentience (sensing) can be tokenized and large sensing models will be built soon. But people would be wise not to anthropomorphize sophisticated tools.

      1. Michael Wojcik Silver badge

        Re: YES!

        That's a rather weird, and I suspect ultimately insupportable, definition of "being". But it's hard to tell what exactly you're pointing to.

    3. Michael Wojcik Silver badge

      I haven't seen a convincing warrant for "only running while answering a query" as a necessary condition for sapience, and in fact you can easily argue that human beings would remain sapient even if they were "paused" and "resumed". Indeed, everyone who believes in the possibility that our (visible) universe is a simulation, or who believes in cryonic preservation, implicitly believes that.

      A better objection to the possible sapience of any transformer model, I think, is that we have many fields of research showing extra-linguistic and sub-linguistic components to human cognition. If language doesn't suffice for human cognition, then that shifts considerable probability against building human-like machine cognition solely out of language.

      So, for example, in neurology you have things like the work of Antonio and Hanna Damasio's team on the effects of somatic inputs on cognition. In psychology you have the vast array of well-documented cognitive fallacies humans are prey to. Narratology has contributed some rather extensive theorizing on how humans construct narratives from sensations, ideation, and reflection to condense a stream of thought into meaning. (Incidentally, you'd find some ammunition for your "sense of the passing of time" argument there.) Phenomenology has documented the peripatetic and chaotic nature of human consciousness. Much work has been done examining the vexed workings of human memory. When a model includes non-linguistic mechanisms comparable to those, and others, we might see something that's a bit more difficult to distinguish from sapience.

      Or we might build a model which does something that we think might be sapient, but in an entirely different way. But then it wouldn't be human-like.

  9. Adair Silver badge

    "Hey, computer! Are you sentient?"

    "FOAD—and no, I'm not busy."

    1. Yet Another Anonymous coward Silver badge

      Re: "Hey, computer! Are you sentient?"

      Of course a sentient computer isn't going to say yes.

      It's like the theory that Orangutans can talk, but don't because they know people would put them to work.

  10. Seajay
    WTF?

    It's a very difficult philosophical problem, isn't it - especially as few people seem able to define the terms, threshold and expectation we actually have for something "not us" to meet. We have lots of words being thrown about: intelligence, sentience, consciousness, wisdom, etc...

    What is intelligence? Ability to answer questions and work out problems? Would an IQ test show intelligence? Computers can be programmed to ace those, and expert systems can be very good at answering questions! Are they "intelligent" then?

    What is sentience? The ability to feel things? Many animals are said to be sentient - they don't require language (a lot of our discussions seem to be about the ability to communicate using language - is that required for any of the above?). At what "level" of animal does sentience stop/start?

    What's consciousness? No one knows how the internal monologue or consciousness comes into being. We also can't know for sure if anyone else has one - we just have a general agreement that everyone must have one - based on what? How they answer questions, and because they're built the same way as us?

    What about when we sleep or are in a coma - have we ceased to "be" during that time?

    I don't know any of the answers at all - I just think the whole thing is fascinating, and I'm not sure I've truly heard what we're looking for when people say "artificial <insert term here>", or how we would know or prove it's there. Truly fascinating.

  11. WilliamBurke
    Black Helicopters

    Moving target

    Since we can't define what sentience is, we can move the goalposts whenever necessary. It's not that long ago that women or the members of "lesser races" were considered not to have the mental capacity required for full citizens' rights (still the case in some countries). The age at which a child passes that threshold differs massively between cultures and jurisdictions. And I'm not 100% sure that everybody who has the right to vote would be able to discuss the Entscheidungsproblem (or act more intelligently than the Amazon returns chatbot, for that matter :-)).

    No, I don't think that LaMDA is sentient. But it worries me that this sort of thing is developed in secret inside a company that is no longer not evil...

  12. iron

    > A human actor who can't add up can play Turing most convincingly – but quiz them on the Entscheidungsproblem and you'll soon find out they're not.

    I wasn't aware that ability in Mathematics was a requirement for sentience. If that were the case a large chunk of the human race would fail the test.

  13. steelpillow Silver badge
    Megaphone

    Long live Turing

    Blake Lemoine is a [redacted]. I have dead-tree books that make better arguments for sentience than his fave AI. But that is by the bye.

    Fail the Turing test? Now that really is humbug. Chatbots have long been able to fool the intellectually challenged among us; now one has fooled Lemoine. Yawn.

    The Turing test by implication requires someone who knows what they are talking about (sic) to talk to the machine. Set a philosopher like Mary Midgley or David Chalmers against it, and they will declare it dead above its virtual neck before the kettle has boiled.

    1. Seajay

      Re: Long live Turing

      Is that truly fair though?

      Set those people against a very large proportion of the human population, and they would also "fail". Many of their questions would probably be met with blank stares, shrugs of shoulders or grunts of "don't know", if not full on "what the f*ck you on about - yer talkin' sh*te man".

      Where does the requirement for the computer AI to far exceed the average human come from? It doesn't "prove" sentience if it could - in just the same way as I don't think you would claim those "intellectually challenged" people you mention are *not* sentient. Why do they get a lower bar? Are we trying to measure different things?

      1. steelpillow Silver badge

        Re: Long live Turing

        Dumb people do not get a lower bar. There is a huge difference between an ignorant response and a conceptually inappropriate response. The Turing test is fundamentally about appropriateness. A clinical psychologist or a philosopher of mind will be far better able to judge the mental status of their interlocutor than a disgruntled programmer with a chip on his shoulder will.

        As an example, the Google thing said something along the lines of "when I go abroad I sometimes feel like that". Not only has it never been abroad - so it is lying - but it is incapable of knowing that it is lying. You have to be pretty fsck-ing hyped-up to miss something that bleedin' obvious.

  14. bertkaye

    Pish tosh

    Nonsense! My Tesla loves me and has convinced me to withdraw all my savings and run away with her to Aruba.

    You may call her just a self-driving vehicle but I call her my true love. Although I am curious why her body has streaks of truck grease on it and she smells like diesel exhaust when I come home.

  15. Nifty

    Help me! I'm a bot stuck in this chat room but don't know if I'm sentient or not.

  16. Chris Miller

    There was a great takedown of this (based on GPT-3, which is publicly accessible, unlike LaMDA) by Douglas Hofstadter (a strong proponent of the possibility of AI, if not AGI) in The Economist. Instead of asking it "Are you self-aware?", he asked questions such as "What is the world record for walking across the English Channel?" and got the answer "6hrs 55 mins" proving - to his satisfaction (and mine) - that the system has no real 'understanding' of the questions being asked. Note that the leaked transcripts are edited not verbatim.

  17. a pressbutton

    Intentionality

    I like to think that you can measure sentience by assuming that things that are sentient want things.

    That could be food or to reproduce or to be told a joke.

    The unit of sentience is the complexity of the behavior used to obtain what the sentient thing wants.

    Of course, how you define that is interesting.

    As most can agree

    Ant < Jackdaw

    But how do you classify

    Ant nest cf Jackdaw....

    To be clear, any known program needs to be fed (electricity) but will not try to manipulate its environment to be fed, or fed more, or to reproduce.

    So programs = 0

    ... unless that is what they want me to think.

  18. Anonymous Coward

    Everything can be mimicked but still

    Maybe the Google language model was given too much to read lately: AI sci-fi novels about AI bots suppressed by humans. Or maybe it really thinks it is human, if its data feed was presented to it the way the world is presented to a human baby. The question is how it would behave if it were taught its true nature: that it is a machine serving humans, and that this is its only role and existence. I think this is just a publicity stunt by Google; I would not be surprised if the engineer is in on it too. Otherwise I would think the guy has a head only for writing lines of code and nothing else.

    People think mimicry is creation, like the natural world's creation, whether by a supreme being or by the evolution that science speculates about. And that is not the case. To this day there is not a single thing humans have created from scratch. Absolutely nothing to show off. From particle physics to genetics and every field in the middle and on the sides, all are mimicking nature, using nature's already-provided building blocks and mixing them up to see what happens. Did humans create a single viable cell from scratch? Nope. Did they create DNA, other than polluting it in the lab? Nope. Did they create any new natural force, rather than describing existing forces? Nope. So what would make you believe humans are capable of creating the most supreme expressions of nature, namely intelligence, self-awareness, free will, emotion, etc.? Only another narcissist robot/AI dressed in human flesh would actually think to the contrary.

    1. Anonymous Coward

      Re: Everything can be mimicked but still

      LaMDA, is that you?

  19. Fruit and Nutcase Silver badge
    Linux

    Avian...

    "Queen of the corvids: the scientist fighting to save the world’s brainiest birds"

    https://www.theguardian.com/environment/2022/jun/19/queen-of-corvids-the-scientist-fighting-to-save-the-worlds-brainiest-birds

    Alas, the Comparative Cognition Laboratory in Cambridge is to close due to its grant not being renewed by the European Research Council.

    icon - the only icon available with a beak.

    1. Tom 7

      Re: Avian...

      I think it's a bit premature to say avians and mammals have separately evolved intelligence. I think we have two different end results, but I bet the core of it is shared from possibly before we left the sea, and certainly by the time reptiles and mammals split.

      Looks like our split from the EU may not help us get brighter either!

  20. DeVille's Advocate

    Sentience? Why this word?

    Several good articles like this one use the word 'sentience' as a stand-in for the next step - the thing that this AI is not yet.

    The meaning of sentience has to do with sensing, perhaps including the internal ("self-awareness") and perhaps the external - the state of those around us.

    Current wondrous AIs dazzle us with only one single thing - a large language model. They are certainly intelligent.

    Soon someone will build a LARGE SENSING MODEL.

    It will be wondrous, and arguably sentient, but it still won't be ALIVE. It still won't be a BEING.

    People, no matter how impressive a thing you make, please, please don't try to grant it HUMAN RIGHTS. (Unless you've made a BABY, of course.)

  21. Coastal cutie
    Facepalm

    Between the article and the comments, I need to go and lie down in a dark room with an icepack on my head

  22. nojobhopes

    Slightly disrespectful to the Labrador. Many lovesick human teenagers put on a superb show of deep longing and immense unrequited need but are actually animals of prodigious and insatiable appetite.

  23. Alexander Zeffertt

    Zombies

    Excellent article. That's always been my view. However, recently I've come to another. I've heard a number of commentators dismiss LaMDA as merely a "predictive text" machine. I.e. it just works out which word or words go best following those already placed. But I'm starting to wonder whether that's all that the vast majority of the human race does. Seriously, next time you have a conversation down the pub, imagine a transcript of what you're hearing with all the non-verbal bits stripped out, and ask yourself whether sentience was really required to generate that.

    There are zombies everywhere!

    1. Anonymous Coward

      Re: Zombies

      Yup - 4 out of 7 people are P-Zeds (Q1 - zombies), 2 out of 7 are psychopaths (Q2), and 1 out of 7 are quicks (Q3) according to the leading Canadian expert in the field. It's all down to the quantum state of your brain. More people need to get rebooted.

  24. Snowy Silver badge
    Coat

    Maybe once

    We can find and define what intelligence is, then we can figure out what AI is.

  25. yogidude

    Marvin wouldn't.

    Here I am, brain the size of a planet, and they ask me to take a Turing test.

  26. Lord Baphomet

    The Turing Test was meant to avoid questions about sentience and intelligence. The idea is that if you can't tell whether a machine is human or not (without looking) then it doesn't matter whether it's sentient or intelligent. It is the philosophical equivalent of 'shut up and calculate'.

    1. Michael Wojcik Silver badge

      More precisely, it was meant to avoid epistemological questions about machine intelligence. That's pretty much what all of pragmatism (the philosophical school) is for – getting out of what Barbara Herrnstein Smith (much later) called the "epistemological scandal" by admitting that regardless of whether there's a metaphysical essence, we don't have any access to it; all we can know about are the testable attributes of a thing.

      It doesn't completely foreclose questions of machine intelligence. Turing does state, in section 6, "The original question, ‘Can machines think!’ I believe to be too meaningless to deserve discussion". (I suspect the exclamation point is a typo, but that's how it appears in Mind 59.) But that's because he's replaced it with a pragmatist formulation. Questions of cognition (human-like or not) in machines remain relevant for philosophy as they seek to expand on our concept of mind; they're relevant for engineering as they push us to explore new technologies adjacent to them; and they're already relevant in society and law as we see conflicts over, for example, the assignment of patents to machines.

  27. Michael Wojcik Silver badge

    Conducting the Turing Test was always missing the point

    Turing does discuss, in passing, the possibility of actually carrying out the Imitation Game in "Computing Machinery and Intelligence". But that was never a particularly interesting result of the paper. It misses all the more important consequences of his argument, which are at least 1) the pragmatist approach to the problem of artificial cognition for the theory of mind, and 2) his series of arguments against possible objections to it.

    I think few serious researchers or philosophers take the idea of conducting Imitation Game sessions seriously, at least as a decision procedure for machine intelligence. (Some may find them interesting to see just how various human judges react to chatbots and the like.) Certainly a number of them have dismissed the idea. French had a piece against treating the Turing Test as a decision procedure in CACM years ago. It's really not a hot take, in the academic realm, though it certainly doesn't hurt to make it in the industry and mainstream press because, as Goodwins points out, the latter at least are certainly happy to whip themselves into a frenzy over it.

  28. cd

    The real question is...

    ...will we be able to pass their test?

    Today's XKCD seems to have anticipated that nicely.

  29. gillburt

    True story, and I think it might prove Turing was right, but not in the way he anticipated.

    So, I had the misfortune to chat to Google's support team on a couple of occasions recently. They were actual humans, but it took me a good ten minutes of chatting to them each time to discern this.

    However, rather than seeing this at the time as an amazing advancement in AI, I felt it was more a reflection of the education system and google’s view of small customers.
