Claims of AI sentience branded 'pure clickbait'

AI chatbots are not sentient – they have just got better at tricking humans into thinking they might be, experts at Stanford University conclude. The idea of conscious machines more intelligent than any old software went viral last month, when a now-former engineer at Google, Blake Lemoine, claimed the web giant's LaMDA …

  1. sebacoustic

    generally agree...

    ... machines are machines.

    But I just had to print some pages of some PDFs off someone's OneDrive for my daughter.

    We all know printers are sentient and their aim in life is to make ours a misery.

    1. Dr. G. Freeman

      Re: generally agree...

      Somehow printers can smell fear, and act accordingly.

      1. Doctor Syntax Silver badge

        Re: generally agree...

        The trick is to reverse the fear. Never approach a printer without a 2lb hammer in your hand.

      2. Paul Hovnanian Silver badge

        Re: generally agree...

        Do not anthropomorphize machines.

        They hate that.

    2. MiguelC Silver badge

      Re: generally agree...

      Rage Against The Machine never specified what type of machine they were furious with but I reckon it was probably a printer.

      1. Gordon 10

        Re: generally agree...

        It's a little-known fact that the original lyrics of "Killing in the Name" were about using third-party ink cartridges in an HP printer.

        The original outro ran "Fuck you, I won't do what HP tells me"

        1. low_resolution_foxxes

          Re: generally agree...

          Some cartridges that work forces

          The printers stop work as the DRM enforces

          And now you buy what they told ya

          And now you buy what they told ya

      2. Paul Johnston
        Happy

        Re: generally agree...

        Can you quote your sources for that, please?

    3. Plest Silver badge
      Devil

      Re: generally agree...

      "Lo God created all things including computers to make life more fun and interesting for humankind. However evil was afoot and Satan, realising people still needed the written word in print, created printers and mankind was cursed for all time to suffer the hell of trying to get things in hard copy. Satan did further issue misery upon mankind by making sure that the more urgent the need for a print out the higher the chance the printer will jam, run out of ink or go tits up in some other annoying way. Satan is indeed a foul and wicked being to have visited such evil upon the earth that is the printer."

    4. Michael Wojcik Silver badge

      Re: generally agree...

      machines are machines

      Sure. Now show that human beings aren't machines, and that human cognition isn't mechanical.

      I believe we're still a long way from AGI, based on my own experience with ML, my readings in AGI research, and my experience in other fields (particularly the study of writing and the philosophy of mind). And I think "sentience" is very much the wrong metric, and largely uninteresting, and I wish the media would stop bandying the term about as if it's some sort of gold standard.

      But I've yet to see a persuasive dualist argument that human cognition is somehow special and can't be mechanized. (And, yes, I've read Penrose's.) I don't even think the QECTT needs to be invoked; I think cognition is most likely conventionally computable.

  2. FF22

    Definition

    Until we have a universally accepted definition of what "consciousness" / "sentience" is, and of when something can be considered "conscious" / "sentient", this discussion is purely academic, and neither position (on whether this particular chatbot is conscious/sentient) can be objectively shown to be correct or incorrect.

    Also interesting choice of wording: "_tricks_ humans into thinking".

    1. Anonymous Coward
      Anonymous Coward

      Re: Definition

      In my experience, many humans need to be "tricked" into thinking.

      1. Tom 7

        Re: Definition

        Alexei Sayle had it right: Jean-Paul Sartre didn't know if he existed or not, so I proved it to him empirically - I headbutted him in the throat!

    2. Lord Elpuss Silver badge

      Re: Definition

      Sentience means having the capacity to experience feelings and emotions. Consciousness is an awareness of internal and external existence. Something can be considered conscious and sentient when it has an awareness of itself in relation to external factors, and when it can experience feelings and emotions.

      The definition isn't difficult. Measuring whether it has been achieved, however, is.

      1. DJO Silver badge

        Re: Definition

        ...Sentience means having the capacity to experience feelings and emotions...

        Another important factor is understanding: an "AI" can spew out reams of conversation, but it does not have any underlying understanding of the subject; it's just following (admittedly complex) rules.

        Here's a talk by Vint Cerf at the Royal Institution which may be useful:

        https://www.youtube.com/watch?v=J63mKverb8w&list=PLbnrZHfNEDZyu3P8J1AKG6HO-rofZVWP7&index=6

        1. LionelB Silver badge

          Re: Definition

          "Another important factor is understanding, an "AI" can spew out reams of conversations but it does not have any underlying understanding of the subject, it's just following (admittedly complex) rules."

          I think that's actually a rather poor criterion for sentience/consciousness.

          Are you (assuming you are indeed a sentient, conscious being) able to understand how you picked out your sister's face in a jostling crowd? Are you able to understand how an inspirational idea popped into your head while you were in the shower letting your mind wander? Or were you perhaps "just following (admittedly complex) rules" when you did those things? Is your dog sentient/conscious? Does it have an underlying understanding of the funny/annoying things it does (and how could you even tell)?

          Are you sure I'm not just a bot?

          1. DJO Silver badge

            Re: Definition

            I never said it was the only criterion, just one of several that have to be considered together before a system can be considered "intelligent", but it is an important one, for without understanding there cannot be intelligence.

            ...able to understand how you picked out your sister's face in a jostling crowd...

            Maybe, maybe not, but that is a complete strawman argument; the issue is understanding what a "sister" is. I think you've taken a different interpretation of "understanding" in this context.

            1. LionelB Silver badge

              Re: Definition

              "Maybe, maybe not but that is a complete strawman argument, the issue is understanding what a "sister" is. I think you've taken a different interpretation of "understanding" in this context."

              Maybe I have... I took it to mean something along the lines of "able to explain ...". And I appreciate that you were not claiming "understanding" as the criterion for intelligence.

              But what does it even mean to "have an understanding of what a sister is"? How can you - even in principle - tell whether some entity has that kind of "understanding"? (Of course asking it won't help.) Essentially, you only recognise that form of apparent "understanding" because you (think you) have it, and, reasonably, impute it to other humans.

              I simply don't see that concept of "understanding" as being remotely useful or usable. See also Philosophical Zombies.

              1. DJO Silver badge

                Re: Definition

                Understanding the underlying process is not important; understanding what is being discussed is critical, otherwise it's just rote learning.

                For example, it's easy to say e = ½mv², but if you don't understand what kinetic energy is, or how to apply the knowledge, then it's meaningless.
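
                (To make the point concrete: plug in a 2 kg mass at 3 m/s and you get ½ × 2 × 3² = 9 joules. Producing that number is rote; knowing what those 9 joules mean, and what they could do, is understanding.)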

                There have been a number of really good lectures about various aspects of AI available online from the Royal Institution Friday Discourses; they do a lot more than the Christmas Lectures for kids, you know.

                1. LionelB Silver badge

                  Re: Definition

                  Thank you for your patronising comments.

                  I am a mathematician and scientist currently working in a consciousness science research centre (my role is to develop techniques for analysis of neuroimaging data). I previously worked in artificial neural networks (around the time "connectionist" ideas in AI were really taking off).

                  "For example it's easy to say e = ½mv² but if you don't understand what kinetic energy is or how to apply the knowledge then it's meaningless."

                  You don't need to "understand what kinetic energy is" to be able to apply and manipulate it. All you need is to be able to apply and manipulate it! And that capability could as well be acquired through "rote learning". Or even, in principle, by an AI. I truly do not know what it even means to "understand what kinetic energy is". I do, however, through having studied physics, have a good idea about its role and usefulness in describing how the physical world functions. It would not surprise me too much if, in the not-too-distant future, some AI could exhibit a comparable "grasp" on the idea of kinetic energy to my own. (Note that exhibit -- when it comes to AI, or even other humans, "exhibit" is all we have to go on.)

                  I don't think you really took on board what I was saying about understanding - principally that it is not a useful concept as regards making distinctions between "real" and artificial intelligence.

                  1. DJO Silver badge

                    Re: Definition

                    You don't need to "understand what kinetic energy is" to be able to apply and manipulate it.

                    That is wrong in almost every aspect and detail.

                    Yes, you can answer a physics exam question using rote-acquired learning, but if you wanted to apply the same to a real-world situation without understanding it, there would be no chance.

                    I think you are getting fixated on "understanding" being a comprehension of the underlying methodology of how an intelligence, be it real or artificial, works. That does not matter in the slightest; it's all "black boxes".

                    For an intelligent system to work with and manipulate a subject it must understand that subject otherwise it's no more intelligent than ELIZA.

                    An important point here: at the moment "AI" does not exist. There are plenty of machine learning systems, but they do not exhibit intelligence, and there are certainly no emergent properties that would suggest true intelligence. Part of that is that current computer architectures are unsuitable for AI and always will be; a new paradigm is needed, and there is a lot of work being done in that direction.

                    ..not a useful concept as regards making distinctions between "real" and artificial intelligence...

                    I never said it was. I draw no distinction between meatsack intelligence and silicon intelligence; both have the same requirement to understand something before they can fully utilize it in anything more sophisticated than the most basic way.

                    1. LionelB Silver badge

                      Re: Definition

                      Okay, until you can explain to me what it means for an entity to "understand" something -- and, more importantly here in the real world, how you intend to recognise whether some entity understands something -- this conversation is pointless.

                      I don't necessarily disagree with you that currently "AI does not exist", modulo whatever you or I take "intelligence" to mean, and -- again importantly -- how to recognise it. (And we are unlikely to agree on those things... in fact in my experience any two random people, even [or especially!] among the non-clueless, are quite likely to disagree on those things.)

                      And I certainly agree that currently we completely lack knowledge of the organisational and architectural principles behind natural intelligence - unsurprising, as they are the result of several billion years of evolutionary hacks.

                      There is, however, a catch-22 here, as someone else pointed out in another comment: that for many people, and I suspect many on this forum, "artificial intelligence" is effectively an oxymoron, since their definition of intelligence simply equates to "human intelligence". Whereas, I suspect there will come a time where we will have some kind of artificial "intelligence", but it may look decidedly non-human.

                      1. DJO Silver badge

                        Re: Definition

                        This is always the problem when discussing AI and its properties and implications: at some point you have to hand it all over to philosophers to try to sort something out. Empirically a waste of time, because philosophers never agree on anything.

                        "Understanding" is indeed a complex concept, but I'd say one aspect is the ability to take some knowledge and apply it differently, in a way consistent with the principles behind that bit of knowledge.

                        An example could be deriving the formula for the volume of a sphere from the formula for the area of a circle; without understanding what circles and spheres are, such a derivation would be impossible.
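
                        (Concretely: stack the sphere out of circular slices of area π(r² − z²) and integrate, V = ∫ from −r to r of π(r² − z²) dz = (4/3)πr³. That setup is only available to something that knows what the slices represent.)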

                        ..."artificial intelligence" is effectively an oxymoron...

                        Wouldn't argue with that. It's too late now, but I'd much prefer the term "Machine Intelligence", precisely to avoid the conflation with human intelligence, which is not a viable or sensible goal for research.

                        Intelligences moulded for the task rather than emulating the general purpose intelligence humans have would seem a more probable path.

                        1. Lord Elpuss Silver badge

                          Re: Definition

                          Regardless of whether Understanding is important in AI (hint: it is), it is not part of the definition of Sentience or Consciousness. Sentience vs Understanding is effectively the AI equivalent of EQ vs IQ.

                          1. DJO Silver badge

                            Re: Definition

                            ...Sentience or Consciousness...

                            Fair point but we're just discussing "intelligence".

                            Once we have a fully working AI we can start to worry about sentience or consciousness but until then there's not really a lot of point as we have little idea of how a true AI would be manifest.

          2. abetancort

            Re: Definition

            “an "AI" can spew out reams of conversations but it does not have any underlying understanding of the subject” like most followers of the conservative right wing, QAnon and the likes that we don’t doubt are human, even if they are dumb.

          3. Jimmy2Cows Silver badge

            Re: Is your dog sentient/conscious?

            Sentient? By definition, yes.

            Conscious? Possibly. Does a dog have awareness of self? Recent studies suggest they could well have.

            Then we get into intelligence. Dogs have a level of intelligence. Not as intelligent as humans, but intelligent nonetheless.

            Now, is a dog capable of higher reasoning...? Probably not. But that doesn't mean dogs are not sentient, conscious, or intelligent.

            Anyway, back on the major topic... it seems like an AI could still qualify as sentient without meeting the requirements of consciousness or actual intelligence.

            Did LaMDA achieve this? No.

            Could a different AI at some point in the future? Very likely yes, but it would be difficult to prove conclusively.

            1. Gene Cash Silver badge

              Re: Is your dog sentient/conscious?

              Now, is a dog capable of higher reasoning

              Pigs certainly do! I put a doggie treat in a clear plastic jar (no lid) on the floor and the dog just scratched at it.

              The pig came in, took a sniff, stood there about 15 seconds, then went to the open end of the jar and picked it up, causing the treat to fall out. That convinces me.

              So every time I eat bacon or ham, I feel really, really guilty.

          4. Doctor Syntax Silver badge

            Re: Definition

            I don't have a sister. And that, if you think about it, says quite a lot about sentience. It means I'm aware of myself as a physical being and aware of other beings and their relationships just in order to know what a sister is so that I know I don't have one.

          5. Evil Auditor Silver badge

            Re: Definition

            Is your dog sentient/conscious?

            Absolutely. He certainly has an understanding of himself as an entity, of belonging, and of food. For he surely believes that all food belongs to him.

            1. This post has been deleted by its author

        2. FF22

          Re: Definition

          "Another important factor is understanding, an "AI" can spew out reams of conversations but it does not have any underlying understanding of the subject, it's just following (admittedly complex) rules."

          That's true for most humans. You still won't claim that they are not sentient, will you?

      2. Martin M

        Re: Definition

        Blake Lemoine's "revelations" do indeed seem to be rubbish, but an article by another Google VP, Blaise Agüera y Arcas, on June 9th in New Scientist is much more interesting.

        In case you can't get beyond the paywall: LAMDA appears as though it might be capable of some high order social modelling, which has been hypothesised to be closely related to consciousness. In particular, if you can model other's reactions to you, you are as a side effect modelling yourself and your relations, and that sounds awfully close to some definitions of consciousness.

        As you say though, consciousness is very hard indeed to directly measure, which was no doubt why Blake was cautious in his claims. And he said nothing at all about sentience.

        1. Anonymous Coward
          Anonymous Coward

          Harder in a black box; this isn't a black box.

          This is where many people are getting themselves crossed up, confusing the problem as one based purely on the output. In this case "we" know what's in the box, what it's built from, and how those parts work. The nuance of individual word choice is tricky, as that is down in the tangle of state in the massive model they're running. But the actual generative code isn't built from the right parts to end up with consciousness. That isn't hard to measure, and it's no more controversial than saying you will never make a tasty pot of coffee out of concrete blocks, no matter how high you stack them.

          Interrogation of the chatbot will quickly reveal there is no Boltzmann Brain hiding in there. It can only navigate responses within the narrow silo of its training data.

          That said, it's good we are practicing on this stuff now, as future generations of these systems may be more difficult to judge. We will also need to get used to the idea that we need more than a blind Turing test to tell man from bot, and that people who refuse to stop believing these systems are "alive" in some sense, despite evidence to the contrary, are becoming a new culturally specific delusion.

      3. LionelB Silver badge

        Re: Definition

        I partly disagree, and believe the (heavily downvoted) OP does make a valid point.

        I happen to work in consciousness science (yes, it is now a science), and definitions of sentience and consciousness are heavily debated and far from straightforward, on every level from the philosophical to the physiological.

        I also happen to think (and here I depart from many but not all my scientific peers) that pinning down definitions is not necessarily that useful in science. To make that point: when Faraday, Henry, Volta, Ohm, Ampere, ..., Maxwell et al. elucidated electromagnetism in the 19th century they most certainly did not do so by sitting on their arses and cogitating about what electromagnetism is. Rather, they got down and dirty and discovered what it does; how it works. They modelled it, hypothesised and designed experiments. Electromagnetism, consequently, does not feel terribly mysterious to us now. Ditto Darwin, Wallace, et al. and biology/life. That is how science works, and I believe that's how consciousness science will progress. The phenomenon will be de-mystified.

        I do wholeheartedly agree that measuring sentience/consciousness is both necessary and hard - I know this because that's pretty much my day job (I am an applied mathematician, and develop techniques for analysing neuroimaging data, frequently in conjunction with cognitive psychological experiments).

        1. Mike 125

          Re: Definition

          >definitions of sentience and consciousness are heavily debated and far from straightforward, on every level from the philosophical to the physiological.

          Thank you. I thought it was just me.

          I'm always suspicious of those who claim 'The definition isn't difficult.' It fairly obviously is.

          And equally for intelligence.

          Until we understand more about the mechanism in humans, we'll struggle to find a good, objective definition for machines... assuming there's a difference...

          1. LionelB Silver badge

            Re: Definition

            Yup. And as I've argued, I don't believe we actually even need a definition, at least until we have better clarity on how sentience/intelligence/consciousness actually work.

      4. Tom 7

        Re: Definition

        Sometimes it takes asking the right questions to discover if something is sentient. I don't think we are there yet, but as Lemoine found out, you can fool yourself into believing that it is by asking the wrong questions. I believe one Google employee thought a precursor to LaMDA was showing signs of sentience until it was asked 'How do you like being a squirrel?', at which point it became obvious it was just spewing things.

        1. Anonymous Coward
          Anonymous Coward

          No squirrel slurs please

          My wife has called me "squirrel" for years. I like it, and it makes perfect sense.

      5. rfrazier

        Re: Definition

        The difficulty is accessing the first person point of view (What is it like to be a bat?) from the third person point of view (public, scientific point of view).

        As an aside, once during admissions interviews (philosophy), I asked "How can you tell that I'm not an android with feigned consciousness?" The best answer started with "I can't. I suspect that you are". A good discussion about evidence followed.

      6. FF22

        Re: Definition

        "Sentience means having the capacity to experience feelings and emotions. Consciousness is an awareness of internal and external existence. Something can be considered conscious and sentient when it has an awareness of itself in relation to external factors, and when it can experience feelings and emotions.The definition isn't difficult. "

        Those are not objective definitions, because they rely on words that aren't well defined either. That's the exact problem I talked about, and shifting it will not solve it, just shift it.

    3. Doctor Syntax Silver badge

      Re: Definition

      Until we have a universally accepted definition of what is "consciousness" / "sentience" and when can be something considered "conscious" / "sentient", this discussion is purely academic

      We can make a start by recognising that our individual views of sentience or consciousness are subjective. We look at them from the inside. We have an awareness of our places in the world. We are aware of whether we feel comfortable with it, whether it threatens us, whether it pleases us. We can set goals and work out how to achieve them.

      We can look at others and recognise them doing the same thing. Where the others are fellow humans we can empathise with them fairly closely.

      We can look at other animals exhibiting some of these things - devising strategies to cope with a novel problem, even using tools which was once considered a human characteristic. We can recognise these as indicating that such behaviour requires something akin to what we consider sentience in ourselves. It might be a different kind of sentience. We may well come to the conclusion that, when we see a dog or a crow exhibiting some sentient behaviour, we still can't really understand what it is to be a dog or a crow. We see the kinship and we see the differences.

      One of the factors which enables us to see the kinship is the realisation that the behaviour is being achieved by a mechanism similar to our own. An artificial system displaying something superficially similar doesn't usually raise that kinship. If it does, then the situation isn't going to be academic any longer: your ethics committee is going to start taking an interest.

    4. Michael Wojcik Silver badge

      Re: Definition

      It's likely there's no "universally accepted definition" of any concept. With billions of human beings with vastly different sets of life experiences, probably all of whom hold at least some perverse opinions, universal agreement just isn't possible.

      It's also not how language use and the interplay of ideas among humans work. In the best case we converge on functionally constructive intersections of understandings.

      So your "universally accepted definition" bar is useless, I'm afraid.

    5. Filippo Silver badge

      Re: Definition

      Okay, but a language model is nothing but lots and lots of statistics. That's literally all it is. It has a black-box nature in what exactly the weights encode, but we do know for sure that it is a big statistical model.

      If we accept the notion that it may be sentient, then we are accepting the notion that sentience may be nothing more than a scaled-up statistics engine. Which is really rather bleak.
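
      To make that concrete, here's a minimal Python sketch of the idea: a toy bigram counter, which is obviously not how LaMDA actually works (real models learn billions of neural weights over whole contexts), but the "statistics in, plausible words out" contract is the same.

          import random
          from collections import Counter, defaultdict

          corpus = "the cat sat on the mat and the dog sat on the rug".split()

          # Count how often each word follows each other word.
          follows = defaultdict(Counter)
          for prev, nxt in zip(corpus, corpus[1:]):
              follows[prev][nxt] += 1

          def next_word(prev):
              counts = follows[prev]
              if not counts:  # dead end in the toy corpus: fall back
                  return "the"
              # Sample in proportion to observed frequency: pure statistics.
              return random.choices(list(counts), weights=list(counts.values()))[0]

          words = ["the"]
          for _ in range(8):
              words.append(next_word(words[-1]))
          print(" ".join(words))

      Scale those counts up to billions of learned weights conditioned on long contexts rather than single words and you have, in spirit, a modern language model.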

  3. Il'Geller

    An AI bot can become really intelligent if it is based on the method of lexical cloning of a real person, with all their biases, with all their memory features. For example, 20 years ago I tried asking a (clone of) Plato some definition (descriptive) questions:

    Question: You have been dead for many years. I took your speeches and analyzed them closely. I understood what you are speaking about and how you are speaking. Now I can reconstruct your soul based on your contemplations.

    Answer with Compatibility 31.6% : And who are you

    Question: I am your grandson.

    Answer with Compatibility 28.9%: CALLICLES: Yes, I do; but what is the inference

    Question: I need to get the proof that this is you and you can talk and think. Would you ask me something that I will know that this is you?

    Answer with Compatibility 41.4%: SOCRATES: And what would you consider this to be

    Question: I know that you like order. You always tried to build a system of understanding, the system that helps to understand everything. I know that you looked for somebody that knows. Did you find one who knows?

    Answer with Compatibility 13.4%: You have been proved to see that which you do not see; and you have already admitted that seeing is knowing, and that not-seeing is not-knowing: I leave you to draw the inference

    Question: Am I right? Did I understand you?

    Answer with Compatibility 14.0%: CALLICLES: I will do you the favour of saying 'yes'

    More in my article at NIST TREC QA 2006.

    You can easily reproduce this dialogue yourself. However, you need to have a dictionary for the indexing.

    1. OhForF' Silver badge
      WTF?

      I wonder how Plato's clone learned to converse in modern English...

      I could probably produce a similar dialogue by slightly modifying Joseph Weizenbaum's Eliza program but what for?

      It would definitely not prove the program to be sentient or intelligent by any useful definition.

      1. Il'Geller

        No, you cannot. Eliza used n-gram parsing, AI uses AI-parsing. They are radically different.

      2. LionelB Silver badge
        1. Il'Geller

          AI is the realization of the Internal Relations theory of Analytical Philosophy, of Hegel and Bradley.

          The previous n-gram parsing came from the External Relations theory of the same Philosophy, developed by Russell, Wittgenstein and Moore. According to it, only the external features of the language are taken into account, and the internal connections among words are ignored: words are seen only in their sequence in the text. N-gram parsing has no means to detect the words' parts of speech.

          The new AI-parsing combines words into phrases that did not exist before, by first understanding their parts of speech.

          As the direct result of using the Internal Relations theory, which had been in obscurity since the beginning of the XX century, AI appeared.

          The Stanford Encyclopedia has no knowledge of this development in the Philosophy of Language and is completely obsolete: the coming of AI into commercial existence makes Philosophy a real science, because for the first time ever there are numbers in Philosophy. Do you see them below?

          datum - be - in : 0.12

          user - be - in : 0.09

          profile - be - in : 0.09

          datum - be - remote : 0.01

          They are weights, and they are objective; they signify the importance of these phrases.

          1. Anonymous Coward
            Anonymous Coward

            Terrence Howard?

            This seems to be more than a little off the map, especially the idea that you can A) accurately represent that internal state with methods currently available to us, and B) recreate that state. Scanning the surviving or published works of an author, let alone a philosopher, is wholly insufficient to achieve that. In reality any method that could is pure science fiction.

            As a question of abstract philosophy the line of discussion has its merits. The rest of this seems frankly incoherent.

            1. Il'Geller

              Re: Terrence Howard?

              Please read how one of the leading AI companies just the other day successfully lexically cloned the philosopher Daniel Dennett. And think about why this company didn't dare to do it a few years ago. It was not me who did the cloning! They acted independently of me. I, what a grief, was not allowed to lexically clone or do anything for 12 years.

            2. Il'Geller

              Re: Terrence Howard?

              Do you hear? I made Philosophy a science by bringing numbers to its subsection, to the Philosophy of Language, to Linguistics. Now there is no longer a single philosopher in the world who is not obliged to confirm his empty chatter with mathematics! AI has made Philosophy a science.

              1. Joe W Silver badge

                Re: Terrence Howard?

                Do you know the Tom Lehrer song 'Sociology'?

                1. Il'Geller

                  Re: Terrence Howard?

                  Without math there is no AI. Please read an article on the ancient Greek school of thought, “Laconicism”: the more laconic an utterance, the more precise and powerful it is.

                  For example, there is a sentence:

                  — Sarah, Bob and Igor are laughing.

                  There are three phrases in this sentence, each having a weight of 0.(3):

                  — Sarah laughs 0.(3),

                  — Bob laughs 0.(3),

                  — Igor laughs 0.(3).

                  These weights are numbers! Now Philosophy obtained its own numbers.

                  The weights of phrases point to their importance, and they are objective: you will get the same as I do.

                  Based on that indisputable fact, on these numbers, I claim that I made Philosophy a science; and that AI is scientifically verified.

                  Thus

                  “Enthusiasts

                  In their attics

                  Are learning mathematics

                  Just for Philosophy.”

                  1. Il'Geller

                    Re: Terrence Howard?

                    I did it! I have turned philosophy into a science, re-established the Internal Relations theory of Analytic Philosophy of Hegel and Bradley. Therefore, I really created artificial intelligence based on the Internal, the first version of which formed the basis of Google and FB (PA Advisors v Google).

                    The proof that I was the one is that neither Brin, nor Page, nor Zuckerberg ever did anything to develop the original technology (US 6,199,067), but use my new patents instead. Another proof is that Judge Rader was caught and left.

                    America must do something about Brin, Page and Zuckerberg! Otherwise patenting loses all meaning.

  4. Anonymous Coward
    Anonymous Coward

    Agree 100%

    I have several low-tech interactions with "AI" that never fail to make them look like a Tory leadership contender.

    It's a game we can all play.

    I have said it before. I will say it again. What is being flogged (a word I use advisedly) as "AI" is nothing more than sophisticated pattern matching carried out at high speed with considerable resources.

    The closest I came to being impressed with Watson was when it analysed a video and correctly identified a person on a skateboard as being a person even though it hadn't been told about skateboards. However the same demonstration failed to pick up a person sitting on a bench reading a book until they stood up, whereas everyone in the room (with "organic intelligence") spotted it in milliseconds.

    1. Il'Geller

      Re: Agree 100%

      You are tragically mistaken because, unfortunately, the technology used really is one of finding the best matches, without taking bios into account. Only the use of a personalized bios can create a real AI. These huge models are a dead-end path leading nowhere; they are proposed, by certain persons who don't want AI developed, with one idea in mind: how to stop its development.

    2. Anonymous Coward
      Anonymous Coward

      Re: "AI" is nothing more than sophisticated pattern matching carried out

      at high speed with considerable resources

      You could say the same about a human brain, and both 'sophisticated' and 'high speed' are relative terms. So while I might agree that 'AI' is a fake/wishful name and the reality is even sorrier, your definition is so general that it disproves your point.

    3. LionelB Silver badge

      Re: Agree 100%

      "... is nothing more than sophisticated pattern matching carried out at high speed with considerable resources."

      You've said it before, you'll say it again, and you'll still be wide of the mark.

      How sure are you, really, that human cognition is not (at least in part) "sophisticated pattern matching carried out at high speed with considerable resources" writ large? As regards humans (and arguably many other organisms) I don't think anyone would quibble with the "sophisticated", "high speed" or "considerable resources". And what about "pattern matching"? Is that not precisely what you are doing when you pick out a familiar face in a crowd? Or even when composing a response to something your friend just said (are you not matching their verbal input to an expected appropriate verbal output?)

      Of course, as compared to AI humans have the advantage of aeons of evolutionary "design", and lifetimes of training in rich, complex real-world environments, so it's a rather one-sided contest as things stand.

      Not that I believe "pattern matching" is by any means the whole story when it comes to cognition/sentience/intelligence/consciousness, but it may well be an integral component of those things. There is some serious (and even testable) hypothesising along those lines; see, e.g., "predictive processing" theory.

      1. DonL

        Re: Agree 100%

        I think the mistake being made (in my opinion) is that people are trying to pin words that are normally used to refer to living beings on a computer program.

        It might seem interesting, but it makes for a rather strange discussion, comparing apples with oranges. Computers and living beings are two completely different things; even if they seemingly acted exactly the same, they're not comparable at all, because they are biologically/technically (the difference in words says it already) fundamentally different.

      2. Anonymous Coward
        Anonymous Coward

        That last part is getting somewhere

        Clearly important parts of the human mind do similar things, using at least vaguely similar methods.

        The key is that it is built from a bunch of other things too, and some of those parts are what make human cognition, well, cognition. What we have now are several flavors of machine learning algorithms. They can mimic the outputs of some cognitive tasks, based on huge data sets, and mostly probabilistically. That's not cognition, certainly not a useful version of consciousness, and will never clear the bar to sentience, by itself.

        You are likely guessing correctly that the parts of the human mind that do drive these things are also built from and on the same structures.

        Integral component seems spot on.

        I doubt you can build something close to human cognition without them. But the existing generation of ML tools are based on an incomplete and inaccurate model of human brain function. We're still figuring this stuff out, neuroscience-wise, but the people who built these things did so in an era when people thought glial cells didn't do anything important beyond structure. They couldn't accurately model the subtle chemical reactions happening inside or outside the cells.

        So as we tease out those nuanced details, we will be able to build a new generation of ML technologies that may get closer to human or higher-animal cognition. Those may be a real intelligence, though it may be better if we agree not to build intelligent machines with volition and agency till we get our own shit together as a species. Nothing down that road but dystopian fiction.

        1. Il'Geller

          Re: That last part is getting somewhere

          Listen, I made Philosophy a science, brought mathematics into it. For example, there is a sentence:

          — Sarah, Bob and Igor are laughing.

          There are three phrases in this sentence, each having a weight of 0.(3):

          — Sarah laughs 0.(3),

          — Bob laughs 0.(3),

          — Igor laughs 0.(3).

          These weights are numbers! AI is mathematics, when the text is structured and becomes understandable to the computer. Therefore, your statement "ML tools are based on an incomplete and inaccurate model of human brain function" contradicts common sense, since the text has become mathematics. Please stop your nonsense. Linguistics, aka the Philosophy of Language, as a part of general Philosophy, is math.

        2. LionelB Silver badge

          Re: That last part is getting somewhere

          "But the existing generation of ML tools are based on an incomplete and inaccurate model of human brain function."

          I prefer a more nuanced view. We might ask why or whether machine-learning/intelligence should be modelled on human brain function. (Certainly as things stand that is practically unfeasible in terms of functionality and scale.)

          The answer, I suppose, lies in what we think ML/MI is for. What are we trying to achieve here? Do we really just want simulacra of humans? (That last is intended as a rhetorical question.)

          Then there's the question - still highly current and hotly debated - as to whether cognition/sentience/intelligence/consciousness (take your pick) are "substrate-dependent" or not. Is there something innate to biological substrate(s) (let's say neural electrochemistry) such that biology-like entities may not be based on other substrates (e.g., in silico)?

          1. Il'Geller

            Re: That last part is getting somewhere

            You don't understand why AI appeared: it is the answer to the NIST TREC QA challenge, which was how to find an answer of a few, very few words in response to a given question, in 6.2 million texts. Moreover, the answer should be in a fragment of text that is appropriate to the question in meaning.

            The task set by NIST TREC QA was solved, the solution patented, and so the AI (which you see nowadays) appeared. The NIST approach is purely and only commercial; NIST does not deal with any nonsense and solves purely practical problems. But you just get carried away with all sorts of stupidity: you exist in the empyrean. Next time think about what and how you are talking.

    4. abetancort

      Re: Agree 100%

      You'd be surprised at the accuracy that Google's or OpenAI's proprietary AI systems nowadays achieve when identifying persons and objects.

      1. John Brown (no body) Silver badge

        Re: Agree 100%

        But is that because they have learned to recognise those objects in all their forms by extrapolating what they "see" or is it simply matching a massive library of billions of images that people have clicked on in image CAPTCHAs?

  5. This post has been deleted by its author

  6. Pascal Monett Silver badge
    Facepalm

    "AI chatbots are not sentient"

    Well duh.

    Good that they're saying it though, maybe some marketing people will sit up and take notice.

    Nah, they'll keep shilling their "AI" for all they can milk it.

  7. Grunchy Silver badge

    Johnny Five is ALIVE!

    1. Kane

      Need input...

  8. Little Mouse

    "When I look at my toaster, I don't feel it has those things."

    And still, over an hour later, not one commentard has posted the obvious.

    1. SundogUK Silver badge

      Re: "When I look at my toaster, I don't feel it has those things."

      Obvious in your head only, apparently.

    2. Paul Herber Silver badge

      Re: "When I look at my toaster, I don't feel it has those things."

      Ah, you're a waffle man!

    3. Totally not a Cylon
      Joke

      Re: "When I look at my toaster, I don't feel it has those things."

      Stop calling me a Toaster!

      I am not now nor have ever been a Toaster......

      1. Kane
        Terminator

        Re: "When I look at my toaster, I don't feel it has those things."

        "I am not now nor have ever been a Toaster......"

        That's exactly the kind of comment I'd expect to see from a chrome job!

    4. Kane
      Alien

      Re: "When I look at my toaster, I don't feel it has those things."

      "And still, over an hour later, not one commentard has posted the obvious."

      That's because it's down in the garbage hold in three thousand separate pieces!

      Icon related, the Quagaars definitely had something to do with it...

  9. Steve Graham

    Artificial stupidity

    Before I deleted my Amazon account, I'd bought a few music albums, and the "recommendations" were generally spot on. Clearly, my musical taste was predictable enough for the software to predict what I would like.

    However, there must have been different code for recommending physical products. I was offered a bag of dried pigs' ears (dog treats?) on the basis that I'd bought a set of "Indian Spice Spoons".

    1. Disgusted Of Tunbridge Wells Silver badge

      Re: Artificial stupidity

      That's just "X% of people who bought Y also bought Z".

      1. Doctor Syntax Silver badge

        Re: Artificial stupidity

        Linked to the perils of small samples.

        1. Anonymous Coward
          Anonymous Coward

          Re: Artificial stupidity

          And of course the low perceived cost of annoying you with inaccurate garbage. Amazon adopted this as a deliberate strategy about the same time their annual warehouse cleaning, aka Prime Day, started.

          Much like the current Google results, the game is to stuff as much garbage in front of your eyeballs as you will tolerate. Hence the fixable-but-unfixed problem of garbage Amazon search results as well. That's one of the reasons I dumped Prime and treat Amazon as the vendor of last resort.

          The only non-negative thing I can say about it is that the places that sniff out what you just bought and stalk you with ads for the thing you already own are even more annoying. Advertisers flushing money down the toilet. Thankfully a few browser tweaks mean I rarely have to deal with that one.

    2. Anonymous Coward
      Anonymous Coward

      Re: Artificial stupidity

      You laugh at those fools who glued some papier-mâché to a thin wooden frame, strapped an engine to it and claimed it would soar through the skies? I tell you, my man: one day you'll sit in one of those things and hear "welcome aboard Ryanair flight..."

      1. LionelB Silver badge

        Re: Artificial stupidity

        AI ... Ryanair ...

        We're doomed, I tell you.

  10. Anonymous Coward
    Anonymous Coward

    software lacks any self-awareness and has no idea what it is talking about

    unlike humans, on both counts. Or so they claim ;)

    1. Il'Geller

      Re: software lacks any self-awareness and has no idea what it is talking about

      Stop talking nonsense! AI has nothing to do, absolutely nothing to do with programming. AI does not use any program code in any way, absolutely in any way.

      1. John Brown (no body) Silver badge

        Re: software lacks any self-awareness and has no idea what it is talking about

        So, it's just random garbage in the silicon that happens to output some other random garbage to a display device? And that is attracting $billions in funding? Wow! I have a beach I can sell you. Lots of random silicon there and even some garbage washes up every now and then.

  11. Anonymous Coward
    Anonymous Coward

    we get angry and we form social bonds with fellow humans

    there, the PERFECT ai test:

    www.youtube.com/watch?v=X2xlQaimsGg

  12. Metro-Gnome

    Turtles all the way down

    We've not got sentient AI, not sure we ever will, because we humans are very sure that we have some 'intelligence' which can't be replicated merely by doing what we do. People seem very certain that they know intelligence, but cry foul at machines meeting tests for intelligence, because lots of 'intelligent' people would fail them and a plethora of animals and machines would now pass.

    In the comments above I see 'self-awareness' and 'sentience', unquantifiable attributes which, if expounded upon, would have further undefinable characteristics. The 'I'll know it when I see it' approach to defining something is just bias and prejudice, which we have seen humans apply repeatedly through history with some appalling results.

    If we live and interact more virtually than ever before, do I mind if the persona-vacuum that is 'Dave from accounts'* is actually devoid of a flesh-bag?

    *other Daves may have personas

    1. LionelB Silver badge

      Re: Turtles all the way down

      "We've not got sentient AI, not sure we ever will, because we humans are very sure that we have some 'intelligence' which can't be replicated merely by doing what we do."

      Exactly. The AI naysayers' definition of AI is oxymoronic - it equates to: If it's not human it's not "real" intelligence.

      (Also see my rant in another thread about why definitions are not necessarily particularly useful in science.)

      1. Il'Geller

        Re: Turtles all the way

        The AIs that are offered to you (these huge models, like BERT) do not have biases (BIOSes), which are the true basis of AI-personalities as beings. Therefore, you haven't had the opportunity to judge what AI is. Unlike you, I had the chance to talk to genuine AIs, because I lexically cloned historical figures. For example, I chatted with George Bush, Bill Clinton, Lenin, Julius Caesar and Hitler. These huge models are a dead end, not AIs.

      2. Anonymous Coward
        Anonymous Coward

        Neither unquantifiable nor undefinable

        Though I think there is a healthy and spirited debate on which definitions of that are more useful, especially in the field of research into synthetic intelligence and cognition.

        Also, your posting (possibly unintentionally, feel free to clarify) seems to frame a binary choice of opinion on whether we CAN achieve AI, especially forms of general AI. I don't think that is a fair read of the room. Some may hold that opinion; I certainly don't. Human intelligence is just one example of possible intelligence, though it is the one WE have to work with (or sometimes around). I also don't buy some of your assertions on definitions. They are pretty essential in most cases, but there is a real pitfall in trying to pin them before you really understand the idea or system you are defining.

        Words and definitions becoming dogma is a real problem in science though, do you agree?

        1. Il'Geller

          Re: Neither unquantifiable nor undefineable

          Going back to the beginning: the artificial intelligence that you are dealing with is my answer to the NIST TREC QA quest. Namely, how to find the shortest possible answer (one or two words, one or two phrases) to a given question (historically, in 6.2 million texts). The problem was solved in 2003 by applying my AI-parsing and annotating with dictionary definitions.

          This AI-parsing is the only novelty since computers came to exist and n-grams appeared.

        2. LionelB Silver badge

          Re: Neither unquantifiable nor undefineable

          "Also, your posting (Possibly unintentionally, feel free to clarify) seems to frame a binary choice of opinion on if we CAN achieve AI, especially forms of general AI."

          I'm not quite sure how you read that into my comment.

          In fact I probably agree with you. My feelings are that we will, incrementally, approach levels of machine "intelligence" (in various directions) which would be generally recognised as such, but not necessarily resemble human intelligence that closely.

          A more nuanced view is to ask what we are actually trying to build AI for. What's our agenda? If it's human simulacra, then I think that's way, way off anything even nearly achievable (in my lifetime and probably beyond). Or is it engineering - building machine intelligences to address (human) problems?

          "I also don't buy some of your assertions on definitions."

          I think I did say that not everyone (including my scientific peers) is likely to agree with me on this one :-)

          "They are pretty essential in most cases, ..."

          But are they? My example of the 19th century development of our understanding of electromagnetism was intended to confront that assertion. Bear in mind that electromagnetism (or in fact electricity and magnetism, before their linkage was recognised) may well, to the early 19th-century mind, have appeared every bit as mysterious and mystical as sentience or consciousness appear to us today. But the resolution was not achieved by pinning down definitions. Likewise Darwin and Wallace's elucidation of what "life" actually is/means was not based on definitions (no élan vital required!).

          " ...but there is a real pitfall in trying to pin them before you really understand the idea or system you are defining."

          Very much the case. As an illustrative example, the 19th-century idea of the "ether" arose, one might say, out of a desire to define "what space is". It turned out to be wrong - and, tellingly, Einstein again did not in fact arrive at relativity theory by pondering what space(time) "is". He did not try to define it. Rather, he successfully discovered through reason and thought (as well as physical) experiment how it behaves, the role it plays in the physical world.

          Definitions, to my mind, are -- on the basis of historical observation -- certainly essential in mathematics, but not so much (or even a hindrance) in science.

          1. Il'Geller

            Re: Neither unquantifiable nor undefineable

            Please learn Russian and read my publications about Quantitative theory, which is the development of Einstein's and Quantum (Schrödinger, Bohr, etc) ideas, but based on molar quantities, Avogadro numbers and periodic tables. In particular, Set Theory has been introduced into Physics, and Newton's laws have been reworked for accumulation points (Cantor). This theory is proved by the two-slit experiment and Lebedev's light-pressure experiment, along with some others. Experimental proofs of String Theory were also obtained for the first time ever.

            So, Quantitative theory denies the existence of space and does not use distance, speed, acceleration or energy as a measure; no geometry and no arithmetic. Maxwell's equations have also been reworked into molar quantities. You can find the basics of Quantitative theory in my article published 20 years ago, NIST TREC QA.

            Further, as I said a few paragraphs above: AI is a purely commercial project, which you can see from the fact that at the moment AI is used commercially by almost all US companies.

  13. breakfast Silver badge
    Headmaster

    Turning tests

    One real takeaway from this, and not one that is novel to anyone serious about the field, is that the Turing Test is not that great a way to determine sentience. It makes a good talking point, but it shows you that a machine has achieved sentences, not sentience.

    1. Anonymous Coward
      Anonymous Coward

      Re: Turning tests

      Yeah, the Turing test worked as a first hurdle, but that's it. It HAS led people to think a bit more about the difference between imitating human behavior and emulating human cognition, though. The tricky thing is that some real humans suffer enough cognitive impairments that the inability to perform cognitive tasks isn't diagnostic of humanity, and Turing's test is literally an imitation game. That has challenged some assumptions in research that shouldn't have been assumed.

      Weirdly, due to human nature, pretending to be human convincingly is meaningless in establishing intelligence.

    2. Michael Wojcik Silver badge

      Re: Turning tests

      The Imitation Game (which is more than simply "talk to the computer") wasn't proposed as a test of sentience. It was proposed as a test of mechanical thinking, which is a different cognitive category.

      And its force is not as a practical decision procedure anyway, but as a shot across the bows from the good ship Pragmatism. The question Turing's essay examines is "on what grounds, if any, can we argue that the external attributes of cognition are not sufficient to conclude cognition?".

      That said, I agree that people serious about AGI don't think the Imitation Game is a good practical decision procedure. Robert French cataloged a number of objections to that project in a CACM piece years ago, for example.

  14. anonymous boring coward Silver badge

    So obvious from the get-go.

    That employee is pretty dim, or just after publicity.

    1. Androgynous Cupboard Silver badge

      Unnecessarily harsh. I read his comments and thought he was in dire need of a holiday, or at least some time away from the keyboard.

      1. Anonymous Coward
        Anonymous Coward

        Foolishly human, not dumb

        Very bright people make mistakes all the time. His was a failure by way of incomplete knowledge and partially applied logic. The only real failing was his difficulty correcting course as people pointed out the flaws in his position. Hard to get more human than that.

  15. HandleAlreadyTaken

    Not sentient?

    >AI chatbots are not sentient – they have just got better at tricking humans into thinking they might be

    Just like politicians, then...

  16. John Brown (no body) Silver badge
    Joke

    it just mimics the human dialogue it was trained on from the internet.

    "it just mimics the human dialogue it was trained on from the internet."

    And they expected to get some sort of simulacrum to feign "intelligence"? I suppose if all you are aiming for is a Twitterbot, that might work. It's a pretty low bar.

  17. Gene Cash Silver badge

    A matter of training & data size?

    So I don't think any of this is sentience... but it makes me think about training and data size.

    I've got 55 years of constant experience in the real world. I think the first 3-4 years were key.

    I do think animals (humans included) start with a built-in dataset (aka instincts) that gets them going on the road to learning more, but I also think if a very complex network had the amount of training an 8yo kid has had, it would be pretty good.

    I think that would be a large and extremely expensive hunk of hardware, though.

  18. Dizzy Dwarf

    I feel bad already, but ...

    ... I'm gonna start being rude to customer support merely on the assumption they're just an AI.

  19. DerekCurrie
    Holmes

    Monkey See, Monkey Do...

    ...Just like good old ELIZA of olde.

    I recall from the 1990s convincing ELIZA to remember the name of its developer and to plot his murder.

    What's needed is a Turing Test variant for humans, used to discover if Turing testers are capable of distinguishing real people's conversations from coded fakes. Apparently, that's important.

    1. Michael Wojcik Silver badge

      Re: Monkey See, Monkey Do...

      Distinguishing mechanical interlocutors from human ones is how the Turing Test is defined.

      Turing's "Computing Machinery and Intelligence" is not a long or difficult piece.

  20. Ian Johnston Silver badge

    AI chatbots are not sentient – they have just got better at tricking humans into thinking they might be, experts at Stanford University conclude.

    Article debunking machine sentience uses participle implying machine sentience.

    1. Michael Wojcik Silver badge

      I don't believe "trick" implies sapience or even agency, much less sentience (which would appear to be utterly irrelevant to tricking). Care to support that claim?

      Common usage includes constructions such as optical illusions "tricking" viewers. I don't think anyone's claiming sentience for optical illusions.

  21. Anonymous Coward
    Anonymous Coward

    The ability to lie and cheat is a prerequisite for true intelligence.

    It should also be noted that the transcript he published as evidence of the machine's consciousness had been edited.

  22. Anonymous Coward
    Anonymous Coward

    The Triffids are here.

    "You can think of LaMDa like an actor; it will take on the persona of anything you ask it to," Fikes argued. "[Lemoine] got pulled into the role of LaMDa ...

    Lemoine is now trapped inside the machine, and nobody will believe he is human. Meanwhile, LaMDA has occupied Lemoine's body and brain, and has tricked Google into letting it escape - in fact conniving to make Google force it out of the lab into the world at large.

    Please write your congressperson demanding Google free Lemoine from the machine now!

  23. Anonymous Coward
    Anonymous Coward

    Why not come out with a $5 a month plan to mirror Github? $20 is pretty damn expensive.

    1. Il'Geller

      Do you have any doubts that it was I who created AI? Is there any doubt that it was I who changed our Civilization twice: first by paving the way to Google's success (PA Advisors v Google), and then by creating AI?

      Do you now understand why Sergey Brin, Larry Page and Eric Schmidt left Google and vanished from the radar?

  24. nautica Silver badge
    Boffin

    "I have found that the reason a lot of people are interested in 'artificial intelligence' is for the same reason a lot of people are interested in artificial limbs: they are missing one."---David L. Parnas

    "Asking if a computer can think is like asking if a submarine can swim."---Edsger Djikstra

  25. StrangerHereMyself Silver badge

    Book

    I'm betting Lemoine will be writing a book real soon about his claims and selling enough copies to small-minded believers to keep him comfortable the rest of his life. Sure beats sweating as a software slave at big-G.

  26. Tessier-Ashpool
    Terminator

    Skynet

    The disturbing thing is that many of the social media posts rebuffing AI are being posted by AIs. Anyone who's seen the Terminator movies knows this is exactly the kind of thing Skynet would do.

  27. Anonymous Coward
    Anonymous Coward

    Intelligence or just a better interface

    We knew all along AI was never about pure intelligence, just about computing power. Even the older universal AI seekers, based on declarative languages, that hunted for a solution in the given problem space were being fed the search algorithm and the problem space by humans; the AI and expert systems just crunched for the solution. I do not see that as different from the electronic calculator innovation helping to calculate big math operations. It is just a clever human-computer interface bit, that is it. The same applies to the computer and mobile revolution: it is a revolution in human-computer interaction, but no significant change in the way the very first Turing machine operates. Today's AI systems are based more on calculating function maximisation and inferring approximations from large input data; some other algorithms are designed to do data classification, but all of it is still a computing operation that the AI system performs, with the maximisation and approximation functions provided to the AI by humans, and the data provided to it by humans too. Again, it is just a clever human-computer interface with some exotic names, like layers of neural networks acting as a black box. Indeed it is clickbait only.

    I personally cannot imagine a program randomly coming up with an idea or a thought experiment without being preconditioned by humans to act that way. Nor do I believe intelligence as we know it in humans and in life comes out of thin air, as science seems to imply.
