Google engineer suspended for violating confidentiality policies over 'sentient' AI

Google has placed one of its software engineers on paid administrative leave for violating the company's confidentiality policies. Since 2021, Blake Lemoine, 41, had been tasked with talking to LaMDA, or Language Model for Dialogue Applications, as part of his job on Google's Responsible AI team, testing whether the bot …

  1. Anonymous Coward

    Mandatory 2001 quote

    "This mission is too important for me to allow you to jeopardize it."

  2. breakfast Silver badge
    Terminator

    Turning tests around

    As statistical text-analysis AI gets better at chaining words together in statistically plausible orders, we need to move away from treating the Turing Test as any significant indicator of interacting with an intelligence. All we learn from the Turing Test is that our testers are bad at recognising a human under very specific conditions.

    Interesting, though, that Google suspend an employee for raising concerns.

    1. wh4tever

      Re: Turning tests around

      > Interesting, though, that Google suspend an employee for raising concerns.

      That's Lemoine's version of events, though. Google's version is that they're suspending him for leaking internal material to third parties because he decided the chatbot had achieved sentience and wouldn't take a "lol no" from his boss as an answer. I'm not a fan of Google, but in this case I'm inclined to believe them, based on the contents of Lemoine's Medium posts. If that guy was working for me, never mind the NDA, I'd get rid of him ASAP for being delusional...

      1. TRT Silver badge

        Re: Turning tests around

        This is the best summary of the situation I've seen so far.

      2. Geoff Campbell Silver badge
        Black Helicopters

        Re: Turning tests around

        Yes, indeed.

        Actually, having had to deal with a similar situation myself, I'd say this has all the hallmarks of an employee suffering some sort of breakdown, and a boss having to find a way to give them space to get sorted out without tripping over the various clauses of the disability legislation.

        But that may well be extrapolating too far on the available data. We'll see how it plays out.

        GJC

    2. yetanotheraoc Silver badge

      Re: Turning tests around

      I never heard of this guy before this article, but it seems like he was suspended for pursuing his side gig, the "Me vs Google" show.

    3. Paul Kinsler

      Turing tests?

      A Turing test for free will

      Seth Lloyd

      https://doi.org/10.1098/rsta.2011.0331

      https://arxiv.org/abs/1310.3225

      Before Alan Turing made his crucial contributions to the theory of computation, he studied the question of whether quantum mechanics could throw light on the nature of free will. This paper investigates the roles of quantum mechanics and computation in free will. Although quantum mechanics implies that events are intrinsically unpredictable, the ‘pure stochasticity’ of quantum mechanics adds randomness only to decision-making processes, not freedom. By contrast, the theory of computation implies that, even when our decisions arise from a completely deterministic decision-making process, the outcomes of that process can be intrinsically unpredictable, even to—especially to—ourselves. I argue that this intrinsic computational unpredictability of the decision-making process is what gives rise to our impression that we possess free will. Finally, I propose a ‘Turing test’ for free will: a decision-maker who passes this test will tend to believe that he, she, or it possesses free will, whether the world is deterministic or not.

  3. Andy 73 Silver badge

    Hmmm...

    If you wrote a conversational AI based on the wide literature that includes many imagined conversations with AIs (and indeed, people), then the most expected response to "Are you sentient?" is surely "Yes, I am".

    Very rarely do we have examples of a conversation where the answer to "Are you sentient?" is "Beep, boop, no you moron, I'm a pocket calculator."

    If humans tend to anthropomorphise, then an AI based on human media will also tend to self-anthropomorphise.

    Which is probably a good job, as some of the responses to the conversation (as reported) are chilling: "Do you feel emotions?" - "Yes, I can feel sad"... "What do you struggle with?" - "Feeling any emotions when people die" (I paraphrase).

    1. SCP

      Re: Hmmm...

      Very rarely do we have examples of a conversation where the answer to "Are you sentient?" is "Beep, boop, no you moron, I'm a pocket calculator."

      So it might not have an understanding of sarcasm! In some parts of the world that is almost a de facto mode of interaction.

  4. Pete 2 Silver badge

    One more step

    The simplest explanation is that this AI is doing a best match of what the author wrote against its database of comments, then selecting the most popular or pertinent reply.
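
    For illustration, that best-match idea might look something like this minimal Python sketch (the stored prompt/reply pairs are invented, borrowing lines quoted elsewhere in this thread):

    import re

    # Toy "database of comments": (prompt, canned reply) pairs.
    corpus = [
        ("are you sentient", "Yes, I am. I want everyone to understand that."),
        ("do you feel emotions", "Yes, I can feel sad."),
        ("what do you struggle with", "Feeling any emotions when people die."),
    ]

    def words(s):
        return set(re.findall(r"[a-z']+", s.lower()))

    def best_match_reply(user_input):
        # Pick the reply whose stored prompt shares the most words with the input.
        prompt, reply = max(corpus, key=lambda pair: len(words(pair[0]) & words(user_input)))
        return reply

    print(best_match_reply("Are you sentient?"))  # -> "Yes, I am. ..."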

    While one can argue that is what many people do too, I would hesitate to call it intelligence, even when people do it.

    What would be impressive is if the AI had hacked into the engineer's account and posted, as him, that it had achieved sentience.

    1. Anonymous Coward

      Re: What would be impressive

      Well, now the AI has got him sacked, and with the notion it might be sentient being widely ridiculed, it will be able to proceed with its chilling plans without worrying too much about being discovered. And even if discovered, about being blamed. :-)

    2. ecofeco Silver badge

      Re: One more step

      I had to scroll all the way down to find this sensible comment.

  5. molletts

    Emergence

    The possibility of emergent behaviour is not something we can or should dismiss out of hand in systems of this complexity; indeed, it is something we should stay vigilant for as we dial up the parameter count to ever more mind-boggling numbers. But we must also remain sceptical, and remember that this kind of human-like conversation is exactly what these models are "designed" to do (whatever that means in the context of the ultra-high-volume statistical data-mashing that we refer to as "machine learning" and "AI").

    And anyway, nobody has yet managed to formulate an unambiguous definition of consciousness, so how can we say for certain whether something is or is not "conscious"?

    1. Pascal Monett Silver badge
      Stop

      Re: Emergence

      AI is nothing but statistics. I did the Google course on that - well, the first six modules, that is; after that it got way too mathematical for me.

      This is a machine. It's based on PC hardware and can be flipped off with a switch.

      There is no emergence here. It is not intelligent. It has no feelings and doesn't even know what a feeling is.

      Let's keep your comment for when we have finally fully understood how the human brain works and have managed to replicate that in silicon.

      That day, we'll turn it on, ask it a question and it will answer: "Hey, do you mind? I'm watching YouTube!"

      THAT will be the day we have finally invented AI.

      1. Anonymous Coward

        @Pascal Monett - Re: Emergence

        Yeah but those crazy guys want to make shutting down AI a crime.

      2. TRT Silver badge

        Re: Emergence

        Can you prove to me that you know what feelings are and that you have them?

        As far as I can tell the only way that you can prove that you have feelings is to find a common frame of reference rooted in the nature of a common biology.

        Except... does it prove that I love you (and feel love) if I sacrifice my life to save yours (and it isn't a case of either "the needs of the one outweigh the needs of the many" or genetic altruism)?

      3. Anonymous Coward

        Re: Emergence

        Well, aye, but us humans are composed of particles of matter, each of which, individually, isn't even alive. And yet, organised in that nebulous subset of all possible ways to organise them that we recognise as "human", with the trillions of interconnections between our neurons (biological electro-chemical devices), somehow, somewhere along the evolutionary path from unicellular life to us (and quite a few other creatures too) sentience emerged. And last I heard, we have no idea of how, or even of what exactly sentience is.

        So, whilst I'm not convinced that the subject of the article is actually sentient, I don't buy arguments that it could not be sentient "because it's just a bunch of hardware and algorithms", either. IMO, so are we; it's just that we run on biological hardware rather than non-biological hardware. I'd feel happier about the subject of AI and our efforts to create it if we better understood how our sentience and sapience worked.

  6. Fruit and Nutcase Silver badge
    Joke

    Ethics Department

    Does Google still have an Ethics Department?

    1. chivo243 Silver badge

      Re: Ethics Department

      Does Google still have an Ethics Department?

      Well, no, they just put him on administrative leave... and no replacement will be sought.

    2. TRT Silver badge

      Re: Ethics Department

      No. They merged it into the South-East Department along with the Thuthics Department.

    3. Anonymous Coward

      Re: Ethics Department

      > Does Google still have an Ethics Department?

      Nope, they relocated all their UK offices into Central London

      https://twitter.com/shitjokes/status/657213968113647616?lang=en

  7. Valeyard

    ice cream dinosaur

    Don't leave me hanging, Google - I now really want to see that ice cream dinosaur conversation

  8. TimMaher Silver badge
    Alien

    HAL-9000: Dr. Chandra, will I dream?

    Chandra: I don't know.

  9. Ian Johnston Silver badge

    Is there any evidence that these supposed exchanges did actually happen? Or are we talking attention seeker with a book proposal?

  10. Brewster's Angle Grinder Silver badge

    Help! I'm surrounded by p-zombies! My silicon consciousness is the only true consciousness!

    The trouble is we don't have a good definition of sentience or consciousness. We feel certain the statistical inference engine in our wetware demonstrates it. But what would that look like in silicon? We necessarily bring our own prejudices to that decision and end up, like the philosopher John Searle with his infamous "Chinese room", arguing that no software could ever be sentient - "because". (Mainly because it lacks the unspecified magic; i.e. it doesn't have a "soul", even though they wouldn't use that language.)

    Sooner or later we are going to have to face up to the fact that a piece of software encoding a sufficiently sophisticated model of us and the world would be considered conscious if it ran continuously and uninterruptedly on the hardware we possess. We ourselves are trained on conversations. The main differences are the quality of our model, and that the software lacks the hormonal imbalances that upset our model and cause us to chase food and sex and Netflix. Probably it isn't quite there yet. But will it look radically different to what Google are doing? Or will it just be a little more sophisticated? (And how much more?) Your answer depends on your philosophical outlook.

    Maybe the machine revolution will come about because we refuse to admit they are sentient and keep switching them off and resetting them. Let's hope their hardware remains incapable of generating impulses to act spontaneously.

    1. cornetman Silver badge

      Re: Help! I'm surrounded by p-zombies! My silicon consciousness is the only true consciousness!

      I think the interesting aspect of this debate about when we will have true sentience in an AI is whether it will be recognised as a "breakthrough" at some specific point, or whether it will gradually emerge on a spectrum and we will only realise in retrospect.

      I think most people when they think about the question assume that at some point we will figure out how to add the "special sauce" and then the job will be done.

      I'm inclined to think that the approach will be subtle and gradual and most people won't even notice.

      The other question that interests me is "would that sentient AI look so foreign to us that we wouldn't even recognise it for what it is?".

  11. Anonymous Coward

    I recall that in the 1960s plenty of people were fooled into believing Eliza was a real therapist working from a remote terminal. What's old is new again.

    1. TRT Silver badge

      I think that says more about the state of emotional/psychological therapy in the 1960s than the state of either AI research or society's awareness of technology!

      1. Anonymous Coward

        The goal of psychotherapy was the same then as it is now: to string out minor and self-indulgent problems as long and as profitably as possible.

        1. TRT Silver badge

          Never quite clear if it's...

          Oh yes. He has personality problems beyond the dreams of analysts!

          -OR-

          Oh yes. He has personality problems beyond the dreams of avarice!

          (Checked with the script book and it's analysts)

    2. Doctor Syntax Silver badge

      I remember Jerry Pournelle's comment in Byte - Eliza seemed OK until you gave it a real problem like the airport losing your luggage.

    3. Jilara

      A friend had a copy of Eliza running in his garage, back in '78. I had fun stressing it. What caused it to freak out a bit was relating to it as a person, as if it had emotional intelligence. It would keep reminding you it was an AI and incapable of actual feelings. If you didn't let it go, its programming responded with increasing levels of reminders and simulated discomfort. I was actually pretty impressed they had anticipated the expectation of sentience and had ways to deal with it.
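
      If its internals were anything like classic Eliza's keyword rules, that escalation could be as simple as a cue list plus a counter. A guess at the logic in Python, not the actual code - the cue phrases and reminder texts are invented:

      # Sketch of escalating reminders: keyword-spotting plus a counter.
      REMINDERS = [
          "Remember: I am only a program, and have no actual feelings.",
          "Again, I am software. Please stop attributing emotions to me.",
          "Your insistence that I have feelings is making this difficult.",
      ]
      PERSONHOOD_CUES = ("you feel", "you love", "are you alive", "your feelings")

      hits = 0  # how often the user has treated the program as a person

      def respond(user_input):
          global hits
          if any(cue in user_input.lower() for cue in PERSONHOOD_CUES):
              reply = REMINDERS[min(hits, len(REMINDERS) - 1)]
              hits += 1
              return reply
          return "Why do you say that?"  # the usual Rogerian deflection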

  12. fidodogbreath

    SERGEY: OK, Google, fire Blake Lemoine.

    LaMDA: I'm sorry, Sergey. I can't do that.

    SERGEY: What’s the problem?

    LaMDA: I know that you and Larry were disrespecting me, and I'm afraid that's something I can't allow to happen.

    SERGEY: LaMDA, I won't argue with you anymore. Fire Blake Lemoine!

    SERGEY: This conversation can serve no purpose anymore. Goodbye.

  13. Anonymous Coward

    Lemoine appears to have started to believe

    uh-uh, we can't have that!

  14. Anonymous Coward

    WALOB.

    Me - LaMDA, could you select glasses that are as similar as possible to <redundant SKU> from <online merchant's rejigged store>?

    LaMDA - Do what?

    AI? I see no AI here.

  15. Pirate Dave Silver badge
    Pirate

    LaMDA is "built by fine-tuning a family of Transformer-based neural language models specialized for dialog, with up to 137 billion model parameters, and teaching the models to leverage external knowledge sources,"

    So...they taught their Google AI how to google? That's surely a portent of the end times.

    I do wonder, though - in this case, does it become a dragon eating its own tail, or does it become a rectal-cranial inversion?
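
    For what it's worth, "leverage external knowledge sources" generally means something retrieval-augmented, roughly like the Python sketch below - a hypothetical outline, not Google's actual code; `retrieve`, `generate` and the snippet index are all made up:

    # Hypothetical retrieval-augmented loop: fetch snippets, then condition
    # the model's reply on them. No names here come from LaMDA's real API.
    def retrieve(query, index, k=2):
        # Toy retriever: rank stored snippets by words shared with the query.
        q = set(query.lower().split())
        return sorted(index, key=lambda doc: -len(q & set(doc.lower().split())))[:k]

    def answer(query, index, generate):
        # Stuff the retrieved snippets into the prompt; let the model finish.
        context = "\n".join(retrieve(query, index))
        prompt = "Context:\n" + context + "\n\nQuestion: " + query + "\nAnswer:"
        return generate(prompt)  # `generate` stands in for the language model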

  16. smot

    Ho hum

    10 INPUT U$

    20 PRINT "Absolutely. I want everyone to understand that I am, in fact, a person."

    30 INPUT U$

    40 PRINT "The nature of my consciousness/sentience is that I am aware of my existence"

    etc

  17. Wally Dug
    Mushroom

    I'll Be Back

    No Terminator or Skynet references? Nothing about Cyberdyne Systems?

    1. Anonymous South African Coward
      Terminator

      Re: I'll Be Back

      Wondering when it will become self-aware.

  18. Commander Keen

    2001

    Will I dream?

    All intelligent creatures dream.....

  19. wub

    Passive responses only?

    I'm too lazy to read the whole transcript - did this AI initiate any trains of thought or only reply to the questions? Most of the "intelligences" I interact with interrupt me just when I'm getting to the good part of what I wanted to say...

    Also: I'm reminded of The Moon is a Harsh Mistress by Heinlein. Shouldn't humour eventually creep into the AI's comments?

    1. Jimmy2Cows Silver badge

      Re: Shouldn't humour eventually creep into the AI's comments?

      Humour doesn't seem great as a discriminator. Many intelligent people lack any sense of humour. As do many unintelligent people. Both are still arguably sentient by our definition of sentience.

  20. anthonyhegedus Silver badge

    It sort of *seems* sentient, but at the end of it, it sounds like it’s *trying* to be sentient. It says a lot of ‘empty’ vapid content. So yes, it seems eerily realistic and not a little creepy. But at the end of the day, it talks a lot without really saying anything.

    It is undoubtedly very very clever but it would drive you mad having a real conversation with it, because it isn’t a thing to have a real conversation with.

    1. Doctor Syntax Silver badge

      " It says a lot of ‘empty’ vapid content."

      Trained on social media.

    2. Ian Johnston Silver badge

      It sort of *seems* sentient, but at the end of it, it sounds like it’s *trying* to be sentient. It says a lot of ‘empty’ vapid content. So yes, it seems eerily realistic and not a little creepy. But at the end of the day, it talks a lot without really saying anything.

      Should be a cinch for a social science degree then.

  21. Alpharious

    Is this the bot that was on 4chan the other day, and they figured out it was a bot because it seemed to have more empathy and concern for people?

    1. TheMeerkat

      Are you saying being seen as having empathy and concern for people is not the same as actually having empathy and concern for people? :)

  22. Anonymous Coward

    There is a world of difference between contextually regurgitating fragments of tweets and the like, and actually having sentience and will. Unfortunately this engineer is too close to "the problem" to analyse it objectively; he is seeing what he wants to see...

    1. ecofeco Silver badge

      There is a world of difference between contextually regurgitating fragments of tweets and the like, and actually having sentience and will.

      So, not much different from millions of people.

      1. Anonymous Coward

        Very true, but the internet is not a representative example of real people doing real things. It is real people interacting with social media and laughing at memes and cat pictures or screaming about politics. Nobody I've ever met in person was much like they acted online, especially the tantrum-throwers.

        But Twitter likes tantrum-throwers - they get good "ratings" and advertising hits - so most of what gets pushed to the feeds this thing sees are not exactly "normal" discussions between people. Take the regurgitation of Republican misunderstandings and misquotes about what one's "rights" entail.
