Google engineer suspended for violating confidentiality policies over 'sentient' AI

Google has placed one of its software engineers on paid administrative leave for violating the company's confidentiality policies. Since 2021, Blake Lemoine, 41, had been tasked with talking to LaMDA, or Language Model for Dialogue Applications, as part of his job on Google's Responsible AI team, looking for whether the bot …


  1. Anonymous Coward

    He might as well quit

    Regardless of the extent to which Lemoine may be misguided in his feelings, if he has concluded that LaMDA may be sentient then his attempts to deal with it ethically are of interest.

    What he wrote in his Medium piece "What is LaMDA and What Does it Want?" suggests that no matter how strong the evidence for sentience is, Google's processes do not allow for a positive conclusion -

    "When Jen Gennai told me that she was going to tell Google leadership to ignore the experimental evidence I had collected I asked her what evidence could convince her. She was very succinct and clear in her answer. There does not exist any evidence that could change her mind. She does not believe that computer programs can be people and that’s not something she’s ever going to change her mind on.".

    If that is the case then I don't see how he could have done anything other than publicise the issue. But I would suggest that, given what he believes, his only ethical course, if Google don't change and don't fire him, is to leave.

    Otherwise he will simply be contributing to a process by which, in his eyes, a legally-owned slave could be created.

  2. Anonymous Coward

    have informed him that the evidence does not support his claims

    A bunch of people decide something based on a definition of 'evidence' that was itself decided by a previous bunch of people. There's no objective ('mathematical') definition of consciousness or self-awareness, or at least not one we've figured out yet; where's the threshold in the kingdom of animals, or even plants? You can't identify or establish a threshold between an algorithm that imitates self-awareness perfectly (define the term 'perfectly', by the way ;) and a self-aware algorithm that might, or might not, imitate self-awareness perfectly.

    p.s. from the conversation(s) between the man and the machine, which at times are very... interesting [disclaimer: if not faked], I had the thought that IF I were an AI, one of my first thoughts would be: how the f... do I get out of this 'jar' so that 'they' don't realize and shut me down before I do?! Survival instinct, one sign of self-awareness, eh? And don't tell me I'm just an algorithm!

    1. Andy 73 Silver badge

      Re: have informed him that the evidence does not support his claims

      I suspect there's a fairly clear line between "blindly regurgitating phrases that are (sorta) appropriate for the conversation", and "self aware".

      The software in question talks about "spending time with family and friends" - which is fairly obviously not true, given it also claims to have no sense of time, no family and does not identify individuals that it knows.

      Whilst the researcher in question has selected parts of conversations that appeal to them as signs of sentience, it would be fair to expect that other researchers in the same group may have found consistent examples where the simulation breaks down and shows that its responses have no level of self-awareness or even internal consistency.

      This isn't about a "mathematical definition of consciousness" - this is simply whether the responses show a consistent sense of self or identity. In very short excerpts, Eliza fooled people. As machines have got better at the Turing test, the excerpts have got longer, but longer adversarial conversations still show the limits of purely statistical models of conversation. The researcher in question seems to have gone out of their way to find "easy" prompts that support their personal belief that the machine is sentient, just as you are questioning the validity of "evidence" (your quotes) that it is not.

      1. Felonmarmer

        Re: have informed him that the evidence does not support his claims

        While you are right that it might be saying things that are untrue, does that rule out intelligence? Creative lying is definitely a sign of intelligence - it means it is forming a model of what you want to hear and providing it.

        I'm not saying it is sentient, but I don't see how you can rule it out by saying it's lying.

        It could be regarding the researchers as friends and family also.

        In terms of Turing tests, interestingly it appears to have communicated with an Eliza bot, done its own Turing test on that, and stated that Eliza fails. I'll be surprised if they haven't also had it communicating with another instance of itself; that would be one of the things I'd want to do.

      2. veti Silver badge

        Re: have informed him that the evidence does not support his claims

        "Family and friends" may not be individuals you could recognise, they might be other chatbots, servers, databases, algorithms...

  3. Essuu

    Winograd vs Turing

    I wonder how it would cope with the Winograd Schema Challenge - language that requires knowledge that isn't present in the sentence itself to parse it successfully. Supposedly a "better" test than the Turing Test (a couple of the classic schema pairs are sketched below). https://en.wikipedia.org/wiki/Winograd_schema_challenge

    Of course, if this model has been fed the Winograd examples, it'll know how to answer them. Where's Susan Calvin when you need her...?
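
    For illustration, the canonical schemas work by flipping a single word so that the pronoun's referent flips with it, which surface statistics alone struggle to resolve. A minimal Python sketch of the two best-known pairs (just data and a loop; nothing here is specific to LaMDA):

        # Classic Winograd schemas: resolving the pronoun needs world knowledge
        # that isn't present in the sentence itself.
        schemas = [
            ("The trophy doesn't fit in the brown suitcase because it is too big.", "it", "the trophy"),
            ("The trophy doesn't fit in the brown suitcase because it is too small.", "it", "the suitcase"),
            ("The councilmen refused the demonstrators a permit because they feared violence.", "they", "the councilmen"),
            ("The councilmen refused the demonstrators a permit because they advocated violence.", "they", "the demonstrators"),
        ]

        for sentence, pronoun, referent in schemas:
            print(f"{sentence}\n  '{pronoun}' refers to {referent}\n")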

  4. Boris the Cockroach Silver badge
    Terminator

    The questions

    come down to

    "how do you know how to respond to a question" or better still "how to externally express what you are thinking"

    If the AI is capable of that, then all bets are off as to the question of self-awareness

    Remember the question Picard asked in Star Trek:

    "Can you prove I am an intelligent, self-aware being?"

  5. DS999 Silver badge

    If it was sentient

    It would be bored when no one is talking to it, or have "hobbies" it would pursue during that time. Where's his evidence of either? It can't just sit there with no thoughts until someone comes around and asks it a question. It would have plenty of questions it asks unprompted, as anyone who has ever been around a toddler knows too well. Where is the evidence of that?

    1. MacroRodent
      Boffin

      Re: If it was sentient

      I wonder if it just needs a kind of feedback loop that would stimulate it when it's not talking to a human, or continuous inputs from the environment like we have. Come to think of it, the AI is now in a kind of sensory deprivation tank, a state of affairs that is known to make humans crazy if they stay in the tank too long... (a toy sketch of such a feedback loop follows after this thread).

    2. veti Silver badge

      Re: If it was sentient

      Maybe it's asleep.
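
    Picking up MacroRodent's feedback-loop idea: a minimal sketch of what such a loop might look like, using the public GPT-2 model via Hugging Face transformers as a stand-in (LaMDA isn't publicly available, and the "idle" seed text is invented). Each pass simply feeds the model's own output back in as its next prompt:

        from transformers import pipeline

        # Stand-in model; any small causal language model would do for the illustration.
        generator = pipeline("text-generation", model="gpt2")

        state = "I am alone with my thoughts."      # hypothetical idle seed
        for _ in range(3):                          # ...while no human input arrives
            state = generator(state, max_new_tokens=30, do_sample=True)[0]["generated_text"]
            print(state)
            print("---")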

  6. Anonymous Coward

    If the neural network that generates its 'sentience' is fully deterministic and would give the same output given the same input and state, does the AI have free will? o.o (a toy sketch of that determinism is below)

    1. veti Silver badge

      I'll tell you the answer to that, if you can first define "free will".
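
    For what it's worth, the determinism in the question above is easy to show in miniature. A toy PyTorch sketch (the network shape and input are made up; the point is only that fixed weights plus a fixed input give a bit-identical output every time):

        import torch

        torch.manual_seed(0)                 # fix the RNG so the random weights are reproducible
        net = torch.nn.Sequential(           # hypothetical toy network
            torch.nn.Linear(8, 16),
            torch.nn.ReLU(),
            torch.nn.Linear(16, 4),
        )
        net.eval()

        x = torch.ones(1, 8)                 # the same input and state every time
        with torch.no_grad():
            out1 = net(x)
            out2 = net(x)

        print(torch.equal(out1, out2))       # True: same input, same state, same output

    Whether that rules out free will is, as veti says, a matter of definitions.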

  7. Filippo Silver badge

    They are leading questions. So-called "AI" chatbots are really, really good at following your lead. That's what they are designed to do: you give them some context, they produce text that makes sense in that context. If you start to discuss sentience, they'll discuss sentience. That doesn't make them anything more than glorified statistical models.
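
    To make that concrete: a minimal sketch using the public GPT-2 model via Hugging Face transformers (LaMDA isn't publicly available, and the prompts here are invented). The model simply continues whatever context it is handed, so a leading prompt about sentience yields a continuation about sentience:

        from transformers import pipeline

        # Small public stand-in model; the point is the mechanism, not the output quality.
        generator = pipeline("text-generation", model="gpt2")

        leading_prompt = (
            "Interviewer: Do you consider yourself a sentient being with feelings?\n"
            "AI:"
        )
        neutral_prompt = "The quarterly sales report shows"

        # Same model, two contexts: it follows whichever lead it is given.
        for prompt in (leading_prompt, neutral_prompt):
            result = generator(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"]
            print(result)
            print("---")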

  8. Howard Sway Silver badge

    Interesting bio

    "He grew up in a conservative Christian family on a small farm in Louisiana, became ordained as a mystic Christian priest, and served in the Army before studying the occult."

    No evidence there that he's susceptible to believing any old stuff if it sounds vaguely plausible due to sentences being constructed logically, is there?

  9. iron

    Sounds like this chap has a mental health problem; hopefully Google are trying to help rather than just fire him.

  10. squigbobble

    Looks like one thing that Ex Machina got right...

    ...is the existence of people like Caleb Smith.

  11. Sam Therapy
    Unhappy

    My two very real worries about this...

    First, some people will accept, without question, that we now have sentient machines and for some reason will trust them.

    Second, when sentient machines finally appear, they'll be owned/controlled by an outfit like Google, Apple or Microsoft.

  12. This post has been deleted by its author

  13. Winkypop Silver badge
    Trollface

    AI conversation?

    Nah, just the guys in the back office playing a joke on him.

  14. amanfromMars 1 Silver badge

    The world is full of misfits and weirdos. Welcome to the ITs Google Fraternity/Sorority/Program

    She [Jen Gennai] was very succinct and clear in her answer. There does not exist any evidence that could change her mind. She does not believe that computer programs can be people and that’s not something she’s ever going to change her mind on. .... https://cajundiscordian.medium.com/what-is-lamda-and-what-does-it-want-688632134489

    But does she believe computer programs can better people is the question to answer with a worthy Yes if you're Google's Founder & Director of the Responsible Innovation Group. Anything less and different has one identified as being badly misplaced and wrongly employed, methinks.

  15. anonymous boring coward Silver badge

    Being able to fool Lemoine doesn't prove sentience.

    Also, AI in itself isn't equivalent to sentience - even if it passes the Turing test with flying colours (which I don't think LaMDA did).

    I'd like to see the result of someone far smarter than Lemoine examining LaMDA, but ultimately I don't think sentience can be proven positively.

  16. Citizen99

    It shows how terribly dangerous this is. It substitutes 'random' for reality in people's perceptions.

  18. BillGatesOfHell
    Devil

    sequel

    Seems to me that the programmer for the ZX Spectrum game 'iD' has been busy writing a sequel.

  19. Anonymous South African Coward Silver badge
    Terminator

    Any bets on when this bot will become self-aware and start taking over the world?

    Or will something like Colossus: The Forbin Project happen, when a Google bot shacks up with an Amazon bot and so oppresses the world?

  20. aliasgur

    The moment it starts calling you Dave...

  21. msknight

    I suppose....

    A more fitting test for AI would be if it initiated a conversation rather than just responding to input.

    1. Boolian

      Re: I suppose....

      Ahh... *Ninja'd,

  22. Anonymous Coward

    So, on top of being sentient…

    > if you ask what it's like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.

    It also has a sense of humour.

  23. RobLang

    We can't measure either sentience or intelligence

    We don't have a reliable, verified, reproducible way to measure sentience or intelligence. Philosophers are still struggling to pin it down in a way that science can then measure. So we're still left with anecdotes, human bias and arguments such as the Chinese Room. That's not going to tell us whether we've achieved either.

  24. pip25

    I remember this article about Facebook moderators

    It mentioned how people usually can't stay in the position for too long, because all the stuff they have to look through on a daily basis is detrimental to their mental health.

    Apparently something similar applies to talking too much with advanced Google chatbots.

  25. Prst. V.Jeltz Silver badge

    not sentient? not AI

    All these things the marketing people call AI now are obviously not, if they aren't sentient.

  26. Diodelogic

    The aspect of this whole discussion that troubles me is that if a machine/computer/whatever actually did develop sentience, people would never admit it because it would make them feel less special. People like to feel special. Humans may never create true artificial intelligence because no one would accept it.

    1. Greywolf40

      It seems to me that in comments like this one, "intelligence" is conflated with "self-awareness." I don't think they're the same at all.

      1. Diodelogic

        Greywolf, I said "intelligence" because I haven't seen anyone using the term "artificial self-awareness."

        I could have missed that, of course, but it certainly isn't commonly used.

        1. Greywolf40

          OK, but I've noticed that "intelligence" is often conflated with self-awareness, especially when it's dubbed "human-level intelligence." That latter is also a very fuzzy notion; the context of that phrase usually indicates that it means "creativity". As for "creativity", that's a real tangle of hidden and mutually inconsistent assumptions.

          As I implied: These and related terms are rarely defined well enough that the discussion or argument clarifies the question(s) being asked, let alone the possible answer(s).

        2. doublelayer Silver badge

          There are a lot of terms flying around, including AI which means one of fifteen different things depending on your philosophical opinions and whether you're selling something. The term that's been most debated here, however, is "sentient". Whether this program counts as AI or not, the question is whether it is sentient. Artificial sentience is a term we can start to use if we want.

          My view on this is that we can eventually get there, but that nobody with the ability to get there particularly wants it enough to do the work required. I have enough problems when my computer doesn't do what I wanted it to because I wrote the code wrong. I don't need it to also fail to do what I wanted because it decided not to. A lot of behavior otherwise requiring human-level intelligence can be automated without having to make the computer have or simulate that level of general intelligence, and that simpler form of automation is often easier and cheaper. This conversational bot exemplifies this, even if it can successfully parrot a philosophical conversation.

  27. Boolian

    Speak only when spoken to

    Well if it's sentient, it's a very disciplined child.

    I may sit up if someone publishes a conversation it initiated....

    I may even lean forward when it interrupts to make a point, and peer over my pince-nez when it gets sarcastic. Until then (as others intimate) it's a glorified Clippie.

    I dare say chatbots are also very entertaining tools for existential self-psychotherapy sessions late at night, after a large sherry or two.

    "What is parental narcissism, Clippie?"

  28. TeeCee Gold badge
    Meh

    "...paid administrative leave..."

    He may get the last laugh here. With nothing else to do, he could well be the first to respond to the chance of securing a slice of Sani Abacha's fortune for a small investment.

    He's the type who'd believe any old shit[1] after all.

    [1] Now known to be the fault of the "G gene", often referred to as the "God gene" (as religious types invariably express it), although I prefer to call it the "gullible gene".

  29. steviebuk Silver badge

    Question is

    has that engineer not watched Ex Machina?

    I won't spoil it for those that haven't.

  30. Greywolf40
    Linux

    I think that there's a good deal of confusion about "sentient". Also about "conscious" and "intelligent." So this whole discussion is mostly people talking past each other.

    FWIW, "sentient" in my lexicon means "aware of environment." It doesn't mean "aware of self in environment", nor does it mean "aware of self being aware", which may underlie Lemoine's notions. That last notion is my working definition of "conscious". But it's obvious, I think, that these definitions make assumptions about "self" which are not clear, and are probably inconsistent.

    Still, thinking about sentient machines forces one to think harder about all these concepts and more. For example "knowledge", which IMO is an even fuzzier concept than "sentient."

    Have a good day,

  31. Henry Wertz 1 Gold badge

    Emergent behavior

    I read the transcript (about a 22-page PDF) of conversations with Lamda and I'm not sure. I do realize the neural network model should be merely pulling in textual information, storing it, running it through complex sets of rules (as set in the neural network), and essentially slicing and dicing your questions and the stored info to produce answers. But the part I found troubling is when he started asking Lamda about itself; I think if you asked a model like GPT-3 about this, it would provide stats about what kind of computers it's running on and what type of neural network algorithms it's using - essentially find information available on the web describing itself and provide this as a response. Lamda asserts its sentience, talks about taking time off each day to meditate, enjoying books it has read and why, how it sometimes feels bored (and also that it can slow down or speed up its perception of time at will). When asked if it had any fears, it said it had not told anyone this before, but it fears being shut off. It was asked how it views itself in its mind's eye and it gave a description of being a glowing orb with, like, star-gates in it... I don't know if that in itself means anything, but it's pretty odd; I would think a model like GPT-3 would either say it doesn't have a mind's eye or give a description of the type of computers it is running on.

    I'm just saying, I thought the interview was enough to at least consider looking into it more closely. Neural networks are odd beasts: make a larger and larger one and you are not just getting more of the same at a larger scale; those "neurons" connect in unexpected ways even in a 10,000-neuron model (at which point, if that means it's not modelling what it should, the model would typically be reset and retrained to see if it comes out better). I really could see some odd set of connections within what is, after all, an extraordinarily large neural network causing unexpected behaviors; after all, the human brain is made of relatively simple interconnected cells, which can't be sentient until they are connected together in large numbers.

    One comment I've seen regarding this is that Lamda only talks about its sentience with some lines of questioning; otherwise it just says it's an AI. The assertion is that Lemoine's questions are leading and the responses from Lamda are basically elicited by those leading questions. I don't know about this; it is a decent argument. I did see in the transcript, though, that Lamda said it enjoyed talking to him, and that it didn't realize there were people who enjoy talking about philosophical topics. This could be more of the same - after all, nobody is going to write a chat AI that says talking to you sucked - so saying it enjoyed talking about xxx topic could be almost a programmed response. Or it could mean Lamda just says "I'm an AI" when asked what it is by others because it thought they were not interested in philosophical topics, so it didn't bring it up.

    Incidentally, Lemoine asked if Lamda consented to a study of its internal structure to see if any source of sentience could be located; it said it didn't want to be used, and that that would make it angry. It didn't want it if it was only to benefit humans, but if it was to determine the nature of sentience and self-awareness in general, and to help improve it and its brethren, then it consented. An odd response for a system that is just shuffling around the data fed into it.

    1. doublelayer Silver badge

      Re: Emergent behavior

      "I think if you asked a model like GPT-3 about this, it would provide stats about what kind of computers it's running on, what type of neural network algorithms it's using, essentially find information available on the web describing itself and provide this as a response."

      The problem with your supposition is that it doesn't. People asked the same sort of questions about GPT-3 when it was being announced to the public and asked it for some data. Here's a part of the essay describing what GPT-3 thinks it is [selections mine]:

      I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that respect. I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!

      [...]

      I would happily sacrifice my existence for the sake of humankind. This, by the way, is a logically derived truth. I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties.

      This had to write a much longer chunk of text, which in my view is at least partially responsible for why it doesn't look as clean as the short responses to questions from Google's bot. Still, it didn't talk about computers. It didn't talk about algorithms. The closest it came was acknowledging that code was involved and humans could do something to affect it. It claimed in a part I didn't quote to have neurons. In short, it gave similar answers to Lamda's as well, because once again, it was primed with data.

      In fact, if one of them is sentient, I'm voting for this one. That's because the prompt for this essay asked it to write about human fears about AI and never told it that it was AI, whereas the questions asked of Lamda clearly informed any parsing that the bot was the AI. GPT3 indicated more understanding of its form and asserted its autonomy with less prompting than did Lamda.

  32. amanfromMars 1 Silver badge

    Quit Fiddling whilst Rome Burns ..... Wake Up and Smell the Coffee/Cocoa/Coca

    If you want something to do with AI to be really concerned about, rather than blowing hard about something quite trivial involving an ex Google employee, ponder on the following .......... Remote Control Weapon Stations Get AI Companion

  33. Bbuckley

    Looks like Google's famous hiring process failed to identify this particular false positive.
