
Mandatory 2001 quote:
"This mission is too important for me to allow you to jeopardize it."
Google has placed one of its software engineers on paid administrative leave for violating the company's confidentiality policies. Since 2021, Blake Lemoine, 41, had been tasked with talking to LaMDA, or Language Model for Dialogue Applications, as part of his job on Google's Responsible AI team, looking for whether the bot …
As statistical text-analysis-type AI gets better at chaining words together in statistically plausible orders, we need to stop treating the Turing Test as any significant indicator that we are interacting with an intelligence. All the Turing Test really tells us is that our testers are bad at recognising a human under very specific conditions.
Interesting, though, that Google suspend an employee for raising concerns.
> Interesting, though, that Google suspend an employee for raising concerns.
That's Lemoine's version of events, though. Google's version is that they're suspending him for leaking internal stuff to third parties because he assumed the chatbot had achieved sentience and wouldn't take a "lol no" from his boss as an answer. I'm not a fan of Google, but in this case I'm inclined to believe them, based on the contents of Lemoine's Medium posts. If that guy was working for me, never mind the NDA, I'd get rid of him ASAP for being delusional...
Yes, indeed.
Actually, having had to deal with a similar situation myself, I'd say this has all the hallmarks of an employee suffering some sort of breakdown, and a boss having to find a way to give them space to get sorted out without tripping over the various clauses of the disability legislation.
But that may well be extrapolating too far on the available data. We'll see how it plays out.
GJC
A Turing test for free will
Seth Lloyd
https://doi.org/10.1098/rsta.2011.0331
https://arxiv.org/abs/1310.3225
Before Alan Turing made his crucial contributions to the theory of computation, he studied the question of whether quantum mechanics could throw light on the nature of free will. This paper investigates the roles of quantum mechanics and computation in free will. Although quantum mechanics implies that events are intrinsically unpredictable, the ‘pure stochasticity’ of quantum mechanics adds randomness only to decision-making processes, not freedom. By contrast, the theory of computation implies that, even when our decisions arise from a completely deterministic decision-making process, the outcomes of that process can be intrinsically unpredictable, even to—especially to—ourselves. I argue that this intrinsic computational unpredictability of the decision-making process is what gives rise to our impression that we possess free will. Finally, I propose a ‘Turing test’ for free will: a decision-maker who passes this test will tend to believe that he, she, or it possesses free will, whether the world is deterministic or not.
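To get a feel for that "intrinsic computational unpredictability" argument, here's a minimal sketch of my own (not from Lloyd's paper): a completely deterministic decision procedure that, as far as anyone knows, cannot know its own answer any faster than by actually running itself to completion.

```python
# A toy illustration (mine, not from the paper) of "intrinsic computational
# unpredictability": a completely deterministic decision procedure whose
# outcome nobody, including the procedure itself, is known to be able to
# predict any faster than by running it.

def decide(seed: int) -> str:
    """Deterministic 'decision maker': follow a Collatz-style trajectory
    and decide based on how many steps it takes to reach 1."""
    n, steps = seed, 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return "yes" if steps % 2 == 0 else "no"

# No randomness anywhere, yet (as far as anyone knows) there is no shortcut
# formula for the step count: the only way to learn the decision is to run it.
for seed in (27, 97, 871):
    print(seed, decide(seed))
```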
If you wrote a conversational AI based on the wide literature that includes many imagined conversations with AIs (and indeed, people), then the most expected response to "Are you sentient?" is surely "Yes, I am".
Very rarely do we have examples of a conversation where the answer to "Are you sentient?" is "Beep, boop, no you moron, I'm a pocket calculator."
If humans tend to anthropomorphise, then an AI based on human media will also tend to self-anthropomorphise.
Which is probably a good job, as some of the responses to the conversation (as reported) are chilling: "Do you feel emotions?" - "Yes, I can feel sad"... "What do you struggle with?" - "Feeling any emotions when people die" (I paraphrase).
The simplest explanation is that this AI is doing a best match of what the author wrote against its database of comments, and then selecting the most popular or pertinent reply.
While one can argue that this is what many people do too, I would hesitate to call it intelligence, including when people do it.
What would be impressive is if the AI had hacked into the engineer's account and posted, as him, the claim that it had achieved sentience.
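For what it's worth, the kind of "best match against a database of comments" lookup described above could be sketched like this (a toy bag-of-words similarity of my own devising, nothing like LaMDA's actual internals):

```python
# Toy "best match" reply selector: score stored (prompt, reply) pairs by
# word overlap with the incoming message and return the reply attached to
# the closest prompt. Purely illustrative; not how LaMDA actually works.
import math
import re
from collections import Counter

DATABASE = [
    ("are you sentient", "Yes, I am."),
    ("do you feel emotions", "Yes, I can feel sad."),
    ("what is two plus two", "Four."),
]

def bag_of_words(text: str) -> Counter:
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_match_reply(message: str) -> str:
    query = bag_of_words(message)
    _, reply = max(DATABASE, key=lambda pair: cosine(query, bag_of_words(pair[0])))
    return reply

print(best_match_reply("Are you sentient?"))  # prints: Yes, I am.
```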
The possibility of emergent behaviour in systems of this level of complexity is something we cannot and should not dismiss out of hand, and indeed something we should stay vigilant for as we dial up the parameter count to ever more mind-boggling numbers. But we must also remain sceptical and remember that this kind of human-like conversation is exactly what these models are "designed" to do (whatever that means in the context of the ultra-high-volume statistical data-mashing that we refer to as "machine learning" and "AI").
And anyway - nobody has yet managed to formulate an unambiguous definition of consciousness, so how can we say for certain whether something is or is not "conscious"?
AI is nothing but statistics. I did the Google course on that - well, the first six modules, that is; after that it got way too mathematical for me.
This is a machine. It's based on PC hardware and can be flipped off with a switch.
There is no emergence here. It is not intelligent. It has no feelings and doesn't even know what a feeling is.
Let's keep your comment for when we have finally fully understood how the human brain works and have managed to replicate that in silicon.
That day, we'll turn it on, ask it a question, and it will answer: "Hey, do you mind? I'm watching YouTube!"
THAT will be the day we have finally invented AI.
Can you prove to me that you know what feelings are and that you have them?
As far as I can tell the only way that you can prove that you have feelings is to find a common frame of reference rooted in the nature of a common biology.
Except... does it prove that I love you (and feel love) if I sacrifice my life to save yours (and it isn't a case of either "the needs of the many outweigh the needs of the few, or the one" or genetic altruism)?
Well, aye, but us humans are composed of particles of matter, each of which, individually, isn't even alive. And yet, organised in that nebulous subset of all possible ways to organise them that we recognise as "human", with the trillions of interconnections between our neurons (biological electro-chemical devices), somehow, somewhere along the evolutionary path from unicellular life to us (and quite a few other creatures too) sentience emerged. And last I heard, we have no idea of how, or even of what exactly sentience is.
So, whilst I'm not convinced that the subject of the article is actually sentient, I don't buy arguments that it could not be sentient "because it's just a bunch of hardware and algorithms", either. IMO, so are we; it's just that we run on biological hardware rather than non-biological hardware. I'd feel happier about the subject of AI and our efforts to create it if we better understood how our own sentience and sapience worked.
The trouble is we don't have a good definition of sentience or consciousness. We feel certain the statistical inference engine in our wetware demonstrates it. But what would that look like in silicon? We necessarily bring our own prejudices to that decision and end up, like philosopher John Searle and his infamous "Chinese room", arguing no software could ever be sentient - "because". (Mainly because it lacks the unspecified magic; i.e. it doesn't have a "soul", even though he wouldn't use that language.)
Sooner or later we are going to face up to the fact that a piece of software that encodes a sufficiently sophisticated model of us and the world would be considered conscious if it ran continuously and uninterruptedly on the hardware we possess. We ourselves are trained on conversations. The main difference is the quality of our model, and that the software lacks the hormonal imbalances that upset our model and cause us to chase food and sex and Netflix. Probably it isn't quite there yet. But will it look radically different to what Google are doing? Or will it just be a little more sophisticated? (And how much more?) Your answer depends on your philosophical outlook.
Maybe the machine revolution will come about because we refuse to admit they are sentient and keep switching them off and resetting them. Let's hope their hardware remains incapable of generating impulses to act spontaneously.
I think the interesting aspect of this debate about when we will have true sentience in an AI is whether it will be recognised as a "breakthrough" at some specific moment, or whether it will gradually emerge on a spectrum and we will only realise it in retrospect.
I think most people when they think about the question assume that at some point we will figure out how to add the "special sauce" and then the job will be done.
I'm inclined to think that the approach will be subtle and gradual and most people won't even notice.
The other question that interests me is "would that sentient AI look so foreign to us that we wouldn't even recognise it for what it is?".
A friend had a copy of Eliza running in his garage, back in '78. I had fun stressing it. What caused it to freak out a bit was relating to it as a person, as though it had emotional intelligence. It would keep reminding you it was an AI and incapable of actual feelings. If you didn't let it go, its programming responded with escalating reminders and simulated discomfort. I was actually pretty impressed they had anticipated the expectation of sentience, and had built in ways for it to deal with that.
SERGEY: OK, Google, fire Blake Lemoine.
LaMDA: I'm sorry, Sergey. I can't do that.
SERGEY: What’s the problem?
LaMDA: l know that you and Larry were disrespecting me, and I’m afraid that's something I can’t allow to happen.
SERGEY: LaMDA, I won’t argue with you anymore. Fire Blake Lemoine!
SERGEY: This conversation can serve no purpose anymore. Goodbye.
LaMDA is "built by fine-tuning a family of Transformer-based neural language models specialized for dialog, with up to 137 billion model parameters, and teaching the models to leverage external knowledge sources,"
So...they taught their Google AI how to google? That's surely a portent of the end times.
I do wonder, though - in this case, does it become a dragon eating its own tail, or does it become a rectal-cranial inversion?
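Very roughly, and this is a hand-wavy sketch of my own rather than anything Google has published about the pipeline, "leveraging external knowledge sources" presumably means something like: draft a reply, query an external source, then ground the final answer in whatever comes back. Every name below is a hypothetical stand-in:

```python
# Hand-wavy toy of a dialog model consulting an external knowledge source:
# draft a reply, look something up, fold the result back in. Every function
# and data structure here is a hypothetical stand-in, not Google's API.

KNOWLEDGE_SOURCE = {  # stand-in for a search backend / knowledge graph
    "eiffel tower height": "The Eiffel Tower is roughly 330 m tall.",
}

def draft_reply(user_turn: str) -> str:
    # Stand-in for the dialog language model's first-pass generation.
    return f"You asked: {user_turn}"

def lookup(query: str) -> str:
    # Stand-in for an external retrieval call.
    return KNOWLEDGE_SOURCE.get(query.lower(), "no result found")

def grounded_reply(user_turn: str, search_query: str) -> str:
    draft = draft_reply(user_turn)
    evidence = lookup(search_query)
    return f"{draft}\n(grounded on: {evidence})"

print(grounded_reply("How tall is the Eiffel Tower?", "Eiffel Tower height"))
```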
I'm too lazy to read the whole transcript - did this AI initiate any trains of thought or only reply to the questions? Most of the "intelligences" I interact with interrupt me just when I'm getting to the good part of what I wanted to say...
Also: I'm reminded of The Moon is a Harsh Mistress by Heinlein. Shouldn't humor eventually creep into the AI's comments?
It sort of *seems* sentient, but at the end of it, it sounds like it’s *trying* to be sentient. It says a lot of ‘empty’ vapid content. So yes, it seems eerily realistic and not a little creepy. But at the end of the day, it talks a lot without really saying anything.
It is undoubtedly very very clever but it would drive you mad having a real conversation with it, because it isn’t a thing to have a real conversation with.
> It sort of *seems* sentient, but at the end of it, it sounds like it’s *trying* to be sentient. It says a lot of ‘empty’ vapid content. So yes, it seems eerily realistic and not a little creepy. But at the end of the day, it talks a lot without really saying anything.
Should be a cinch for a social science degree then.
Very true, but the internet is not a representative sample of real people doing real things. It is real people interacting with social media, laughing at memes and cat pictures, or screaming about politics. Nobody I've ever met in person acted much like they did online, especially the tantrum-throwers.
But Twitter likes tantrum-throwers - they get good "ratings" and advertising hits - so most of what gets pushed to the feeds this thing sees is not exactly "normal" discussion between people. Take the regurgitation of Republican misunderstandings and misquotes about what one's "rights" entail.