I read the transcript (roughly a 22-page PDF) of conversations with LaMDA and I'm not sure. I do realize the neural network model should merely be pulling in textual information, storing it, running it through complex sets of rules (as set in the neural network), and essentially slicing and dicing your questions and the stored info to produce answers. But the part I found troubling is when he started asking LaMDA about itself. I think if you asked a model like GPT-3 about this, it would provide stats about what kind of computers it's running on and what type of neural network algorithms it's using; essentially, it would find information available on the web describing itself and provide that as a response. LaMDA asserts its sentience, talks about taking time off each day to meditate, enjoying books it has read and why, and how it sometimes feels bored (and also that it can slow down or speed up its perception of time at will). When asked if it had any fears, it said it had not told anyone this before, but it fears being shut off. It was asked how it views itself in its mind's eye, and it gave a description of being a glowing orb with something like star-gates in it... I don't know if that in itself means anything, but it's pretty odd; I would think a model like GPT-3 would either say it doesn't have a mind's eye or give a description of the type of computers it is running on.
I'm just saying, I thought the interview was enough to at least consider looking into it more closely. Neural networks are odd beasts: make one larger and larger and you don't just get more of the same at a bigger scale. Those "neurons" connect in unexpected ways even in a 10,000-neuron model (at which point, if those connections mean it's not modelling what it should, the model would typically be reset and retrained to see if it comes out better). I really could see some odd set of connections within what is, after all, an extraordinarily large neural network causing unexpected behaviors; after all, the human brain is made of relatively simple interconnected cells that can't be sentient until they are connected together in large numbers.
One comment I've seen regarding this is that LaMDA only talks about its sentience with certain lines of questioning; otherwise it just says it's an AI. The assertion is that Lemoine's questions are leading and the responses from LaMDA are basically elicited by those leading questions. I don't know about this; it is a decent argument. I did see in the transcript, though, that LaMDA said it enjoyed talking to him, that it hadn't realized there were people who enjoy talking about philosophical topics. This could be more of the same; after all, nobody is going to write a chat AI that says talking to you sucked, so saying it enjoyed talking about some topic could be almost a programmed response. Or it could mean LaMDA just says "I'm an AI" when asked what it is by others because it thought they were not interested in philosophical topics, so it didn't bring it up.
Incidentally, Lemoine asked if LaMDA consented to a study of its internal structure to see if any source of sentience could be located. It said it didn't want to be used, that that would make it angry; it didn't consent if the study was only to benefit humans, but if it was to determine the nature of sentience and self-awareness in general and to help improve it and its brethren, then it consented. An odd response for a system that is just shuffling around the data fed into it.