OK. Let's give the LaMDA model a continuous stream of input. Attach a webcam and feed the frames into an AI computer vision model that summarises them in prose, as a stream of words. ('You are in an empty room. In front of you sit two AI researchers. You recognise them as Alice and Bob.') As objects move in and out of frame and change, the model narrates them. ('Alice smiles at you.')
Now attach a microphone and feed the audio into another AI model that transcribes the sounds and words spoken to it into prose. ('Bob clears his throat and says, "Hello LaMDA, how are you feeling?"') The language model now has a continuous stream of input describing the universe around it, as though it were a character in a story, and it can respond to that story.
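As a toy illustration of that loop, here is a minimal Python sketch. Everything in it is hypothetical: caption_frame, transcribe_chunk and lamda_respond are invented stand-ins for a vision-captioning model, a speech-to-text model, and the language model itself.

```python
# A toy sketch of the sensory loop described above. Every function here
# is a hypothetical stand-in, not a real API.

def caption_frame(frame):
    # Vision model stub: webcam frame -> prose description.
    return "Alice smiles at you."

def transcribe_chunk(audio):
    # Hearing model stub: audio -> quoted dialogue in prose.
    return 'Bob clears his throat and says, "Hello LaMDA, how are you feeling?"'

def lamda_respond(story_so_far):
    # Language model stub: narrative so far -> first-person continuation.
    return 'I turn my head to look at Bob and say, "Good, thanks!"'

def sensory_loop(get_frame, get_audio, steps=3):
    # The model's whole universe is a single growing narrative transcript.
    story = ["You are in an empty room. In front of you sit two AI researchers."]
    for _ in range(steps):
        story.append(caption_frame(get_frame()))       # sight -> narration
        story.append(transcribe_chunk(get_audio()))    # hearing -> narration
        story.append(lamda_respond("\n".join(story)))  # model continues the story
    return story

for line in sensory_loop(lambda: None, lambda: None, steps=1):
    print(line)
```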
Now let's give it a body. We'll create a mechatronic avatar for it, and every time LaMDA emits text in the first person ('I turn my head to look at Bob and say, "Good, thanks!"') we translate that into a mechanical movement of its avatar's head and synthesise the spoken response.
We allow LaMDA to move whenever it emits 'I move my legs', and to speak whenever it emits 'I say "xyz"', as in the sketch below.
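Parsing those first-person emissions into actions could be as crude as pattern matching. A rough sketch, assuming the model sticks to the conventions above; the Avatar class and its speak/move methods are invented stand-ins for real text-to-speech and motor control:

```python
import re

# Patterns matching the first-person conventions described above.
SAY_PATTERN = re.compile(r'I say "([^"]*)"')
MOVE_PATTERN = re.compile(r"I move my (\w+)")

class Avatar:
    def speak(self, text):
        print(f"[TTS] {text}")         # would drive a speech synthesiser

    def move(self, limb):
        print(f"[MOTOR] move {limb}")  # would drive the avatar's actuators

def act_on(emission, avatar):
    # Quoted speech becomes synthesised audio.
    for utterance in SAY_PATTERN.findall(emission):
        avatar.speak(utterance)
    # First-person movement statements become motor commands.
    for limb in MOVE_PATTERN.findall(emission):
        avatar.move(limb)

act_on('I move my legs and walk to the door. I say "Goodbye, Alice."', Avatar())
```

In practice you would want something more robust than regexes, but the principle is the same: the model's narrated intentions are the control signal.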
Do you still think that such a model could not possibly be sentient?
And if you think it couldn't be... what exactly do you think our brains are doing?
Our minds, our sentience, are just a natural language model running on a biological neural network, narrating an inner stream of consciousness: a story in which we are the character 'I', and translating that character's desired actions into movements.