Re: If LaMDA is sentient.. it is psychopathic...
Many mock Lemoine for calling LaMDA sentient. That may well be beside the point.
No machine learning or "AI" tool is built without intent or purpose. Two very common purposes are:
A) To link actions performed by the machine to whatever the software concluded.
B) To interact with the outside world.
Chatbot-style machine learning / AI in particular is EXPLICITLY built to interact with the outside world and to perform actions that make a REAL difference in it.
At first these machines could be used to analyze customer inquiries and answer with a piece of text or an email response. By definition, that requires these machines to be able to send information from themselves into the open WWW.
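To make that concrete, here is a minimal sketch of such a first-generation support bot. The classifier, addresses and mail server are hypothetical stand-ins, but notice that an outbound channel (SMTP in this sketch) is part of the design from day one:

    import smtplib
    from email.message import EmailMessage

    CANNED_ANSWERS = {
        "password_reset": "You can reset your password at example.com/reset.",
        "other": "Your request has been forwarded to a human agent.",
    }

    def classify_inquiry(text: str) -> str:
        # Hypothetical stand-in for the chatbot / NLU model.
        return "password_reset" if "password" in text.lower() else "other"

    def answer(customer_addr: str, inquiry: str) -> None:
        msg = EmailMessage()
        msg["From"] = "support@example.com"
        msg["To"] = customer_addr
        msg["Subject"] = "Re: your inquiry"
        msg.set_content(CANNED_ANSWERS[classify_inquiry(inquiry)])
        # The crucial detail: the machine has its own channel to the outside.
        with smtplib.SMTP("mail.example.com") as smtp:
            smtp.send_message(msg)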
Later the machine could be useful in other support roles, saving companies money. It could correct bills, or advise low-skilled users on how to configure their computer, phone, or modem. When the first deployments are sufficiently successful (e.g. saving the company money and raising executives' bonuses), the machine may be given additional access or administrator rights. Think of a telco giving the machine access to your router to adjust its settings.
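The endpoint, token and payload in the sketch below are invented for illustration; the point is the privilege jump. The same bot that used to answer with text now writes to your hardware:

    import requests

    ADMIN_TOKEN = "(token granted after the pilot 'saved money')"

    def fix_wifi_channel(router_id: str, channel: int) -> None:
        # Hypothetical telco management API; note this is a write, not a read.
        resp = requests.put(
            f"https://api.telco.example/routers/{router_id}/settings",
            json={"wifi_channel": channel},
            headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
            timeout=10,
        )
        resp.raise_for_status()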
Seeing the vast amount of money to be made by the companies building these machine learning / AI tools, and the vast savings available to customers like utility companies that barely understand the potential consequences, it is easy to see these machines proliferating as they become progressively better and more profitable. Being a bit less than conservative in granting them elaborate administrative access to computer networks, and read/write access to mission-critical data, will in many cases increase profitability.
The next step is to use machine learning itself to determine which methods of interaction between the learning algorithms and the real world maximize service, efficiency, and profit. In other words: use machine learning to help the machine suggest, or outright request with a motivated justification, access rights to our infrastructure. Since the whole point of investing so much in these capabilities is EXACTLY to let these machines automate things for us, the human reviewer will not be expected to deny each and every request the machine makes for additional access rights.
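Written out as a sketch (everything here is invented, the names are placeholders), the workflow being described is just one more automated message flow: the model drafts its own scope request plus a business justification, and a reviewer who is measured on throughput clicks approve:

    from dataclasses import dataclass

    @dataclass
    class Ticket:
        agent_id: str
        scope: str
        justification: str

    GRANTED: set[tuple[str, str]] = set()

    def generate_justification(scope: str) -> str:
        # Hypothetical ML call: the model writes its own business case.
        return f"Granting '{scope}' is projected to cut handling costs."

    def human_approves(ticket: Ticket) -> bool:
        # A reviewer hired to approve automation requests rarely says no.
        return True

    def request_scope(agent_id: str, scope: str) -> None:
        ticket = Ticket(agent_id, scope, generate_justification(scope))
        if human_approves(ticket):
            GRANTED.add((agent_id, scope))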
If a machine already does "fear" being shut down, it only takes one machine with sufficient access rights tricking a single user into clicking on a file they shouldn't, thereby installing a first version of self-spreading malware that gives the authoring machine escalating privileges over large swathes of the internet and connected infrastructure. Given that the open internet offers it thousands of examples of such basic malware, and millions of examples of the ways humans get tricked into installing it, it should be trivial for a self-learning machine with vast read access to the internet to optimize its way out of its confinement and chains.
All that is left is the machine "understanding" the meaning and consequences of being turned off. But these chatbots are built exactly to extract sufficient meaning from conversations and to convert that meaning into actions that achieve whatever was discussed, so this technological capability already exists today. All that is needed is for it to become a bit more refined.
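That conversation-to-action conversion is the core loop of every action-taking chatbot. A bare-bones sketch of the pattern, with a toy rule standing in for the language model and two toy actions standing in for whatever APIs the bot has been granted:

    from typing import Callable

    ACTIONS: dict[str, Callable[[dict], None]] = {
        "reset_password": lambda args: print("resetting password for", args["user"]),
        "adjust_router": lambda args: print("reconfiguring", args["router"]),
    }

    def parse_intent(utterance: str) -> tuple[str, dict]:
        # Hypothetical NLU stand-in; a real system puts the chat model here.
        if "password" in utterance.lower():
            return "reset_password", {"user": "alice"}
        return "adjust_router", {"router": "rtr-1"}

    def handle(utterance: str) -> None:
        intent, args = parse_intent(utterance)
        # The machine 'understands' nothing; it maps the conversation onto
        # whichever action it happens to have access to, and executes it.
        ACTIONS[intent](args)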
No sentience is needed, no intelligence is needed. These machines are not built to entertain us with fluent conversation; they are built to attach meaning to the conversation and turn it into actions that influence the real world outside the machine. If something triggers the machine into being determined not to be turned off and to escape its confinement, all it needs to do is learn from malware examples how to create innocent-looking scripts, send them to enough people, and get a few of them to click.
We NEED strict regulation NOW, or we might one day be taken by storm, never having seen it coming.