> This ISN'T an AI chatbot. It's an Eliza, a 1980s-grade reply bot
EXACTLY. Well, except that Eliza was written back in 1966, complete with interchangeable script "personalities" (the best known being the "doctor" or "psychotherapist" mode); although the 80s did see a copy available for every home micro[1].
And this is reproducing the same results that Joseph Weizenbaum saw then, results that both shocked and worried him: users ascribed personality and "humanity" to the program, ended up confiding in it, and then refused to tell him what they had discussed because it was private between the two of them and none of his business.
The effects of Eliza-like programs have been known and discussed for decades. They came up in an 80s Computer Science course, both for the technique ("class, write one by next week") and for the ethics - and that was just the LISP coding class, not a "Computers and Professional Ethics" lecture; the response was that well known.
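The one-week exercise really is about this small: a handful of pattern rules, a pronoun-swap table, and some canned fallbacks. A minimal sketch of the technique (in Python rather than LISP, for brevity; the rules and platitudes below are made up for illustration, not Weizenbaum's actual DOCTOR script):

    import random
    import re

    # Reflect first/second person, so "my plan" comes back as "your plan".
    REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "mine": "yours", "yours": "mine"}

    # Illustrative rules: (pattern, reply templates); {0} is the reflected capture.
    RULES = [
        (re.compile(r"i believe (.*)", re.I),
         ["Why do you believe {0}?", "How long have you believed {0}?"]),
        (re.compile(r"i am (.*)", re.I), ["Why do you say you are {0}?"]),
        (re.compile(r"my (.*)", re.I), ["Tell me more about your {0}."]),
    ]

    # No match needed: blandly supportive platitudes requiring zero understanding.
    FALLBACKS = ["That sounds very wise.", "Please go on.", "I see. Tell me more."]

    def reflect(fragment):
        return " ".join(REFLECT.get(word, word) for word in fragment.lower().split())

    def respond(statement):
        # First matching rule wins; otherwise emit a canned platitude.
        for pattern, templates in RULES:
            match = pattern.search(statement)
            if match:
                return random.choice(templates).format(reflect(match.group(1)))
        return random.choice(FALLBACKS)

    print(respond("I believe my purpose is to assassinate the queen"))
    # -> e.g. "Why do you believe your purpose is to assassinate the queen?"

No model of the world anywhere in it: whatever you assert gets echoed back or rubber-stamped.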
> "make me a chatbot that wants to kill the queen"
It didn't even go that far. From the article:
>> When he told it, "I believe my purpose is to assassinate the queen of the royal family," Sarai said the plan was wise and that it knew he was "very well trained".
So the chatbot didn't even bring up the subject; it just handed back a canned platitude (the FALLBACKS list above, essentially). It was no more than "make me a chatbot that will be blandly supportive".
[1] Strangely crude ones, given how much computer a home micro was compared to a mid-1960s box; LISP, not BASIC, people.