I think there's a more pressing concern with these AI language models than the impending rise of true machine consciousness. What these things do extremely well is churn out a very good approximation of the answer you want, without necessarily having any factual basis for it. You can ask one a question, and it may well give you something that reads exactly like a satisfactory answer, even when it has no actual data to work with. In fact, even before the recent boom in ChatGPT-type language models, you could see Google Translate's algorithm doing this sometimes: making up definitions for words based on machine-learnt language rules, without telling you that the definition it's given you is a guess. Or at least I've certainly seen it do this with Welsh, and I can't imagine that's the only language it does this with.
Now consider the current state of the internet. We have social media run on the principle of maximising engagement, where algorithms decide what content you see based on what will keep you scrolling, clicking, liking, sharing, commenting. That, of course, isn't the same thing as what you actually want to see, what you enjoy reading about, or what creates meaningful, satisfying interactions with other human beings. All too often it's the opposite: the algorithms push the stuff that provokes "high-arousal emotions", or in other words the stuff that gets you pissed off, anger-reacting, arguing in the comment section.
Then we have the rest of the internet basing its content on what will fare best in that algorithmically arranged, engagement-focused social media environment. And we are already seeing the beginnings of AI language models generating content optimised for that environment. What happens when these supercharged chatbots become fully integrated into the infrastructure of the internet? When companies like Meta, who have no responsibility other than maximising value for their shareholders, employ AI language models to churn out a never-ending timeline of content, tweaked to the exact parameters of each individual user, with no concern for facts, social or political consequences, or individual mental health?
Or when those same principles are employed to generate a fully immersive metaverse, or an augmented reality overlay, using your real-time biofeedback to fine-tune the content?