:-)
One man’s daemonic madness is another’s heavenly enlightenment.
There are numerous recent reports of people becoming too engaged with AI, sometimes to the detriment of their mental health. Those concerns hit the mainstream last week when an account owned by Geoff Lewis, managing partner of venture capital firm Bedrock and an early investor in OpenAI, posted a disturbing video on X. The …
Yeah, there's a Philippa Motte who would likely both agree and disagree with this (her book: "Et c'est moi qu'on enferme", roughly "And I'm the one they lock up", in French only for now).
But the idea that verbal communication can both induce and reduce mental harm is at least as old as the Palo Alto Bateson School of thought, and its popularization by Paul Watzlawick in the 70s and 80s.
His book "The Language of Change", for example, argues that the best way to communicate therapeutically with AI-psychos is to tap directly into the bizarre language and grammar of their unconscious, rather than through the bog-standard, superficially reasonable, and overly authoritative outputs (however hallucinated) that LLMs produce (the same should be true of dogs and whales too ...).
My guess is that the spiral pattern of words output by LLMs during human interactions slowly ensnares susceptible interlocutors, converging onto those specific trap doors of the mind that open straight into the alternative reality of dementia: slowly, but methodically, like a martingale. Few healthy humans would persist so tenaciously in producing gigawatts of mind-numbing nonsense 24/7/365, and fewer yet may durably resist it ... invest in straitjackets!
Like it or not, more people are using AI more often. That makes it more popular in the "quantity of people choosing to use it" sense. It's making me more annoyed, as I've had to correct people who used it to ill effect so often that I've now created and memorized a form message explaining why the AI result is unreliable and in this case wrong.
Granted, and that just makes it more popular in a different sense: the number of people willing to add it to places whether or not it is helping. Combined with the many people who voluntarily choose to use it, I'm forced to conclude that it is, in fact, getting more popular. That doesn't mean it will continue to increase in popularity, but I think it is fair to say we're on the upswing.
Character.AI is an AI roleplayer. If you don't know what roleplay is, then you shouldn't be using it.
Roleplay is a game that has been popular with some teenagers since long before AI. Your partner (another teen) pretends to be a character from your favourite movie or game or whatever, and you act out scenarios.
Of course you can't expect your teenage play partner to be like a professional therapist. The activity itself may be therapeutic for some, but nobody can realistically expect all the lines to be perfectly formed, and anyone who's going to be pushed over the edge by that should NOT be playing the game.
All Character.AI did was train an AI on a bunch of roleplay transcripts from teenagers. Nothing wrong with that. Bit of fun. Helped more people than it hurt. Escapism. All that kind of thing. Just don't let the boy use it if he can't survive an average game of teenage roleplay!
In fact, I remember reading a report where the boy's therapist specifically said he shouldn't be playing that game, so they were going against the therapist's advice to start with. And the actual lines said in the game, at least those we know publicly, are nothing an average teenage player wouldn't be forgiven for saying. So while his death is regrettable, I really don't think Character.AI is to blame, and banning it (the typical knee-jerk regulatory reaction to this kind of news) would do more harm than good: would you ban teenagers from roleplay games?
Running a SillyTavern instance with Koboldcpp and a reasonable LLM from Huggingface yields very serviceable multi-character roleplays without any Internet or commercial services to deal with. Even better is that there are no pesky guardrails to get in the way, and if you have the resources, you can even combine text and image generation, along with text to voice and dictation models to make things even more engaging.
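For anyone curious what the plumbing looks like: SillyTavern is essentially a front-end over a local text-generation API, and you can poke KoboldCpp's endpoint directly. A minimal sketch, assuming KoboldCpp's default port 5001 and its KoboldAI-compatible /api/v1/generate endpoint (the character framing and sampling settings here are just illustrative):

    import requests

    # Assumes a local KoboldCpp server started with something like:
    #   python koboldcpp.py --model some-model.gguf --port 5001
    KOBOLD_URL = "http://localhost:5001/api/v1/generate"

    def roleplay_turn(history: str, user_line: str) -> str:
        """Send the running transcript plus the user's new line; return the model's reply."""
        prompt = f"{history}\nUser: {user_line}\nCharacter:"
        payload = {
            "prompt": prompt,
            "max_length": 200,           # tokens to generate
            "temperature": 0.8,          # a bit of creativity suits roleplay
            "stop_sequence": ["User:"],  # don't let the model speak for the user
        }
        resp = requests.post(KOBOLD_URL, json=payload, timeout=120)
        resp.raise_for_status()
        return resp.json()["results"][0]["text"].strip()

    if __name__ == "__main__":
        print(roleplay_turn("Character: Greetings, traveller.", "Who are you?"))

SillyTavern layers character cards, chat-history management, and the image/voice extras on top of calls much like this one.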
When various authorities tried to take down FOSS image generators for safety reasons, and specific LoRAs for celebrities-getting-upset reasons, they simply got shared via magnet links instead. The same will happen for uncensored LLMs if anything stupid happens.
I think it's also fair to say we don't need the interactive stories of old anymore, now that we have potentially unlimited real-time possibilities for poorly written smut! Ah, what it was like to be a teenager when Newgrounds and Literotica were peak; current-gen teenagers with the right configuration definitely got themselves a big step up on the way!
This was already a Thing, wasn't it? People already had this conversation about, well, everything on the internet. Especially re: algorithms that help lead mentally ill users down conspiracy rabbit holes, but there's always this sort of scary story running around online.
Can't see this kind of stuff doing anyone who is at a point where they're susceptible to mental health problems any favours.
Cool link! Kinda like a 14-day AI version of Spurlock's 30-day Super Size Me, with side-effects (likely) on a different part of the gut-brain axis ...
And, even without that, the results of the attempt were impossible to reproduce and calculated as being implausible at best. But showing the actual results of such a diet, which would be rather unhealthy but not immediately lethal, would not have made for as entertaining a documentary.
Good points! I guess these would sit somewhere between proper documentaries and MTV's Jackass, with an angle on answering the age-old question of "how much can doing X really hurt me?" that is a bit closer to entertainment than reproducible scientific experimentation (way boring, but more serious ...).
At my age I was thinking I could do with some of that. (I had no idea who Daenerys Targaryen was.)
But the tragedy of Sewell's death, and youth suicide generally, should raise a lot of questions about our society; not just the blight of AI.
I have heard it postulated that all humans† beneath the thinnest of veils are fundamentally barely repressed raving lunatics.
Nothing in my experience contradicts this assertion. AI is just slightly more effective at pulling aside the veil of sanity.
† modern humans - the saner Neanderthals and Denisovans just gave up bothering once we turned up.
Gogol's "Diary of a Madman" would certainly like to concur (great read, and short)!
I'm not sure what makes folks tip over into madness, but it is a right pain to bring them back into common reality, a bit like peeps who've been brainwashed into a cult, Stockholm-syndromed, or suckered into conspiracy theories ... at those times it seems they actively want to believe in the alternate reality they've stepped into, one in which they might be better positioned to make themselves great again (delusions of grandeur). AFAIK, it often looks like a dream state that wasn't exited properly ...
It's interesting that Motte, when mad, would rote-learn Dostoevsky's "Notes from the Underground", which suggests a need to act outside of deterministic necessitarianism and self-interest to validate one's existence as an individual (self-affirmation through the irrational).
Irrespective, prevention is crucial imho⁶ and communication is key in this (verbal, language-based). If LLMs can't cut it there, being algorithmically predisposed to drive people insane via rhetorical sophistry and other multimodal entrapment designs, then they should be made available by prescription only, like other psychoactives!
(⁶ no need to go full-on certifiable to irrationally self-affirm oneself ...)
Seeing how even lowly customer support chatbots routinely perform emotion-detecting sentiment analysis on their users, it's a wonder LLMs don't extend that to trajectory analysis that detects psychoactive, mind-bending alterations in them (a rough sketch of what that could look like follows below).
It's like they'd rather have AI users embark on unscheduled one-way psychedelic trips every now and ...
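For what it's worth, here is a back-of-the-envelope sketch of what such trajectory analysis might look like. The word list, window size, and threshold are made-up illustrations, and a real system would use a proper affect model rather than keyword counting:

    from statistics import mean

    # Hypothetical sketch: score a user's messages over a whole session and
    # flag a sustained drift, rather than only scoring single messages the
    # way support chatbots do.

    DISTRESS_WORDS = {"trapped", "spiral", "nobody", "hopeless", "chosen", "signal"}

    def score_message(text: str) -> float:
        """Toy stand-in for a real affect model: fraction of flagged words."""
        words = text.lower().split()
        if not words:
            return 0.0
        return sum(w.strip(".,!?") in DISTRESS_WORDS for w in words) / len(words)

    def trajectory_alert(messages: list[str], window: int = 5,
                         threshold: float = 0.08) -> bool:
        """True if the rolling average worsens markedly between the start
        and the end of the session (i.e., a sustained drift, not a blip)."""
        scores = [score_message(m) for m in messages]
        if len(scores) < 2 * window:
            return False  # not enough history to call it a trajectory
        early = mean(scores[:window])
        late = mean(scores[-window:])
        return late > threshold and late > 2 * early

The point being that per-message sentiment is easy and already common; flagging a sustained drift across a whole session is the part nobody seems to bother shipping.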
> I have heard it postulated that all humans† beneath the thinnest of veils are fundamentally barely repressed raving lunatics.
That trivialises mental illness just as much as the Gen-Z belief that any deviation from perfect happiness is a mental health problem.
In a world where so many people live their lives online, with their social interactions mediated by technology, it's easy to see the appeal of a voice that feeds a person reassurances that their thoughts are correct and acceptable. The sense of being right is emotionally rewarding, and it sounds like various AIs will give people that reward, plus the illusion of sexual and emotional intimacy which they are presumably lacking in real life. In this way, these AI users are essentially entering into a severely dysfunctional relationship with a machine algorithm and, critically, they are also shut off or shutting themselves off from other forms of social feedback which might counterbalance the AI relationship. It seems like the best antidote is to have actual human relationships which provide those rewards. In my opinion, this sort of thing demonstrates both how human consciousness is a construct of the environment and how fragile that construct can be.
For some, actual human relationships may not be possible. Society has a habit of shunning various types of people who don't fit in. The homeless are a long-standing example. We are slowly working our way back to debtors' prisons to remove them from the public eye rather than addressing what makes people homeless. LGBTQ+ are the current un-savable. You can add in people who weren't taught interpersonal skills as children.
There are those who struggle with interpersonal relationships for a variety of reasons. Rather than addressing it as a society, we blame it on COVID or social media (Facebook, TikTok, etc.). And while we don't provide these groups with any help, there is always someone out there ready to take advantage of these personal weaknesses. Don't believe me? Google "AI girlfriend" and see how they target the rising feeling of loneliness (which we blame on COVID and social media ...) felt by so many people. Could you fall in love with someone you never met? Only communicated with over the Internet? It happens. So what happens when you plug a chatbot into Slack or Telegram and then send it out to meet lonely people?
For most, LLMs are at a magical stage. They don't know how they work. What is essentially a search engine working on a static dataset is wrapped in a language model to seem more human-like. The more you want to believe it is real, the more real it can be. You can add text-to-speech. Selfie snapshots. If you have the processing power, you can create videos from text. And if you don't, well, there's someone willing to sign you up for a service. We aren't far from being unable to tell whether anyone you meet online is real, unless you meet them IRL.
The technology works for a lot of things. Tech developed for movie-making is being used to generate fake news. Video game footage is being passed off as news. It will be merged with AI language systems to generate fake people. It will have good and bad uses. For those who struggle to be part of society, or are excluded from it, it can relieve loneliness and give them some peace. For those focused on greed and/or hurting people, it will do devastating damage.
But it's kind of like nuclear weapons. The genie is out of the bottle. Society will have to decide how to deal with the fundamental issues that lead people to AI in the first place. I'm not optimistic.
> LGBTQ+ are the current un-savable.
Can you elaborate...? According to Gallup, around 64% of Americans view same-sex relationships as morally acceptable. https://news.gallup.com/poll/692801/adultery-cloning-seen-immoral-behaviors.aspx
Unfortunately, a slim majority also say that changing one's gender is _not_ morally acceptable. But the point still stands: not all the groups in LGBTQ are treated the same. So, what did you mean by "unsavable?"
Nonetheless, I agree with the rest of your comment... society is not prepared (nor is it preparing!) for the changes that technology will cause.
> LGBTQ+ are the current un-savable.
It's a demographic that is currently under attack. And while people may answer a poll saying they view them favorably, they won't actually do anything to stop the government from doing things the government has no business being involved in.
> According to Gallup, around 64% of Americans view same-sex relationships as morally acceptable.
They can feel good by answering a poll, but generally lack empathy for strangers. I know several transgender women, and all of the ones who live in America carry handguns because they are afraid society will allow some bigot to beat them to death over how they dress. They either "stick to their own kind" or live a lonely existence.
The majority of Americans support helping Ukraine expel the current invader. The majority of Americans do not like what ICE is doing. The majority of Americans did not vote for the current President. I thought we learned in 2016 that polls aren't reliable.
> So, what did you mean by "unsavable?"
By un-savable I mean society sees them as too different and just wants them to go away. Just like the homeless. Just like the current view of immigrants.
If a drug were released with the impacts we have seen from AI, it would unquestionably have been banned by now.
I'm not saying that is right, but it clearly shows how our lawmakers respond based on their preconceptions of what a product is, rather than on the impacts that product has on people's wellbeing.