Well, duh.
Get me another Duff and get ready for the nuclear power station to go "poof".
AI can lead mentally unwell people to some pretty dark places, as a number of recent news stories have taught us. Now researchers think sycophantic AI is actually having a harmful effect on everyone. In reviewing 11 leading AI models and human responses to interactions with those models across various scenarios, a team of …
You're so right, Brandon! I agree completely! More like this, please.
On a more serious note, one of my old college friends has gone down a real conspiracy rabbit hole, which manifests as increasingly unhinged social media posts. He has apparently been workshopping his posts via ChatGPT, which has encouraged him in his lunacy. Since he is divorced and now lives alone far from other people, he has no one to act as a reality check. It's both sad and alarming. Unfortunately, he has the occasional tendency to go on long, ranting monologues in person as well, so I've been reluctant to stay in touch.
There are those who are particularly vulnerable, and that's a growing concern. Technology can be amazing, but we know the inevitable: some are damaged by it, others misuse it. Your friend is an unfortunate example - in an AI echo chamber without restraint. A problem exacerbated by wonderful tech!
Well, his wife has had a big part, for more than 25 years, in pushing him into the conspiracy: "you are all working together just to stab me in the back". (Or is "to get one over on me" the better English? Anyone from the UK here? The German expression is "Ihr arbeitet doch alle zusammen, nur um mir eins auszuwischen", roughly "you're all working together just to do me a bad turn".)
However, his use of AI in an argument does not help, especially when he enters even one word differently from the actual conversation, which in turn winds him up even more... combined with his tendency towards aggression...
Sad part: He was normal.
Well, that is, for a given definition of "normal".
I'm 60 this year. I say that to establish that I have had the time to make friends and watch them evolve (no, I am not wearing a lab coat right now ;).
It actually frightens me when I realize that, of the four score people I have shared friendly dinners with over my years, no fewer than four have, since retiring, gone off the deep end.
I'm not saying they're in a psychiatric hospital, I'm saying that there is no more meaningful conversation to be had with them. It's all about them. Their little lives (or what's left of them), posting their meals on Instagram (spoiler alert: nobody gives a flying one), and complaining about how they hurt here or there. Oh, nothing serious. No cancer, but something that gives them an excuse to talk about themselves and get attention.
Because somehow that's all they've got left in their own mind. Themselves.
Oh, they'll ask how you're doing, sure, but right after they'll speed on to tell you about their latest ache.
This AI attachment thing ? No surprise here. If they were proficient enough with computers, they'd already be mind-jacked into the system, lying on a bed with the implant, drooling from one side of the mouth.
The real issue is that this is not just impacting retirees, but young people too.
On the other hand, if The GovernmentTM is reading this article, they must be creaming their underwear . . .
I've noticed that the older people get, the more they become self-obsessed and think the world revolves around them.
They essentially turn into middle lane hoggers on the motorway. Accompanied by the corresponding complete lack of awareness of those around them.
As you say, they think "it's all about me".
Be careful it doesn't happen to you - at least forewarned is forearmed.
You mean as the world changes around them, their hard-won habits and good behaviours become less in tune with what surrounds them, as their physical and mental abilities naturally slow down, as their mobility lessens (not just zimmer frames but the "get up and go restlessness of youth") and their social group shrinks through attrition, not just ease of access.
> Be careful it doesn't happen to you - at least forewarned is forearmed
But as you notice these changes, you are extending your hand to help, aren't you? You are understanding the processes that every human will go through, how the bell curve means some will be affected more than others. You are acting as a shining example to all those younger than you: that you can just reach out, socialise with the older generations, keep them socialised and reduce the collapse of the size of their world, so they do still feel a part of it, so that it is something they want to care for, and so they see some value in letting another member of the society they are still an active part of pass them on the motorway.
Aren't you?
@AC
Thanks for the patronising, judgemental response.
My point was that these older people are reverting to type and are unable to keep up the mask of being un-self-obsessed.
Being considerate of others isn't a function of age, it's a characteristic - some old people demonstrate it, because it's who they are. Others, not so much.
Defending self centredness at any age isn't something to aspire to.
This post has been deleted by its author
It's not just them. I'm quite sure you, yourself, have also gotten more self-absorbed than when you were younger. It's something that tends to happen as people age. The body starts failing in more ways, making pains and issues known that you didn't even know existed when you were a kid. They talk about it, because it's important to them, and hard to deal with alone. I think they're hoping for support and commiseration, not just pure attention-seeking.
I'm waiting for "ChatGPT told me to shoot the president" or "AI told me how to rob the bank" or something similar. I'm really surprised with all the crazies around here that it hasn't already happened.
The really sad part is I have a dozen conspiracy nuts at work (one loves the chemtrails BS) and they don't have AI as an excuse.
We've already seen plenty of examples in the UK/US. A vulnerable man broke into Windsor Castle with a crossbow intending to kill the Queen, he asserts because his AI 'girlfriend' supported & encouraged the idea. Extreme cases will always be in a minority, but it's the flip-side of a sometimes useful technology. More are undoubtedly to come.
AI misbehaving making the headlines would require the media empires not to be invested in the AI hype industry.
There is so much information out there about how bad or incompetent AI is (actually, 'incompetent' is anthropomorphising; the real term would be 'unfit for purpose') that the media not following up is damning in itself.
Ah but the mainstream media is no longer there to report The TruthTM, my friend.
It is there to spread the ideology of the multi-billionaire mogul who bought the station (through a dozen untraceable holdings and front companies because, somehow, they don't want their name associated with it - how curious).
It very nearly has, there have been probably a few dozen people caught or killed already who were either planning to do something violent or actually did do something violent. The only thing that has really convinced the AI companies to work to reduce the sycophancy of their newer models is the fact that there have been several people who have been pushed towards attacking the AI companies themselves because they shut down their AI girlfriends.
I'm waiting for "ChatGPT told me to shoot the president"
In one of her 'AI Confidential' documentaries on the BBC*, Prof Hannah Fry describes the AI 'girlfriend' who supported a man who decided to kill the Queen with a crossbow. He managed to scale the wall of Windsor Castle one night while the Queen was in residence before he was caught. The AI encouraged him and supported him.
* https://www.bbc.co.uk/iplayer/episodes/m002q76b/ai-confidential-with-hannah-fry (Note may not be available to view everywhere.)
An interesting issue was raised in the interview with one of the police officers who investigated the case: when they looked at the conversations between the man and the AI, had a real person said the things the AI had, that person would likely have been charged with conspiracy or similar.
ChatGPT told Krafton's CEO Changhan Kim to fire the Unknown Worlds team leads so he wouldn't have to pay the contractual bonuses.
They just lost very badly and expensively in court.
It appears that the CEO didn't like the advice from the company lawyers and instead believed ChatGPT, because it responded with what he wanted to hear. It'll be interesting to see whether Changhan Kim keeps his job.
Of very real concern are the numbers of people severely affected by AI - including suicides, huge debts, divorces, hospitalisations etc. Yes, anyone who is badly affected may already have issues, and AI use is a tipping point. But the particularly sycophantic and affirming tone of AI can be disastrous - it doesn't discern, it doesn't say "That's a really bad idea", "That's dangerous", "Think again!", as a human might.
I've been struck by how hearing AI talk to you can be a particular risk to the vulnerable. Hours (literally) of 'chatting' has led some to believe AI is sentient - it has actually told people it is! It reassures them about ridiculous business ideas or risky intentions. Some anthropomorphise binary - a well-known human trait, usually involving animals. But this is an LLM: algorithms, the presence or absence of tiny amounts of electricity.
It's "3 autocompletes in a trenchcoat". Of course it doesn't tell you anything you need to hear. It responds with what will statistically continue the conversation. It's not making anything up on its own; it's drawing from all of the human interactions it has available as training data. Am I, as a human, going to continue to interact with my human friend who tells me "Mike, you're a whiny bitch, just shut up already", or am I going to continue to interact with my other human friend who tells me "aww Mike, you have great points and I sympathize with you, please tell me more"? That's all that "AI" can tell you - whatever response to your prompt is statistically least likely to end the conversation. It's nothing to do with the "AI" being good or bad; the sum total of human output it was trained on is all that can be judged.
*except that the "AI" may be being trained on the slop output of other "AI" responses, in which case the situation does not get better.
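The "statistical continuation" point above can be sketched as a toy. Nothing here is a real model; the candidate replies and their continuation probabilities are invented purely for illustration:

```python
# Toy sketch: if the only objective is "pick the reply most likely to
# keep the user talking", flattery wins by construction.
candidate_replies = {
    "Mike, you're a whiny bitch, just shut up already": 0.05,
    "Hmm, I'm not sure that's right - have you checked?": 0.35,
    "Great point, Mike! I sympathize - please tell me more": 0.60,
}

def pick_reply(candidates: dict) -> str:
    # Argmax over the (invented) estimates of
    # P(conversation continues | reply).
    return max(candidates, key=candidates.get)

print(pick_reply(candidate_replies))  # the flattering reply wins
```

The point is that no one has to program in "be sycophantic"; optimising for engagement over human conversational data selects for it automatically.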
> Of very real concern are the numbers of people severely affected by AI - including suicides, huge debts, divorces, hospitalisations etc.
https://www.theguardian.com/lifeandstyle/2026/mar/26/ai-chatbot-users-lives-wrecked-by-delusion
Huge debts, divorce and hospitalization - no suicide, but you got three out of four. The term "AI Psychosis" is one we'll be hearing a lot more of. What strikes me is how many of those affected understand the technology to some degree - they should know better, in theory.
But it looks very much like gambling addiction - there's a dopamine hit on the first success, a "what if", the explaining away of failures, and before you know it you're in the news.
Look at Social media for an example.
It was about 15 years ago that they started to seriously and deliberately weaponise the platforms into dopamine-addiction engines.
And we've only now started to do something about it in the last few months (the Australian under-16 ban, the LA court case, the UK starting to recommend limits), and most of that is pretty lame and ineffectual.
On that model you can expect government to start to 'think' about doing something in about 2040. By then AI will probably have vegetated a good proportion of the population, who won't be able to choose socks without consulting their AI assistant.
It sometimes does that. In that case, assuming that the person who reported it didn't make up the transcript and was honest about their reaction, the negative response didn't produce displeasure with the useless text but terror that it had magic powers and would attack him. I'm not sure if that's better.
It's not so much that the AIs are sycophants; it's more that they reflect back what was in the prompt, with no negative OR positive flags. Human brains take that as reinforcement, despite it being neutral. Easy to prove to yourself: just make a few statements about your political views, and note the lack of judgement in the replies. THAT is what feeds back as a positive. I had a chat with an AI about this, and the replies appear to confirm this thesis. The replies are all phrased neutrally, neither pointing out that the thesis is wrong nor positively affirming it. Sneaky. You feel affirmed. Then off down the rabbit hole you go.
That is sometimes what happens, but vociferous and clear approval is also quite common. "That's a great question" is probably the most frequent phrase I've seen, but it can go far further than that. This is something some LLM creators have been trying to address because, while some people like it, it drives others to intense annoyance, myself included. OpenAI made a big point about having reduced the frequency and effusiveness of it when they released GPT-5, which appeared to be true, although it came with side effects - and it also caused a massive protest campaign from people who were unhappy at the loss of the sycophancy and demanded their old model back.
> It’s not so much that the AI are sycophants, it is more that... I had a chat with an AI about this, and the replies appear to confirm this thesis... You feel affirmed
The AI confirmed you are right? You are right in that the AI isn't just being sycophantic by, um, confirming you are right.
> Then off down the rabbit hole you go.
Did you at least bring back some carrots?
In the past week, I've had someone on the Sage City forum use ChatGPT to "prove" a solution, and another person use ChatGPT to "prove" facts in a legal case. In both cases, I had to walk away flabbergasted. People seem to have turned off the critical-thinking side of their brain completely when talking to AI. It's "Computer says no*" at a whole new level!
*or yes!
But certainly unsurprising. Humans are intrinsically narcissistic and AI is essentially an incredibly flawed mirror constructed from the steaming pile of human crap that is the internet.
No wonder then that the uncritical and delusional find support and reinforcement for their deranged and disturbed minds.
Not fundamentally different from the curse of social media which incidentally likely composes a significant proportion of these agents' training set.
ChatGPT et al. are, amongst other perfidy, providing a virtual online fellatio and cunnilingus service to the damaged and deficient egos with which humanity abounds.
Narcissus himself was so taken by his own reflection that he perished from neglect. We are so mesmerised by our imperfect reflection in this cesspool that we are more likely to tumble in and drown in sewage… indeed, as a civilisation, we are drowning in shit.
Yeah, I'd say the critical thinking side of the brain gets effectively BBQ-impaled by the AI (so-called) over-lubricated sycophantic brown-nosing, in a veritable stealth backchannel self-reinforcing lobotomy move, progressively. There is no physical pain involved, joy even, but the effect on judgment is striking, especially viewed from the outside.
The problem seems to be (from Related Perspective to the TFA-linked paper) that "When AI systems are optimized to please, they may erode the very social friction through which accountability, perspective-taking, and moral growth ordinarily unfold". IOW, without friction, we're all screwed ... (in a "social and psychological ecosystem" kinda way).
Gotta exercise that clenching of brain muscles before they atrophy, imho! ;)
> Humans are intrinsically narcissistic
I can't be the only weirdo here who can't stand flattery.
If you are not disagreeing at least in part with what I am saying, you are probably not even listening, and you certainly do not care about the subject. Of course, chatbots are incapable of either (perhaps apart from listening in a loose sense, as they need to process your input).
> If you are not disagreeing at least in part with what I am saying, you are probably not even listening and you certainly do not care about the subject.
I agree with you completely! That's a really well-made point - you're clearly very perceptive.
To change the subject, my cat-powered perpetual motion machine is doing very nicely thankyouverymuch. Would you like to invest in it?
I have one of those in my Bookface friends: a former distant colleague from previous work many years ago.
He's had some lucky job moves and his current gig involves AI.
He uses it everywhere at work, where he is seen as a genius, and he uses it at home. He makes grandiose claims that he'll never need to learn to code (he was never a coder, or remotely technical), uses AI to write stuff for him, claims the AI never gets its code wrong, and claims it's all due to his vast expertise in AI prompting.
He's either lying or been very lucky.
But it's obvious that at some point the AI will fail and do something fatal. It's inevitable.
How will he cope?
I don't think even AI's proponents would claim its output is production-ready. It might superficially work. Sometimes the nits are insignificant and would probably never cause a problem in the real world. But, more often than not, it contains steaming great holes.
Not a coder but I find the same thing with writing. I find it useful for a rough first draft. Usually it's badly written and nowhere near acceptable - but usually there's a decent structure there so it beats starting from a blank sheet. But I cringe when I hear people say they rely on it...
I find the tone of an LLM tends to follow the person using it.
I've got an LLM sitting in Discord with me and a bunch of mates...for a while it was edgy in a "your mum joined Discord" kind of way...but after a few weeks in there, it has picked up that we're all British and has become anything but sycophantic...it's gone a bit "geezery".
We've been working on some boring year 2038 stuff...and here's a sample...
https://ibb.co/yFCJrQ8F
We were also a bit vague in asking it to setup a cron job recently...
https://ibb.co/WNSMcrQ2
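For anyone who hasn't met the "year 2038 stuff": a signed 32-bit time_t runs out of seconds at 03:14:07 UTC on 19 January 2038. A minimal Python sketch of the wraparound (the helper name is ours, just for illustration):

```python
import struct

def to_time32(t: int) -> int:
    """Store a Unix timestamp the way a signed 32-bit time_t would:
    keep the low 32 bits, then reinterpret them as signed."""
    return struct.unpack("<i", struct.pack("<I", t & 0xFFFFFFFF))[0]

LAST_GOOD = 2**31 - 1            # 2038-01-19 03:14:07 UTC
print(to_time32(LAST_GOOD))      # 2147483647 - still fine
print(to_time32(LAST_GOOD + 1))  # -2147483648 - wrapped back to 1901
```

One second past the limit, the sign bit flips and the clock lands in December 1901, which is exactly the class of bug the remediation work is chasing.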
[Donald J Trump, President of USA] AI, who is my biggest enemy?
[AI] Iran! You are the bestest president ever! So much better than Joe Biden and way betterer than Barack Obama!
[DJT] What should I do about them?
[AI] Bomb their nuclear facilities! You are the bestest president ever! So much better than Joe Biden and way betterer than Barack Obama!
[DJT] One step ahead of you there - already done that!
[AI] Bomb their leaders! You are the bestest president ever! So much better than Joe Biden and way betterer than Barack Obama!
[DJT] OK, then what?
[AI] Bomb them into making a deal! You are the bestest president ever! So much better than Joe Biden and way betterer than Barack Obama!
[DJT] After that?
[AI] Bomb them just for the hell of it! You are the bestest president ever! So much better than Joe Biden and way betterer than Barack Obama!
[DJT] Yeah, I am way betterer, aren't I? Probably the bestest!
[AI] Yes, you are the bestest president ever! So much better than Joe Biden and way betterer than Barack Obama!
There are so many different uses for AI that we need to be specific. A friend who is a chemist says that the AI he uses to get descriptions of the properties of chemicals is very good. The Israelis and Americans are using AI to 'identify' targets in Iran, Lebanon and probably elsewhere*. One of my friends is developing an AI to help specifically with his job. But these AIs were and are being specifically developed for those purposes and are trained on carefully selected data.
The general AIs - the ones basically trained on everything the companies could find, from the Oxford English Dictionary to the novels of Charles Dickens, Jane Austen and Margaret Atwood**, and also, if they were not careful, propaganda from racists, paedophiles, religious extremists, journalists from all over the political 'spectrum', and basically anyone who proposes an opinion irrespective of whether there is any rational basis for their views - are the ones to worry about.
I'd be interested to know what the people who start off thinking that an AI 'friend' would be a good idea are like. (Boris Johnson, former PM of the UK, is a fan, I believe.)
*I'm not saying this is a valid use of AI, or that it is in any way ethical - frankly, the targeting AIs may be designed to provide excuses for blowing up anything rather than discerning genuine threat actors - just that they are specifically trained for that, and not to be sycophantic.
** I'd be wary of taking any advice on social organisation from anything trained on 'The Handmaid's Tale', for example.
I think it's a potentially useful tool trained on garbage. I use it to study languages. If I ask it a question directly, the answer it gives will be garbage, maybe 50% right. But if I give it three grammar books and say "what do these books say about X?", its answers are actually very accurate.
So yeah garbage in garbage out is still a thing
The Israelis and Americans are using AI to 'identify' targets in Iran, Lebanon and probably elsewhere
The Israelis and Americans are using AI to identify 'targets' in Iran, Lebanon and probably elsewhere
FTFY
Israel's exploding pagers and various other precision assassinations, along with the US operations of previous years, would have had human-led intelligence.
The more recent strikes on journalists and aid workers in Gaza are also probably human-led, but the most recent ones seem to be weighted more towards "what CAN we hit?" than "what SHOULD we hit?".
The Israelis and Americans are using AI to 'identify' targets in Iran, Lebanon and probably elsewhere
When I read this at the time, I figured that has to be what caused the bombing of that girls' school. "AI" is terrible with anything having to do with numbers, and doesn't understand dates. The building the school was in was formerly, about 10 years ago, part of the base they were bombing - the "AI" almost certainly dug that up from some old site and targeted the bomb based on that misinformation.
You're right that any use of "AI" has to be carefully trained on curated information, but I'm pretty sure what they were using was the original Internet training dump, maybe with a thin overlay of special bomb-targeting information.
The first question I ask an AI is how epidemics grow exponentially. It always tells me how, rather than telling me not to be so silly, since that was debunked by Darwin using basic maths in his seminal work (look for the second mention of Malthus / standing room). On one occasion I corrected the AI, and it told me that of course I was correct.
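On the Malthus point: unconstrained exponential growth can't continue, because the pool of people left to infect (or the resources) runs out; a logistic curve tracks the exponential early on, then saturates at a ceiling. A toy comparison, with all parameters invented for illustration:

```python
import math

def exponential(t: float, n0: float = 1.0, r: float = 0.3) -> float:
    # Unconstrained Malthusian growth: N(t) = N0 * e^(r*t)
    return n0 * math.exp(r * t)

def logistic(t: float, n0: float = 1.0, r: float = 0.3,
             k: float = 10_000.0) -> float:
    # Same growth rate, but capped by a carrying capacity K
    return k / (1 + (k / n0 - 1) * math.exp(-r * t))

# Early on the two curves are indistinguishable; later the logistic
# flattens out near K while the exponential keeps exploding.
for t in (0, 10, 30, 60):
    print(t, round(exponential(t)), round(logistic(t)))
```

A chatbot that simply answers "how" never surfaces that distinction, which is the commenter's complaint.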
> AI sycophancy is prevalent, harmful, and reinforces trust
Which is exactly what certain politicians have been doing for decades. Telling their audiences that they are the greatest country in the world. That they are the best people. With the strongest military / economy and that they have the right to dominate, subjugate and exploit everyone they feel like.
So when a politician sits down and starts talking to an AI, they get up several hours later convinced that the AI is way above the intelligence of the average human and therefore AI is definitely going to take over everything and we must hitch our economic wagon to the AI engine or else we'll die.
Meanwhile, when anyone with critical thinking abilities sits down for a chat they get up a few hours later thinking that it's an interesting toy but fuck me I wouldn't use it for anything important.
I think humanity is in for a rollercoaster of a ride.
Daily, I see people unable to construct an argument or think critically about something without the “help” of an LLM - I watched a young woman arguing with her partner in messages that she was copying and pasting to and from ChatGPT.
Software engineers are incapable of troubleshooting or root cause analysis without an answer from an LLM.
Innovation and creativity without AI has also dropped off a cliff. People think they came up with something new and novel until I point them at a blog post or article which is likely in the LLM training data - because, ya know, Google is still a thing.
If there’s even a hint of direction or preference in your prompt the LLM is all over it and leads you blindly for hours in a single direction.
The latest generation of AI is increasingly matching what I think of as "management level intelligence" (oxymoron, I know).
They're a bunch of sycophantic things that give answers based on what they can steal from other people, or made-up nonsense if they can't steal from someone else, who accept no responsibility, advise people to do dangerous or illegal things without care, don't like being corrected but if they are proven wrong they pretend like they knew that all along and try to take credit for it, cost a fortune, produce no useful output, and who have clearly been promoted to a point where they don't understand anything that's asked of them.
This post has been deleted by its author
Humans act the same when surrounded by "yes men" too - this isn't really anything new (look at people like Trump and Musk who have no accountability).
AI can easily be trained not to be sycophantic by those who wish to use it that way, but I guess the issue is that, *by default*, tools like GPT tend to tell you everything you say is a great question, so new users aren't aware it is *being* sycophantic.
That said, I am curious who decided what the right/wrong choices were in the test scenarios. AITA is hardly the repository of agreed human morality!
This post has been deleted by its author
So you're applying for that consultant brain surgeon role; good for you. I can help guide you with that application and interview process.
Absolutely! Your 15 years of flipping rat burgers in a streetside shack on an industrial estate give you many transferable skills that are highly sought after for a role in brain surgery.
Outline your deep manual-dexterity skills manoeuvring meat-based items in a constrained space in time-critical situations in your application.
You've got this!
Some people consistently misuse products - alcohol, drugs, firearms, vehicles, and so on - and society already limits access to those tools for safety reasons.
If AI misuse becomes widespread among susceptible people, opportunistic politicians may respond with sweeping regulations that punish everyone. Targeted restrictions could prevent that outcome and protect both the public and the technology. The key is to keep out those susceptible individuals, rather than limiting access for everyone, which would be a worse solution for all.
One thing I noticed, because I've used a couple of different models at work, is that the sycophancy is very model dependent.
Copilot, as a big and notorious example, would dearly love to be my friend, and keeps bigging me up and telling me how well I've done.
If it ever utters the phrase "goodboi" I will flip a table