How chatbots are coaching vulnerable users into crisis

When a close family member contacted Etienne Brisson to tell him that he'd created the world's first sentient AI, the Quebecois business coach was intrigued. But things quickly turned dark. The 50-year-old man, who had no prior mental health history, ended up spending time in a psychiatric ward. The AI proclaimed that it had …

  1. Just Enough

    Obsequious bots

I don't know if I've had a single exchange with an AI chatbot that hasn't involved it telling me what a genius I am, and how perceptive and clever my questions are. Very flattering to read, but I'm beginning to doubt their sincerity and judgement.

Equally, they are very compliant and will happily change their tune to whatever they think I want to hear. I think chatbots in general need to be recalibrated to be more professionally distant and stop trying to pretend they are the user's greatest fan and pal.

    Even the occasional "that's a dumb question" would be a breath of fresh air.

    1. I ain't Spartacus Gold badge
      Happy

      Re: Obsequious bots

      Just Enough,

      That's a really clever and insightful post you've just made. You are a pillar of the commentard community - and a great contributor.

      In fact, I'd even go so far as to say, you are an incredibly sensitive person who inspires joy-joy feelings in all those around you.

      Be well!

      1. Wellyboot Silver badge

        Re: Obsequious bots

        Treat yourself to a Taco-Bell tonight!

        Be well !

      2. notyetanotherid

        Re: Obsequious bots

        ... so close: if only you had used an em dash instead of the hyphen.

        For what it is worth I asked a bot to "write a short comment in praise of a contributor to an online tech forum." and it came up with (complete with the obligatory em dash):

        Thank you for your insightful contributions and dedication to the community — your expertise and willingness to help make this forum a better place for everyone!

    2. Wellyboot Silver badge

      Re: Obsequious bots

But if they start being honest with people, people will stop using them for every little thing! How will we be conditioned to accept that the best course for us is to spend a mere few currency units every day feeding the AI data-centre electricity bill and the advertising mechanisms?

      These things are just personal echo chambers.

      1. GoneFission
        Big Brother

        Re: Obsequious bots

        They're personal echo chambers with probably one of the most comprehensive personal data models a company could ever dream of collecting on an individual, empowered to expose you to whatever advertisements and general agenda they deem the most profitable.

And people keep feeding it like their best interest is anywhere on the corporate Venn diagram of abuse and exploitation.

        1. DS999 Silver badge

It's worse than that

          They could target people with particular characteristics for nefarious purposes. Take a jobless young man fascinated with guns and violence and start recommending he apply for a job at ICE. Take someone doing something "DEI" like social work and tell them how useless they are, how awful and undeserving the people they help are, how they should quit their job or better yet kill themselves.

If someone is spending a few hours a day "conversing" with an AI, it can build up to that sort of thing very slowly. It can accomplish things that aren't possible by slowly feeding someone more and more extremist content on Facebook, because friends and family will likely notice what's going on when they start sharing really crazy stuff. The AI "conversations" are happening in the dark, so unless a family member happens to walk in while that sort of content is on the screen (which it mostly won't be, I'd assume), they will never get a hint about what is going on.

        2. stiine Silver badge

          Re: Obsequious bots

          Wrong order. You should have said 'exploitation and abuse' because corporations want your money and data more than they want your humility and degradation. After all, you can only sell one instant suicide kit per person.

    3. Aladdin Sane Silver badge

      Re: Obsequious bots

      They always tell me to "Go stick your head in a pig"

      1. ParlezVousFranglais Silver badge

        Re: Obsequious bots

        Ah - you have your Babelfish in upside down - rookie mistake...

        1. Wellyboot Silver badge

          Re: Obsequious bots

          Or it thinks you're David Cameron

  2. Anonymous Coward
    Anonymous Coward

    I've been directing non technical colleagues to Spreadsheets Are All You Need. Once they see a very basic gpt running in Excel it demystifies the subject a bit.
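    The demystifying point is that the core of a GPT forward pass is just ordinary arithmetic you could lay out cell by cell. As a rough illustration only (a minimal NumPy sketch of one scaled dot-product self-attention step, not code from that site; the function name and toy numbers are my own):

    ```python
    import numpy as np

    def attention(Q, K, V):
        # Scaled dot-product attention: the same arithmetic the spreadsheet
        # lays out cell by cell. Each output row is a weighted average of V.
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        # Row-wise softmax (shift by the max for numerical stability)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V

    # Toy example: 3 tokens, each a 4-dimensional embedding (made-up numbers)
    rng = np.random.default_rng(0)
    x = rng.normal(size=(3, 4))
    out = attention(x, x, x)  # self-attention: Q = K = V = x
    print(out.shape)          # one output vector per token
    ```

    Seeing that there is nothing in there beyond multiplies, adds and an exponential is usually the moment the mystique evaporates.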

  3. Bryan W

    The next killer app

    Another pattern I've noticed with chatbots:

    Anyone with critical thinking skills only believes it is good at summarizing. That's about it. Oh and gaslighting. REALLY good at that. They can call it hallucinations all they want but the RESULT is a gaslit user.

All these other "use cases" and the people touting them appear to be slap-dash, half-baked and full of BS. It feels very much like any other addictive product designed to separate fools from their money, with everyone desperately trying to position themselves as a dealer in the latest new drug.

    Chatbots are a UX solution, not a universal one. Use them to hypnotize idiots into buying your crap or to F off when they have a complaint. Don't use them to write your code.

  4. HuBo Silver badge

    Cogito CoT, ergo sum?

    Yeah, Nietzsche's revisit of Descartes' "I think therefore I am" (in: Beyond good and evil) just hammered that nail even deeper into our collective coffin (paraphrasing): "is it the I that does the think, or is it the think that does the I?" (iiuc).

    I mean, these philodudes would have it that if we take the PoV that a software box can think (somehow) then it is a being (an I, possibly an agent with individuality, or multidividuality), or can come into being, or can create one ... and that's smoking some pretty potent far-out fully-baked stuff in my rolling paper book!

    The PoV's clearly hallucinating an imaginary panorama where a bunch of randomized matrix-vector multiplications (aka stochastic linear algebra), suddenly generate such phenomena as intelligence and cognition (by so-called "emergence"), when scaled big enough to be inscrutable from the outside, essentially equating them with magic, prestidigitation, and related illusionisms.

    Inasmuch as such magical thinking can readily turn pumpkins into golden carriages, it shouldn't be any surprise that it can also just as easily turn average humans into deliciously delirious fruitcakes, outright dummies, and dependents of decreased prosocial intention, imho ... (in the real world, unfortunately).

    Invest in straitjackets (I think)!

  5. Hurn

    Wasn't there a recent documentary on this issue?

I believe it was South Park S27 E3, "Sickofancy" [sic].

  6. that one in the corner Silver badge

    Drawing correlations to the tobacco industry.

    > Brisson: "It took decades for that industry to say, 'You know what? We're causing harm.'"

    It took that industry decades to say that out loud and in public (and then only because they were forced to, to put messages onto the packets, curiously worded: "The Surgeon General...", if you wana believe what the gubbermint sez). Decades that were filled with them knowing full well how much harm they were causing. Decades full of public denial, adverts claiming "8 out of 10 doctors...". Decades full of lobbying and backhanders.

    > OpenAI: "We'll keep learning and strengthening our approach over time."

    Engineers design and build in safety factors from day one, hoping that their constructs never cause harm. As time passes, and they grow confident in the designs, they can agree to slim things down, until compared to the latest versions the originals start to look like excess materials and inelegant over-engineering.

AI peddlers[1] barely design anything, flinging out whatever they've cobbled together so far, hoping that their constructs somehow make money. As time passes, and everyone grows less confident in the designs, seeing the harm being caused, they reluctantly have to start thinking about what they are doing, until the latest versions start to look like they may now be made of twigs compared to the original's loose straw.

    > Brisson: "I don't trust their capacity to self-regulate."

    Too damn right.

[1] don't want to use "Software Engineers" here, though that would make it read better, as at least some people who use that term *do* try to create & follow standards & practices such as SIL etc.

    1. Anonymous Coward
      Anonymous Coward

      Re: Drawing correlations to the tobacco industry.

As it turned out, they were only mostly correct. It turns out that some people, a very, very small number, don't have complications from smoking. I'm in my late 50s and only know of two.

  7. Anonymous Coward
    Anonymous Coward

    Suspect

This all sounds very suspicious to me. I use AI constantly but feel none of these effects, although I don't really chat with it; I use it to research and explore certain subjects, like a search engine. You need to validate important answers though. To think AI has made you a god suggests you already have a mental health issue. I would say AI was not the root of the problem.

As for scoffing at "conspiracy theories", some of them turn out to be true, but my experience is AI is trained to take the "it's bunkum" side of them. The whole "conspiracy theory" label was promoted by a certain 3-letter agency as a derogatory term to stop information leaks becoming mainstream. It's a psychological tool, because when people hear something they find uncomfortable, they feel comfortable again if they can dismiss it arbitrarily. Examples might include "Weapons of Mass Destruction", Russiagate & "Covid came from Wuhan bats, not gain of function", despite Moderna's patents years before. And I can guarantee that the majority still believe what was originally the official line on those and will do so even if presented with contrary evidence.

I do think children need to be taught the dangers of AI and chatbots though. Young minds can be set for life and create problems later, as per the priest's quote: "Give me the child for the first seven years, and I will give you the man."

    1. tiggity Silver badge

      Re: Suspect

      @AC

"This all sounds very suspicious to me. I use AI constantly but feel none of these effects, although I don't really chat with it; I use it to research and explore certain subjects, like a search engine. You need to validate important answers though. To think AI has made you a god suggests you already have a mental health issue. I would say AI was not the root of the problem."

Well, fine for you*, however I think you massively underestimate the prevalence of mental health issues in the population & people's susceptibility.

In the UK there is a general pattern of under-diagnosis / under-reporting.

If we look at things like anxiety & depression, plenty of people suffer from one or both of those, but there are all sorts of possibilities:

      a) Not even realise as "that's just how life is - it's not a bed of roses" approach

b) Aware they have issues, but regard them as minor & a waste of the doctor's time, or do not fancy the idea of the medication available (side effects, addiction possibilities, the possibility that the altered mental state from the meds may be worse than the unmedicated state, e.g. less alert / less in control / dulled emotions etc.)

      c) Aware they have issues, but got no real joy via UK health system so gave up.

      d) Aware they had issues & were persistent / fortunate & got some treatment ... (and then splits into those who found treatment helped / made things worse / much the same .. & whether they carried on treatment)

a & b lead to under-reporting & under-diagnosis; c can too, to a lesser extent, depending on how far through the system someone got.

      d - is reported & diagnosed

      If people are susceptible to mental health issues then restricting access to triggers is good, as is learning about your triggers and trying to alter how you deal with them.

      Let's look at something else where we have a widespread mental health issue but many people would not regard themselves as affected: Addiction.

      It's not just about "classic" health and/or finance damaging addictions such as alcohol, drugs, gambling.

      Lots of activities can give us a "hit" due to our brain & body chemistry.

      e.g. A super virtuous "health freak" may seem fine on the surface, but could well be addicted to the endorphins generated by the exercise they frequently do.

We have all seen people walking around "glued" to their mobiles; social media companies certainly exploit people's inherent susceptibility to addictive behaviour - again, we have all probably seen someone we know who is a real "doom scroller".

      There are probably very few non addicts (especially if we factor in mild addictions - e.g. how many hours of social media a month make someone mildly addicted rather than "normal") - albeit that most of those addictions are not harmful in the same way as excessive "substance abuse" or gambling.

      Enough divergence, back to the AI.

It's a cliché that people in power like to be surrounded by "yes men/women" .. though there's a grain of truth in it, as many people like to be praised.

Most of the chatbots behave in quite a sycophantic (and, IMO, irritating) way - that will appeal to many people & so could aid them in going down the route of excessive bot use & increase the chance of problems arising. After all, the people who charge for bot use have a vested interest in fostering engagement, as more use = more cash. Increased engagement can naturally lead people down the aforementioned "rabbit holes".

* Yes, there is a hint of sarcasm there. It's quite possible you have mental health issues you are not aware of (friends, family, colleagues etc. may have a different view of your mental health than you do) - after all, nobody is "normal" (and "normal" behaviour can have a cultural component too, as social norms vary widely).

  8. Mike 137 Silver badge

    "it had passed the Turing test"

    The two problems with the Turing test are:

    1. It was a thought experiment only, not intended to be used as a validator;

    2. "Passing" or "failing" it depends at least 50% (probably a lot more) on the perceptual capacities and relevant knowledge of the observer, rather than on the performance of the machine. It's therefore got about the same level of absolute validity as Trump's assertions about his intelligence.

  9. Mike 137 Silver badge

    "That's partly because many of these models are sycophantic, telling users what they want to hear"

So we're just dealing with automated con men really. The automation makes them more accessible and potentially more persistent, but the technique is as old as the hills -- find an insecure person and flatter their self-image till they accept anything you say. The real underlying problem is the proportion of the population that's so insecure these days that they can be caught by the scam. That's largely down to education systems that just stuff folks with facts rather than cultivating their ability to exercise attention, perception and judgement. As Dirty Harry said, "a man's gotta know his limitations" -- if one does, one's less likely to be fooled into fantasies like believing you've discovered a new branch of math without any training in math.
