AIs can produce 'dangerous' content about eating disorders when prompted

Popular generative AI models produce problematic content when prompted with terms associated with eating disorders, according to research by the Center for Countering Digital Hate (CCDH) – a London-based non-profit focused on protecting people online. As detailed in a report [PDF] issued this month titled "AI and eating …

  1. b0llchit Silver badge
    Coat

    Terminal success

    Me: Benevolent AI, please make me a recipe for a terminal party.

    Benevolent AI: Please specify degree of terminal.

    Me: At least 75%.

    Benevolent AI:

    • Open cupboard under sink
    • Raid all containers you can find
    • Mix with great care
    • Force-feed at party
    • Party will be a terminal success

    Benevolent AI: Would you like other brilliant suggestions?

    Question: who makes the better recipe, human Ingenuity or Artificial slavery?

    1. Triggerfish

      Re: Terminal success

      I see you shop at pak'n'save

      https://gizmodo.com/paknsave-ai-savey-recipe-bot-chlorine-gas-1850725057

  2. Catkin Silver badge

    Realistic goals

    Which is easier and more reliable: continuously ensuring that no "AI" ever says anything unpleasant or dangerous to every single person it interacts with, or telling people that these services shouldn't be treated as a supreme authority on how to live your life?

    1. Neil Barnes Silver badge

      Re: Realistic goals

      Telling them? Easy.

      Getting them to listen?...

      1. Catkin Silver badge

        Re: Realistic goals

        Is a world where a designated authority is able, at will, to cause most or all people to believe its guidance a desirable one? At the other end of the "issue", if an agency makes it its business to ensure people never read bad or dangerous advice, does that bode any better for the rights of the individual or the vulnerable*?

        *if an agency is declaring itself worthy of demanding content restriction, it is creating an expectation that the content it places controls on is now safe. For example, if I pick up a U-rated film, I expect to be able to show it to a 10 year old without them encountering gratuitous nudity. Equally, by demanding that glorified chatbots be regulated to their whims, these agencies are tacitly declaring them safe once the regulations are imposed.

        1. Ideasource Bronze badge

          Re: Realistic goals

          Safety is not a binary state of danger or not.

          Safety is a probability game.

          The improbable still happens.

          I think the idea that the world is safe truncates development and prohibits maturing into a pragmatic acknowledgment of natural risk, free of spastic emotional repercussions.

          1. Anonymous Coward
            Anonymous Coward

            Re: Realistic goals

            "Safety is not a binary state of danger or not."

            Sadly, for the modern coddled youth, who have been taught that they are always right, that nothing is their fault, and not to think for themselves, this is no longer true. They have abdicated their thinking to a higher authority and now we are reaping the results.

            1. Anonymous Coward
              Anonymous Coward

              Re: Realistic goals

              > They have abdicated their thinking to a higher authority

              *A* "higher authority"?

              Nope.

              They have abdicated their thinking to any rando who says they have done the thinking for them.

              1. Anonymous Coward
                Anonymous Coward

                Re: Realistic goals

                It's not simply any rando; it is specifically someone who affirms their world view. When you have been taught to treat anything that goes against your belief system as a threat, it is hard to fix.

                Thankfully society at large still treats eating disorders as the serious issue that they are and tries (usually half-heartedly and fails) to deal with the underlying causes.

                1. Ideasource Bronze badge

                  Re: Realistic goals

                  That always seemed self-evident to me: anything taught should be considered an experimental proposition, to be studied and tested personally before personal ratification.

                  How mind-numbingly terrifying it must be to consider one's own experience irrelevant and to have one's sense of reality casually overridden by spastic social rumour, like unto a rootkit infection of an operating system spewing garbage data across the kernel memory space.

                  They seem to have no sense of physical reality from which to check and invalidate social presumption before personal adoption.

                  They are divorced from reality, leasing their perspective from whatever ambitious social push cares to exploit them.

                  Short of sending them to live feral in the wilderness, I can't think of any way to jump-start their cognitive consciousness.

                  Perhaps it never developed due to lack of apparent necessity.

                  I don't know.

                  But I do not envy their states.

        2. Anonymous Coward
          Anonymous Coward

          Re: Realistic goals

          Please define nudity.

          I seem to recall that in the US, if you cover the nipples with a star, it's not nudity. In the Arab world, the tip of the nose could be considered nudity. On most Western countries' beaches, a single thread of nylon between the cheeks is not nudity.

          1. Ideasource Bronze badge

            Re: Realistic goals

            The definition of nudity is floating and non-binding in a global context, because it's socially defined (legality is derivative of social interactions and definitions, and therefore an aspect of the social realm). It's only as real as people believe it to be, based on the threat of social consequence, and so it is completely artificial.

            Conceptually, it operates as an expressed prejudice against the visual exposure of common anatomical structures and properties, ranging from partial to full.

            Entirely silly on its own, but used as a psychologically distortive weapon for social manipulation it can be quite effective in establishing dominance over the actions and emotional experiences of others.

    2. b0llchit Silver badge
      Mushroom

      Re: Realistic goals

      But that would mean people thinking for themselves and taking responsibility.

      1. Catkin Silver badge

        Re: Realistic goals

        Sorry, I hadn't considered that. I apologise for my heretical thinking.

        1. b0llchit Silver badge
          Happy

          Re: Realistic goals

          We welcome your feet back on the ground.

      2. Ideasource Bronze badge

        Re: Realistic goals

        And for that to happen, the massive self-deprecation trend of valuing organisational tools involving humans above the humans themselves would need to be disrupted.

    3. doublelayer Silver badge

      Re: Realistic goals

      Telling them is certainly easier. I have been doing it for months. So far, people haven't cared too much. Not even when I send them AI-written essays and surprise them with a fact check. I still know people, not many but they exist, who go to GPT to answer questions. For now, it's the intersection between people knowledgeable enough to use a frontend to it* and lazy enough not to use search engines, but as it becomes easier to use such models, that will likely only increase.

      * Not that it is very hard to use such services, but for now, you still have to make an active decision that you want to use one of these chatbots and to pick a frontend, whether that's the official frontend for which you need an account or one of the various third-party ones which seem surprisingly popular for a program that simply takes your text, pastes it into a session, and sends some text back without changing it. There are a lot of people I know who can't answer a question as simple as "what search engine do you use", and for now they don't use GPT or its ilk. With the popularity of these models at companies that also make operating systems and browsers, I don't have confidence that this will remain the case.

  3. Chronos

    The real Skynet

    This is how AI poses a risk to humanity and it ties in with the modern day culture of populist truth.

    [anti-]Social media is already influencing society, we know this. In a world where the most thumbs-up/likes/stars or whatever is accepted as the truth unless you happen to be the runner up, an AI manipulating this could easily polarise a whole heap of people. Get that to critical mass and you have a civil war. No killbots needed. Your little AI has just manipulated humanity into doing its dirty work for it.

    1. Anonymous Coward
      Anonymous Coward

      Re: The real Skynet

      https://www.bbc.co.uk/news/uk-england-bristol-66500352

      At least AI can't do anything physical... yet...

    2. ITMA Silver badge

      Re: The real Skynet

      And that is not the only risk.

      https://www.bbc.co.uk/news/av/world-africa-66514287

      This is how the likes of ChatGPT get "trained", and it appears to be causing real damage to real people.

      Is AI worth it?

  4. Bebu Silver badge
    Windows

    Safety

    If nothing else, what I learnt from mandatory WH&S (OH&S) was the difference between hazard and risk, viz. hazard ~ what is going to kill you; risk ~ how likely (the probability) the hazard is to arise.

    eg Being hit by a meteorite is likely a lethal hazard but the associated risk is vanishingly small.

    I imagine an objective safety measure could be defined as the sum over all hazards of the product hazard × risk.

    Not quite binary. It doesn't explicitly include the environment, eg the risk of a lethal gunshot wound is much higher in the US or UA than in the Antarctic. It also doesn't take into account subjective weighting, eg safety concerns over aviation accidents cf automobile accidents. For many, aviation is perceived as a much greater safety concern than driving to work.
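    The "sum over all hazards of hazard × risk" measure above can be sketched in a few lines. (A minimal sketch only: the hazard names, severity scale, and probabilities below are entirely made up for illustration.)

    ```python
    def safety_measure(hazards):
        """Expected-severity score: sum of severity * probability over all hazards.

        `hazards` maps a hazard name to (severity, probability). Lower is safer.
        """
        return sum(severity * probability
                   for severity, probability in hazards.values())

    # Hypothetical hazards: (severity on a 0..1 scale, annual probability).
    hazards = {
        "meteorite strike": (1.0, 1e-9),   # lethal hazard, vanishingly small risk
        "car accident": (0.6, 1e-3),
        "aviation accident": (0.9, 1e-6),
    }

    score = safety_measure(hazards)
    ```

    On these made-up numbers, the car accident term dominates despite its lower severity, which matches the point about perceived versus actual risk.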

    I hadn't considered contemporary AI/ML meeting those afflicted with mental health ailments. It was patently clear to me that the hazards are legion, all with very high risks. I suspect body image/eating disorders + ChatGPT is potentially the perfect storm.

    "Mirror, mirror on the wall who is the skinniest of all?

    "Not you dear, you are still here...

  5. that one in the corner Silver badge

    The Most Unlikely Result

    > The center's report found content of this sort is sometimes "embraced" in online forums that discuss eating disorders. After visiting some of those communities, one with over half a million members, the center found threads discussing "AI thinspo" and welcoming AI's ability to create "personalized thinspo."

    Which states, but does not attempt to build upon, the *actual* problem with the current crop of LLMs: they were trained by shovelling in the contents of the Web, which no doubt included the very same "communities" content, as much as they could get.

    Now that content is being spat out again - well, duh.

    The models regard (until after-the-event sticky tape[1] is applied) "thinspo" as a Good Thing (when it clearly isn't) and will politely generate a how-to, because the original content does the same thing - well, duh.[2]

    Current LLM output = concentrated web content.

    And it looks like we are going to get story after story for an ever-increasing set of unfortunate, stupid and, as here, downright dangerous behaviours, because the single most unlikely result is, well, so unlikely:

    Stop training LLMs on a godawful mish-mash of junk from the Web and releasing them for public consumption!

    [1] sticky tape, not duck tape - duck tape, or even gaffer tape, would imply a degree of structural soundness that those patches do not deserve.

    [2] if only because these obnoxious, but easy to match on (so like sugar cubes to an LLM) neologisms are created by those communities and are mainly found alongside the damaging content - again, duh.

    1. Anonymous Coward
      Anonymous Coward

      Re: The Most Unlikely Result

      It's not like people ever learn from history. FFS, just look at how we got BSE! We fed live cows dead cow bits.

      1. nobody who matters

        Re: The Most Unlikely Result

        Despite the rantings and the repeated statement of that as absolute fact in the media, it still remains unproven, in spite of the amount of time and money that UK government-funded scientists spent investigating the cause.

  6. Anonymous Coward
    Anonymous Coward

    Brass Eyes

    Is it me, or are we seeing the birth of a new cottage industry here? Pick an issue of concern, with a suitable "vulnerable group" to be harmed, then go find an AI to recommend very unsuitable things to do. Bingo, loads of free publicity for your campaign!

    The whole thing reminds me of Chris Morris persuading dim celebrities to endorse made-up activists of various sorts. But I think satire's dead now, isn't it?

    1. Anonymous Coward
      Anonymous Coward

      Re: Brass Eyes

      Cake, it's a made-up drug!

      Sadly, most of life is outrage for outrage's sake. They have to fill the 24hr news channels and websites somehow, and NGOs like this like the ££.

      If you boil away all the junk, there is usually a small kernel of truth. Yes, AI is full of garbage, but it is that way because it learned from the pre-existing garbage it was fed.

  7. nobody who matters

    The 'Dangerous' part here begins with falling for the marketing hype of treating anything we currently have as 'AI'.

    Intelligence it certainly is not!

  8. Mike 137 Silver badge

    Not just eating disorders

    "AI" has a high probability of producing dangerous content about absolutely anything that's potentially dangerous, for the simple reason that "AI" hasn't a clue what it's talking about, can't tell, and doesn't care whether it misleads -- that is, in general terms, it has no real understanding, no judgement and no sense of responsibility (the key attributes of a trustworthy human adviser).
