Swiss boffins admit to secretly posting AI-penned posts to Reddit in the name of science

Researchers from the University of Zurich have admitted to secretly posting AI-generated material to popular Subreddit r/changemyview in the name of science. As the researchers explain in a draft report on their work: “In r/changemyview, users share opinions on various topics, challenging others to change their perspectives by …

  1. Anonymous Coward

    “…the risks (e.g. trauma etc.) are minimal.”

    What the hell? This is a violation of consent. That in itself can be traumatic.

    People do know that when they access Reddit, there's a nonzero chance that what they're dealing with is fictional work, creative writing exercises disguised as actual posts.

    But if you're researchers, social scientists, experimenting on people, the first thing you do is obtain informed consent.

    Otherwise your “research” or “insight” isn't just useless, but deliberately harmful.

    1. Anonymous Coward

      Re: “…the risks (e.g. trauma etc.) are minimal.”

      "What the hell? This is a violation of consent. That in itself can be traumatic"

      You need to get off the internet if that's how it makes you feel. Seriously, it's not a place for children or the faint of heart. If you don't want to grow up then it's not for you.

      The utopia that you are looking for is merely a pipe dream.

      1. Anonymous Coward

        Re: “…the risks (e.g. trauma etc.) are minimal.”

        Stupid argument. Because people can harm you, it's on you not to leave the house, because the outside world is not the utopia you imagine it to be, a place where... you expect not to be harmed.

        Nothing about calling the people who hurt you, oh, I don't know, “criminals” or “assholes”.

        Or maybe it's okay if you hide behind science, bro, it's just a social experiment! The insights I gain from experimenting on you without your consent are worth whatever harms you experience!

        What is this, some kind of Network State crap? Stop huffing glue.

        When you perform science, you commit to certain ethical standards. Otherwise your research is crap that offers nothing, or makes things worse.

        You'd think we'd have learned that after Sims and Mengele, but I guess this is what passes for “civilization” these days.

        1. Anonymous Coward

          Re: “…the risks (e.g. trauma etc.) are minimal.”

          "you expect to not be harmed."

          Who taught you that you should not expect to be harmed? Whoever it was taught you very badly about how the world works.

          Harm is a side effect of living: you are going to get harmed, and you learn to deal with it as part of the growing process. It is how we become capable of survival. Those who do not learn to deal with it are destined to disappear from the food chain.

          If you want a life full of soft pillows then you would do better to never go out, never speak to anyone and never ever make your existence known to anyone. Do not go near animals, do not climb trees, never read a book, nor listen to poetry, distance yourself from all forms of living things and then you might just manage not to get harmed.

      2. Blazde Silver badge

        Re: “…the risks (e.g. trauma etc.) are minimal.”

        "You need to get off the internet if that's how it makes you feel. Seriously, it's not a place for children or the faint of heart."

        I'm a member of an ethics committee at a large, prestigious university(*) and I can say this view is pretty widespread. If you're doing an experiment involving online individuals, with all the anonymity and arms-length-ness that social media provides, then concerns like consent get interpreted very differently compared to in-person experiments. I guess it's harder to empathise online, *and* people feel online is an onslaught of constant harm anyway, so what's a bit more?

        (*) Not the slightest bit of truth in this but I thought it'd make my point more persuasive.

    2. Evil Scot Silver badge
      Flame

      Re: “…the risks (e.g. trauma etc.) are minimal.”

      Given that an AI was "simulating" (pretending to be) a victim of rape.

      What the actual fuck.

      I am already dealing with the fact that my owning a penis diminishes my trauma and apparently takes away from "real" victims.

      1. Throatwarbler Mangrove Silver badge
        Coat

        Re: “…the risks (e.g. trauma etc.) are minimal.”

        "owning a penis"

        Is it detachable?

        In any case, I find that owning a penis can be too much responsibility, risk, and expense, so I prefer to rent or lease instead. And definitely never buy new; those things lose an incredible amount of value once you drive them off the lot.

        Also, they're a bitch to insure.

    3. doublelayer Silver badge

      Re: “…the risks (e.g. trauma etc.) are minimal.”

      "But if you're researchers, social scientists, experimenting on people, the first thing you do is obtain informed consent."

      I think you're simplifying the ethics review process to the point of inaccuracy. Testing on uninformed subjects is done frequently, whether that involves bringing in subjects, telling them you're testing one thing, then testing something else*1, or testing on the general public without telling them*2. The review process would not dismiss either type of request simply because the subjects weren't informed. They would ask questions to determine the ethical consequences of not informing the subjects up front, and they might refuse permission when it's too sensitive. If you think this study violates those ethics as well, you could argue for it and I think you'd probably have a point, but if you think it's as simple as "they weren't informed so it would obviously violate the ethics codes", you don't know the ethics codes.

      *1: For example, the famous study where people were told to go to another building and watched to see if they'd ignore a person needing help on their way. The subjects were not informed that they'd be tested on that, since the purpose was to see if they'd go out of their way to help, and they weren't informed beforehand that they'd see a person in (simulated) distress.

      *2: Many studies involve setting up a situation in a public space and watching what passersby do in response. It's very common.

  2. Anonymous Coward

    "Manipulating people in online communities using deception, without consent"

    Pretty much the modus operandi of social media from day one I would have thought.

    Always been de rigueur for advertising and marketing - almost definitional for these dismal occupations.

    Before even considering the anathema of US mass media ...

    1. OhForF' Silver badge

      Re: "Manipulating people in online communities using deception, without consent"

      Advertisers and marketers always do it and there are tons of bots on Reddit, so science should be allowed to do it as well?

      If the universities' answer is yes, I'll have to figure out a way to extend ad blocking to include survey and study blocking. A great way to convince people science is something they should support (/s).

      1. Anonymous Coward

        Re: "Manipulating people in online communities using deception, without consent"

        "Advertisers and marketing always does it and there's tons of bots on reddit so science should be allowed to do it as well?"

        I concede the ethical bankruptcy of pretty much all the parties involved.

        My point, if there was any (not that actually having a point is worthwhile these days), was that in interacting with much of the internet, and especially social media (and perhaps in modern life generally), the manipulation and deception are implicit, with the consent tacit.

        The whole boiling should have lasciate ogne speranza, voi ch'intrate ("abandon all hope, ye who enter") posted over it in flaming letters, so that no one is under any illusion of what they can expect to find there, which should also prompt the righteous, the timid and the damaged to avoid all its manifestations.

    2. heyrick Silver badge

      Re: "Manipulating people in online communities using deception, without consent"

      I rather imagine that if one were to push a button that would make the bots and the biobots (*) vanish, Reddit would be a whole lot quieter.

      * - Incels crapposting stuff that only ever happened in their lurid imaginations and often taking a stance just because they enjoy pissing people off. They are technically human but they might as well be bots.

  3. Anonymous Coward

    In other news

    > "We believe, however, that given the high societal importance of this topic, it was crucial to conduct a study of this kind, even if it meant disobeying the rules."

    In other news: pissing in a well found to upset those drinking the water.

    1. heyrick Silver badge

      Re: In other news

      Gee, can I attempt to rob my bank in order to test if their security systems are fully operational? With the way the economy is nowadays, it's crucial to conduct a study of that kind even if it means disobeying the rules...

  4. Helcat Silver badge

    One would hope that these AIs got to argue with other AIs and left the real people to get on with their lives.

    However, if an AI was pretending to be a thing in order to stimulate some form of response, the chance that another AI would be the respondent would invalidate the results.

    The other, and less forgivable, result is that people seeing these posts would believe them and use them as 'proof' to support their stance, or disbelieve and look for evidence the post is a lie, then use that as proof of their opposing stance. Yes, this includes rape stories: there are people who believe anyone born XY is a rapist and that the number of claims of rape on social media is proof this is true, whereas those who think rape claims are mostly fake will look for proof, find it, and use that as evidence that most claims, if not all, are fabricated.

    Meanwhile, genuine victims suffer because they're lost in the chaff of these fake stories.

    Oh, yes, and other researchers not involved in these AI experiments who are using social media as a source of data will have had their research data corrupted, invalidating their research and wasting their funding, time and effort. Sure, serves them right for using social media in the first place, but sometimes it's where research needs to start. And heaven help them if they actually engaged with the AI thinking it was a genuine victim...

    1. Blazde Silver badge

      Yup, for all these reasons they should have done it in a closed 'Reddit-like' environment with volunteers who nevertheless still didn't know AI was involved. There was no reason other than cost not to do it like that.

  5. IGotOut Silver badge

    Hmmm

    “This is one of the worst violations of research ethics I've ever seen,” wrote University of Colorado Boulder information science professor Dr. Casey Fiesler.

    OK, it's bad, but either you're new to research ethics, clueless, or just being plain old overdramatic.

    MKUltra

    Green Run

    Stanford Prison Experiment

    University of Iowa radioactive iodine pregnancy experiments

    University of Nebraska iodine-131 infant experiments

    University of Rochester uranium injection experiments...

    Porton Down Lyme Bay bacteria experiment

    ...shall I go on? And that's just the "good guys"

    1. notyetanotherid

      Re: Hmmm

      That sort of hyperbole doesn't help a rational debate.

      The thing is that, as commentards above have noted, observational studies are performed all the time without the subjects being aware or consenting. An anthropologist wants to study natural behaviour, and they are not going to get that if they inform people beforehand, e.g. you couldn't accurately study whether English people apologise when they are bumped into (I have read the outcome of such a study: mostly they do) if you told them it was going to happen.

      Now you might argue that this study would have been better had they told the channel that a sort of A/B study was happening and some posts would be human and some from LLMs, but would that modify the subjects' behaviour? Would some run posts through a supposed LLM-detecting LLM to try to work out which were which before responding? That outcome would surely invalidate the study? And of course you have the difficulty of sourcing a suitably experienced or qualified person able to argue cogently in a consistent style on the particular subject at hand for the human posts, whereas you can get an LLM to confidently spout plausible-sounding billhooks in a similar format on pretty much any subject you like... e.g. "You can't lick a badger twice."

    2. keb

      Re: Hmmm

      Exactly. Also, marketing departments have been doing these things already, and will keep doing them, for sinister and private benefit. Having a university do it with precise and rigorous parameters, and with peer review, would allow us to understand what is going on, and the capabilities and risks AIs pose to society.

      Furthermore, as others keep reminding us, this is the internet, a virtual world or brain space. Only a very limited subset of speech should ever be restricted: repeated harassment of specific persons, revealing private info that causes actual risk or financial harm to individual people, etc. This trend to restrict more and more online content in the name of protecting people has to be curtailed. It is already backfiring, resulting in thought police arresting parents for innocuous comments in a private chat group about school hiring, or people deported from careers and families merely for expressing dismay at an actual ongoing genocide. The solution is not restriction; it is better education and training of kids and ignorant adults, to prepare them for the 21st century.

  6. Grindslow_knoll

    Ethics approval

    It's not just ETH's board that decides this, because ETH will get its funding from the govt, so escalating the complaint is the way to go.

    Then there's the journal, if it ever gets published.

    But more likely this fits the trend of attention over scrutiny, aka *rxiv is good enough (tm): then it doesn't matter if it's unethical, because all the outrage is just extra attention, and even the papers that disprove it or call it out will cite it, which boosts the stats.

    Great way to ruin reputation(s) though, and make it harder for the next team (who do take it seriously) to fill in the mountain of paperwork for ethics approval.

  7. TheWeetabix Bronze badge

    Nothing better to do

    Just gonna dilute the voices of real victims and dodge responsibility... Riiiiiiight. Sounds more like they need an audit of their grant spending.

  8. Anonymous Coward

    The outrage is ridiculous

    Come on, how else are scientists supposed to study the persuasiveness of AI? "Hey, by the way, this is AI-generated, are you persuaded?" I'm sure that would be a highly effective study...

    Don't you think it's important to find out to what extent AI-generated text can change your views? A capital question IMO. Don't you think hordes of companies are already doing that, *covertly with dark motives*? And what was the harm done by this experiment exactly, some people waking up to the fact that lies exist on the internet? SMH
