#MeToo chatbot, built by AI academics, could lend a non-judgmental ear to sex harassment and assault victims

Academics in the Netherlands have built a prototype machine-learning-powered Telegram chatbot that attempts to listen to victims of sexual harassment and assault, and offer them advice and help. The bot, arising as a result of the Me Too movement, is trained to analyse messages to classify the type of harassment described by a …

  1. Pascal Monett Silver badge

    “Before releasing it to the public"

    You're going to want to ensure that your AI is capable of handling the legion of trolls out there who will not hesitate one second to try and game it or break it.

    So sanitize your inputs and, above all, do not accept attachments.
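The kind of input guard this comment describes could be sketched as follows — a hypothetical helper for illustration only, not anything from the article's bot; the length limit and rejection rules are assumptions:

```python
# Hypothetical input guard illustrating the comment's advice:
# accept plain text only, reject anything with an attachment,
# and strip control characters before the message goes any further.

import unicodedata

MAX_LEN = 2000  # arbitrary limit chosen for this sketch


def sanitize_message(text, has_attachment=False):
    """Return cleaned text, or None if the message should be rejected."""
    if has_attachment:
        return None  # "above all, do not accept attachments"
    if not isinstance(text, str) or not text.strip():
        return None
    # Drop control characters (Unicode category 'Cc'),
    # but keep ordinary whitespace.
    cleaned = "".join(
        ch for ch in text
        if ch in ("\n", "\t", " ") or unicodedata.category(ch) != "Cc"
    )
    return cleaned[:MAX_LEN].strip()
```

A message carrying an attachment is dropped outright; embedded control bytes are silently removed rather than passed to the model.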

  2. Denarius
    Thumb Up

    worth trying

Remember ELIZA chatting to early lonely geeks, among others? A sophisticated chatbot may be like the sympathetic-stranger model used in post-trauma management.

  3. This post has been deleted by its author

  4. Why Not?
    Happy

    Excellent

Soon harassment can be detected much as speeding is by speed cameras. That can only be for the good.

I suspect, though, there will be a large cultural element that will need to be understood: what is not offensive in America may be so in China, and vice versa.

  5. Jimmy2Cows Silver badge
    Thumb Up

A genuinely useful case for chatbots, which hopefully can help victims of these despicable acts.

Aside, wearing my pedantic dev hat, I notice the flow chart has no escape clause for "Not harassment", so it will continue seeking more information until it decides harassment occurred, instead of deciding harassment hasn't actually occurred based on the information provided. They do cover this to an extent when discussing an early failure to recognise negation. I just hope they aren't missing other nuances.
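The missing escape clause this comment flags could look something like the following — a hypothetical dialogue loop, not the researchers' actual code; the classifier stand-in, confidence threshold, and turn limit are all assumptions:

```python
# Hypothetical dialogue loop with the escape clause the comment asks
# for: stop requesting more detail once the classifier is confident
# the account does NOT describe harassment, or after too many turns.

MAX_TURNS = 5
THRESHOLD = 0.8  # assumed confidence cut-off


def classify(text):
    """Stand-in for the real model: return (label, confidence)."""
    lowered = text.lower()
    if "harass" in lowered:
        return "harassment", 0.9
    if "nothing happened" in lowered:
        return "not_harassment", 0.9
    return "unclear", 0.4


def dialogue(messages):
    """Walk the conversation turn by turn until a confident label or a cap."""
    for turn, msg in enumerate(messages, start=1):
        label, conf = classify(msg)
        if conf >= THRESHOLD:
            return label           # includes the "not_harassment" exit
        if turn >= MAX_TURNS:
            return "undetermined"  # give up instead of looping forever
    return "undetermined"
```

The point is simply that "not harassment" and "ran out of turns" are both terminal states, so the loop can never keep probing until it manufactures a positive classification.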

  6. Cederic Silver badge

    mixed views on this

Support for victims of crime, help in recovering from an ordeal, and easier reporting to the police to secure justice are all good. Where no crime has been committed but someone still feels emotionally impacted, support of this nature may well be an ideal response.

    I do though hope that this non-judgemental ear judges not the accused either.

  7. Robert Carnegie Silver badge
    Joke

    I worry

Did you hear about the unexpectedly offensive, foul-mouthed chatbot that was threatened with a lawsuit by musician Taylor Swift for stealing her act? :-)

  8. Anonymous Coward
    Anonymous Coward

    Hmmm

    Has Tay taught them nothing?

    1. Anonymous Coward
      Anonymous Coward

      Re: Hmmm

Tay was designed to learn from unsolicited anonymous input. If they only train it on sanitized input, and don't allow it to "learn" from its interactions in actual use (unless permission is given for that data to be used in further training and it is sanitized by a professional), then it should be fine.

I'm skeptical of the idea of having a machine "comfort" a woman who is a victim of sexual assault or rape, though. If all it does is gather information to help her make a report, that's fine, but if it crosses the line and tries to console her, that's just creepy. What's next, a robot shoulder for her to cry on? Some things require human interaction, at least until we can make androids that pass the Voight-Kampff test (not in the lifetime of anyone reading this, I'm sure).

    2. P. Lee

      Re: Hmmm

      I can think of few things more condescending than being offered a chatbot to talk to. "Sorry chaps, no humans could be bothered to talk to you."

      If you are just providing information, put it on a web page. At least I can ctrl-f rather than negotiate the vagaries of a computer trying to process human language.

  9. Claptrap314 Silver badge

    What could go wrong?

    Do I need to post a list of proven maliciously false claims of harassment? Of course, a claim by someone not-important against someone not-important is far less likely to make the news--this can affect anyone. This bot will function as a guide to making such claims more believable.

    Real harassment is a scourge. Treating every claim as if it were valid, however, is bringing in the elephants to drive out the lions.

    1. Anonymous Coward
      Anonymous Coward

      Re: What could go wrong?

      INb4 the AI doubts a claim and the software gets torched at midnight for toxicity.

    2. Anonymous Coward
      Anonymous Coward

      Re: What could go wrong?

      How are you today?

      Would you like to discuss some more about elephants?

      1. Claptrap314 Silver badge

        Re: What could go wrong?

        slowcap.gif

  10. intrigid

    How about making an AI program that is 90% effective at identifying false sexual assault accusers, by cataloguing their story in detail, and running an algorithm that finds critical factual errors and logical inconsistencies? That would be funding well spent.

    1. Robert Moore
      Trollface

      Is that the best you can do?

      I am sure you can troll harder than that.

  11. Glen 1

    What's the betting that a better approach is to have an actual human - Turing test style - be the bot?

    If pretending to be a bot helps the victims open up...? Naturally only by proper professional counselors (not the local gov kind)

  12. steviebuk Silver badge

    Will it also...

...detect the liars? Obviously, harassment is an awful thing to happen to a person, but there are also cases of people "crying wolf" who are later found to be lying, such as Carl Beech.

  13. Anonymous Coward
    Anonymous Coward

    Uhh...

    "personal information will not be stored" - Spanakis

    "Ask permission to store personal information" - flowchart
