Police lab wants your happy childhood pictures to train AI to detect child abuse

Australia's federal police and Monash University are asking netizens to send in snaps of their younger selves to train a machine-learning algorithm to spot child abuse in photographs. Researchers are looking to collect images of people aged 17 and under in safe scenarios; they don't want any nudity, even if it's a relatively …

  1. Yet Another Anonymous coward Silver badge

    The great trust pairing

    Police and Government AI

    1. seldom

      Re: The great trust pairing

      What could go wrong?

      There's no CCTV footage to be abused.

      In Australia it's all done in the best possible taste.

      1. NoneSuch Silver badge
        FAIL

        Re: The great trust pairing

        Because the use of your photos will not stop with the child pr0n investigations. It's also the thin end of the wedge: next you'll be forced to let police go through your financial records at will so they can find money launderers, and your emails so they can find terrorists.

        They can do that today if they get probable cause and a judge to sign off on a warrant. But they find warrants limiting and want to do away with them so they can gather info on EVERYONE. THAT is what you should be afraid of.

        1. Snowy Silver badge
          Megaphone

          Re: The great trust pairing

          Going through your financial records? I am sorry to inform you, but they already do that.

          If you move more than £10K, the bank has to inform the authorities, and I am not sure what other checks this will trigger.

          1. Anonymous Coward
            Anonymous Coward

            Re: The great trust pairing

            > If you move more than £10K, the bank has to inform the authorities, and I am not sure what other checks this will trigger.

            When buying a car the other year, the dealership asked me to send the money in separate transfers of no more than £10k each; something to do with additional paperwork being needed if more than that is moved in one go.

            Paying a mortgage off recently, I was able to move more than £10k from my ISA in one bank to the current account, then to my building society current account, then on to the mortgage account, with no issues.

            I assume the authorities perhaps have my accounts linked up so larger amounts can flow without triggering the money laundering protocols. I guess I’ll know for sure in a few months if I get a knock on the door or get asked awkward questions.

    2. Anonymous Coward
      Anonymous Coward

      Re: The great trust pairing

      Yes, I don't know what the punchline is yet, but this definitely reads like the start of an episode of Black Mirror.

    3. AMBxx Silver badge
      Facepalm

      Re: The great trust pairing

      Wait until they find that the majority of the pictures are of white kids. The AI then identifies all pictures of non-whites as child abuse.

      1. Anonymous Coward
        Anonymous Coward

        Re: The great trust pairing

        Some of those that work forces

        Are the same that burn crosses

        https://www.youtube.com/watch?v=bWXazVhlyxQ

      2. Yet Another Anonymous coward Silver badge

        Re: The great trust pairing

        >The AI then identifies all pictures of non-whites as child abuse.

        But this is Australia so that won't be a problem

        1. tip pc Silver badge

          Re: The great trust pairing

          > But this is Australia so that won't be a problem

          Are you suggesting all Australians are the same hue?

  2. cantankerous swineherd

    looks like a disaster in the making

    1. Flocke Kroes Silver badge

      I agree, but I am astounded to hear about police and an AI project seeking permission.

      1. Anonymous Coward
        Anonymous Coward

        Clearly to facilitate reselling the database afterwards.

      2. Michael Wojcik Silver badge

        Upvoted on general principle, however: while I know nothing about Human Subjects Research in Australia, in the US no accredited university's IRB would let a project like this proceed without consent from every subject whose photos are used. (Using an existing dataset that's already been cleared is a different story.)

  3. Anonymous Coward
    Anonymous Coward

    Nudity = child sex abuse now?

    Let me guess: the 33,000 reports he gets come from algorithms? Because apparently they come as a large media trawl and not as a specific "these images here are the claimed child abuse", as you would get with a human-generated report. And not worth police time, only AI filter time.

    Build AI models to detect *new* child sex abuse from the old? We slid down that slippery slope damn quick; it was only yesterday that it was fuzzy-hash matching! But there aren't enough images to build AI models, so instead they're making a negative set.

    child sex abuse = NOT(happy fully dressed playtime).

    If you train on that definition, the AI will pick up on nudity or partial nudity as the defining characteristic, because that's how you've defined it in your request here. Which means that on a real set you would have an insane number of false positives. Partial nudity (e.g. swimming, bare arms, crop tops), non-playtime, sadness, outlier images (new fashions, new shows, new events, stuff the AI wasn't trained on, because it's trained on a historic set) would all fire the AI.
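    To make that concrete, here is a toy sketch (entirely synthetic features and numbers, nothing to do with the actual AFP/Monash pipeline): if the only "safe" examples a classifier ever sees are fully clothed, it latches onto exposed skin as the deciding attribute, and then flags a big slice of perfectly innocent beach and pool photos the moment it meets realistic data.

```python
# Toy illustration only: made-up features, not real images or the real system.
# A classifier trained with a clothed-only "safe" set learns nudity = abuse.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_set(n, p_skin_safe):
    # feature 0: any exposed skin (0/1); feature 1: noisy "happiness" score
    safe = np.column_stack([rng.binomial(1, p_skin_safe, n),
                            rng.normal(0.6, 0.3, n)])
    abuse = np.column_stack([rng.binomial(1, 0.9, n),
                             rng.normal(0.4, 0.3, n)])
    X = np.vstack([safe, abuse])
    y = np.array([0] * n + [1] * n)   # 0 = safe, 1 = abuse
    return X, y

# Training set as per the collection request: "safe" images are never nude
X_train, y_train = make_set(5000, p_skin_safe=0.0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Real-world photos: plenty of innocent pictures show some skin (beach, pool)
X_real, y_real = make_set(5000, p_skin_safe=0.3)
pred = clf.predict(X_real)
fp_rate = ((pred == 1) & (y_real == 0)).mean() / (y_real == 0).mean()
print(f"False positive rate on realistic innocent photos: {fp_rate:.0%}")
```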

    So the step after that: you grab images from people's phones for review, to process all the false positives.*

    (* I know he's claiming it's to review 33,000 *reported* claims, and he uses colorful language to justify that; however, these are not real claims. A real claim would have the images claimed as abuse supplied with it, already filtered, and police would be investigating that incident, not a large set of media which may or may not contain abuse images. If you need further processing to even be worth a police officer's viewing time, clearly the rhetoric doesn't match the scenario). My guess is these come from Microsoft's algo.

    Can I point out *whose* devices will get constant false positive flags? Children's devices will. Because their photos are of children (themselves and their friends), the primary devices being spied on will be those of children. Then parents (because their photos are of their children) will be next.

    In essence, when you roll that out to replace the Apple iPhone scanner AI (and you will), that is who you are targeting: children and parents.

    And then there's the group of watchers. I endlessly read how they are totally professional, some sort of non-human group of super-beings. Yet their rhetoric and actions don't match. Professionally trained child sex abuse image watchers, who love their jobs, poring over private media to process all the false positives. And whether a borderline image stimulates their libido or not will define whether it's porn or not, because, as much as we pretend it's "training" that defines their job, they are people. Weird people who sought out the job of reviewing child media.

    So, you want to protect the world from creeps who prey on children, by erm, employing them? No.

    1. Joe W Silver badge

      Re: Nudity = child sex abuse now?

      I guess the police... uh, politibetjent would be the Norwegian gender-neutral and most commonly used term... police officers? who have to review these pictures have been volunteered. At least that's my impression from news reports, my time working for the gubbmint, and the army. So it is not a self-selecting group of pervs.

      The top half of the post was my thoughts exactly. The training set is.... flawed.

    2. ThatOne Silver badge
      Devil

      Re: Nudity = child sex abuse now?

      > you would have an insane amount of false positives

      Bingo! Total success, our algorithm caught many thousands of child abusers, let the budgets flow: I need a substantial rise, 3 good-looking assistants and a new office. Mission accomplished.

      Come on, you didn't expect this to catch any real child abuse, did you? It's like training an AI to detect lilies by scanning pictures of daisies. A rose is not a daisy, so it must be a lily, mustn't it? Not our problem; let the judiciary sort it out.

      1. Yet Another Anonymous coward Silver badge

        Re: Nudity = child sex abuse now?

        And we need to roll this proven, successful program out to your computer at home, and we need to block all that foreign internet - we have Murdoch News (tm), what more information do true patriotic Australians need?

        1. Anonymous Coward
          Anonymous Coward

          Re: Nudity = child sex abuse now?

          True patriots don't need any information. They are told what to think by their superiors.

    3. Falmari Silver badge

      Re: Nudity = child sex abuse now?

      I agree; all they will have done is train an AI model to classify images of children as clothed or naked.

  4. Anonymous Coward
    Anonymous Coward

    Australia

    has lost the plot when it comes to IT. They still want backdoors in encryption, I believe, and don't understand how that will be abused. And now this.

    Could this also be because they are under the thumb of the CCP? Look at the recent elections, and look up Drew Pavlou: how he was arrested for protesting against Xi, in Australia, but the Chinese nationals who were there (CCP goons) and assaulted him weren't, at first, arrested.

    A country that also banned RimWorld because it contained the word Yayo and because you were able to make and sell yayo (they have now overturned the ban).

    Might be a nice place to visit, but I would never want to live there.

    1. The Central Scrutinizer

      Re: Australia

      Come back and re-post when you can put a coherent thought together.

      1. Yet Another Anonymous coward Silver badge

        Re: Australia

        Australia - an imaginary country where even the bunny rabbits are venomous and the politicians can overrule the laws of Mathematics

        1. The Central Scrutinizer

          Re: Australia

          And watch out for the drop bears and hoop snakes. All the usual tropes we have to put up with.

  5. katrinab Silver badge
    Alert

    The AI is going to learn that photos taken with ancient cameras are fine, and photos taken with modern cameras are child abuse.

    1. Version 1.0 Silver badge
      Happy

      I can remember being a minor in a normal environment that these days might be seen as "an exploitative, unsafe situation", because my family stopped at a pub in Oxfordshire, put me in a wheelchair and we all sat in the pub garden. My uncle drank his beer, then put it down on the table and went inside to the toilet ... so I picked up his beer and drank it. Everyone laughed until my uncle came back and saw his glass was empty.

      1. Fruit and Nutcase Silver badge
        Alert

        A few years ago, a bloke went to a pub on a Sunday for a drink. Afterwards, the bloke and his wife left the pub, leaving their 8-year-old daughter behind. She had been left in the pub for a quarter of an hour before the bloke's wife returned to collect her.

        https://www.bbc.co.uk/news/uk-18391663

  6. Anonymous Coward
    Anonymous Coward

    Might be worth a shot ...

    The word "Happy" in the headline threw me a bit, as there's no guarantee that 'normal' childhood pics are full of happy faces. But actually, it seems to be more about the overall situation that a child is in, and presumably the subtle clues that an AI could detect in the body language of the children and adults present - "To develop AI that can identify exploitative images, we need a very large number of children's photographs in everyday 'safe' contexts"

    Worth a shot, I guess, even if it doesn't ultimately pan out.

    However, I'm not sure if the outcome for the reviewers will be any better - "Reviewing this horrific material can be a slow process and the constant exposure can cause significant psychological distress to investigators," - if the AI is filtering out 'normal' pics, then the reviewers will only be seeing horrific images.

    Unless, of course, the idea is to let the AI make the decision on all images.

    1. ThatOne Silver badge
      Devil

      Re: Might be worth a shot ...

      > Unless, of course, the idea is to let the AI make the decision on all images.

      Ah, you finally get it...

    2. Contrex

      Re: Might be worth a shot ...

      I had a very unhappy childhood - a cruel bully of a father - no sexual abuse though. I'm sure that if the AI is looking for simply sad faces it would pick me out.

      1. chivo243 Silver badge
        Holmes

        Re: Might be worth a shot ...

        I was a yo-yo: one hour I was happy as could be, an hour later I was sad, angry, bored, whatever... I was a kid. I see it in my own kid too; genetics or just human nature?

        1. Michael Wojcik Silver badge

          Re: Might be worth a shot ...

          Likely a combination of genetics and perfectly reasonable differences in environment and development. Child brains are very neuroplastic; anthropologists who study child development have documented an enormous range of which competencies are developed to what extent and in what order by a given age. So some children will develop emotional filtering and negative (dampening) feedback strategies much earlier than others, who will develop other skills first and for a while be more emotionally volatile.

          Completely normal, and as the prefrontal cortex continues to develop they'll generally even out. Not everyone does, of course; it's a complex system and no one's perfect, and some people are neurologically predisposed to mood swings even if they don't have actual bipolar syndrome. And, of course, many are legitimately bipolar or stimulation-seeking or depressive or what not.

  7. Anonymous Coward
    Anonymous Coward

    About 10 years ago, when I was more stupid, I uploaded a picture to Facebook of my then 8-year-old daughter beaming with pride at having won a swimming contest. She was wearing a salmon pink t-shirt and holding up the two medals she'd won. I got reported by some auto-CP bot on Facebook and had to declare what the image was while I was "investigated"! After a human being stepped in and stated the image was completely innocent, I said thanks and removed it. I also sent a nasty complaint to FB, removed all my media and simply switched the account to "dormant" so friends and family could find me.

    Sorry, but AI is barely able to tell a cat from a dog, or a push bike from an exercise bike. When AI image recognition works 99% of the time without marking innocent people and their photos as sexual predators, then I'll play along; until then, stick it up your arse!

    1. katrinab Silver badge
      Boffin

      A 99% accurate recognition system is going to be pretty much entirely false positives.

      Number of photos uploaded to Facebook daily: 350,000,000

      Number of CP photos uploaded to Facebook daily: probably less than one; that's not where they share them.

      Number of false positives at a 99% accuracy rate: 3,500,000

      Number of true positives: less than one
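      The back-of-the-envelope version of that arithmetic, as a sketch (the upload figure is the ballpark above; the one-genuine-image-a-day prevalence is a pure assumption for illustration):

```python
# Base-rate arithmetic for a "99% accurate" classifier. All figures are
# rough assumptions for illustration, not measured values.
daily_photos = 350_000_000      # photos uploaded to Facebook per day (ballpark)
real_cp_per_day = 1             # generous assumption: one genuine image a day
false_positive_rate = 0.01      # "99% accurate" on innocent photos
true_positive_rate = 0.99       # "99% accurate" on the genuine article

false_alarms = (daily_photos - real_cp_per_day) * false_positive_rate
real_hits = real_cp_per_day * true_positive_rate
precision = real_hits / (real_hits + false_alarms)

print(f"False alarms per day: {false_alarms:,.0f}")               # ~3,500,000
print(f"Real hits per day:    {real_hits:.2f}")                   # less than one
print(f"Chance a flagged photo is actually CP: {precision:.5%}")  # ~0.00003%
```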

      1. ThatOne Silver badge
        Devil

        Yikes! Now you're basely logical! Don't!

        Logic has nothing to do with any of it (nor have kids, BTW). It's just the infamous "somebody think of the kids" rhetoric, the big excuse of the 21st century surveillance state (now that terrorists have slacked off), and there is big money to be made from that.

  8. Eclectic Man Silver badge

    Efficiency?

    I guess the politicians and police reckon it will be more efficient (i.e., cheaper) to have an AI system alert them to possible child abuse victims than to go through all the effort of properly funding social health care services, and actually listening to social care professionals when they say that a child is in serious danger. I cannot help feeling that child abusers are unlikely to post abusive images on public forums, so I'm not entirely sure what the objective of this is.

    In the same week that US President Biden stated that firearms have now become the number one killer of children in the USA* (recently overtaking car accidents), I do hope that a holistic approach to child safety is being taken, rather than a 'one solution fits all' approach, which never quite seems to work.

    *Note: I am not interested in getting into the guns-are-good/bad argument, which could fill The Register's pages, so please save your comments on that debate for a more appropriate article.

    1. Yet Another Anonymous coward Silver badge

      Re: Efficiency?

      So logically put a gun in all your photos of children and they will be approved

      1. Michael Wojcik Silver badge

        Re: Efficiency?

        What if the child is playing with a gun in the bath?

  9. DS999 Silver badge

    If their "happy" images can't include nudity

    Then the algorithm is going to be trained to recognize any photo that includes nudity as child abuse.

    1. Anonymous Coward
      Anonymous Coward

      Re: If their "happy" images can't include nudity

      Then I'll have to throw away all those old National Geographic magazines that I read when I was a kid; all were full of pictures of primitive peoples around the world.

      1. IGotOut Silver badge

        Re: If their "happy" images can't include nudity

        "all were full of pictures of primitive peoples around the world."

        By "primative" you mean those that have happily survived and thrived, without the (mainly) white Europeans coming to "educate" them., usually by killing them off and stealing there resources?

        1. Yet Another Anonymous coward Silver badge

          Re: If their "happy" images can't include nudity

          'Primitive' women are those that can be shown topless on a magazine cover in a school library without the PTA having a fit of the vapours.

          It would be an interesting experiment to determine exactly how 'primitive' they had to be before the AI determined that librarian was to go on a government list.

  10. Anonymous Coward
    Anonymous Coward

    I’m quite old

    Do you think they want Daguerreotypes?

    1. Eclectic Man Silver badge

      Re: I’m quite old

      They might want your wet collodion prints, but you'll have to explain that the process has the effect of reversing the image left to right

      1. Yet Another Anonymous coward Silver badge

        Re: I’m quite old

        Is that why TikTok videos do that?

  11. Snowy Silver badge
    Trollface

    If "they" are to be believed

    Watching violent movies or playing violent games makes you violent, so what does looking at these images do to the officials who have to look at them? /s

  12. IGotOut Silver badge

    Odd...

    Given that there has been a massive dataset of child abuse images for a very long time, wouldn't it make more sense to train it on those?

    1. Yet Another Anonymous coward Silver badge

      Re: Odd...

      Because they need to train a system to determine what a happy child looks like. If your child doesn't like that then you are obviously abusing them - even if no picture of the abuse exists.

      1. Anonymous Coward
        Anonymous Coward

        Re: Odd...

        The logic is a bit strained.

        1) happy child

        2) abused child

        Ask any parent and you'll probably find that

        3) pissed off because it can't get what it wants

        is by far the most common.

  13. Anonymous Coward
    Anonymous Coward

    Give us your biometrics, we have cookies

    If one were a bit on the cynical side, one could remember news from some years ago, such as this:

    https://www.washington.edu/news/2014/04/09/see-what-a-child-will-look-like-using-automated-age-progression-software/

    In short, AI used to simulate ageing: you have a picture of a kid, and you are able to get a picture of the same person at any age.

    One could then wonder (ponder?) whether this is really about the stated intent, or about building a trained model for face recognition, which is so hip nowadays.

    I am almost surprised there is no associated lottery where 100 lucky participants get a $50 Amazon gift card for donating their childhood memories to the state.

  14. Anonymous Coward
    Anonymous Coward

    I had a happy childhood; don't ask about my grandfather.

    40 years later I have come to terms with those intrusive thoughts.

    Like AI could tell.

  15. Anonymous Coward
    Anonymous Coward

    AI paired with RI (Real Incompetence)

    We all know that children are always happy, and under 'normal' circumstances they never, ever cry, pout, or throw tantrums. Therefore "unhappy-looking child" == CSAM, because the only possible reason any child would ever look unhappy is abuse.

    Woe to anyone who ever took a picture of their child in one of those moments. The storm troopers will be kicking down the door any minute now.

  16. Michael Wojcik Silver badge

    What sort of situation, now?

    > an exploitative, unsafe situation

    Such as having childhood pictures added to a ML training set, for example.

  17. Michael Wojcik Silver badge

    Ah, another inexplicable deep convolutional stack

    We already have plenty of problems with the "deep learning" approach of building tall stacks of mostly-convolutional network layers, as Reg readers and scribes are well aware. In addition to the ones mentioned in the article there are issues such as overfitting and selecting proxy attributes that turn out to be falsely correlated with the desired attributes. Crowdsourcing a dataset is very problematic, particularly if you don't have the resources to curate it and improve its quality. For something like this I don't know that any possible attackers would bother, but it's technically possible to submit enough altered images to introduce a backdoor bias in the model.

    Besides the technical issues, the general problem of inexplicable models becomes much worse in practice when we're trying to create a discriminator for an attribute where reasonable human judges can disagree. Delegating a choice to a black-box algorithm when the metrics aren't even clear to human experts is a huge moral hazard, because we substitute an oracle of unknown value for a hard problem. The temptation to simply trust the oracle is huge and we see it already in action in many domains, such as the sentencing of convicts.

    I have to disagree with those who think it's "worth trying". The risks are significant and likely unavoidable if the system is ever used, and the benefit is likely to be minuscule due to the extremely low positive predictive value, very high N, and cost of confirmation. This is precisely the sort of thing which isn't worth trying because it's almost certain to do more harm than good, if it accomplishes anything at all.
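    To put rough numbers on that last point (every figure below is an assumption for illustration, not anything from the article): even an implausibly good discriminator, applied at scale to something this rare, yields a flagged stream that is almost entirely innocent and a confirmation workload measured in hundreds of reviewer-hours.

```python
# Illustrative only: assumed prevalence, accuracy and review times, not
# figures from the AFP/Monash project or the article.
def triage_cost(n_images, prevalence, sensitivity, false_positive_rate,
                seconds_per_review=30):
    positives = n_images * prevalence
    tp = positives * sensitivity                        # real images flagged
    fp = (n_images - positives) * false_positive_rate   # innocent images flagged
    ppv = tp / (tp + fp) if (tp + fp) else 0.0          # chance a flag is real
    review_hours = (tp + fp) * seconds_per_review / 3600
    return ppv, review_hours

# 10 million scanned images, 1-in-a-million prevalence, 95% sensitivity
for fpr in (0.01, 0.001, 0.0001):
    ppv, hours = triage_cost(10_000_000, 1e-6, 0.95, fpr)
    print(f"FPR {fpr:.2%}: PPV {ppv:.3%}, ~{hours:,.0f} reviewer-hours")
```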

  18. steviebuk Silver badge

    No no no no no

    Just no. You fight child sex abuse images with proper policing, not shitty AI. The Register, from what I can see, has yet to report on the guy who had his phone blocked and his Google account closed because of what it and its shitty AI classed as CSAM. The local police were informed and an investigation was opened, which was then closed as it was clear the photos were close-ups of his son's groin for MEDICAL REASONS. The doctor had requested the photos, checked them for the rash and prescribed antibiotics. The police saw all this was legit, and the investigation was dropped.

    Yet the cunts at Google have said "We stand by our findings" and won't reinstate his account. When did Google become the fucking world police?
