The great trust pairing
Police and Government AI
Australia's federal police and Monash University are asking netizens to send in snaps of their younger selves to train a machine-learning algorithm to spot child abuse in photographs. Researchers are looking to collect images of people aged 17 and under in safe scenarios; they don't want any nudity, even if it's a relatively …
Because the use of your photos will not stop with the child pr0n investigations. It's also the thin end of the wedge: next you'll be forced to let police go through your financial records at will so they can find money launderers, and through your emails so they can find terrorists.
They can do that today if they have probable cause and get a judge to sign off on a warrant. But they find warrants limiting and want to do away with them so they can gather info on EVERYONE. THAT is what you should be afraid of.
Move more than £10K and the bank has to inform the authorities, and I am not sure what other checks this will trigger.
When buying a car the other year, the dealership asked me to send the money in multiples of £10k; something to do with additional paperwork being needed if more than that is sent in one go.
Paying off a mortgage recently, I was able to move more than £10k from my ISA in one bank to the current account, then to my building society current account, then on to the mortgage account, with no issues.
I assume the authorities perhaps have my accounts linked up so larger amounts can flow without triggering the money-laundering protocols. I guess I'll know for sure in a few months if I get a knock on the door or am asked awkward questions.
Upvoted on general principle, however: while I know nothing about Human Subjects Research in Australia, in the US no accredited university's IRB would let a project like this proceed without consent from every subject whose photos are used. (Using an existing dataset that's already been cleared is a different story.)
Let me guess: the 33,000 reports he gets come from algorithms? Because apparently they arrive as large media trawls, not specific "these images here are the claimed child abuse" as you would get with a human-generated report. And not worth police time, only AI filter time.
Build AI models to detect *new* child sex abuse from the old? We slid down that slippery slope damn quick; it was only yesterday it was fuzzy-hash matching! But there aren't enough images to build AI models, so instead they're making a negative set.
child sex abuse = NOT(happy fully dressed playtime).
If you train on that model, the AI will pick up on nudity or partial nudity as the defining characteristic. Because that's how you've defined it in your request here. Which means on a real set, you would have an insane amount of false positives. Partial nudity (e.g. swimming, naked arms, crop tops), non playtime, sadness, outlier images (new fashions, new shows, new events, stuff the AI wasn't trained on, because its trained on a historic set), all firing the AI.
So the step after that: you grab images from people's phones for review, to process all the false positives*
(* I know he's claiming it's to review 33,000 *reported* claims, and he uses colorful language to justify that; however, these are not real claims. A real claim would have the alleged abuse images supplied with it, already filtered, and police would be investigating that incident, not a large set of media which may or may not contain abuse images. If it needs further processing to even be worth a police officer's viewing time, clearly the rhetoric doesn't match the scenario.) My guess is these come from Microsoft's algo.
Can I point out *whose* devices will get constant false-positive flags? Children's devices will. Because their photos are of children (themselves and their friends), the primary devices being spied on will be those of children. Then parents (because their photos are of their children) will be next.
In essence, when you roll that out to replace the Apple iPhone scanner AI (and you will), that is who you are targeting: children and parents.
And then there's the group of watchers. I endlessly read how they are totally professional, some sort of non-human group of super-beings. Yet their rhetoric and actions don't match. Professionally trained child sex abuse image watchers, who love their jobs, poring over the private media to process all the false positives. And whether a borderline image stimulates their libido or not will define whether it's porn or not, because, as much as we pretend it's "training" that defines their job, they are people. Weird people who sought out the job of reviewing child media.
So, you want to protect the world from creeps who prey on children, by erm, employing them? No.
I guess the police... uh, "politibetjent" would be the Norwegian gender-neutral and most commonly used term... police officers who have to review these pictures have been "volunteered". At least that's my impression from news reports, my time working for the gubbmint, and the army. So it is not a self-selecting group of pervs.
The top half of the post was my thoughts exactly. The training set is.... flawed.
> you would have an insane amount of false positives
Bingo! Total success, our algorithm caught many thousands of child abusers, let the budgets flow: I need a substantial rise, 3 good-looking assistants and a new office. Mission accomplished.
Come on, you didn't expect that this would catch any real child abuse, did you? It's like training an AI to detect lilies by scanning pictures of daisies. A rose is not a daisy, so it must be a lily, isn't it? Not our problem, let the judiciary sort it out.
Australia has lost the plot when it comes to IT. They still want backdoors in encryption, I believe, and don't understand how that will be abused; and now this.
Could this also be because they are under the thumb of the CCP? Look at the recent elections, and look up Drew Pavlou: he was arrested for protesting against Xi, in Australia, while the Chinese nationals who were there (CCP goons) and assaulted him weren't, at first, arrested.
A country that also banned RimWorld because it contained the word "yayo" and because you were able to make and sell yayo (they have now overturned the ban).
Might be a nice place to visit, but I would never want to live there.
I can remember being a minor in a normal environment, that these days might be seen as "an exploitative, unsafe situation" because my family stopped at a pub in Oxfordshire, put me in a wheelchair and we all sat in the pub garden. My uncle drank his beer then put it down on the table and went inside to the toilet ... so I picked up his beer and drank it. Everyone laughed until my uncle came back and saw his glass was empty.
A few years ago, a bloke went to a pub on a Sunday for a drink. Afterwards, the bloke and his wife left the pub, leaving their 8-year-old daughter behind. She had been left in the pub for a quarter of an hour before the bloke's wife returned to collect her.
The word "Happy" in the headline threw me a bit, as there's no guarantee that 'normal' childhood pics are full of happy faces. But actually, it seems to be more about the overall situation that a child is in, and presumably the subtle clues that an AI could detect in the body language of the children and adults present - "To develop AI that can identify exploitative images, we need a very large number of children's photographs in everyday 'safe' contexts"
Worth a shot, I guess, even if it doesn't ultimately pan out.
However, I'm not sure if the outcome for the reviewers will be any better - "Reviewing this horrific material can be a slow process and the constant exposure can cause significant psychological distress to investigators," - if the AI is filtering out 'normal' pics, then the reviewers will only be seeing horrific images.
Unless, of course, the idea is to let the AI make the decision on all images.
Likely a combination of genetics and perfectly reasonable differences in environment and development. Child brains are very neuroplastic; anthropologists who study child development have documented an enormous range of which competencies are developed to what extent and in what order by a given age. So some children will develop emotional filtering and negative (dampening) feedback strategies much earlier than others, who will develop other skills first and for a while be more emotionally volatile.
Completely normal, and as the prefrontal cortex continues to develop they'll generally even out. Not everyone does, of course; it's a complex system and no one's perfect, and some people are neurologically predisposed to mood swings even if they don't have actual bipolar syndrome. And, of course, many are legitimately bipolar or stimulation-seeking or depressive or what not.
About 10 years ago, when I was more stupid, I uploaded a picture to Facebook of my then 8 year old daughter beaming with pride at having won a swimming contest. She was wearing a salmon pink t-shirt and holding up two medals she won. I got reported by some auto-CP bot on Facebook and had to declare what the image was while I was "investigated"! After a human being stepped in and stated the image was completely innocent, I said thanks and removed it. I also sent a nasty complaint to FB, removed all my media and simply switched the account to "dormant" so friends and family could find me.
Sorry, but AI is barely able to tell a cat from a dog, or a push bike from an exercise bike. When AI image recognition works 99% of the time without marking innocent people and their photos as sexual predators, then I'll play along. Until then, stick it up your arse!
A 99% accurate recognition system is going to be pretty much entirely false positives.
Number of photos uploaded to facebook daily: 350,000,000
Number of CP photos uploaded to facebook daily: probably less than one, that's not where they share them.
Number of false positives at a 99% accuracy rate: 3,500,000
Number of true positives: less than one
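The figures above are a textbook base-rate problem, and a quick sketch makes the point concrete. All numbers here are the commenter's illustrative ones (350M daily uploads, roughly one real positive, a hypothetical classifier that's right 99% of the time), not real Facebook statistics:

```python
# Back-of-the-envelope base-rate arithmetic behind the numbers above.
# "99% accurate" is read as a 1% false-positive rate on innocent photos.

uploads_per_day = 350_000_000   # photos uploaded to Facebook daily
real_positives = 1              # "probably less than one" actual CP image
accuracy = 0.99

false_positives = (uploads_per_day - real_positives) * (1 - accuracy)
true_positives = real_positives * accuracy
flagged = false_positives + true_positives

# Precision: of everything the system flags, what fraction is a real hit?
precision = true_positives / flagged

print(f"false positives/day: {false_positives:,.0f}")  # ~3,500,000
print(f"precision: {precision:.2e}")                   # ~2.8e-07
```

In other words, even a generously "99% accurate" filter yields millions of flags a day, of which essentially none are real: the rarer the target, the more the false positives dominate.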
Yikes! Now you're being logical! Don't!
Logic has nothing to do with any of it (nor have kids, BTW). It's just the infamous "somebody think of the kids" rhetoric, the big excuse of the 21st-century surveillance state (now that terrorists have slacked off), and there is big money to be made from that.
I guess that the politicians and police reckon it will be more efficient (i.e., cheaper) to have an AI system alert them to possible child abuse victims rather than go through all the effort of properly funding social health care services, and actually listening to social care professionals when they say that a child is in serious danger. I cannot help feeling that child abusers are unlikely to post abusive images on public forums, so I am not entirely sure what the objective of this is.
In the same week that US President Biden stated that firearms have now become the number one killer of children in the USA* (recently overtaking car accidents), I do hope that a holistic approach to child safety is being taken, rather than a 'one solution fits all' approach, which never quite seems to work.
*Note I am not interested in getting into the guns are good/bad arguments, which could fill The Register's pages, so please save your comments on that debate for a more appropriate article.
"all were full of pictures of primitive peoples around the world."
By "primitive" you mean those that have happily survived and thrived without the (mainly) white Europeans coming to "educate" them, usually by killing them off and stealing their resources?
'Primitive' women are those that can be shown topless on a magazine cover in a school library without the PTA having a fit of the vapours.
It would be an interesting experiment to determine exactly how 'primitive' they had to be before the AI determined that the librarian was to go on a government list.
If someone were a bit on the cynical side, one could remember news from some years ago such as this one:
in short, AI used to simulate aging. You have a picture of a kid, and are able to get a picture of the same person at any age.
One could then wonder (ponder?) whether this is really about the stated intent, or about building a trained model for face recognition, which is so hip nowadays.
I am almost surprised there is no associated lottery where 100 lucky participants will get a $50 Amazon gift card for donating their childhood memories to the state.
We all know that children are always happy, and under 'normal' circumstances they never, ever cry, pout, or throw tantrums. Therefore "unhappy-looking child" == CSAM, because the only possible reason any child would ever look unhappy is due to abuse.
Woe to anyone who ever took a picture of their child in one of those moments. The storm troopers will be kicking down the door any minute now.
We already have plenty of problems with the "deep learning" approach of building tall stacks of mostly-convolutional network layers, as Reg readers and scribes are well aware. In addition to the ones mentioned in the article there are issues such as overfitting and selecting proxy attributes that turn out to be falsely correlated with the desired attributes. Crowdsourcing a dataset is very problematic, particularly if you don't have the resources to curate it and improve its quality. For something like this I don't know that any possible attackers would bother, but it's technically possible to submit enough altered images to introduce a backdoor bias in the model.
Besides the technical issues, the general problem of inexplicable models becomes much worse in practice when we're trying to create a discriminator for an attribute where reasonable human judges can disagree. Delegating a choice to a black-box algorithm when the metrics aren't even clear to human experts is a huge moral hazard, because we substitute an oracle of unknown value for a hard problem. The temptation to simply trust the oracle is huge and we see it already in action in many domains, such as the sentencing of convicts.
I have to disagree with those who think it's "worth trying". The risks are significant and likely unavoidable if the system is ever used, and the benefit is likely to be minuscule due to the extremely low positive predictive value, very high N, and cost of confirmation. This is precisely the sort of thing which isn't worth trying because it's almost certain to do more harm than good, if it accomplishes anything at all.
Just no. You fight child sex images with proper policing, not with shitty AI. The Register, from what I can see, has yet to report on the guy who had his phone blocked and his Google account closed because of what its shitty AI classed as CSAM. The local police were informed and an investigation was opened, which was then closed as it was clear the photos were close-ups of his son's groin for MEDICAL REASONS. The doctor requested the photos, checked them for the rash, and prescribed antibiotics. The police saw all this was legit, and all investigation was dropped.
Yet the cunts at Google have said "We stand by our findings and won't reinstate his account". When did Google become the fucking world police?