Apple says its CSAM scan code can be verified by researchers. Corellium starts throwing out dollar bills

Last week, Apple essentially invited security researchers to probe its forthcoming technology that's supposed to help thwart the spread of known child sexual abuse material (CSAM). In an attempt to clear up what it characterized as misunderstandings about its controversial plan to analyze iCloud-bound photos for this awful …

  1. elsergiovolador Silver badge


    Wow the level of contempt from Apple is astounding.

    "Look it was reviewed by researchers! They say it's safe! What else do you want you stupid customer?"

    1. Lord Elpuss Silver badge

      Re: Pear

      Your characterisation is an insult to the value of a good vendor/security researcher partnership.

      1. Anonymous Coward
        Anonymous Coward

        Re: Pear

        At no time has anyone complained that the scanning code might be insecure. This is a deflection. It might be insecure, but the security was the least of the problems with what they're doing there.

        Obviously, Apple could simply do the scan on the device and put up a dialog telling the user what it's doing. No need to hide it, if the user and Apple have such confidence in Apple's system.

        "Mandatory child porn scan in progress. Please note, you consented to this search in the EULA when you installed the latest upgrade."

        "these 3 images have been detected as child porn and will be inspected by our staff, if in their opinion the images are illegal in nature, then they will be forwarded to authorities in your country. [X] add your explanation here:... [X] add other images you wish to be taken into consideration. "

        There, done.

        If they have confidence in their AI-based scanning code, then it should not be an issue to tell the customer what their software is doing behind their back. With or without the proxy consent of Corellium (as if any Apple user agreed that Corellium could consent on their behalf anyway!).

        What is the problem here? Apple keeps asserting it's all good and works perfectly, so there's no need for this weird deflection, pretending the problem is 'security', as if you're marketing against hostile users too stupid to see through that claim.

        1. Lord Elpuss Silver badge

          Re: Pear

          You’re creating strawman arguments. There’s no deflection, and Apple making the code/process open to researchers isn’t just about security; it’s about promoting understanding of how the process works, in an attempt to reassure both users and research professionals that nothing untoward is going on ‘under the covers’.

          Personally I have no doubt the process will be secure and will work as Apple intended. My PROBLEM is that even when working as described and intended, I find the process highly intrusive. It gets a solid ‘No’ from me, even without any concerns regarding misuse.

          1. Henry Wertz 1 Gold badge

            Re: Pear

            Yeah, there is deflection. The concern is about invasion of privacy, and that the neural network could misidentify things. (Edit: I see it's using hashes rather than a neural-network-type setup.) They have deflected to "here, check out our source code and look for security flaws".

            1. Lord Elpuss Silver badge

              Re: Pear

              Nope. It's not deflection, it's broad-scope analysis beyond simple security. It's about understanding the technical framework, evaluating against current objectives and validating the design principles against misuse. Code security is just one component of this; go back and read the PDF again.

    2. Anonymous Coward
      Anonymous Coward

      Re: Pear

      I propose El Reg ask Apple for permission to audit their code.

  2. sqlrob

    Look, Squirrel!

    The client can be 100% secure and do everything it says on the box. Unless this also includes auditing how hashes get in the system AND keeping that audit 100% up to date, it's really kind of pointless and doesn't prove much.

    1. DS999 Silver badge

      Re: Look, Squirrel!

      If Apple or someone else was able to surreptitiously add hashes that would identify other images, what problem do you see?

      Is there some other class of known images that Apple could want to check for on the sly that would be detrimental? It isn't like this system can identify new images; all it can do is match existing ones, so it is really hard to see how this can be abused. So Apple adds a hash of a known image of the Taj Mahal (not ANY picture of the Taj Mahal, only one that's near-identical to a specific preexisting one the hash will match) and finds out who is passing around that particular picture? To what end?

      The other possibility I guess you could worry about is that some bad actor third party could add hashes - though if they can do that they probably can modify iOS itself in which case you're fucked no matter what! But let's say all they can do is add hashes. So they could add hashes for common internet memes and create a huge number of false positives, overwhelming Apple's ability to manually verify that they are really CSAM. That would bring down their ability to identify CSAM, or in other words put us exactly where things stand now.

      Am I missing something? Can you come up with a scenario where what you suggest would be harmful?

      1. doublelayer Silver badge

        Re: Look, Squirrel!

        "Can you come up with a scenario where what you suggest would be harmful?"

        A repressive country, the Democratic Republic of Tyranny, has a protest. People take pictures during the protest and share them with those in other areas. People in those other areas see that they are not alone in their displeasure with the government, and the government feels that protests are likely to occur there. The DRT government tasks a group with collecting those images wherever they have been shared. It tries to block those images in their censorship system, but at least it can't track down those who have it. Enter Apple's system. The DRT government sends the hashes of those images to Apple and gets a report including the identities of all people whose devices contain that image. That would include the person who originally took it (was at protest, definitely guilty of high treason), the people who sent it to others (promulgated information contrary to the government, also high treason), and anyone who received a copy and retained it by choice or chance (just normal treason).

        The DRT would have several ways to add this into Apple's system. The easiest would be to call them up and tell them they had to put in the image. If they called the wrong number and got someone who would complain or, conceivably, refuse, they threaten to confiscate Apple's assets and cut off its business; Apple quickly caves. However, there is a subtler method. The country likely has some police system which investigates child abuse, or at least a police organization which can pretend to investigate it. They submit the hashes, saying the images are abuse material. If Apple includes them, the DRT gets what it wants. If Apple doesn't, the country can go out in public and accuse Apple of being biased and of failing to protect children when given information to track; Apple quickly caves.

        1. David 132 Silver badge

          Re: Look, Squirrel!

          Excellent example, but I have to point out that ever since the Glorious Proletariat Revolution of '77, the country's official name is the Socialist People's Democratic Republic of Tyranny. People who question the validity of that name are given complimentary re-education and training at the People's Happy Learning camps in the interior of the country.

        2. Anonymous Coward
          Anonymous Coward

          Re: Look, Squirrel!

          Democratic Republic of Tyranny

          Funny, we're still called U.S.A.

        3. DS999 Silver badge

          Tank Man

          OK, so I could definitely see China wanting to block circulation of that famous image of a man standing up to a tank in Tiananmen Square. Because there's basically just the one iconic image, it would be easy to match with a hash-based system.

          Imagine if a similar protest happened today. Thousands of people in the crowd, every single one of them carrying a camera. There will not be just one iconic picture; there will be thousands of pictures and videos of that event. It would be Whack-a-Mole trying to block them all, because there will always be another person coming out of the woodwork who took a picture that hasn't been shared widely yet. The hash-based system totally fails here, because they are all different photos - and that doesn't even get into whether such a hash-matching scheme could work for video.

          1. doublelayer Silver badge

            Re: Tank Man

            No problem. If the group finds a thousand images which have been widely shared, that gives them thousands of targets who took the pictures or stored them. Let's say they only succeed in finding a hundred of them. That's enough people to achieve several goals:

            1. At least a hundred people who took pictures and shared them is a hundred dissidents who can be removed.

            2. Those hundred can be questioned to find more. Some will comply with questioning.

            3. A hundred is large enough that people will notice that the government was able to track them down. That's a good advertisement that protesting can end badly for you.

            Even if there are more pictures, that gives them quite a large head start. If there are, they can add them to the filter later when they are found.

            1. DS999 Silver badge

              Re: Tank Man

              How are you assuming they can find out who took and shared the image? They can only match them to images that have been hashed, so they have to be identified as "here's an image we don't like". So if you have a copy of such an image, and send it to me, they can know I have it. But they don't know who I got it from and certainly don't know who originally took it.

              Being able to trace back to the source is certainly possible but this hashing scheme doesn't help that process at all. It only helps stop the spread once they have found the image in the first place.

              1. Falmari Silver badge

                Re: Tank Man

                @DS999 "Being able to trace back to the source is certainly possible but this hashing scheme doesn't help that process at all. It only helps stop the spread once they have found the image in the first place."

                Of course it helps the process; it is the very start of the process. The hash identifies an image which may have been posted anonymously, but now they know the account details, and therefore the identity, of those who have sent it to Apple's cloud. From those who have been identified they can backtrack and maybe find other images to hash.

              2. doublelayer Silver badge

                Re: Tank Man

                It gives them a list of people who have the image. That likely includes the person who took it (sort by date uploaded, pick the first). However, even if it doesn't, they'll be happy to target those who received it as well, who could, under questioning, disclose the person who sent it to them. If the source of the image is their primary target, it's just traversing a tree. Since those who received the image are probably also targets, it's traversing a graph. Even if the source evades discovery, there are lots of others who won't.
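
The traversal described here is, in effect, a breadth-first walk over a who-shared-with-whom graph. A hypothetical sketch (every name and edge below is invented for illustration):

```python
# Hypothetical sketch of the traversal described above: starting from the
# accounts flagged by a hash match, walk the "who sent the image to whom"
# graph to enumerate further targets. All data here is invented.
from collections import deque

sent_to = {              # edges: sender -> recipients
    "photographer": ["a", "b"],
    "a": ["c"],
    "b": ["c", "d"],
}

def reachable(start, edges):
    """Breadth-first traversal over the sharing graph."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Everyone downstream of the original photographer is discoverable.
assert reachable("photographer", sent_to) == {"photographer", "a", "b", "c", "d"}
```

The point of the sketch is that once any node in the graph is identified, ordinary graph traversal (plus questioning) reaches the rest; the hash match only has to supply the starting nodes.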

    2. Anonymous Coward
      Anonymous Coward

      Re: Look, Squirrel!

      Like "Certificate Transparency" on the internet? Auditing is not transparency.

      They swapped Certificate Pinning, which forced the browser to reject a cert being swapped in for the actual *real* cert, for a system where your browser reports every site it visits to Google and Cloudflare and Digicert.

      And they don't fooking check the certs, they just say "yeh, the cert authority sent us a copy of that cert it's sending you". So they fingerprint your browser, and log your IP, and from the hash of the cert they determine which site you visited and log it for their records.

      A security measure turns into a privacy attack, while you were not looking.

      80 million-plus certs this year alone - are there 80 million new websites? No. Do they check those 80 million certs? No! It's impossible to know if the cert authority has correctly issued one; they only see the cert, not the data used to get it issued.

      Did those certs intercept TLS connections? Are they being used as an attack on encrypted traffic by governmental agencies abusing some mass-collection warrant? They have no idea, because the content was never checked.

      Auditing is not a solution here.

      Tell the users if your algo flags their images. The only auditing that counts is the users'.

  3. Doctor Syntax Silver badge

    How do you audit a precedent?

    1. Lil Endian Silver badge

      Yes, it's a diversionary tactic to bypass the scrutiny of precedent.

  4. cornetman Silver badge

    They don't really address how they are going to handle governments making them use the technology, once it is up and running, to bend it to their own ends.

    Like scanning for distributed pictures of Winnie The Pooh, or whatever is the demon du jour in the Western world.

    1. elsergiovolador Silver badge


      This is the equivalent of a rug company sending a Roomba equipped with sensors to sweep your house looking for traces of drugs and then reporting you, and yet we are at the point of discussing whether the Roomba will start collecting DNA samples from the rug, instead of rejecting the whole idea altogether.

      1. cornetman Silver badge

        Re: Rug

        Don't get me wrong. This is an awful idea and Apple are going to regret going down this path.

        I would be interested to know where this came from originally. I cannot believe that they are so stupid that they didn't realise how this tech would be bent to the ends of the likes of China, at the very least. It will be a case of do it, or you don't sell in China.

        1. Anonymous Coward
          Anonymous Coward

          Re: Rug


          I would suggest they have copied it from Google, Facebook and Microsoft (see link). The only difference being, Apple is open about it despite the shitstorm they must have known it would bring down on them.

          Can I point out that despite using Mac computers, I have never had an iPhone. Never had a smartphone of any brand. I have a perfectly good original Nokia 3310 and when I am out with my mates, I sure as hell don't need the internet with all its bollocks of interruptions.


        2. tip pc Silver badge

          Re: Rug

          “ I would be interested to know where this came from originally”

          For a business that has continually got things right, certainly over the last 10 years (see share price, dividends and profits), it does seem strange that they’ve screwed the pooch sooooooo badly (what a terrible phrase).

          If it was obvious to me it was obvious to Apple that it was a terrible idea.

      2. Anonymous Coward
        Big Brother

        Re: Rug

        Drugs are passe.

        The vector will be spousal and child abuse, with the FBI 'requesting' that the Roomba search for and report on any blood traces it finds.

      3. Anonymous Coward
        Anonymous Coward

        Re: Rug: We've been here before.

      4. Alumoi Silver badge

        Re: Rug

        Damn it, man! You were not supposed to tell them!

    2. DS999 Silver badge

      This system can't scan for "Winnie the Pooh"

      Only specific images of Winnie the Pooh. If there's a meme image circulating they could identify that, but if there are hundreds of different memes it would only match the ones they have hashes for.

      1. mark l 2 Silver badge

        Re: This system can't scan for "Winnie the Pooh"

        Not according to Apple's PR dept, who claim their magical technology only uses hashes of photos, yet can detect similar-looking photos or ones that have been edited. I say these two claims are not compatible, and it must be analysing the photos using AI to pattern-match them, rather than just hashes, to see if they match known abuse images.

        Plus, why does this need to be done on device? If it's only for photos uploaded to iCloud, why not just scan the photos when they hit Apple's servers and leave the privacy in place on the device?

        1. Anonymous Coward
          Big Brother

          Re: This system can't scan for "Winnie the Pooh"

          Doing it on device means that images can be encrypted in the cloud: only on the device must they be decrypted. If Apple can decrypt them in the cloud then that is a backdoor: the sort of thing governments would like.

          1. FILE_ID.DIZ

            Re: This system can't scan for "Winnie the Pooh"

            The iCloud data is encrypted at rest, but with Apple's encryption key.

            Source -

            1. Graham 32

              Re: This system can't scan for "Winnie the Pooh"

              "can be" being the important phrase. Some journos have suggested this csam move is a precursor to full iCloud encryption with user-owned keys.

        2. Irongut

          Re: This system can't scan for "Winnie the Pooh"

          > why does this need to be done on device, if its only for photos uploaded to the icloud, why not just scan for the photos when they hit Apples servers and leave the privacy in place on the device?

          Your suggestion is allowing the security to gallop out of the open stable door. Plus it would mean Apple handling CSAM and having it on their servers; by scanning on device they can prevent the images getting to any Apple-owned equipment and prove the images were in your possession, something that may become important when it comes to a court case.

        3. DS999 Silver badge

          Re: This system can't scan for "Winnie the Pooh"

          Not according to Apples PR dept, who claim their magical technology only uses hashes of photos, yet can detect similar looking photos or where its been edited

          You aren't understanding what they are saying, or what the technology is capable of (there is a lot of information available).

          It can match the same photo if it has been modified (cropped, resized, quality changed etc.) but not a different photo of the same thing. If you and I are standing next to each other and take a picture of the same thing, they are subtly different just from the angle alone, let alone if we have different phones or different settings on the same phone that result in markedly different output. They will not be matched.

          This is designed for taking a bunch of KNOWN existing child abuse photos and matching them, even despite cropping or changing the size/quality level which is done all the time on the internet for photos. It simply won't work for similar photos not based on the same original, the hashing depends on details of how the image compression was done that end up very different between two originals of the same subject.
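
The distinction can be illustrated with a toy difference hash: a deliberately simplified stand-in for perceptual hashing in general (Apple's actual NeuralHash is far more sophisticated, and this sketch is not it). A perceptual hash encodes brightness relationships rather than raw bytes, so uniform edits survive while a genuinely different scene does not:

```python
# Toy difference hash (dHash) over a grayscale grid. Hypothetical
# illustration only, not Apple's NeuralHash. It records whether each
# pixel is brighter than its right-hand neighbour, so a uniform
# brightness shift (same photo, edited) keeps the hash identical,
# while a different photo of the same subject produces different bits.

def dhash(pixels):
    """pixels: 2D list of grayscale values. Returns a bit string."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append('1' if left > right else '0')
    return ''.join(bits)

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

original   = [[10, 20, 30], [90, 80, 70], [40, 50, 60]]
brightened = [[p + 15 for p in row] for row in original]   # same photo, edited
different  = [[70, 10, 90], [20, 60, 30], [80, 40, 50]]    # a different photo

assert dhash(original) == dhash(brightened)            # edit survives: match
assert hamming(dhash(original), dhash(different)) > 0  # different scene: no match
```

A cropped or recompressed copy preserves most of these gradient relationships, which is why derived copies still match; two independent photos of the same subject do not share them, which is why they don't.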

          1. tip pc Silver badge

            Re: This system can't scan for "Winnie the Pooh"

            If you and I are standing next to each other and take a picture of the same thing, they are subtly different just from the angle alone, let alone if we have different phones or different settings on the same phone that result in markedly different output. They will not be matched.

            The whole point of Apple’s technology is that it detects those two photos as the same. The clues are in what they’ve said: the tech is designed to resist defeat by edits. No doubt if there wasn’t such a stink we’d be hearing by now how their tech can use the example hashes to detect new cases.

            I’d rather there was absolutely zero CSAM; it’s beyond deplorable. By the time a photo is taken it’s too late. They need to be stopped before that.

            1. DS999 Silver badge

              Re: This system can't scan for "Winnie the Pooh"

              No, it is not. Those are two different photos; it won't detect them as the same. Maybe read Apple's documentation I posted a link to above; you don't understand how it works at all.

  5. revilo

    who audits the hashes?

    Do they really believe we have an IQ of 50? Of course one can audit the software which produces the hashes or compares them; I trust that they can program this correctly. But nobody can audit the smut which actually feeds the hashes. Or does anybody believe that the database of smut pictures is passed around to security researchers? Craig Federighi is an intelligent person who knows that he is misleading the press. The system is by design not auditable.

    The basic fact remains that every user is subjected to police software, treated like a criminal, gets a hash of kiddy-porn pictures loaded on their machine, and is completely dependent on the goodwill of the folks feeding the offensive database (which is not Apple). In the future, and in some countries, this will certainly also include politically offensive documents.

    Apple is misleading us also because it would technically be no problem to compare even encrypted files on iCloud against an offensive database. Nobody would object to such checks. That the police software has to run on every user's machine is completely new and unacceptable.

    1. DS999 Silver badge

      Re: who audits the hashes?

      So how is this different from the photo scanning that Google, Microsoft, Amazon, etc. clouds are already doing? You don't know what they are looking for, so they could already be doing all the terrible stuff you imagine Apple will be doing.

      Other than not using the cloud at all with any product, there is no way to avoid this if you believe it will be used for terrible ends. Check that, other than not using any sort of computing device at all there is no way to avoid this, because given what you believe you would also believe that Apple will check photos even if they aren't uploaded to the cloud, and that Android and Windows will do the same. I suppose you could use Linux, but you better compile it yourself from source - after checking the source, and checking the source of the compiler you used, and reading "On Trusting Trust" and realizing that even checking the source code isn't good enough if your level of paranoia is permanently set at 11.

      1. Anonymous Coward
        Anonymous Coward

        Re: who audits the hashes?


        Assumption alert: Your paranoia does you credit......but you are assuming that the material being scanned is in a widely accepted format. Bad guys probably use private encryption before anything enters any public channel!! So......good luck to the snoops, any of them......NSA, GCHQ, Apple, Google.....

      2. Anonymous Coward
        Anonymous Coward

        Re: who audits the hashes?

        Yes, it's the Apple customers fault for being paranoid.

        If they're not happy with Apple running their AI pattern matching software on their private photos, and then sending fuzzy matches up to their staff and contractors and teleworkers for review, then they should not be using Apple products.

        If they're concerned that Apple has removed their privacy rights with this suspicionless search (and apparently this is legal in the US), then they should imagine what else other US companies are doing behind their backs.

        All those US backdoors in Google's cloud, Amazon's cloud and Microsoft Clouds etc.

        All that slurping of private data, for anyone in a three-letter agency to have a read through if they're bored, or you said something to upset them.

        Today it's the iPhones, but tomorrow it will be scanning iMacs' SSDs, and if you don't like it, don't use Google, Amazon, Microsoft or Apple kit.

        It's good that you try to drag all US cloud tech down with Apple, DS999.

      3. Kabukiwookie

        Re: who audits the hashes?

        Other than not using the cloud at all with any product


    2. Kabukiwookie

      Re: who audits the hashes?

      Do they really believe we have an IQ of 50?

      Of course they don't think that everyone has an IQ of 50. They're only targeting their existing and future customer base.

      It is a shame that this will probably mean that after Apple's glowing example, other manufacturers may be pressured by politicians to set up some similar scheme.

  6. Lil Endian Silver badge
    Thumb Up

    Not Just iPhones

    "I also think it's interesting that they're offering research grants towards doing research for any mobile devices and not just iPhones."

    Well, naturally Corellium wouldn't have a singular focus on Apple for any reason, would they?


    So, now it's the third party researcher that chooses the target. Corellium covered. GJ Corellium :)

    Edit: Being less cynical, it is a Good Thing (tm)

  7. Anonymous Coward
    Anonymous Coward

    Encryption? Apple AI? So what would this be?

    Is it some banned material? Is it even a photograph? Or is it just the output of a random number generator....designed to confuse?


    To get to the point.......the bad guys can also do this sort of private processing BEFORE material enters a public channel! Maybe Apple AI can figure it out!!

    [a large block of apparently random characters, omitted]

    Let us know when (and how) to unpick it!

    1. find users who cut cat tail

      Re: Encryption? Apple AI? So what would this be?

      You wanted to know? All right then.

      You have two options. Give us the decryption key. Or spend a few years in prison. You don't have the key or say it doesn't exist? Then your options are limited to the second one.

      1. Alumoi Silver badge

        Re: Encryption? Apple AI? So what would this be?

        Obligatory xkcd:

      2. Anonymous Coward
        Anonymous Coward

        Re: Encryption? Apple AI? So what would this be?


        Multiple Assumption Alert:

        1. Is it even encryption? Maybe (as stated) it's just a random stream in base64! How do you know?

        2. Since when is private encryption deemed to be illegal?

        3. If it is encrypted, how do you know that the message is not perfectly legal? How do you know it isn't a recipe for Black Forest Gateau?

        4. Even if snoops want to decrypt....the key isn't enough!! Is it Blowfish? Is it IDEA? Is it PGP? What's the algorithm? Maybe that is private too!!

        Still....a good try at scaring the commentards on El Reg!!!

    2. Anonymous Coward
      Anonymous Coward

      Re: Encryption? Apple AI? So what would this be?

      > Is it some banned material? Is it even a photograph? Or is it just the output of a random number generator....designed to confuse?

      Downvoted for unnecessarily shuffling electrons. One line was enough to make your point.

  8. Eclectic Man Silver badge


    When a work colleague announced that his wife was having twins, and he would bring in photos, I made some rules:

    1. No nudity

    2. No shit

    3. No vomit

    4. No crying

    I don't know how Apple's software will distinguish between children / babies in images and, for example, teddy bears of a similar size (and you can get them legally in bondage harnesses, though I suggest you don't search for such images from a work computer). I expect that there may be a period of over-reporting of images, and I wonder whether the actual Apple employee who identifies a referred image as being of child abuse will be identified to the legal authorities, or if it will just be 'Apple child protection team'. It will also be interesting to know how the various countries' law enforcement organisations will engage.

    New parents, and indeed old parents, often like boasting of their children's progress / humorous accidents (see 'You've Been Framed' for any number of childhood accidents caught on video) by sending images of them proudly holding up the cup awarded for second place, or covered in mud after falling in a puddle. Designing the algorithms to detect child abuse images rather than normal childhood activities, and verifying them, will be very difficult. After all, being wrongly accused of child abuse is going to be very distressing.

    1. This post has been deleted by its author

    2. Anonymous Coward
      Anonymous Coward

      Re: Kids

      Apple’s software does not need to distinguish between photos you have taken and child abuse material because that is not how the technology works.

      A US child protection NGO called the National Center for Missing and Exploited Children (NCMEC) maintains a database of known indecent images. These are images which law enforcement have found on defendants’ devices, have assessed as being illegal and submitted to NCMEC. NCMEC uses technology from companies such as HubStream and Project VIC to maintain this database. This includes a ‘voting state’ where illegal material has to receive a number of confirmations before the item makes it into the production database.

      Typically MD5 and SHA1 were used to hash the images, and that hash set (not the images) was given to law enforcement agencies, digital forensic experts and internet companies so they could block content.

      NCMEC also run CyberTips which allows internet companies to tip off law enforcement about illegal activity on their platforms.

      The problem is that compression, resizing or minor edits of images mean they will not generate the same MD5. Microsoft developed PhotoDNA as a way to match images with minor changes. They licensed it for child protection work. The problem is it’s not very good.
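
The reason exact hashes fail here is the avalanche property of cryptographic hashes: change even one byte of the file and the digest changes completely. A minimal illustration (the byte strings are stand-ins, not real image data):

```python
# Why exact-hash matching breaks on re-encoded images: a cryptographic
# hash such as MD5 changes completely if even one byte of the file
# differs, so any recompression, resize or minor edit defeats a blocklist
# of MD5/SHA1 digests. The bytes below are placeholders, not a real JPEG.
import hashlib

image_bytes = b"\xff\xd8\xff\xe0" + b"pixel data" * 100   # stand-in for a JPEG
reencoded   = image_bytes[:-1] + b"x"                      # one byte changed

h1 = hashlib.md5(image_bytes).hexdigest()
h2 = hashlib.md5(reencoded).hexdigest()

assert h1 != h2   # the digests share nothing recognisable
```

Perceptual schemes like PhotoDNA and NeuralHash exist precisely to close this gap: they hash visual content rather than file bytes.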

      Neural hash is Apple’s version of PhotoDNA. It is a mechanism to detect known CSAM where minor changes/recompression has occurred. It is presumably better than PhotoDNA (or may be about the same but means Apple don’t have to cough to using an MS technology).

      Neural hash is not about looking at a new image, eg a teddy bear in BDSM gear and saying ‘is this CSAM’.

      Apple will not have access to the CSAM. They will have licensed the hash generator to NCMEC who then run it over their data, giving the hashes (which cannot be turned back into the images on their own) to Apple.

      So the database is compiled by an NGO with multiple law enforcement agencies input.

      There are no guarantees that ‘bad’ or erroneous data won’t end up in the hash database, but this is hardly Apple’s fault. They can’t hope to generate their own data without serious effort (the database represents over a decade of law enforcement work) and legal issues (Apple cannot curate its own CSAM collection). A mitigation against bad data is the fact that there have to be multiple hits before you’re flagged for human review.
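
That multiple-hit mitigation amounts to a threshold gate over blocklist matches. A sketch of the idea (the hash values, function names and threshold below are illustrative, not Apple's actual parameters):

```python
# Sketch of a match-threshold gate: each photo's perceptual hash is
# checked against a blocklist of known-image hashes, and an account is
# only escalated to human review once the number of hits crosses a
# threshold. All values here are illustrative, not Apple's parameters.
REVIEW_THRESHOLD = 3  # illustrative; the real threshold is Apple's choice

def count_matches(photo_hashes, blocklist):
    """Count how many of the library's hashes appear in the blocklist."""
    return sum(1 for h in photo_hashes if h in blocklist)

blocklist = {"a1b2", "c3d4", "e5f6"}            # known-image hashes
library   = ["ffff", "a1b2", "0000", "c3d4"]    # a user's photo hashes

hits = count_matches(library, blocklist)
assert hits == 2
assert hits < REVIEW_THRESHOLD   # below threshold: nothing is reported
```

The gate means an isolated false positive (or a single planted hash) does nothing on its own; only an accumulation of distinct matches triggers review.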

      I wish the so-called journalists of technical outlets such as this one would do some research and report accurately rather than chasing clickbait headlines. It’s fine to point out the many flaws of this situation, but they’re not giving the technical readers the full picture.

    3. Kabukiwookie

      Re: Kids

      And what sort of person would be attracted to a job that requires one to 'verify' possible pictures of this type?

    4. Anonymous Coward
      Anonymous Coward

      Re: Kids

      >being wrongly accused of child abuse is going to be very distressing

      And I wonder who may be sued for a wrongful accusation. Political hay may be made out of this sort of thing: e.g., one might not want to run for public office even if exonerated.

  9. Anonymous Coward
    Anonymous Coward

    What about the training data?

    So the code can be analysed - what about all of the training sets that have been thrown at the system? Neural net code can be analysed and verified, but the secret sauce is how it has been trained.

    And here is the big problem - child pornography is a strict liability offence meaning that researchers would require special permission from the government to even obtain copies to repeat Apple's experiments.

    1. Anonymous Coward
      Anonymous Coward

      Re: What about the training data?

      It’s not really a training set in the normal ML sense. They have generated their ‘neural hashes’ for every image in the database rather than built a model. Apple’s algorithm is really only good for detecting almost identical copies of known abuse images. The data set is compiled by law enforcement. If you don’t trust that data, that’s hardly Apple’s fault. It’s the same data used by every content provider doing similar scanning.

  10. martyn.hare

    Ignore the technical aspects for a moment…

    There’s a giant elephant in the room nobody wants to talk about: the fact that these checks could actually be harming children more than not having them would.

    The theory is that infringing material gets added to a database, allowing other abusers to be rapidly caught through illicit possession of child abuse images. As in, if you catch one child molester, you should be able to bag a whole bunch of them from the trading of images. In fact, law enforcement have themselves been known to use honeypots to aid in trapping and catching these people. This doesn’t sound like a bad idea at all on the surface, in fact, it’s practically a law enforcement wet dream, where investigators can share simulated abuse images (cruelty-free, no children harmed in the making of) to try to catch sickos before they can do real harm to real children.

    There’s just one problem with this approach when automated at scale, and it’s not a small one. It creates a ‘need’ for new images to be created and shared privately, as older images become riskier to possess: the longer an illegal image is in the wild, the more likely it is to have been added to a database. Add to this the fact that images depicting abuse which don’t involve real, living humans are also added to these databases, and you have a situation where sickos now have an incentive to sexually abuse more children, since it could be perceived as a safer choice (for the abuser) depending upon the situation.

    It’s very easy to say how many people got caught in possession of CSAM but it’s hard to say how many additional children will be molested as a result of interventions like these. Due to a lack of transparency with the public (good luck getting FOIA answers) there’s no way to assess if this measure even does protect the children as claimed. In theory, it should, however, in practice, I very much doubt it.
