Apple didn't engage with the infosec world on CSAM scanning – so get used to a slow drip feed of revelations

Apple's system to scan iCloud-bound photos on iOS devices to find illegal child sexual abuse material (CSAM) is supposed to ship in iOS 15 later this year. However, the NeuralHash machine-learning model involved in that process appears to have been present on iOS devices at least since the December 14, 2020 release of iOS 14.3 …

  1. doublelayer Silver badge

    A useful analysis of Apple's announcements

    Since this issue has deservedly earned a lot of attention from these forums, I think a few blog posts I've read will be of interest to others here. A security expert who runs an image analysis system, and therefore already deals with CSAM reporting, analyzed the statements made by Apple. A blog post from ten days ago reviews the technical details Apple released and how they compare to his own reporting mechanisms (spoiler: probably badly, but Apple won't provide enough detail to be sure). A second post from this Monday reviews Apple's announcements since then (further spoiler: lots of contradictions but little new information). I found these informative, especially with the additional experience the author brings to the analysis.

    For those who prefer cleartext URLs, the posts are published on the blog at https://hackerfactor.com/blog and are the two most recent posts at the time of writing. You probably want to read them in chronological order.

    1. Pascal Monett Silver badge
      Thumb Up

      Re: A useful analysis of Apple's announcements

      Thank you for bringing this information to our attention.

      Quite an interesting read.

    2. big_D

      Re: A useful analysis of Apple's announcements

      Very interesting analysis.

    3. T. F. M. Reader

      Re: A useful analysis of Apple's announcements

      Good read, and I add my thanks, too.

      A couple of comments though (at the risk of looking like Apple's apologist in this matter, which I am most definitely not). First, the blog author "calls bullshit" on the "1 in a trillion" claim. The claim may or may not be bullshit, but the blogger seems to interpret the number as the probability of flagging an image incorrectly. Disclosure: I also spent a few minutes working out the "prosecutor's fallacy" numbers on the basis of the same assumption, but then I decided to check.

      Apple's technical paper (linked in this El Reg article) is actually not technical enough for me, but it does say that they have a "CSAM content threshold" (how it is set remains a mystery - that's one possibility for "1 in a trillion" to be bullshit) for your account, and they supposedly cannot examine your photos unless/until that threshold is breached (that may also be bullshit). I interpret it as "if there is an occasional match - true or false - your account won't be flagged, but if you habitually upload child pornography to iCloud then we'll notice". It certainly does not look like the probability of a false positive on a single image, and this particular criticism is misleading. It's a bit surprising because the threshold is covered by the blog post in a different context.
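
      To see why an account-level threshold changes the numbers so dramatically, here is a rough back-of-envelope sketch. Purely illustrative: it assumes independent per-image false positives at a made-up rate, which Apple's paper neither states nor justifies, and neither the real rate nor the real threshold is public.

      from math import log, lgamma

      # Probability that an account with n photos produces at least one false
      # match, given a per-image false-positive rate p (independence assumed).
      def p_at_least_one(n, p):
          return 1.0 - (1.0 - p) ** n

      # Poisson approximation to the probability of hitting a threshold of t
      # false matches (reasonable when p is tiny); returned as log10 so the
      # result does not underflow to zero.
      def log10_p_threshold(n, p, t):
          lam = n * p                   # expected number of false matches
          # leading term of the Poisson tail: e^-lam * lam^t / t!
          return (-lam + t * log(lam) - lgamma(t + 1)) / log(10)

      # Illustrative numbers only - neither the rate nor the threshold is public.
      print(p_at_least_one(10_000, 1e-6))         # ~0.01: one stray match per account is plausible
      print(log10_p_threshold(10_000, 1e-6, 30))  # ~ -92: thirty independent stray matches is absurd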

      Secondly, as far as I understand Apple only do the analysis on images that are about to be uploaded to iCloud, and thus at least some of the "legal" part of the criticism is invalid as well. In fact, I thought that the blog made an interesting argument why doing the match on the device was the only avenue open to Apple within their own ToS. Some of the "legal" criticism in the blog post may be correct - IANAL, and I don't doubt that the blogger is a domain expert.

      1. martyn.hare

        Well… from what we know

        I’m just a lay person, but common sense is a powerful tool, and we now know a lot more about how the algorithm works thanks to the infosec community stepping up.

        Images are all resized to 360x360 as the very first step, so if you take burst shots with minimal differences in detail and they all get uploaded to iCloud there and then… well, you’re gonna hit that threshold fast in the case of false positives. If one image in the burst set falsely matches and the subject stayed reasonably still, all of them likely will. The threshold is allegedly 30 images, which is a low number when talking about bursts. We also know the algorithm doesn’t care about colour either, as can be seen from the description in Apple’s own technical summary, which means matching is effectively working from the greyscale content of the image rather than the full RGB colour space.

        This means the potential number of combinations to test for is quite limited too, and collisions will happen, potentially quite frequently. Finally, you have to remember that many of the source CSAM images will themselves be blurry or low-resolution, as every image matching the criteria which has ever been seized will be included, not just modern digital images. There will totally be scans of Polaroids, glossy/matte paper photos developed from analogue cameras, as well as photos of photos which were shared between early smartphone users and folks with slower, old-fashioned internet connections. All of that needs to be catered for, and that isn’t even counting video files, where differences in analogue noise have to be handled too.
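
        If you want to see why a whole burst can fall together, here is a toy sketch. It uses Pillow and an ordinary difference hash purely as a stand-in, since NeuralHash itself is not public; the point is only that once colour and resolution are thrown away, near-identical frames become near-identical inputs.

        from PIL import Image

        # Stand-in perceptual hash: greyscale, shrink, compare neighbouring pixels.
        # This is NOT NeuralHash - just a toy to show how little of a burst frame
        # survives once colour and resolution have been discarded.
        def toy_hash(path, size=9):
            img = Image.open(path).convert("L").resize((size, size))
            px = list(img.getdata())
            bits = []
            for row in range(size):
                for col in range(size - 1):
                    bits.append(px[row * size + col] > px[row * size + col + 1])
            return bits

        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))

        # Hypothetical file names. Two frames from the same burst will typically
        # differ by only a handful of bits, so if one frame happens to collide
        # with a database entry, its siblings are likely to collide too - which
        # is how a 30-image threshold can be reached from a single false positive.
        # print(hamming(toy_hash("burst_01.jpg"), toy_hash("burst_02.jpg")))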

        This isn’t factoring in scope creep which will inevitably happen. Microsoft says to this day that PhotoDNA will only be used to find CSAM… except there’s written evidence to the contrary: https://about.fb.com/news/2016/12/partnering-to-help-curb-spread-of-online-terrorist-content/

  2. Anonymous Coward
    Mushroom

    Not the problem

    It seems like Apple has successfully moved the conversation on to what is, in essence, nitpicking. It allows them to handle little problems individually and promote their internal mitigations.

    But we're missing the forest for the trees.

    The real problem is that Apple now has a mechanism that looks at every photo on your phone and sends them to a government depending on what the government wants.

    This is not technology that needs to be improved by fixing faults like those in the article.

    This is technology that needs to be banned.

    1. Falmari Silver badge
      Megaphone

      Apple are hypocrites

      @HildyJ I agree with most of your post except this “This is technology that needs to be banned.”

      With probable cause and a warrant, I believe law enforcement should be able to deploy this tech to a suspect’s device.

      But Apple are not law enforcement; there is no probable cause and no warrant. They have deployed software, without notification, onto their customers’ devices which will scan selected images for evidence of a crime. Now I am sure that for most here, it is illegal in their countries for law enforcement to place scanning software like this on someone’s phone or computer without probable cause and a warrant, and the same would apply to searching said devices. Apple are treating their customers as criminals.

      This will be just the start if Apple are allowed to do this. Why stop at images? Why not include video and text - all data? Once Apple open the door, how can they say no to scanning for evidence of more serious crimes than possession of CSAM? Why set the bar at CSAM? Let’s just get rid of the bar and scan for evidence of any crime.

      Finally, as they have got away with scanning selected data on the phone, why not just search the whole phone? It makes sense - ‘think of the children’.

      If Apple go live with this, then when it comes to their users’ privacy, they are just hypocrites. They put all this security in place to protect users’ data from everyone including law enforcement, even when law enforcement is in possession of probable cause and a warrant. But not to worry - Apple don’t need such niceties as probable cause and a warrant; they will scan all their customers for evidence of a crime.

      1. Tomato42

        Re: Apple are hypocrites

        if technology already exists it will be abused by the agents of the police state, especially if clueless judges are in the loop.

        1. Falmari Silver badge

          Re: Apple are hypocrites

          @Tomato42 I agree "if technology already exists it will be abused". But isn't that the case for most technology? If it can be abused, someone will find a way. For this technology it is too late - it already exists.

          This technology can also be used ethically. With probable cause and a warrant police can search a person's computer. They could then run this software to flag up images that may be CSAM and need to be checked. To me that would not be an abusive use of the technology.

          I am sure there are other uses for this technology that do not count as abuse, say cataloguing your image collection. I am afraid that, like most technology, there is potential for abuse - and Apple's use of it, I think, is exactly that.

          1. doublelayer Silver badge

            Re: Apple are hypocrites

            The difference is that this technology could be removed, and then it can't be abused. It still exists as a possibility as it always has, but it doesn't exist as an available exploit. If the code is not on people's devices, an abusive government cannot make use of it, and the user is therefore safe from this avenue. That is why it should be removed.

            1. Falmari Silver badge

              Re: Apple are hypocrites

              @doublelayer I agree it should be removed. I was referring to tech in a general sense not Apple's implementation of the tech, in reply to the tech should be banned. See post below.

      2. stiine Silver badge
        FAIL

        Re: Apple are hypocrites

        Whose government's warrant? Yours? Mine? Theirs? In your country, pictures of "A" are legal and "B" illegal. In my country, pictures of "B" are legal and "C" are illegal. In their country, pictures of "C" are legal and "A" are illegal. Which images are going to become part of a database used by this feature? A? B? C? Or all three?

        1. Falmari Silver badge

          Re: Apple are hypocrites

          @stiine When I said law enforcement could deploy this tech with probable cause and a warrant I was referring to the technology in a general sense, not Apple’s implementation of the technology as a feature. In other words, law enforcement would deploy their own version of this tech, not switch on Apple’s feature to search a device.

          Let’s take my country, the UK, as an example. They would have their own software implementing this tech, written by whoever they chose to build it, matching hashes against a database populated by CEOP, not the US one supplied by NCMEC - there is no A, B or C. Just like tapping someone’s phone or searching their computer, it would require probable cause and a warrant. That to me seems an acceptable way to use the technology.

    2. Alistair
      Windows

      Re: Not the problem

      HildyJ, I completely agree.

      Let's start with the CSAM - that, most folks will get on board with, because children.

      Next, let's look for terrorist memes!! Yeah, terrorists, lots of folks will be okay with hunting terrorists.

      Oh, right Drug Smugglers!!!! we can go get them too!! No one likes a drug smuggler.

      How about those nutbars that wanna keep the oil barons wallowing in their money pools? Let's go get the anti-greens next!!!

      (I suspect that someone else wrote it better a long time before me, but I'm quite sure even the anti-vaxx community will get it, cause, you know, they'll be on that list too)

      1. Chris G

        Re: Not the problem

        I think it goes deeper than uses for catching the bad guys; how difficult is it going to be to adapt it for commercial purposes?

        I can't honestly say I fully understand how the software works, but if you can surveil one set of information with it you can surveil anything with some tinkering.

        Banning is effectively meaningless since the concept is out of the box; there will be many asking 'how can this work for me?'

        1. Pascal Monett Silver badge

          Re: Banning is effectively meaningless

          I'm pretty sure that, if there is a country banning it from phones sold on its soil, the banning will be effective. Complicated to put in place, perhaps, but if, say, China were to tell Apple to take it out, you can bet that Apple will have it out in a jiffy.

          1. doublelayer Silver badge

            Re: Banning is effectively meaningless

            I agree, but looking at the globe, I can't think of anywhere which will do it. Every dictatorship will be thinking of the potential uses for them. A lot of democracies may be thinking the same and haven't stopped privacy violations before. Those few democracies which tend to be forceful about protecting their residents' rights might think about it, but they don't want to be described as "those people who defended child abusers" so they will probably let it slide.

    3. Neil Barnes Silver badge

      Re: Not the problem

      Not perhaps that it should be banned, but more maybe that one should be in control of what applications/software runs on one's own device. It should be possible for any user to remove any application from their hardware, even if that is part of the system software. Yes, things might break, but that's what genius bars are for...

      Though of course no-one would actually choose to install such software: those who do not hold/use/seek such CSAM images would consider that they never need it; those who do would avoid it.

      Nonetheless, I would feel happier in a world in which such software was not installed - the same applying equally to, though for different reasons, the vendor provided crapware that finds its way onto both phones and PCs.

    4. Lord Elpuss Silver badge

      Re: Not the problem

      "This is technology that needs to be banned."

      Unfortunately won't solve the problem - you can't put this kind of thing back in the box, because now that people have developed a taste for what's possible, they'll keep looking until they find a way to MAKE it possible.

      The only real solutions here are abstinence or an arms race. Either mass ditching of companies that try to do this, or developing better and better counter-tools & security.

      1. big_D

        Re: Not the problem

        The German parliament's Digital Agenda committee announced yesterday that this feature should be removed...

      2. Anonymous Coward
        Anonymous Coward

        "you can't put this kind of thing back in the box"

        Oh yes, you can. Companies do not like huge fines, products removed from shops, executives jailed, and the like.

        If you mean the political will is towards overall surveillance and not towards privacy, you may be right - but it's not because of the technology used. There's a lot of existing technology that is banned because it is too dangerous.

        For example once it was easy to buy dangerous and poisonous substances, today not so much.

    5. Sam not the Viking Silver badge

      Re: Not the problem

      A mechanism to look at every photo..... hmmm. It's not a big step to look at other content; how about those patent-ideas or potential legal cases? Don't pass them on to law-enforcement, get in there first, make money from them.

      Does anyone seriously think this intrusion is what the operating system of a personal device should be doing?

      1. Doctor Syntax Silver badge

        Re: Not the problem

        Yes. Apple. And probably a few others are salivating. Even now either Google have had something similar in the works for a good while or somebody has had a bollocking for not doing it. And then there are all the security agencies. And more.

        You meant users? Of course not, but they don't count; they're the product.

    6. big_D

      Re: Not the problem

      The German government committee on the Digital Agenda yesterday openly criticised it, saying it should be removed immediately.

      Digital Agenda committee chairman Manuel Höferlin (FDP) called the CSAM scanning the:

      “größte Dammbruch für die Vertraulichkeit der Kommunikation, den wir seit der Erfindung des Internets erleben” (the biggest dam burst for the confidentiality of communications that we have experienced since the invention of the internet).

      “Every scanned content destroys the trust of users, that their communications are not being monitored. Without trusted communications, the Internet will become the biggest surveillance instrument in history.”

      From the German Mac & i website:

      https://www.heise.de/news/Grosse-Gefahr-CSAM-Scanning-auf-iPhones-stoesst-auf-Kritik-aus-dem-Bundestag-6167950.html

    7. Dan 55 Silver badge

      Re: Not the problem

      It's not just photos - it also checks Siri and web searches, and could probably be expanded to include other file types.

      And what's to stop Apple running different country-by-country on-device checks depending on how they're leant on?

      As the blog in the first post argues, it's probably why they're doing this in the first place, because they have a CSAM problem and were told by the US government to sort it out. Well, now they have a tool which can be turned into a Swiss Army knife should any country require it.

    8. Chatter

      Re: Not the problem

      This does nothing to identify perpetrators, only victims. If, and that is a huge "IF", the Apple ecosystem were totally secure, then I would agree that this could possibly go after real perpetrators of a sick and unnatural activity. The problem is that the only truly secure computer has no hardware and no software. Therefore the Apple ecosystem that is being scanned can and will have pictures on many accounts that the account holders do not even know are there. I, for instance, was just told that my free account was almost full. I have no idea of what is there or how to really look to see what is there. Therefore, if I have fallen, without consciously knowing it, to social engineering, I could potentially have the tax records for Al Capone's 1935 illegal scams saved there. Even if it is my biometrics that put it there, a properly hacked system will also be able to put it there with my biometrics.

      1. Anonymous Coward
        Anonymous Coward

        Re: Not the problem

        By 'This does nothing to identify perpetrators, only victims,' do you mean the phone on which that hash was generated? I'm pretty fucking sure it identifies someone other than the 'victim'. Also, who are you calling the victim? You don't make that clear.

    9. jezza99

      Re: Not the problem

      Indeed. It would be straightforward to use this technique to match photos against a hash database of, say, faces of people that a government doesn't like instead of a database of CSAM hashes.

      The technical details really are irrelevant. It is the fact that Apple will scan your photo library at all which is the issue.

  3. Anonymous Coward
    Anonymous Coward

    Oh, we WILL be creating and widely distributing "poisoned" images.

    If Apple doesn't back down, their spyware WILL be defeated by being overwhelmed.

    1. hoola Silver badge

      That may be the case, but during that process a huge number of completely innocent people will have their lives destroyed.

      The trouble with anything like this is the assumption that those caught are guilty. Names are "released" and vigilante groups will go out of their way to discover and ensure that information is spread as widely as possible on Social Media.

      None of that can be undone. Yes, the authorities can make mistakes, but compared to this feed of information those are a pittance.

      1. martyn.hare

        It would be the end of Apple

        It would be the end of Apple if names came out, and you can bet that if innocent people started getting knocks on the door, their iPhones would be blamed. I think Apple will use every excuse they can to NOT report people, while agreeing a framework which is technically approved, as even a single mistake could result in a class action lawsuit from concerned device owners.

        Someone has already claimed to have reverse engineered NeuralHash and the most damning claim made so far is that the model results in different hashes depending upon which iDevice you use. This means the testing mechanism itself is likely unstable across firmware updates too, further increasing the risks of both false positives and false negatives. To exploit the system as it is believed to function today, not only would one have to make a poisoned image with a known collision across all iDevices, one would also have to keep up with unstable changes across sets of OS updates.
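
        If the instability claim holds up, it is easy to put a number on: hash the same image under two OS builds and count the flipped bits. Sketch below; the hex values are hypothetical and the 96-bit length comes from the reverse-engineering write-ups, not from Apple.

        # Count how many bits differ between two hashes of the *same* image
        # computed in two environments (different device or firmware build).
        def bit_diff(hash_a_hex, hash_b_hex):
            return bin(int(hash_a_hex, 16) ^ int(hash_b_hex, 16)).count("1")

        # Hypothetical 96-bit values, for illustration only.
        h_build_a = "11d9b097ac960bd2c6c131fa"
        h_build_b = "11d9b097ac960bd2c6c131f8"
        print(bit_diff(h_build_a, h_build_b))   # 1 flipped bit out of 96

        # Even a couple of flipped bits cut both ways: an exact lookup starts
        # missing real matches (false negatives), while any tolerance added to
        # compensate widens the net for accidental collisions (false positives).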

        The principle still stands that I personally won't be using iCloud any more if this goes ahead and I definitely won't be buying a new shiny when my current iPad and iPhone go EOL. Privacy and cleaner platform security was the key reason I migrated to Apple products. They just lost their USP and I just lost my key reason for buying their stuff.

      2. David 132 Silver badge
        Facepalm

        Names are "released" and vigilante groups will go out of their way to discover and ensure that information is spread as widely as possible on Social Media.

        "She's a paediatrician! She even admits it!! Get her!!!!!!!!!!"

        And so on. Didn't Private Eye run an ongoing series of "Things That Aren't Paedophiles" back at the time of the last paedo-hysteria a few years ago? Pedalo, Pedagogue, etc etc...

        Anyone storing notes about cryptography algorithms on their iPhone should be very worried, if there's any reference to Nonces in there...

  4. elsergiovolador Silver badge

    Human

    They are downplaying it by saying that humans are going to review the matches. Likely someone on a minimum wage, probably in a third world country. Let's assume, though, that these images will be reviewed by highly paid computer vision specialists. They will only be able to see a low-res version of the image that was going to be uploaded, and the problem with that is that they will not be able to tell false positives apart. For example, if someone turns an image grayscale and reduces its dynamic range, when downscaled it will look like a gray rectangle. The false positive that was shown looked exactly like that.
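
    It is easy to convince yourself of that with a toy sketch (Pillow; the size of the 'visual derivative' the reviewers actually see is not public, so the thumbnail size here is a guess):

    from PIL import Image

    def reviewer_view(path, thumb=32, low=110, high=150):
        # Grayscale the image, squash its dynamic range into a narrow band,
        # then downscale - roughly what an adversarially flattened image would
        # look like as a tiny review thumbnail: a near-uniform gray rectangle.
        img = Image.open(path).convert("L")
        img = img.point(lambda v: low + (v * (high - low)) // 255)
        return img.resize((thumb, thumb))

    # reviewer_view("any_photo.jpg").show()   # hypothetical file name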

  5. DS999 Silver badge

    Sounds like

    It won't take long before someone is able to produce a number of images that result in false positives, and people will circulate them and store them on iCloud as a sort of denial of service attack on Apple.

    1. tip pc Silver badge
      Facepalm

      Re: Sounds like

      Are you getting it now?

    2. Anonymous Coward
      Anonymous Coward

      Re: Sounds like

      It might not take long, but the generation of matches would require involvement in child abuse. I really cannot see any mass take up of the distribution of false matches to overwhelm a system designed to protect children.

      I can see people turning off icloud sync though. It’s convenient, but frankly not worth it if there’s a risk of using it resulting in a law enforcement investigation.

      1. Anonymous Coward
        Anonymous Coward

        Re: Sounds like

        No, it would not. You don't need the image that was used to generate a hash in order to attempt to generate that same hash from a different image... Try again.
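
        Exactly - against a weak perceptual hash you can search for a second preimage with nothing but the target hash in hand. Toy sketch below: it attacks a trivial average hash, not NeuralHash (the published NeuralHash collisions reportedly used gradient methods against the extracted model), but the principle is the same.

        import random
        from PIL import Image

        # Toy average hash: greyscale, shrink to 8x8, threshold at the mean.
        # A stand-in for "some perceptual hash", not NeuralHash.
        def ahash(img):
            px = list(img.convert("L").resize((8, 8)).getdata())
            mean = sum(px) / len(px)
            return tuple(p > mean for p in px)

        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))

        # Start from any innocent greyscale image and nudge random pixels,
        # keeping changes that move its hash towards the target. No access to
        # the image behind the target hash is needed - only the hash itself.
        def forge(start_img, target_hash, iters=200_000):
            img, best = start_img.copy(), hamming(ahash(start_img), target_hash)
            for _ in range(iters):
                if best == 0:
                    break
                trial = img.copy()
                x, y = random.randrange(img.width), random.randrange(img.height)
                trial.putpixel((x, y), random.randrange(256))
                d = hamming(ahash(trial), target_hash)
                if d <= best:
                    img, best = trial, d
            return img, best

        # target = ahash(Image.open("image_we_only_have_the_hash_of.jpg"))  # hypothetical
        # fake, dist = forge(Image.open("cat.jpg").convert("L"), target)    # hypothetical
        # print(dist)   # 0 would mean the doctored cat photo now hashes to the target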

    3. DS999 Silver badge

      Re: Sounds like

      So I saw an article about this this morning. Apple had already thought about it, and the documentation they supply already detailed how they get around it. There is a second layer of checking done in the cloud with a different (this time non-public) matching algorithm if the conditions for the first match are triggered, specifically to avoid such adversarial attacks. Only if that ALSO shows a match does it proceed to the final step of human review.
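
      For what it's worth, the flow described in the documents reads roughly like the sketch below. This is a paraphrase with placeholder logic, not Apple's code, and the threshold of 30 is the widely reported figure rather than an official one.

      THRESHOLD = 30   # widely reported, not officially confirmed

      def stage1_on_device(photos, db_match):
          # NeuralHash plus a blinded database lookup runs on the device; each
          # suspected match only produces an encrypted "safety voucher".
          return [p for p in photos if db_match(p)]

      def stage2_server(vouchers, second_hash_match):
          # Below the threshold the server cannot decrypt anything at all.
          # Above it, a second, non-public perceptual hash is applied to the
          # visual derivatives, specifically to discard adversarial collisions,
          # and only the survivors go on to human review.
          if len(vouchers) < THRESHOLD:
              return []
          return [v for v in vouchers if second_hash_match(v)]

      # Toy run with stand-in predicates:
      photos = ["holiday1.jpg", "holiday2.jpg", "poisoned1.jpg"]
      flagged = stage1_on_device(photos, db_match=lambda p: "poisoned" in p)
      confirmed = stage2_server(flagged, second_hash_match=lambda v: False)
      print(flagged, confirmed)   # one on-device hit, below threshold and filtered anyway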

      1. Richard 12 Silver badge

        Re: Sounds like

        That statement is either obvious bollocks, or an admission that they are intending to illegally decrypt and process images uploaded to iCloud.

        Mere possession of CSAM is illegal, other than as part of an ongoing criminal investigation by the police or criminal trial. Therefore Apple are immediately guilty the moment an unencrypted image exists on an Apple server, and their human is also guilty if it's sent to their device.

        There is no defence in law - which is of course one of the really stupid things about those laws, but politicians always were incapable of considering consequences.

        1. Adrian 4

          Re: Sounds like

          "Therefore Apple are immediately guilty the moment an unencrypted image exists on an Apple server, and their human is also guilty if it's sent to their device."

          This seems an excellent point. While Apple claim to avoid accessing a customer's data by searching on the phone itself and then alerting police to make further enquiries, the verification stage where they intentionally copy the data (having reasonable expectation that it's CSAM) to their servers to do a second hash appears to be a hole a mile wide.

      2. tip pc Silver badge

        Re: Sounds like

        So there is far more to the mechanism than they actually divulged initially.

        They are deliberately partially divulging information and withholding detail while asking us to trust them.

        If the researchers hadn’t probed NeuralHash then there would not have been any further disclosure of additional checks and balances server side.

        They say they won’t bow to government pressure, but how do we know? What stance are they willing to take? What guarantees do we have?

  6. Anonymous Coward
    Anonymous Coward

    Notice how quite Samsung is?

    You'd think the number one phone maker in the world, Samsung, would be making hay of this.... "we cannot scan for images of Mohammed or gay pride flags or Tiananmen Square, or banned political images, because we do not scan your private photos like Apple does, even on the 'for the children' excuse".

    But no, strangely silent.

    Samsung switched to OneDrive, Microsoft's cloud. Microsoft make the PhotoDNA algo - the algo that takes a photo, reduces it to an ultra-low-res icon, then hashes the icon to obfuscate it. No wonder you need to sign an NDA to examine it.

    The next revelation here is Apple dragging the rest of them down with it. Particularly Samsung.

    "Sure our algo is crap and we're looking at your photos behind your back, sure we call all our customers pedos as excuse for preemptively searching their phones, but Samsung is also doing this AND WORSE,"

    Apple should indicate to the customers which images are flagged and notify them of the on-going investigation into their images. If the false positive rate is so low, as Apple claims, they have nothing to hide.

    If Apple are doing nothing wrong, they have nothing to hide from their customers.

    1. Pascal Monett Silver badge

      Re: Notice how quite Samsung is?

      Maybe Samsung has nothing to say ?

      The fact that Samsung is using a Microsoft Cloud product does not make it responsible for what Microsoft does.

      1. Lord Elpuss Silver badge

        Re: Notice how quite Samsung is?

        I would suspect Samsung do have the tools/capability, but haven't decided which side of the marketing fence will cause them the least damage.

        LEFT side: "We abhor kiddie Pr0n in all its forms, and have a far better way of doing this than Apple. We're just not telling you about it." (Media reaction: "Samsung even worse than Apple, nefarious spying tech included free with all Galaxy handsets")

        RIGHT side: "We abhor invasions of privacy, and would never EVER run or deploy anything like this Orwellian nightmare." (Media reaction: "Samsung doesn't care about human decency, allows Galaxy handsets to be used for pure evil")

        1. Anonymous Coward
          Anonymous Coward

          Re: Notice how quite Samsung is?

          @Lord Elpuss,

          Samsung moved their customers to Microsoft OneDrive on a take-it-or-delete-your-cloud basis. Microsoft does the PhotoDNA - the crappy #Hash[26x26 pixel] photo matching algo known for its false positives and used by this database*

          So they are for sure doing this. You can tell by the deafening silence.

          On your 'LEFT vs RIGHT' comment. There is of course the third option, and that is: "we do not search our customers without a properly issued legal search warrant". Or even "its encrypted and we don't backdoor our encryption, since such a thing undermines our customers security and thus weakens everyones security.".

          Nobody, left or right, wants some greasy contractor in a darkened room poring over their private lives. They want *other people* to have the greasy contractor treatment, not them!

          * Apple's matching algo claims 3 in 100 million false positives for 1 image. It sounds dubious - their image matching algo is nowhere near that - and I suspect the "1 image" is the key to the misdirection there... i.e. across a 1 million image set that's 3 in 100, 3%.

          Maybe don't ask where they got their 100 million customer images test set from, to make that claim!

          Let's say 1% since it's not independent matching, i.e. their algo falsely accuses 1% of their customers, and they expect their contractors to fix that up with a bit of additional privacy invasion.
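
          Back-of-envelope version of that, with the caveat that every input here (rate, library size, independence between images) is a guess rather than anything Apple has published:

          per_image_fp    = 3 / 100_000_000    # the claimed per-image false positive rate
          photos_per_user = 5_000              # a middling iCloud photo library (guess)
          users           = 1_000_000_000      # order-of-magnitude iCloud user base (guess)

          # Chance a given user gets at least one false match across their
          # library, treating every image as an independent trial.
          p_user = 1 - (1 - per_image_fp) ** photos_per_user
          print(p_user)           # ~1.5e-4, roughly 1 user in 6,700

          print(p_user * users)   # ~150,000 users worldwide with at least one false match
          # Which is why the per-account threshold, not the per-image rate,
          # is doing all the heavy lifting in the "1 in a trillion" claim.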

          "For the children" oh course, its funny nobody datamines for NRA membership or gun ownership to protect kids from school shootings. Well not openly, you wonder if they analyze for gun ownership and GPS mine for school attendance in secret. Big data is big money! But only if you can bypass the pesky privacy thing and get at the data.

      2. Anonymous Coward
        Anonymous Coward

        Re: Notice how quite Samsung is?

        On the contrary, Samsung are directly responsible for what Microsoft OneDrive does. Samsung forced a migration to OneDrive:

        https://news.softpedia.com/news/the-transition-from-samsung-cloud-to-microsoft-onedrive-kicks-off-officially-531266.shtml

        "Microsoft and Samsung have signed a long-term collaboration that allowed the software giant not only to bring some of its mobile apps to Android devices launched by the South Korean company but also to turn OneDrive into the recommended cloud service for those buying a Galaxy phone."

        “If you select OneDrive integration, the data stored in Samsung Cloud will be transferred to OneDrive. Upon completion of the transfer, all the Samsung Cloud synced/stored Gallery and Drive data will be deleted. If you select the Samsung Cloud data download, Samsung Cloud data will be completely deleted from Samsung Cloud irrecoverably 90 days after you selected the data download or the final termination date (June 30, 2021), whichever comes first,”

        1. doublelayer Silver badge

          Re: Notice how quite Samsung is?

          So they decided not to run their own cloud. That's not a surprise to me--I've purchased phones from many companies who didn't run their own cloud service. That doesn't make them responsible for Microsoft because they gave people choices including not using the cloud at all.

      3. Doctor Syntax Silver badge

        Re: Notice how quite Samsung is?

        "The fact that Samsung is using a Microsoft Cloud product does not make it responsible for what Microsoft does."

        But they do have responsibility for their choice to use it.

    2. Irongut Silver badge
      Headmaster

      Re: Notice how quite Samsung is?

      Quite what?

      Quite rich? Quite sexy? Or maybe even quite silent... if only there was a word for that... oh yes... QUIET.

      Learn to spell, it might just stop you looking like a particularly dim 10 year old.

      1. Cereberus

        Pot Kettle Black

        If you are going to pull somebody up on their spelling, essentially on their English language skills, I would make 2 points:

        1 - Show some tolerance. People make mistakes, and due to how the brain works could re-read what they have written several times and not notice the error. Alternatively English may not be their first language.

        2 - Before commenting on others skill perhaps you should brush up on your English grammar?

        why does this need to be done on device, if its only for photos uploaded to the icloud, why not just scan for the photos when they hit Apples servers and leave the privacy in place on the device?

        FTFY version - Why does this need to be done on device? If it's only for photos uploaded to the iCloud, why not.......

        1. Falmari Silver badge

          Re: Pot Kettle Black

          @Cereberus +1 for Pot Kettle Black

          "Why does this need to be done on device? If it's only for photos uploaded to the iCloud, why not......." Why not:-

          A) Apple are performing the scan of an image that is on the phone, which is not Apple's property. I would call that an invasion of privacy.

          B) Because the scanning software runs on the phone, it is using the phone's resources, which belong to the phone's owner.

          1. Ken Hagan Gold badge
            Facepalm

            Re: Pot Kettle Black

            We need a whoosh! icon.

            1. Falmari Silver badge
              Pint

              Re: Pot Kettle Black

              @Ken Hagan "We need a whoosh! icon."

              Well I certainly need one, a very large one. :)

          2. Adrian 4

            Re: Pot Kettle Black

            Why is a software agent that runs on a user's computer (phone) any different in law from retrieving the data and executing the algorithm on your servers ?

    3. Charlie Clark Silver badge

      Re: Notice how quite Samsung is?

      Samsung has already been found out for excessive tracking from their TVs.

      On their phones they make extensive use of third parties, including Microsoft, and just get users to agree to whichever EULA is required.

      This doesn't make them any better or worse than Apple. But, at least, so far they haven't announced that they're saving the world from child pornography.

      1. Lord Elpuss Silver badge

        Re: Notice how quite Samsung is?

        I'm still disgusted at the fact that my $3,500 Samsung smart TV insists on serving me ads right in the app dock - AND gives the ad focus, so if you use muscle memory to select your most common source then you more often than not select the ad instead of the source you wanted.

        Aside from the immoral and probably illegal aspects here it beats me that they think it's a good idea. I now have a deep rooted animal hatred for all things Samsung, AND the crud they're serving me as ads.

        1. DS999 Silver badge

          Re: Notice how quite Samsung is?

          Does it still do that if you disconnect it from the internet? Because there is no reason to connect a TV to the internet, or use its apps - the apps will stop getting updates long before the TV stops being useful, so you might as well get a set top box for apps and treat your TV as the monitor it is.

          1. Lord Elpuss Silver badge

            Re: Notice how quite Samsung is?

            Yes it does. It's a dumb screen running my AppleTV - but it's got a payload of ads already embedded and it cycles through them. They also don't link to anything any more (no internet) so even more useless.

            For the record: I'm with you personally on the 'dumb screen, smart set-top box' philosophy, but that's not the case with all users. For many, having Netflix, Amazon, <insert streaming service of choice> built-in and accessible through the dock has genuine added value - so it's not right or fair to say there's 'no reason' to connect it to the internet. Besides - why should the user abandon key functionality because the vendor decides they want to be antisocial c*cksuckers and grab a few extra dollars?

            It would be a different case if I'd paid half price and agreed for the cost to be subsidised by ads, like a Kindle, but I paid full price for this and did NOT agree to have ads served at me at time of purchase. Samsung can f*ck right off - I hate them with a passion. F*ck you Samsung, f*ck you.

            1. DS999 Silver badge

              Re: Notice how quite Samsung is?

              Wow, that sucks. When I eventually someday replace my Panasonic plasma (thankfully a dumb screen) I'm planning to get an LG OLED, unless something better comes along by then. But I guess I will need to confirm it can't show me ads even when I don't use the 'smart' features I don't need or want. Ugh!

              1. tip pc Silver badge

                Re: Notice how quite Samsung is?

                Get the Panasonic Z2000, which has the master OLED from LG.

                I’ve got Panasonic plasmas that refuse to die (2007 & 2012) and last year got an HZ1000; crappy apps and no Disney+ or Apple TV apps, but I’ve got an Apple TV 4K (older model) that’s working great.

              2. Lord Elpuss Silver badge

                Re: Notice how quite Samsung is?

                I think if you've never connected it to the internet then you should (might) be ok. My problem is that mine used to be connected to the internet, until it started serving me ads - and now it's too late because it's downloaded the payload and will keep serving them ad infinitum (pun intended).

              3. Matthew Collier
                Thumb Up

                Re: Notice how quite Samsung is?

                I have an LG OLED C9 connected to the internet and it doesn't show me ads. I think there is an option for a service to extend its functionality which you can choose (or not) to subscribe to (I didn't), which might then introduce ads, but I'm running:

                YouTube, Netflix, Amazon, iPlayer, ITV Hub, Channel 4 and 5 catchup, without seeing ads (except within YouTube, obvs...).

                I do keep meaning to set up a PiHole though, for all non-PC devices to run through... (belts and braces)

                HTH!

  7. Pascal Monett Silver badge

    "it presupposes an attacker committing a very serious federal felony"

    I'm pretty sure Russian hackers aren't much concerned by that.

  8. mark l 2 Silver badge

    No matter how Apple try and spin this as being able to protect children while also protecting privacy of the iPhone users, they have effectively just backdoored the iPhone for the 5eyes. And no doubt once Apple goes down that route the pressure will be on other phone manufacturers to follow.

    Maybe it's time to invest in a Pinephone or similar open source phone now, before you can't avoid the 5eyes viewing everything you do 'for the sake of the children' on any mainstream manufacturer's device.

    1. Zippy´s Sausage Factory
      Unhappy

      And not just the 5eyes, for the Saudi government, the Israeli government, the CCP... basically any government that chooses to pass laws to regulate this technology.

      What are the chances the CCP will have a banned image list of their own they require Apple to check for? And that they'll pass a law requiring the name and location of anyone found with those banned images be sent to them immediately?

      This isn't even a case of "for the sake of the children" but a case of "don't open that Pandora's box"

    2. elsergiovolador Silver badge

      It will only be a matter of time until you have to have a scanner on an open source phone.

      Then it will be a matter of a simple stop and search. "Sir, show me your phone. Do you have a scanner? No? Let's go to the station then".

  9. Bartholomew
    Alien

    next step

    Will be to scan all voice conversations on and off the phone for bad trigger words and report them all back to Apple, who need to know when anyone is not being nice and where they are at that time. Think of the children - that is always the very first excuse used when super dodgy stuff is happening.

  10. deadlockvictim

    AsuharietYgvar»...We cannot let Apple's famous 1984 ad become a reality. At least not without a fight.

    I'm not sure what AsuharietYgvar is saying or, indeed, wants to say.

    Apple's famous 1984 ad introduced the world to (a beta version) of the Macintosh computer. The ad itself was based on George Orwell's novel, '1984'.

    Within the ad [1], a young hammer-thrower runs into an auditorium of people listening to something akin to Big Brother, throws her hammer and, in smashing the screen, shows the aforementioned people another way - in this case, an overpriced, under-spec'd computer with a revolutionary interface.

    Does AsuharietYgvar want to prevent the young woman from doing what she did (although she *now* loves Big Brother)?

    Maybe he/she meant the book, '1984' and is unaware that Steve Jobs was paying homage to it with this ad?

    Or alternatively, perhaps the good poster is retrospectively trying to hinder the DTP-revolution that broke out in the mid-1980s. It's a nice try but I think that the dust has settled on this one.

    [1] https://www.bing.com/videos/search?q=youtube.com+1984+apple&qpvt=youtube.com+1984+apple&FORM=VDRE

  11. thames

    findimagedupes

    There is a program for Linux called "findimagedupes" which will search for approximate matches of images. It has been around for about 20 years, so this is not a new idea.

    If you were to run this program over a large collection of photos you would not only get genuine matches, you would also get false matches. The larger the collection of images you run it over, the more false matches you will get.

    It's generally pretty good, but some of the false matches are just completely inexplicable, as in there seems to be no rational reason as to why there was a match. The number of false matches seems to go up exponentially with the number of images being compared.
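
    The shape of that growth falls out of the pair count alone. A rough sketch, assuming (purely for illustration) some fixed, tiny chance that any given pair of unrelated images lands within the match threshold:

    # The number of image pairs grows with the square of the collection size,
    # so even a minuscule per-pair collision probability adds up quickly.
    # The per-pair rate below is made up purely to show the shape of the curve.
    per_pair_collision = 1e-9

    for n in (1_000, 10_000, 100_000, 1_000_000):
        pairs = n * (n - 1) // 2
        expected = pairs * per_pair_collision
        print(f"{n:>9,} images -> {pairs:>15,} pairs -> ~{expected:.3g} expected false matches")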

    And by "false matches" I mean matches that should not by any reasonable criteria should be seen as matches. If we are talking about photos from a common subject domain, then close matches are more common than pure theory would suggest. If for example we are talking about photos of human professional models, there are certain common poses they will often assume, which can lead to them appearing reasonably "similar" even if they are actually different.

    It is quite eye opening to try this for yourself to see what the shortcomings of the technology are. I suspect it's the sort of thing which works fairly well under controlled conditions in a lab, but which doesn't scale up to dealing with billions of photos without producing excessive numbers of false matches.

    Your collection of holiday snaps is not a good example because it tends to be too diverse in terms of subject matter. I'll leave it to your imagination as to where to obtain a large collection of suitable (similar subject matter) photos upon which to run the experiment, but there are a number of well known locations from which to obtain perfectly legal material.

    1. Anonymous Coward
      Anonymous Coward

      Re: findimagedupes

      Simply unset filtering in Bing image search and search for ANYTHING AT ALL...

  12. Tron Silver badge

    The elephant-in-the-room problem that nobody seems to be mentioning.

    The obvious way to dodge this is to produce new content. If you make it very difficult to safely circulate old content, new content will become more highly prized and more financially lucrative. Apple will increase the inherent value of unrecorded CSAM, newly created to avoid their checks. It's an arms race. You build a better bomb, the other guys will too. You succeed in arresting people with old content, new content becomes more desirable. When countries saw the first HIV outbreaks, abuse of children increased markedly as people switched to younger victims.

    So Apple will not be helping. They will simply be pushing pervs to create new content from newly abused victims. If you want fewer new victims, it may actually be better to have the pervs circulating old CSAM. A child is abused to create an image or video. It is better for 10 people to share that content than 9 more victims to be abused to create unrecorded CSAM for those 10 people.

    1. Anonymous Coward
      Anonymous Coward

      Re: The elephant-in-the-room problem that nobody seems to be mentioning.

      You are correct, but 'better' wasn't the right word to use, perhaps 'less terrible'?

  13. Ian Johnston Silver badge

    Is there any reason at all to believe that this checking will be confined to KP and not spread to "anything which the CCP / Mr Lukashenko / the NSA" would rather like to know you are looking at?

  14. Sloppy Crapmonster

    "I’m less concerned about the attack than some observers, because it presupposes access to known CSAM hashes," said Mayer. "And the most direct way to get those hashes is from source images. So it presupposes an attacker committing a very serious federal felony."

    Shirley the most direct way to get those CSAM hashes is to have a copy of the official CSAM hash database. It wouldn't surprise me to know that Apple have a copy of all the original CSAM images themselves, though.

  15. Ken Moorhouse Silver badge

    "deliberately-constructed false positives."

    There's the other side of the coin, where people sharing unlawful content will simply alter their images to not flag up before sharing them.

    1. stiine Silver badge

      Re: "deliberately-constructed false positives."

      How will you know? If your phone performs an NSLOOKUP on csam-reporter.apple.com?

  16. Cybersaber

    Mayer is blind here:

    "So it presupposes an attacker committing a very serious federal felony."

    Framing someone for a criminal offense is itself an offense. If one is willing to do it in the first place, why would this be a bar?

    Hitman:

    "I was all set to whack the guy, but there was no crosswalk - what? You think I'm going to jaywalk like a maniac?"

    1. Anonymous Coward
      Anonymous Coward

      How are you going to prove you've been framed? Against a miscreant who has your name, spouse's name, children's names, home address, phone numbers, every IP address you've had for the last 10 years, and a copy of an email that you (ha ha ha) sent requesting the material?

      It doesn't matter for 'strict liability' crimes, and this is one of them.

  17. Ken Moorhouse Silver badge

    Image Container Licensing

    The trouble with most file formats is that they are not regulated. It is easy to strip JFIF data from an image and to manipulate an image without leaving an audit trail of changes. At some point in the future I foresee a time when a new image format will arrive on the scene which is enveloped by an Image Container License Wrapper. Licenses would be issued by a government agency, anyone can apply for one using the tenets of KYC (Know Your Customer), and this entitles the user to create or edit images and publish them. Once in place, all websites, etc. will be obliged to use those formats, rather than traditional jpg, gif, png images. It will be a win-win for many people: Copyright can be enforced more easily, image substitution can be better controlled. Woe betide anyone using the web to publish unlawful images as they will be trivially tracked down and prosecuted.

    In areas of the world where KYC is not practised, a generic Image License Box can be used, but this reduces its legitimacy and could be blocked by policy. OK, this is a potential weakness in the idea, but fingerprinting can be used to detect acceptance of unlicensed image streams in a similar manner to normal copyright fingerprinting techniques.

    This would take many years, but deals with the problem at source, not further down the line where there is a lack of traceability.

    1. Tron Silver badge

      Re: Image Container Licensing

      Would you also like us to apply to a government dept. before going shopping or digging our gardens?

      1. Ken Moorhouse Silver badge

        Re: apply to a government dept. before going shopping

        Go ahead and mock all you like.

        There are people that will not be able to go shopping themselves should the use of cash be restricted in countries practising strict money laundering laws. They will, in fact, have to apply to a government dept to be able to do so.

        If this need to detect unlawful images is strong enough, existing technology is like a colander. The problem cannot be solved using it. I am only predicting what could happen in the fullness of time. I'm not suggesting it is something to look forward to.

    2. doublelayer Silver badge

      Re: Image Container Licensing

      I think this wins a prize for simultaneously useless and creepy idea of the day. Congratulations.

      But really, was this meant to be a joke? Is my sarcasm detector not working today? That's an honest question. If it was serious, you should know that it wouldn't do anything for this scenario--if there's an untracked format for images, which there is, then people can continue to use that when providing images of crimes. They're already using Tor hidden services a lot. They can figure out how to download an old version of a program so they can open the formats, and they would only have to do that if the new format was actually made mandatory.

      1. Ken Moorhouse Silver badge

        Re: creepy idea of the day

        Creepy it may be. I'm only thinking what is likely to happen in the future. Tech has opened up a Pandora's box of functionality with unintended consequences. Putting the lid back on that box is impossible. The solutions being advocated are mere sticking plasters which will result in bad performance when multiple iterations of bad and good modifications are made to counter each other's tactics. I'm only going by what is happening in other fields where KYC is causing massive increases in compliance checking. We could get to the situation in the future where having a JPG on your pc is a hot potato. Think how much easier that is to detect by outsiders than it is to detect a JPG that contains an unlawful image.

        1. doublelayer Silver badge

          Re: creepy idea of the day

          "We could get to the situation in the future where having a JPG on your pc is a hot potato."

          No, we couldn't. There are so many images used by programs and websites that there's no way of figuring out which of the thousands of images you care about. No, they're not going to make an expensive, likely cryptographic, format for your desktop icons and video game assets. Oh, and video frames too, because if you don't stamp each of those, people will put their pictures in one and have users run it like a slideshow. That adds complexity and size to anything that reads or writes image data. You might as well ask for mandatory KYC on all text, which isn't going to happen either given how often you use the same text as other people.

          "Think how much easier that is to detect by outsiders"

          It's really hard to detect. You have several existing formats to detect, which requires a full scan of the user's disk, which the user is likely to dislike. When you've done that, people will immediately invent new formats to get around that. It is a 2D array and disks are big--it's not hard. You're now playing whack-a-mole with format designers. That's without considering pictures embedded into programs which display it when run (if you just set pixels, it will look like text). Then steganography to hide it in something else. Then programs to retrieve the correct bit pattern from another file which isn't signed.

          In order to reach step one, you already need something a lot more invasive than has ever been tried. It's not practical. In your title, you left out two words. It didn't win the prize for creepy idea of the day--Apple's got a monopoly on that whenever they open their mouth lately. It won for creepy and useless idea of the day.

          1. Ken Moorhouse Silver badge
            Pint

            Re: It didn't win the prize for creepy idea of the day

            Your previous post awarded me that title!

            I'm offended now.

            Seriously though, you are an established commentard on here and I respect your stance. The way progress is made is often by sensible discussion, not through unilateral diktat, which is frequently the case with tech (Apple in this instance). Hence my posts. Someone needs to come up with better ideas cos Apple's solution is not going to cut it.

            1. doublelayer Silver badge

              Re: It didn't win the prize for creepy idea of the day

              Yes, I awarded the full title which you edited and I stand by it. The rest of that comment raises real problems about the practicality of the system. When I said you couldn't do it with video and you couldn't solve the ostensible problem without covering video, that was, in my view, a serious flaw which would prevent a system as described from ever being implemented. There were many more. None were jokes.

              By the way, that's only a list of the technical flaws which came to mind immediately. There are other technical flaws including the systems required for maintaining image history metadata with integrity checking. There are also administrative problems getting the world to accept a single standard for this which would work at all and getting all the software developers to only ever process images with that check in place, despite the fact that manipulation of visual data is so easy it's a frequent homework assignment in introductory computer programming classes. A full summary of the fatal flaws I see in it would be quite long.

              When I call something useless, it means I think it would either be infeasible to implement or would fail to achieve the goals. In this case, I think both apply. That is without at all covering the ethics of such a suggestion, which is the basis for the "creepy" label which you appear to accept. You may take offense at the flaws I suggest, but I have seen nothing to demonstrate that I am incorrect about their existence.

              1. Ken Moorhouse Silver badge

                Re: You may take offense at the flaws I suggest

                My comment was intended to be taken as tongue in cheek.

                As I said, what I have suggested may seem neither palatable nor practical, but there are examples I can give where people have said exactly the same thing about something, and five, ten years down the line the impossible has happened.

                FWIW, doing some back-of-envelope research: Getty Images has, by all accounts, a pretty effective system for policing over 400 million of its images scattered around the world.

                That's one company.

                Now consider the total market for royalty images and how much web traffic is generated in tracking that inventory for unlicensed usage. Now add in the effort that web designers put in to protecting their intellectual property rights.

                Still pie in the sky? Time will tell.

    3. normal1

      It is easy to strip JFIF data from an image.... ~ without leaving an audit trail of changes.

      Would like to subscribe to your newsletter.....

      /former large format graphics tech.

      1. Ken Moorhouse Silver badge

        Re: /former large format graphics tech.

        You know exactly what I am referring to.

        1. stiine Silver badge

          Re: /former large format graphics tech.

          Unless you mean Kodak cameras from 1955, I have no idea to what you're referring.

  18. revilo

    thanks for the article

    I got my Android phone yesterday and my Google Wear watch today. I had been a die-hard Apple fan before August 5. This has completely reversed into disgust for a company which has betrayed its customers and treats them all like potential criminals. By the way: I had not known what I was missing: drag and drop documents from your desktop to your phone, like books or songs - wow! This had been so complicated on the iPhone. Having a smart watch which actually looks like a decent watch. I do not think I would even go back to Apple if they now reversed their decision to plant police software, with hashed kid abuse pictures, on iPhones - a system which by definition is not auditable.

  19. normal1

    Children are still being kidnapped.

    The KKK totally disproved the "satanic panic" back in the 70s; by giving brain damage to all the children making claims of kidnapping and sexual abuse.....

    /the "satanic" victims were reporting being kidnapped by groups burning crosses.....

  20. This post has been deleted by its author

    1. stiine Silver badge
      Unhappy

      I thought that the donut was still unoccupied???

  21. Anonymous Coward
    Anonymous Coward

    Well, there seems to be a key issue with the presupposition that because it is "seriously illegal", no one would do it. There are all kinds of state actors and miscreants around the world who feel comfortably safe from Uncle Sam's reach who would like nothing better than to do their part to take down The Great White Demon's technology...

  22. jhnc

    I suspect the reason for your "inexplicable" matches may often be related to debian bug #87013:

    Although the code finds pairs of images whose fingerprints really do differ by less than the threshold (say: a1=a2, b1=b2, c1=c2, a1=c1, b1=c1), it then, for user convenience, by assuming transitivity, coalesces these pairs into sets (leading to: a1=a2=c1=c2=b1=b2). In some cases this is fine, in others (especially large corpora) not.
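
    In code terms that coalescing step is just connected components over the pairwise matches, something like the sketch below (a simplified reconstruction of the behaviour described in the bug report, not findimagedupes' actual source):

    # Pairwise matches are treated as edges and merged into connected
    # components, so a1~c1 and b1~c1 are enough to drag a2, b2 and c2 into one
    # "duplicate" set even though a2 and b2 were never judged similar at all.
    def coalesce(pairs):
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        for a, b in pairs:
            parent[find(a)] = find(b)

        groups = {}
        for x in list(parent):
            groups.setdefault(find(x), set()).add(x)
        return list(groups.values())

    pairs = [("a1", "a2"), ("b1", "b2"), ("c1", "c2"), ("a1", "c1"), ("b1", "c1")]
    print(coalesce(pairs))   # one big set: a1, a2, b1, b2, c1, c2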

    Of course, sometimes it is the case that fingerprints of very dissimilar images are close to each other.
