The Good, the Bad and the Ugly of Apple’s Curate’s Egg

You can’t make an omelette without breaking eggs, or so goes the saying. Apple’s recent privacy announcement, which included details of online child safety features they intend to introduce, is, we feel, to be broadly but not unquestioningly welcomed. For too long the handset manufacturers, a key element of the Online Digital Context, have been silent on this issue, preferring instead to focus on privacy. We applaud Apple for the boldness of this initiative, but they’ve served up more of a curate’s egg than an omelette.

We’d like to explain what we think is the good, the bad and the ugly about Apple’s privacy announcement, as it seems to protect online predatory pedophiles, hide vital information about perpetrators of crime from law enforcement, and quite possibly render useless the IWF’s Watch Lists, one of the few protections that children have online today.

Let’s start with the good. As we mentioned, we applaud Apple’s boldness in taking a proactive stance with their technology on this vital issue. If nothing else, this has created a global conversation about this pernicious and complex global social problem, one which we at the SafeToNet Foundation have long been involved in.

One of our charitable objectives as agreed with the UK’s Charities Commission is to “educate and inform” the general public on this issue, which we do mainly through our ongoing Safeguarding Podcast series exploring the law, culture and technology of online child safety. You can find our podcasts here or wherever you find your podcasts.

The second good thing about Apple’s announcement is their focus on privacy. Privacy has been woven throughout Apple’s recent product launches and developer conferences, so inevitably they had to use privacy as the platform for this online child safety announcement. Online privacy is a vital aspect of our lives today: many civil liberties depend upon it, and we all require it for daily activities such as online banking.

Another welcome feature of this privacy announcement is that the technologies Apple are using check both inbound and outbound images entering and exiting children’s accounts. Part of the pattern of grooming behaviour is to desensitise a child to sexual images by sending such images to the targeted child, so by rendering inbound illegal images unreadable, Apple are doing a good thing.

Children may of course be sending intimate images, also known as SGII (Self-Generated Intimate Images), even though they have not been targeted by an online predator; they may simply be exploring their sexuality.

There are two problems with children doing this online. The first is that in many countries around the world, this is itself illegal. The second, and this is where the real problem lies with SGII sharing, is that all too frequently the original recipient of these images forwards them on, the images then go at least locally viral (throughout a school, for example), and the original sender, more often than not a young girl, gets the blame and the devastating shame.

In addition, these images end up in pedophiles’ image collections and are used for further sextortion campaigns against their young victims, as in the legal case John Doe vs Twitter spearheaded by ICOSE. So here again Apple are protecting children and doing a good thing.

[Added 15/8/21] We like that this seems to be using PhotoDNA, a mature, tried and tested solution for finding known, hashed CSAM images. It’s also “approved for use”, if that’s the right term, by the EU’s ePrivacy Temporary Derogation, which you can hear about in this podcast with John Carr OBE.

Apple will alert the child’s parents (or carers) if a child attempts to send an intimate image or looks at an inbound illegal image. We’ve discussed the involvement of parents and carers in children’s online safety with Sonia Livingstone OBE and with Stephen Balkam, CEO of the US Family Online Safety Institute (FOSI), and based on these expert opinions we welcome Apple’s initiative here. After all, it’s unfair to place all responsibility for online safety onto children, which is what some safeguarding tech does and what some civil liberties campaigners and individuals seem to promote – see the image below:

A rather extreme view in our opinion, posted on LinkedIn in a thread about the online grooming of the young schoolgirl Shamima Begum.

We also like the fact that Apple will send the CSAM file(s) and the associated “Safety Voucher” to NCMEC for law enforcement officers to take further action on. And we like that this analysis is being done on the device, and that Apple are not relying on the cloud. While cloud-based child safety features absolutely have a place, much more can be done in real time if the technology is deployed on the device and works within the battery and memory constraints that the device imposes.

Plus, of course, this kind of proactive moderation is exactly what Section 230 of the Communications Decency Act was intended to encourage, as we’ve discussed in numerous podcasts, including with Child Rescue Coalition’s COO Glen Pounder, with film director and child rights activist Baroness Beeban Kidron, and with Rick Lane of Iggy Ventures.

One technique predatory pedophiles and sharers of CSAM use to evade detection is to doctor images. We therefore also like the ability of Apple’s NeuralHash to counter this evasive tactic. As Apple explain in their technical summary, “The main purpose of the hash is to ensure that identical and visually similar images result in the same hash, and images that are different from one another result in different hashes. For example, an image that has been slightly cropped or resized should be considered identical to its original and have the same hash.” This is illustrated below:

Apple’s NeuralHash system at work
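To make the idea of perceptual hashing concrete, here is a minimal sketch using the open-source Pillow and imagehash Python libraries. This is our own illustration of the general technique, not Apple’s NeuralHash, whose model is not public; the file name and the match threshold are assumptions for the sake of the example.

```python
# Illustration of perceptual hashing with the open-source "imagehash" and
# "Pillow" libraries -- NOT Apple's NeuralHash. The point is the same:
# visually similar images produce near-identical hashes, so minor edits
# such as a crop or resize don't defeat matching.
from PIL import Image
import imagehash

original = Image.open("photo.jpg")  # hypothetical file name
cropped = original.crop((10, 10, original.width - 10, original.height - 10))

h_original = imagehash.phash(original)  # 64-bit perceptual hash
h_cropped = imagehash.phash(cropped)

# Subtracting two hashes gives the Hamming distance between them;
# a small distance means the images are treated as "the same".
distance = h_original - h_cropped
print(f"Hamming distance: {distance}")
if distance <= 5:  # illustrative threshold, not Apple's
    print("Treated as a match against the known-image hash list")
```

A conventional cryptographic hash such as SHA-256, by contrast, changes completely if a single pixel changes, which is exactly why perceptual hashes are used for this kind of image matching.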

So what are we less keen on in this slew of privacy announcements? What’s the bad?

It must be noted that this is above all a PRIVACY announcement, not a safeguarding announcement. As previously mentioned, we applaud Apple’s strong stance on privacy. But while you can have privacy of data, which is vital, privacy and safety are not the same thing. In a totally private world, bad people do bad things. Children are one of the most vulnerable groups online and are especially open to predatory grooming, a process which can take only a few minutes. Real security is provided by the combination of privacy and safety. It’s not a difficult equation.

So what Apple are doing is interpreting online child safety through the lens of their privacy absolutism: what can they do to appear to be safeguarding children online while remaining true to their strong commitment to privacy? If you feel that privacy is sacrosanct above all else, as Apple seem to, then we feel obliged to point out some of the implications and consequences of this approach.

First off, the definition of a “known CSAM image”, which seems to be what Apple’s solution is designed to detect. According to this article, “an image becomes known CSAM when a victim has been positively identified in the image”. Given the incredible difficulty of identifying child victims in these images and videos, this definition means the approach Apple has taken will miss the vast bulk of images and videos of sexually exploited children. CSAM is CSAM; it doesn’t only become CSAM once we know the victim’s name.

Apple’s Hide My Email, in Apple’s own words, “…lets users share unique, random email addresses that forward to their personal inbox anytime they wish to keep their personal email address private”. The inference here is clear: any user of this service can anonymously and privately send illegal images of children through Apple’s email system. There’s no need to say any more on this point.
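For readers unfamiliar with how address aliasing works in general, here is a toy sketch of the forward-to-real-inbox idea. It is our own illustration and not Apple’s service; the relay domain and addresses are invented.

```python
# A toy sketch of the alias-forwarding idea: the outside world only ever
# sees a random alias, while mail is silently forwarded to the real inbox.
# Our own illustration, not Apple's service; the domain is invented.
import secrets

alias_table = {}  # maps alias -> real address, known only to the relay

def create_alias(real_address: str) -> str:
    alias = f"{secrets.token_hex(6)}@relay.example.com"  # hypothetical domain
    alias_table[alias] = real_address
    return alias

def deliver(to_alias: str, message: str) -> None:
    real_address = alias_table.get(to_alias)
    if real_address:
        # The sender only ever knows the alias, never the real address.
        print(f"forwarding to {real_address}: {message}")

alias = create_alias("user@example.com")
deliver(alias, "hello")
```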

Apple’s “Private Relay” feature “…ensures all traffic leaving a user’s device is encrypted, so no one between the user and the website they are visiting can access and read it, not even Apple or the user’s network provider. All the user’s requests are then sent through two separate internet relays. This (sic) first assigns the user an anonymous IP address that maps to their region but not their actual location. The second decrypts the web address they want to visit and forwards them to their destination. This separation of information protects the user’s privacy because no single entity can identify both who a user is and which sites they visit.”
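To illustrate the “no single entity” property, here is a deliberately simplified model of a two-hop relay. It is our own conceptual sketch, not Apple’s implementation; the “sealing” of the destination stands in for real encryption and the IP addresses are documentation examples.

```python
# A deliberately simplified model of a two-hop relay. The "sealing" of the
# destination stands in for real encryption; the point is what each relay
# can and cannot see, not how the cryptography works.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    user_ip: Optional[str]      # real IP, visible only to the first relay
    assigned_ip: Optional[str]  # coarse, region-level address
    destination: str            # conceptually sealed until the second relay
    sealed: bool

def ingress_relay(req: Request) -> Request:
    # Sees the user's real IP, but the destination is still sealed.
    assert req.sealed and req.user_ip is not None
    return Request(user_ip=None, assigned_ip="203.0.113.0 (region only)",
                   destination=req.destination, sealed=True)

def egress_relay(req: Request) -> Request:
    # Sees (and unseals) the destination, but only the anonymous address.
    assert req.user_ip is None
    return Request(user_ip=None, assigned_ip=req.assigned_ip,
                   destination=req.destination, sealed=False)

req = Request(user_ip="198.51.100.7", assigned_ip=None,
              destination="example.org", sealed=True)
print(egress_relay(ingress_relay(req)))
```

The design goal is that no single party holds both the user’s identity and their browsing destination; the question, explored next, is what that same property does for offenders.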

Private Relay, which has similarities at least in concept with the TOR browser, can easily be interpreted as a predator’s delight, as the Child Rescue Coalition’s COO Glen Pounder explains in this blog post: https://childrescuecoalition.org/educations/apples-privacy-fix-protects-kids-but-hides-predators-online/

Once online predators understand how much protection this affords them, however unintentionally, our fear is that they will exploit it as much as possible and will be encouraged to commit further atrocities against some of the most vulnerable people online. On this interpretation, an unintended consequence of Private Relay is that more children will be abused and more CSAM will be distributed on Apple’s platform.

Note that Private Relay will not be universally available; China, for example, will not be included in this particular feature rollout, which obviously means millions of children will not be safeguarded by this announcement.

[Added 15/8/21] Apple have released a document titled “Security Threat Model Review of Apple’s Child Safety Features”. On page 13 they say, “The reviewers are instructed to confirm that the visual derivatives are CSAM. In that case, the reviewers disable the offending account and report the user to the child safety organization that works with law enforcement to handle the case further.” However, there is still some uncertainty about the 30-file threshold; more on this below.

[Added 15/8/21] We don’t like that this appears to rely on PhotoDNA. Apple’s iPhones shoot high-quality 4K video, and slo-mo footage runs at 120 or 240 frames a second; a single minute at 120 fps is 7,200 frames. That’s a lot of images. PhotoDNA, while excellent at what it does, is blind to video content.

The National Center for Missing &amp; Exploited Children (NCMEC) is the US reporting centre for CSAM. Online child abuse and the distribution of CSAM is a global social problem, and this proposal from Apple also needs to be global, or at least to operate in the same countries as the INHOPE international network of CSAM reporting hotlines. In the UK the Internet Watch Foundation (IWF) is the NCMEC equivalent and the UK’s member of the INHOPE network of CSAM reporting platforms.

Apple states that it will “manually review” suspected CSAM content. This requires a considerable number of specially trained staff, which Apple can probably afford, but it also means yet another pair of eyes sees a child being abused, raped, which for the child can be unbearable knowledge, as Susie Hargreaves pointed out in our very first Safeguarding Podcast.

In the US, victims are afforded compensation for each and every online image and video of their abuse, which we also discussed with Susie. We also discussed the IWF’s CSAM review process, which Apple might find of interest, and we’d be astonished if these two organisations weren’t in deep discussion about these features. The topic of online child victim compensation was also explored in an interview with Thomas Mueller, Deputy Executive Director of ECPAT.

The IWF’s latest report can be found here, and it underlines once again that the most dangerous place for a child to be with a smartphone is the locked family bathroom, as we depict in this illustration:

However, in the name of privacy, the identity of the person holding illegal images in their iCloud account will not be sent to NCMEC [added 15/8/21: and accounts with fewer than 30 matching files won’t be flagged]. This limits the utility of sending the images to NCMEC for further processing by law enforcement, despite Apple’s claim that these new features allow Apple “… to provide valuable and actionable information to NCMEC and law enforcement regarding the proliferation of known CSAM.” Often, it’s simply not possible to identify either the location or the identity of the child being abused from a single image or, in the case of video, multiple images.

The upshot of this is that Apple are sending image-based evidence of crime scenes to law enforcement to take further action on. Yet while doing so, Apple knows (and law enforcement knows that Apple knows) that Apple can identify the perpetrators of these crimes. In the name of privacy, Apple are deliberately and knowingly hamstringing law enforcement’s ability to investigate, to find and protect the child victim(s), and to ensure that justice is not only served but seen by the victims to be served. Is this a legally tenable position?

We are not legal experts by any means, but to our knowledge an accessory to a crime is a person who assists in the commission of a crime, and accessories can be charged as joint principals. By not sharing the identities of those perpetrating these criminal acts, or possessing and distributing this illegal material, is Apple assisting predatory pedophiles in their crimes? Similarly, are they perverting the course of justice, which, at least in England and Wales, is an offence committed when a person prevents justice from being served on himself or herself, or on another party?

Apple’s NeuralHash is the key tool for their non-real-time, on-device hash analysis, but as Apple themselves say, “Apple only learns about images that match known CSAM” and “Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child-safety organizations.”

The result is that Apple’s child safety features not only miss the vast majority of CSAM, where the victims aren’t known, but also do nothing to prevent the production and distribution of new CSAM. In fact, the approach relies on this material having been produced, and therefore on the abuse having taken place, in order to work at all. By the time hashes are added and analysed, downstream of the abuse, the child victim is already in a world of pain.

While the sharing of these illegal images is a crime, so too is the taking of these images and videos, and Apple and other smartphone manufacturers could do much more with on-device safety tech, in real time, to prevent this abuse from happening in the first place, as we explored in this white paper some years ago: https://safetonetfoundation.org/whitepaper/.

WhatsApp head Will Cathcart has said that they will not be adopting Apple’s CSAM detection system in their app, which implies it isn’t mandatory for app developers who use Apple’s App Store to do so. If that is the case, then the obvious workaround for a predatory pedophile is to use one of these other apps in preference to Apple’s iCloud. This would suit Apple but doesn’t really help eliminate CSAM, the scourge of the internet.

Apple’s CSAM Technical Summary document makes no mention of macOS, just references to iOS 15 and iPadOS 15. Presumably this means these tools won’t address the live streaming of encrypted child sexual abuse from Apple’s Mac laptops and desktops, which the International Justice Mission highlight in their report on the topic.

The ugly. Apple’s Threshold Secret Sharing “…ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account. Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images.”

The information Apple has provided about Threshold Secret Sharing isn’t detailed enough to understand it fully, but it seems to hinge on a quantity of CSAM files: Apple says elsewhere that “Apple servers flag accounts exceeding a threshold number of images that match a database of known CSAM image hashes”.
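The general flavour of threshold secret sharing is well established, even though Apple’s exact construction isn’t public. The sketch below is the textbook Shamir scheme, offered only as an illustration of the core property Apple describe: a secret that cannot be reconstructed from fewer than a threshold number of shares. The values used are arbitrary; this is not Apple’s actual code.

```python
# A minimal Shamir-style threshold secret sharing sketch over a prime field.
# Textbook construction, for illustration only: the secret (here, a number)
# cannot be recovered from fewer than `threshold` shares.
import random

PRIME = 2**61 - 1  # a Mersenne prime, large enough for a toy example

def make_shares(secret: int, threshold: int, num_shares: int):
    # Random polynomial of degree threshold-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, threshold=30, num_shares=100)
print(reconstruct(shares[:30]))  # 30 shares: the secret is recovered
print(reconstruct(shares[:29]))  # 29 shares: the result is effectively random
```

In Apple’s description, each matching image appears to contribute such a share, so the voucher contents only become readable once enough matches have accumulated.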

This threshold approach seems quite extraordinary, and we’d like to make three observations. The first is that one CSAM file is illegal, and one too many. The second is that there is no relationship between the number of CSAM files stored and the depravity, severity and frequency of the offline abuse. The third is that these online images and videos are visual records of offline child abuse crimes, which Apple are duty bound to report once they are aware of them on their systems. So why is Apple applying what seems to be an arbitrary threshold number before their child safety reporting mechanism kicks in?

As for the actual number, this blog post from 9to5Mac notes: “However, Federighi [Apple’s Senior Vice President of Software Engineering] seems to disclose it as part of this interview saying ‘something on the order of 30 known pornographic images matching’.”

In the words of our colleague Tom Farrell, SafeToNet’s Global Head of Safeguarding Alliances: “I’ve been responsible for thousands of CSAM-related investigations, and several of the worst, which I won’t attempt to describe here, have only come to the attention of law enforcement through the possession of one image. People that have fewer images than this threshold will be ignored by this privacy policy.”

[Added 15/8/21] The explanation for the 30-file threshold is given in Apple’s document “Security Threat Model Review of Apple’s Child Safety Features”. On page 10 they describe the rationale behind the 30-image threshold, inter alia: “Building in an additional safety margin by assuming that every iCloud Photo library is larger than the actual largest one, we expect to choose an initial match threshold of 30 images.” While this is there for good reasons (reducing the likelihood of false positives), it does seem to mean that accounts holding fewer than 30 matching images won’t be picked up by Apple’s CSAM detection tools, and therefore won’t be reported to law enforcement. We don’t have insight into the distribution of the number of such files per iCloud account, but there is likely to be a majority of accounts with few images and a few with thousands; presumably this will result in under-reporting of accounts to law enforcement.
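To see why a per-account threshold drives the false-flag probability down so dramatically (Apple’s stated target is less than one in a trillion per year), here is some illustrative arithmetic. The per-image false-match rate and the library size below are our own assumptions for the sake of the example, not Apple’s figures.

```python
# Illustrative arithmetic only: how requiring a threshold of matches drives
# the chance of falsely flagging an account down very quickly. The per-image
# false-match rate and library size are assumptions, not Apple's figures.
from math import exp, lgamma, log, log1p

def log_binom_pmf(k: int, n: int, p: float) -> float:
    """log P(exactly k false matches among n images)."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log1p(-p))

def prob_account_flagged(n: int, p: float, threshold: int) -> float:
    """P(at least `threshold` false matches); a truncated tail sum suffices."""
    upper = min(n, threshold + 200)
    return sum(exp(log_binom_pmf(k, n, p)) for k in range(threshold, upper + 1))

p = 1e-6        # assumed per-image false-match rate
n = 100_000     # assumed size of a large iCloud photo library
for t in (1, 5, 30):
    print(f"threshold {t:>2}: ~{prob_account_flagged(n, p, t):.3g}")
```

With these assumed numbers, requiring a single match would falsely flag around one account in ten, while requiring 30 matches makes a false flag vanishingly unlikely; the price, as noted above, is that accounts below the threshold are never seen at all.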

Note that content falls foul of the UK’s draft Online Safety Bill where “…the nature of the content is such that there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on a child of ordinary sensibilities”. While Apple’s announcement is US-focused, this is a global problem. It’s difficult to see how this privacy announcement squares with the current draft UK Online Safety Bill, and it’s easy to see how Apple’s focus on privacy at all costs could cost a great deal in child abuse.

At the heart of everything, however, is the protection of children. Detecting content on Apple’s devices is a great start, but it’s imperative that the offender (who may not be the original abuser; simply sharing these images is just as much of a crime) can be identified to law enforcement, so that the children who use Apple’s devices are rescued from their abusers’ clutches and justice is seen by the child victims to be served.

There are other harms too which Apple are not addressing, such as cyberbullying, which is discussed in the UK Government’s Online Harms White Paper but wasn’t specifically mentioned in the subsequent draft Online Safety Bill. Both cyberbullying and online sexual abuse need to be tackled by real-time technologies, which seem to be the only way of stopping online harms from happening in the first place.

To listen to the podcasts we’ve mentioned in this blog post, and our entire box set, please visit https://safetonetfoundation.libsyn.com or Apple Podcasts or your favourite podcast directory.


Post edited 13/8/2021 to add a reference to 9to5mac.com

1 thought on “The Good, the Bad and the Ugly of Apple’s Curate’s Egg”

  1. Robert Fairbrother

    Is Apple aware of these reservations? What can they do about it? Are you able to have a dialogue with Apple?
