The CSAM Seesaw

Balancing Trust and Safety in End-to-end Encrypted Platforms

The Stanford Internet Observatory hosted a panel of speakers presenting views on new products and services intended to protect children in encrypted spaces. Below are the key points we feel each speaker raised in this debate, together with our responses to each.

Hany Farid has already expressed his views in this SafeToNet Foundation Safeguarding podcast, but he reiterates them in this webinar. It’s not possible to scan for CSAM content in an encrypted world – that’s the whole point of end-to-end encryption (E2EE), after all – so rather than decrypt the whole of iCloud, Apple propose to perform CSAM scanning on the iPhone itself. This has created a storm of protest from the “privacy lobby”, including organisations such as the Electronic Frontier Foundation, notable individuals such as Edward Snowden, and online service providers such as WhatsApp, whose CEO has publicly castigated Apple for the proposal. Hany Farid considers WhatsApp’s response to be “hypocrisy”, and here’s why.

What WhatsApp say they do with regard to contraband content appears on their website rather than in their Terms of Service (ToS). This can be interpreted as WhatsApp not duly informing their users about these policies and not giving them the opportunity to decline those terms. Nevertheless, WhatsApp do provide information on their website about what they do.

First up is WhatsApp’s use of on-device scanning. Yes, you read that right. WhatsApp, famous for their public promotion and use of E2EE messaging, perform on-device scanning – to protect your privacy, they say, and because of E2EE:

WhatsApp automatically performs checks to determine if a link is suspicious. To protect your privacy, these checks take place entirely on your device, and because of end-to-end encryption, WhatsApp can’t see the content of your messages.

You can see the contradiction here; it needs no further comment from us.

OK, we get the argument that this is for suspicious links rather than CSAM, but on-device scanning is on-device scanning. If Apple’s proposed on-device scanning for hashed images of known CSAM is the incipient slippery slope the privacy lobby claim it to be, then why haven’t we already seen the collapse of privacy, given that WhatsApp have been doing this for years?
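
To make the mechanism concrete, here is a minimal, purely illustrative sketch (in Python) of what an on-device link check looks like in principle: the URL is evaluated locally against rules shipped with the app, and nothing needs to leave the phone. The heuristics and rule list below are invented for illustration; this is not WhatsApp’s actual implementation.

```python
# Illustrative sketch only: a local, on-device "suspicious link" check.
# The rules below are invented for illustration and are NOT WhatsApp's code.

from urllib.parse import urlparse

# A rule set that would be bundled with the app and updated periodically.
SUSPICIOUS_TLDS = {"zip", "mov"}  # hypothetical examples


def looks_suspicious(url: str) -> bool:
    """Evaluate a URL entirely on-device; the URL never leaves the phone."""
    host = urlparse(url).hostname or ""
    tld = host.rsplit(".", 1)[-1].lower() if "." in host else ""
    has_non_ascii = any(ord(ch) > 127 for ch in host)  # possible homoglyph lookalike
    return tld in SUSPICIOUS_TLDS or has_non_ascii


if __name__ == "__main__":
    for link in ["https://example.com/page", "https://exаmple.com/login"]:
        print(link, "->", "suspicious" if looks_suspicious(link) else "ok")
```

The point of the sketch is simply that “on-device” means the check and its rules live on the handset, exactly as WhatsApp describe for links and as Apple proposed for known-image hashes.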

Here’s what WhatsApp say on their website, again not in their ToS, about what they do for CSAM:

Our detection methods include the use of advanced automated technology, including photo- and video-matching technology, to proactively scan unencrypted information such as profile and group photos and user reports for known CEI. We have additional technology to detect new, unknown CEI within this unencrypted information. We also use machine learning classifiers to both scan text surfaces, such as user profiles and group descriptions, and evaluate group information and behavior for suspected CEI sharing.

CEI (Child Exploitative Images) is WhatsApp’s terminology for the industry-standard term CSAM, Child Sexual Abuse Material. WhatsApp go further than Apple’s NeuralHash proposal, as Apple limit themselves to “known CSAM” – images already hashed by NCMEC – whereas WhatsApp admit to identifying both known and unknown (i.e. new, not previously hashed) CSAM. WhatsApp say this happens in the unencrypted part of their ecosystem – which must by definition be on-device.
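
For readers unfamiliar with how “known CSAM” detection works in principle, the sketch below (Python, purely illustrative) shows the basic pattern: an image is reduced to a fingerprint, and that fingerprint is checked against a list of fingerprints of previously verified material. Real systems such as PhotoDNA and NeuralHash use perceptual hashes that tolerate resizing and re-compression; the cryptographic hash and placeholder list here are stand-ins, not anyone’s actual implementation.

```python
# Illustrative sketch of matching images against a list of *known* hashes.
# Real systems (PhotoDNA, NeuralHash) use perceptual hashes that survive
# resizing/re-encoding; SHA-256 and the placeholder list below are stand-ins
# used only to show the match-against-a-known-list pattern.

import hashlib


def fingerprint(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash of an image."""
    return hashlib.sha256(image_bytes).hexdigest()


# In a real deployment this list is compiled and vetted by a body such as
# NCMEC and distributed in hashed form only; the entry below is a placeholder.
KNOWN_HASHES = {
    "0" * 64,  # placeholder hash, not a real entry
}


def matches_known_material(image_bytes: bytes) -> bool:
    """True if the image's fingerprint appears in the known-hash list."""
    return fingerprint(image_bytes) in KNOWN_HASHES


if __name__ == "__main__":
    sample = b"demo bytes standing in for an image"
    print("match against known list:", matches_known_material(sample))
```

The key property for the debate above is that a hash matcher only ever answers “seen before or not” against a vetted list; detecting new, previously unhashed material – which WhatsApp say they also do – requires classifiers rather than a hash list.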

It would seem therefore that Hany’s hypocrisy claim against WhatsApp is a valid one.

Daniel Kahn Gillmor, Staff Technologist at the ACLU, discusses both the iMessage image filtering and the NeuralHash search for hashed CSAM in Apple’s child safety proposals. He accepts Apple’s application of CSAM-related limits to search terms in Siri and Search, which brings him nicely in line with the UK’s Online Safety Bill, which covers search engines as well as online service providers vis-à-vis duties of care and transparency reporting.

Daniel refers to the different tolerance levels of different families towards children sending and receiving explicit images, whether of themselves, of other children or of adult pornography. He also mentions that children exploring their sexuality might be exposed to rejection by homophobic parents. This is indeed a delicate balancing act: the privacy of the child versus the right of the parent to parent.

During the grooming process, predatory pedophiles, in their preparation to rape a child, send explicit images to that child to inure their intended victim to the idea of sex – to normalise it. In most families it’s not acceptable for children under 13 to look at explicit adult pornography. If a child is receiving this content in their iMessage account, then the parents of such young children have the right to know, and to guide and advise the child accordingly.

Daniel says that children exploring their sexuality online through picture sharing on iMessage risk parental rejection if their parents are homophobic. Given that this is an opt-in feature, the child would know that their parents will receive an alert if an explicit iMessage image is involved, so it’s highly unlikely that the child would use iMessage to send such an image anyway. In any case, exploring sexuality doesn’t necessarily mean using iMessage to share intimate images that children themselves have created, or indeed sharing any intimate images at all. There are plenty of other valid sources of educative advice and guidance for young teens about sexuality and identity online.

It’s the sending of self-generated images that pedophiles want children to do. This gives them new, fresh, previously unseen content for their own sexual satisfaction. They can also share this content with other predatory pedophiles for kudos and increased esteem among their predatory peers, and in some instances these images can even be used as currency to gain access to online forums dedicated to the posting and hosting of yet more CSAM.

In the context of Apple’s CSAM detection solution based on NeuralHash, Daniel says that “Other [i.e. non-Apple] client-side scanning solutions are not adversarial to the user – it is intended to act on behalf of the user and not intended to identify when the user has done something wrong or report them to an authority, parental or otherwise.”

We believe this point of view is wrong for the following reasons:

  • Posting or uploading malicious links into WhatsApp is doing something wrong; it violates WhatsApp’s ToS, and WhatsApp take proactive action to stop this from happening. How is this preventative action not adversarial to the user who has malicious intent?
  • While Apple’s child safety proposal aims to prevent CSAM from being uploaded to Apple’s iCloud system, this is not “adversarial” to the user either – it is beneficial to the user who has malicious intent, as it prevents them from breaking the law. There is no privacy, free speech, First Amendment or constitutional carve-out for the posting and hosting of CSAM.
  • The making, uploading, sharing and viewing of CSAM are ALL illegal. Apple’s proposed solution also prevents other users of Apple’s iCloud system from breaking the law through accidental viewing of illegal images. This is also not adversarial, but acting on behalf of all users.
  • Apple’s proposed child safety solution further acts on behalf of the user, as it also helps them to respect, and not contravene, the privacy of the children depicted in these images and videos of offline child rape.
  • Apple’s proposed child safety solution, which prevents the posting of these images into iCloud, also helps ensure that would-be image sharers use iCloud “in the best interests of the child”, something demanded by the UN CRC’s General Comment 25 – which wasn’t mentioned at all by Daniel during his contribution to this debate, despite the ACLU claiming to be the nation’s (America’s) guardian of liberty. What about the liberty of children to go freely online without fear of molestation, predation and abuse?

Our podcast guest Jenny Greensmith-Brennan, CEO of SaferLives, says: “I think it comes back to my soapbox piece regarding helping people to move away from this behaviour before it is entrenched and ideally before they’ve even crossed the line. If they have images on their device that match with CAID then stopping those being shared further is only a win/win.”

Mallory Knodel, CTO of the Center for Democracy and Technology (CDT), suggests that metadata analysis can be – and already is – used for “privacy and security”, although Mallory doesn’t refer to safety. But Mallory also says that they don’t want too much metadata analysis – they don’t want to encourage more of it. This is the “Goldilocks” approach: not too little, not too much, but just the right amount. But what is the right amount, and who defines it? Mallory doesn’t supply much in the way of detail for an effective approach to eliminating CSAM sharing online through metadata analysis.

Mallory also advocates for “User Empowerment Functionality” and in the same breath refers to “message franking”, which “is like CSAM hashtag scanning, a measure you put on top of user reports”. We’re confused by this. Is Mallory saying that CSAM detection should only be activated by user reporting? We moot that most CSAM content isn’t seen by most users of most social media sites; the true scale of this won’t be revealed until social media service providers are legally obliged to proactively search for and report it.
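
For context, “message franking” is a cryptographic technique designed to let a recipient report an abusive message in a way the platform can verify, without the platform being able to read messages that are never reported. The sketch below (Python, heavily simplified, and not the construction WhatsApp, Facebook or any other platform actually uses) shows the core idea: the sender commits to the plaintext with a one-time key, the server tags the commitment without seeing the plaintext, and a report reveals the message and key so the commitment and tag can be checked.

```python
# Heavily simplified sketch of "message franking" (not any platform's actual scheme).
# Idea: the sender commits to the plaintext with a one-time key; the server tags
# the commitment without seeing the plaintext; a user report reveals the plaintext
# and key so the platform can verify that the reported message is genuine.

import hashlib
import hmac
import os

SERVER_KEY = os.urandom(32)  # held by the platform only


def sender_commit(plaintext: bytes):
    """Sender side: a per-message franking key and a commitment sent with the ciphertext."""
    franking_key = os.urandom(32)
    commitment = hmac.new(franking_key, plaintext, hashlib.sha256).digest()
    return franking_key, commitment


def server_tag(commitment: bytes, sender_id: str) -> bytes:
    """Server side: binds the commitment to sender metadata without seeing the plaintext."""
    return hmac.new(SERVER_KEY, commitment + sender_id.encode(), hashlib.sha256).digest()


def verify_report(plaintext: bytes, franking_key: bytes, commitment: bytes,
                  sender_id: str, tag: bytes) -> bool:
    """Platform side, on a user report: check that both the commitment and the tag hold."""
    expected_commitment = hmac.new(franking_key, plaintext, hashlib.sha256).digest()
    expected_tag = hmac.new(SERVER_KEY, commitment + sender_id.encode(), hashlib.sha256).digest()
    return (hmac.compare_digest(expected_commitment, commitment)
            and hmac.compare_digest(expected_tag, tag))


if __name__ == "__main__":
    msg = b"example reported message"
    fk, com = sender_commit(msg)           # on the sender's device
    tag = server_tag(com, "sender-123")    # on the server, which sees only the commitment
    # The recipient later reports the message, revealing msg and fk:
    print("report verifies:", verify_report(msg, fk, com, "sender-123", tag))
```

Note that a scheme like this only verifies content a recipient chooses to report, which is why we read Mallory’s framing as user-report-driven rather than proactive detection.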

Mallory suggests we need to “define a set of principles”. If she is referring to a set of principles to eliminate CSAM then we couldn’t agree more, and we’d suggest as a starting point the “oven-ready” set of principles launched earlier this year by the United Nations – General Comment 25 on the UN Convention on the Rights of the Child.

Mallory wonders whether “predictive text or auto correct” could be analysed for content moderation. We agree with this approach, and we have great news for Mallory – this is already done by safetytech companies such as SafeToNet and Net Nanny, and by the BBC. Real-time behavioural analysis of outbound typed messages, with real-time nudging interventions through an AI-powered keyboard, has been available in multiple countries and multiple languages for a number of years.

The reason we like this approach is that children are unlikely to spontaneously send an intimate image of themselves – if they do so, it is the result of a conversation. It therefore seems logical to identify unusually sexualised keyboard-based conversations that children might be having and to intervene in real time so that the child makes safer decisions. And it has to be in real time, otherwise you’re left with yet another victim.
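
As a purely hypothetical illustration of the pattern described above – and emphatically not SafeToNet’s or anyone else’s actual technology – a real-time keyboard intervention amounts to scoring each outbound message on the device before it is sent and, above some threshold, nudging the child rather than silently blocking or reporting:

```python
# Hypothetical illustration of real-time, on-device nudging for outbound text.
# The "risk model" here is a trivial keyword score; a real product would use a
# trained classifier and far more nuanced, age-appropriate interventions.

RISK_PHRASES = {
    "send a pic": 0.6,
    "our secret": 0.5,
    "don't tell your parents": 0.8,
}
NUDGE_THRESHOLD = 0.5


def risk_score(message: str) -> float:
    """Crude on-device score for an outbound message."""
    text = message.lower()
    return min(1.0, sum(weight for phrase, weight in RISK_PHRASES.items() if phrase in text))


def on_send(message: str) -> str:
    """Called on-device before the message leaves the keyboard."""
    if risk_score(message) >= NUDGE_THRESHOLD:
        # Intervene in real time: prompt the child to reconsider, don't silently report.
        return "NUDGE: this conversation looks risky – are you sure you want to send this?"
    return "SEND"


if __name__ == "__main__":
    print(on_send("ok, it's our secret, send a pic"))
    print(on_send("see you at football practice tomorrow"))
```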

Mallory suggests that “…having an image coming through that is blurry at first until the user clicks on it” is a good thing and that “…it has a value beyond just a user receiving a piece of unwanted content”. Whether the blurred image is CSAM or legal adult pornography, the fact that it’s blurred will entice the recipient to click on it to see what it is. In fact, they will need to click on it in order to decide whether or not they want to see it – thus, in the case of children, encouraging them to look at inappropriate images, whether legal or illegal. And what parent or carer wants to look at CSAM?

We have approached Mallory for an interview about CDT’s report, which is mentioned in the video, but so far we haven’t received a reply.

Yiota Souras, General Counsel at NCMEC, argues in her short and punchy contribution that the problem Apple is trying to address isn’t solved with a binary choice between privacy and safety, a point of view with which we totally agree. Yiota demonstrated the scale of the problem with some slides, which we encourage you to study in the video above. To say the numbers are disturbing is an understatement.

In response to the concerns expressed by the privacy lobby, note two things:

  • The Internet is used on a massive scale every day to facilitate the online sexual abuse of children and violate their privacy by endlessly circulating images of their exploitation. 
  • WhatsApp have shown that you can have privacy while scanning for illegal images of child rape and reporting them to law enforcement. WhatsApp claim they shut down some 300,000 accounts per month that do this.

Yiota makes the point that NCMEC have now received over 100,000,000 reports of CSAM from the public and from companies. In our podcast interview with Ernie Allen, who previously headed up NCMEC, he says that towards the end of the 20th century the distribution of CSAM had fundamentally been resolved – it had all but disappeared. And then came social media.

It’s worth reiterating how images end up on the hash list, as one of the criticisms of Apple’s approach (and therefore, by extension, of the existing PhotoDNA approach) is mission creep, or that it will capture innocent images of babies in bubble-filled baths – that NeuralHash (and PhotoDNA and similar) can somehow be corrupted for nefarious purposes by government bodies.

Yiota explains it thus:

  • Only images submitted by tech companies are reviewed for hashing
  • Each image is triple-vetted by trained NCMEC staff
  • Images/videos must meet the federal definition and depict graphic sexual activity
  • The child must be clearly under 18, i.e. an infant, toddler or pre-pubescent child

Under this regime it’s difficult to see how these products can be usurped into doing other tasks. A point to note here is that the definition listed above means there will be many children whose images fall outside of the hashing process, so these images will presumably not be detected by Apple’s child safety proposals.

“Apple’s new measures have tremendous potential to detect countless images of CSA, provide immense relief to victims and impede offenders from sharing these images.” – Yiota Souras, General Counsel at NCMEC

We fully support everything Julie Cordua, CEO of Thorn, says on this topic. It’s very interesting indeed to hear what Julie has to say about Mallory’s two favoured approaches – metadata analysis and pattern recognition.

“Addressing the broad scope of child exploitation and the viral spread of CSAM requires a foundational understanding of the problem as well as a nuanced approach.” – Julie Cordua, CEO of Thorn

David Thiel, CTO of the Stanford Internet Observatory, summarises the debate about Apple’s child safety proposals, and one of his key objections is that there is no way to audit the hash database. This is, in our opinion, a non-argument, devoid of any merit. Of course there is no way to audit this database – it’s a database of illegal images of children being raped and sexually violated, audited and managed by NCMEC, the congressionally designated clearing house for missing and exploited children issues. What audit mechanism is David calling for? A public examination of this database?

David calls for more detailed documentation of how NeuralHash works – but the more detail provided, the greater the possibility of NeuralHash being reverse engineered or evaded by some other technical sleight of hand. He seems to be asking for a “backdoor” into how NeuralHash works, yet we’re sure he wouldn’t ask for a backdoor into a 2048-bit RSA encryption system.

David does make the point that there is no user reporting mechanism within iCloud – no way for the general public to report to NCMEC from within iCloud – and we would certainly welcome one, as it fits the requirements of the UN’s General Comment 25.
