Internet Safety: a blog post by Neil Fairbrother and Fred Langford

“If you were a pedophile and you were looking to invent a system for abusing children, identifying them, grooming them, being able to meet them where your safety was guaranteed to carry out acts for your sexual gratification, you’d invent the internet.” Dal Babu, former Chief Superintendent, Met Police, in this Safeguarding Podcast

Have we got your attention?

On 22 July 2013 David Cameron gave a speech which turned out to be a defining moment in British child online safety. It set in train a series of events that included the creation of the WeProtect Global Alliance, the publication of the Online Harms white paper, the Age Verification debacle, the appointment of a new regulator for the internet (Ofcom) and the creation of a small but perfectly formed UK charity, the SafeToNet Foundation.

In this speech, David Cameron said: “I want Britain to be the best place to raise a family; a place where your children are safe, where there’s a sense of right and wrong and proper boundaries between them, where children are allowed to be children… And all the actions we’re taking today come back to that basic idea: protecting the most vulnerable in our society, protecting innocence, protecting childhood itself”.

Who could argue with such a laudable aim?

From an online perspective, Cameron saw two challenges: one being the proliferation of illegal material, specifically Child Sexual Abuse Material (CSAM), often incorrectly called “child pornography”; the other being legal material (in this case adult pornography) being viewed by children (for clarity, a child is someone under 18). Cameron focussed principally on the search engines in this speech, which, when the entirety of the online digital context is considered, is a pretty limited view.

The Online Digital Context

Ernie Allen, Chair of the WeProtect Global Alliance (Ernie’s podcast can be found here), said at the 2019 Promoting Digital Child Dignity, from Concept to Action conference at the Pontifical Academy of Sciences:

“…by 1989, the problem of child sexual exploitation images had all but disappeared, then came the internet and perhaps more importantly, the World Wide Web”.

And one can easily add that then came smartphones, with the launch of the iPhone in 2007, as this and the smartphones that followed democratised and liberated the taking and publishing of high-quality photos and videos. The unforeseen, or ignored, consequence of the WWW + social media + smartphones, as far as children are concerned, is that there are now no safe spaces left “where children are allowed to be children”. Wherever the children are, their phones are, and wherever their phones are, the children are.

Distressing data from the IWF bears this out.

Contextual Safeguarding is a safeguarding methodology, contained in UK statutory guidance and championed by Carlene Firmin MBE [Carlene’s podcast can be found here]. Its essence is that children pass through different physical spaces with different people (these are the “contexts”) during their lived day; but this practice, and the statutory guidance, stop short of the online world.

However, the smartphone means the “online context” is always with the child, irrespective of which offline context the child finds themselves in. Even the family home is now divided into different contexts, as of course children are always online.

This 24×7 private connectedness has given rise to the phenomenon of SGII, Self-Generated Intimate Images. (SGII is often referred to as Self-Generated Indecent Images; however, we think “Intimate” is more accurate and much more respectful of the child. As far as the child is concerned, these images are the most intimate thing possible. For the “safeguarding industry” to refer to them as indecent images piles further shame and angst onto the child. We urge lawmakers, researchers and statutory bodies to use Intimate, not Indecent.) It is illegal in the UK for someone under 18 to have, share or receive such a photo, yet children as young as 11 have been taking them, either because they think this is normal behaviour in a relationship or as a result of predatory extortion.

Other than Age Verification (the Digital Policy Alliance Age Verification podcast can be found here), which was originally hived off into a separate project under the Digital Economy Act 2017 and abandoned amid some controversy just prior to the 2019 General Election, the Online Harms white paper swept up pretty much everything else for everybody. It didn’t identify children as a special case, as many argue it should have, and this caused some commentators to say that the Government was trying to treat everyone as children, that it was “dumbing everything down”. Others felt it would have been easier if the Online Harms white paper had focussed just on keeping children safe online, letting adults behave however they want (within the context of existing legislation).

Sticks and Stones may break my Bones, but Words will never hurt me

Our experience shows that safeguarding children in the global online digital context sits at the intersection of technology, law, ethics, culture and religion. It is at the centre of a vast Venn diagram whose overlapping circles behave like tectonic plates, pushing, squeezing, enabling and disabling online safeguarding capabilities. In short, keeping children safe online is a complex thing to achieve, far more so than Cameron could have accounted for in his 2013 speech.

The Online Safeguarding Plate Tectonics

Take cyberbullying as an example. Cyberbullying in the UK isn’t illegal; it is regarded by many as a “legal harm”. Indeed, cyberbullying is listed in the Online Harms white paper as a “harm with a less clear definition”. There is no attempt to define it, yet the purpose of the white paper is to pave the way to regulate against it. If there is no agreed definition, then regulation would seem impossible.

The Anti-Bullying Alliance does have a definition: “…the online repetitive, intentional hurting of one person or group by another person or group, where the relationship involves an imbalance of power that is carried out through the use of electronic media devices, such as computers, laptops, smartphones, tablets, or gaming consoles”. (Martha Evans, Director of the Anti-Bullying Alliance, has a podcast here.)

Which is fine, but what does this actually mean? In and of itself, this is not regarded as illegal behaviour. But is that really the case? What does “online repetitive, intentional hurting” mean? Does it mean the repeated sending of messages via a public communications network that are grossly offensive, indecent, obscene or menacing? Because if it does, then that contravenes Section 127 of the UK’s Communications Act 2003, and that is illegal. And if you’re a 10-, 11-, 12- or 13-year-old child on social media and receive grossly offensive, indecent, obscene or menacing messages from persons known or unknown every time you look at your smartphone, at any time of day, hundreds of times a day, what impact will that have on you?

How will the regulator (Ofcom) regulate against children using grossly offensive, indecent, obscene or menacing messages on end-to-end encrypted social media services, even if such communications are clearly in breach of existing British communications regulation and especially when the operator of those services operates under a third country’s jurisdiction? How will they hold foreign-based directors of such companies to account for their platforms enabling the transmission of such messages, when the messages themselves are fully encrypted and unreadable by these companies? Isn’t that the point of end-to-end encryption?

According to Dr Holly Powell-Jones of Online Media Law (Holly’s podcast can be found here), other “cyberbullying” actions that may contravene various existing UK laws, one of which is over 100 years old, include:

  • A course of conduct amounting to harassment of another person
    • Protection from Harassment Act 1997
  • Sending a message that is indecent, grossly offensive, false or a threat
    • Malicious Communications Act 1988
  • Credible threats to kill
    • Offences Against the Person Act 1861
  • Using threatening, abusive or insulting words or behaviour or displaying written material that is threatening, abusive or insulting and intended (or likely) to stir up racial hatred
    • Parts 3 & 3A of the Public Order Act 1986
  • Religious hatred
    • Racial & Religious Hatred Act 2006
  • Hatred on the grounds of sexual orientation
    • Criminal Justice & Immigration Act 2008
  • There are existing criminal laws around using technology as part of coercive control, stalking, domestic abuse and image-based abuse (e.g. “revenge porn”, indecent images of under-18s, “upskirting” etc.)
  • In addition, there are civil laws to uphold our rights as well, such as Defamation (spreading damaging lies about someone) and Privacy (breaching confidentiality where someone has a reasonable expectation of privacy).

The Network effect

And all this, dear reader, is happening over your networks. It’s fashionable, even justifiable, to give the social media companies a kicking on this. But take away the networks, and even a £10,000 gold-plated, jewel-encrusted 6G iPhone Pro Max won’t allow predators to prey on children. So far, the network operators’ response has been to argue, successfully, that they have no responsibility for what passes across their networks.

Section 230 of the Communications Decency Act (CDA) of 1996 is a landmark piece of legislation in the United States. Section 230(c)(1) provides immunity from liability for providers and users of an “interactive computer service” who publish information provided by third-party users:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Section 230 has frequently been cited as a key law that allowed the Internet to flourish, and is often referred to as “The Twenty-Six Words That Created the Internet”. For further information on Section 230, please refer to the podcast with Rick Lane, former Director of MySpace; Eleanor Kelly Gaetan, Director of Government Relations for the National Center on Sexual Exploitation (NCOSE); and Stephen Balkam, CEO of the US-based Family Online Safety Institute (FOSI).

In a podcast interview for the SafeToNet Foundation, Andrew Kernahan, Head of Public Affairs for the Internet Services Providers’ Association (ISPA), had this to say about the “mere conduit” defence:

“So the ‘mere conduit’ defence comes from the [EU’s] eCommerce Directive that was implemented in the UK as the eCommerce Regulations. This gives protections to ISPs in that they are judged as a mere conduit. So all of the activity that is transmitted on their network by their customers they’re not legally liable for, unless there is another legal protection or legislation that requires them to act.

So, for example, we have this mere conduit defence, but at the same time copyright infringement blocking takes place by a notice to ISPs to block access to sites that infringe copyright. But crucially, a Judge there is making a determination, or a Court is making the determination, as to whether that content needs to be blocked or not.

… So it’s not a Wild West where ISPs are just given this defence as a mere conduit to not care about what happens on the network, but it does mean that where policymakers and society want ISPs and access providers to act, they need to do so using, ideally, legal mechanisms so that ISPs aren’t being the sort of judge and jury of online content.”

Which is all well and good, and very convenient, but it does place the onus elsewhere: who should be the judge and jury of online content, if anyone?

DoH!

At the time of writing, the EARN IT Act has been introduced by a group of US Senators; it would, they say, create incentives for companies to “earn” liability protection for violations of laws related to online child sexual abuse material (CSAM). If it passes, it will have a global ripple effect.

Australia’s eSafety Commissioner, Julie Inman-Grant, is promoting the concept of “Safety by Design”, especially for children online. The initiative, Inman-Grant said, “shifts responsibility for safety back onto technology organisations themselves”. (Julie’s podcast can be found here).

There are three main principles to her proposals:

  1. service provider responsibilities
  2. user empowerment and autonomy
  3. transparency and accountability

Within these principles, practical steps for providers are offered, such as:

  • Nominating individuals to be accountable for user safety policies and implementation
  • Establishing clear escalation paths for user concerns (where the users could be children as young as 10, even though often 13 is the minimum age to join a social network)
  • Implementing processes to detect, flag and remove illegal and harmful content, such as cyberbullying and child sexual abuse material

What might this mean for network operators and ISPs in their decision making when designing and implementing services?

Let’s consider the implications for online child safety of an internet encryption technology that’s very à la mode: DNS over HTTPS (DoH) or, perhaps more properly, RFC 8484.

Just in case you don’t know: today, and seemingly since the dawn of time, your DNS lookups have been unencrypted; they are sent in “clear text”. This has the privacy lobby up in arms because, in theory at least, these clear-text lookups could be intercepted, and the men in black hats would know that you searched for www.waitrose.com. Or www.kiddieporn.com. And it’s the latter type of lookup that is quite useful to know about.
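
To make this concrete, here is a minimal sketch in Python of the same lookup done both ways, assuming the third-party dnspython and requests packages and using illustrative public resolver addresses (not an endorsement of either). The first query crosses the network on UDP port 53 in clear text; the second wraps the identical wire-format query in an encrypted HTTPS POST, as RFC 8484 specifies.

```python
# Minimal sketch: the same DNS query sent in clear text and over DoH.
# Assumes the third-party "dnspython" and "requests" packages.
import dns.message
import dns.query
import requests

query = dns.message.make_query("www.example.com", "A")

# 1. Classic DNS: query and response cross the network on UDP port 53
#    in clear text, so any on-path observer can read the hostname.
cleartext_answer = dns.query.udp(query, "8.8.8.8")  # illustrative resolver
print(cleartext_answer.answer)

# 2. DoH (RFC 8484): the identical wire-format query travels inside an
#    HTTPS POST, so on-path observers see only encrypted traffic.
response = requests.post(
    "https://cloudflare-dns.com/dns-query",  # illustrative DoH endpoint
    data=query.to_wire(),
    headers={"Content-Type": "application/dns-message"},
)
doh_answer = dns.message.from_wire(response.content)
print(doh_answer.answer)
```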

Indeed, the Internet Watch Foundation’s members make full use of these unencrypted clear-text lookups to block URLs containing illegal child sexual abuse imagery, whether found through the IWF’s proactive searching or reported to it by the public through its reporting page at https://report.iwf.org.uk/en
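
The shape of such a resolver-level intervention can be sketched as follows. This is a deliberately simplified, hypothetical illustration: the real IWF list is confidential, operates at URL rather than domain level, and is deployed by member operators in more sophisticated two-stage arrangements. The blocklist entries and function names below are invented for the example.

```python
# Hypothetical sketch of resolver-side blocking. The real IWF URL list
# is confidential and applied at URL level; everything here is invented.
import socket

BLOCKED_DOMAINS = {"blocked.example"}  # placeholder entries, not real data

def resolve_with_blocklist(hostname: str, real_resolve) -> str:
    """Resolve hostname, diverting blocklisted names to a notice page.

    `real_resolve` is whatever function performs the genuine lookup,
    e.g. socket.gethostbyname.
    """
    if hostname.lower().rstrip(".") in BLOCKED_DOMAINS:
        return "192.0.2.1"  # documentation-range IP of a "blocked" notice page
    return real_resolve(hostname)

print(resolve_with_blocklist("www.example.com", socket.gethostbyname))
print(resolve_with_blocklist("blocked.example", socket.gethostbyname))
```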

Neil Brown of decoded.legal argues that we need to stop thinking of such DNS interventions as a panacea, or as something DNS was ever designed to support, and instead see them for what they are: a hack, an exploitation of vulnerabilities in the DNS infrastructure to bring about a particular result. Arguing that DNS should continue to be used, or abused, in this manner is essentially an argument that:

  • DNS represents the best way of tackling the problem because that’s what has always been done, and
  • long-standing vulnerabilities should be made a core part of the intended working of the system, rather than bugs to be fixed.

All of which may be true; but if the introduction of new services or architectures disrupts this “accidental” method of dealing with images and videos of child sexual exploitation, then it is incumbent on the designers of those new services to include features that either allow this method to continue or provide an alternative that is at least as effective, if not more so.

And what takes precedence, privacy or safety? Lianna McDonald, Executive Director of the Canadian Centre for Child Protection, recently tweeted this quote from the Phoenix 11, a group of survivors of online child sexual abuse: “I want to stress how our rights to find and remove the images of child sexual abuse should outweigh any privacy rights that are protecting paedophiles to hide the content.” (Lianna’s podcast can be found here.)

If full end-to-end encryption, of which DoH is a part, is deployed, then it becomes impossible to scan for this illegal and highly damaging content: it cannot be found and cannot be blocked, unless something else changes. Service providers will, could or should have to scan their own servers using PhotoDNA or equivalent, and this will take some service design thinking. Perhaps this could form part of the criteria to “earn” their immunity, as proposed by the EARN IT Act?
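
To sketch what such server-side scanning might look like: PhotoDNA itself is proprietary and licensed, so the outline below substitutes a plain cryptographic hash purely to show the shape of the pipeline, namely fingerprint each upload and compare it against a list of fingerprints of known illegal images. A real deployment would use a robust perceptual hash such as PhotoDNA, which still matches after resizing or re-encoding where SHA-256 does not, and would source its hash list from bodies such as the IWF or NCMEC; every name and value here is illustrative.

```python
# Sketch of scanning uploads against a list of known-image fingerprints.
# SHA-256 stands in for a perceptual hash such as PhotoDNA, which survives
# resizing and re-encoding; the hash list below is invented.
import hashlib

KNOWN_ILLEGAL_HASHES = {
    "0" * 64,  # placeholder entry; real lists come from bodies like the IWF
}

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def handle_upload(data: bytes) -> None:
    if fingerprint(data) in KNOWN_ILLEGAL_HASHES:
        # A real service would block the upload, preserve evidence and
        # report to the relevant hotline (IWF, NCMEC) as the law requires.
        raise PermissionError("Upload matches known illegal material")
    # ...otherwise store the file as normal...
```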

In a SafeToNet Foundation podcast interview, Fred Langford, CTO of the IWF, said this: “…So if somebody has been a victim of abuse, we are aware from talking to survivors of abuse that the revictimization of knowing that people are still able to see their abuse out on the internet is obviously very troubling. So the more checks and interventions that we can put in, the more it is going to help reduce the revictimization of those victims.

… if the filtering solutions can be bypassed, obviously it leaves a great deal of anxiety and angst for that victim to think that more people are going to potentially stumble across that material. And what is the outcome of that? Does it mean that more people are going to start seeking their abuse in the future?

… it means that if somebody has got a predisposition to have an interest in this sort of content, they may never realize that if they never stumbled across the content. If they hit content accidentally, it may awaken something that they previously wouldn’t have been able to realize. And so there are a number of risks associated with not being able to filter known content from being viewed.”

To give you some idea of the scale of this abhorrent content, Susie Hargreaves OBE, CEO of the IWF, said in another SafeToNet Foundation podcast interview: “…in 2018 we removed over 105,000 web pages of child sexual abuse, and that’s millions of images and videos. Compare that with the first year I was at the IWF, which was seven years ago, when we took 9,000 web pages down”.

Another victim impact of DoH encryption is this, again from Susie Hargreaves: “Tara, who I met in the States, was sexually abused by her stepfather from birth right up to the age of about 15, when she was rescued… and her stepfather got 60 years in prison, so that gives a sense of how bad it was.

She had an assigned police officer and, in the United States, you can opt in to be notified anytime someone’s caught with your images on their computer, because it’s linked to damages. You have the right to sue them for damages. And she opted in, and she had had over 1,500 notifications when I saw her, and the police officer who worked with her told me that one of her images had been shared 70,000 times”.

If these images become unfindable and untraceable, then this (American) right to sue those distributing them ends. This is another form of victimisation: the compensation due can’t be paid, because the images can’t be identified.

And there’s another, deeper subtlety. DNS lookups have traditionally been handled low down in the stack, by the operating system and the network, using resolvers assigned by the network operator. DoH moves that function into Layer 7 of the OSI seven-layer model, the Application layer, where each application can do its own lookups.

ISO OSI Seven Layer Model

Now this might sound a bit arcane, but there are implications for all of us, because it means that the application provider is now responsible for DNS lookups, not the network provider. If the application provider turns out to be a dishonest person, a bad actor or a rogue organisation, could this not lead to more harm, and to harm that is harder to detect?

According to Fred Langford, yes it could: “They [ISPs] are potentially being taken out of this area, but because it’s moving to the application level and with apps, people using apps, it does mean that applications on mobile devices or fixed devices can program their own DNS.

But with applications, yes, there’s always an opportunity that somebody could develop an application that is going to tunnel through, and it would be invisible to the IWF until somebody notified us that there was a problem on that particular application. But what we could do would be limited.”
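
To see why this worries the IWF, consider how little code an application needs in order to sidestep the system resolver entirely. This hedged sketch reuses the dnspython and requests assumptions from the earlier example, again with an illustrative DoH endpoint: the app resolves names itself and connects directly by IP, so the network operator’s resolvers, and any filtering attached to them, never see the lookup.

```python
# Sketch: an app doing its own name resolution over DoH, then connecting
# directly by IP, never consulting the system (or ISP) resolver.
# Assumes dnspython + requests; endpoint and hostname are illustrative.
import socket
import dns.message
import dns.rdatatype
import requests

def doh_resolve(hostname: str) -> str:
    query = dns.message.make_query(hostname, "A")
    resp = requests.post(
        "https://cloudflare-dns.com/dns-query",  # app-chosen DoH resolver
        data=query.to_wire(),
        headers={"Content-Type": "application/dns-message"},
    )
    answer = dns.message.from_wire(resp.content)
    for rrset in answer.answer:
        if rrset.rdtype == dns.rdatatype.A:
            for rdata in rrset:
                return rdata.address
    raise LookupError(f"No A record for {hostname}")

# The operator's DNS-based filters never see this lookup:
ip = doh_resolve("www.example.com")
sock = socket.create_connection((ip, 443))
```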

And so it goes on: if app providers become responsible for DNS lookups, then the app stores themselves have to be involved, to make sure app providers aren’t submitting nefarious apps with criminal or malicious intent. It’s not at all clear at the moment how this can be done.

Another impact of DoH is on “Parental Controls”. David Cameron referred to these in his 2013 speech: “And today, after months of negotiation, we’ve agreed home network filters that are the best of both worlds. By the end of this year, when someone sets up a new broadband account, the settings to install family friendly filters will be automatically selected; if you just click next or enter, then the filters are automatically on.

And, in a really big step forward, all the ISPs have rewired their technology so that once your filters are installed they will cover any device connected to your home internet account; no more hassle of downloading filters for every device, just one click protection. One click to protect your whole home and to keep your children safe.”

It’s not clear whether these family-friendly filters will continue to work once DoH is deployed.

There’s something else too. Today, DNS content filters are mandated by the law of the country you are in. But in a DNS-over-HTTPS world, they are mandated by the laws of the country where the remote resolver is located, over which we in the UK have little to no control. This doesn’t seem to square with the UK government’s stated aim of making the UK the safest place for children to be online; it seems to scupper the very foundations of the Online Harms white paper.

Professor Sonia Livingstone OBE, Professor of Social Psychology at the LSE (podcast here), identified a number of online risks to children in a research programme called EU Kids Online. Some of these risks we’ve already covered, but there’s one more that isn’t always regarded as a risk, and that is “commercialisation”, as shown in the table below:

EU Kids Online Classification of Online Risk

In our podcast interview with Fred Langford, we discussed whether DoH service providers will monetise people’s data in a way that today’s DNS service providers don’t, as well as making much more use of cookies and “fingerprinting” for tracking purposes. I asked him whether this will result in an increased commercial risk to children:

“There is the potential for that, yes. So what I should have made clear at the beginning is whoever is running this DoH server has full visibility because all the traffic is encrypted from the moment it leaves the user’s device until it hits that server. So whoever’s controlling that DoH server can look at all the requests…

With it all being centralized, there is an opportunity to monetize that data and to be able to see who’s going where and how should you prioritize traffic.”

Time to act

Notwithstanding that the USA, where most western social media companies are based, hasn’t ratified it, the United Nations Convention on the Rights of the Child (UNCRC) recognises that children need special safeguards and care in all aspects of their lives, including online. Its “Optional Protocol”, which directly addresses the issues of child prostitution and “child pornography” (sad to say, even the UN uses this incorrect term; as noted previously, it should be “child sexual abuse”), further reinforces this. (Our podcast interview with Najat Maalla M’jid, the Special Representative of the UN Secretary-General on Violence Against Children, can be found here.)

Only now, after the dreadful and devastating deaths of children, the out-of-control pandemic of child sexual abuse images and videos, and the increasing understanding of the long-term costs of Adverse Childhood Experiences (ACEs), are we as a society beginning to realise that the free-for-all must come to an end, and that we need to take technical and legal steps to ensure that children, who are naïve in life, are treated as a special case, especially online. We need to address the view that it’s too difficult to eradicate what Father Zollner, President of the Vatican’s Centre for Child Protection, calls in this podcast the “evil of online child sex abuse”; the view that it is an acceptable price, a price we’re willing to pay, for all the wonderful things the World Wide Web allows us to do.

The current stance of the constituent technical parts of the online digital context, the ISPs, network providers, handset manufacturers and online service providers, of declaring themselves indemnified, results in having to educate young children about the worst aspects of human behaviour. It places responsibility for child safety onto the children themselves, which ultimately results in victim blaming: “If only you’d done as you were taught, then this wouldn’t have happened”. This cannot be right, and neither can the resultant damage done to children’s mental health.

Nor can it be acceptable to design and implement new services and technologies without first thinking about the impact they will have on children, given that there are no effective age verification or age estimation barriers to children being on social media platforms.

The example given here, DNS over HTTPS, may provide a marginal gain for privacy by encrypting our DNS lookups, but this is set against a range of other issues, as discussed, all of which have yet to be resolved and which may result in less safety overall for children. It’s not fair to children to implement a new feature that removes even an “accidental” safety feature without replacing it with something else.

Back to Fred Langford’s podcast interview: “The international standards bodies for technology don’t have a policy consideration… So when RFC8484 was developed, nobody thought, “How is this going to impact children?” “How is this going to impact any of the efforts from terrorism?” “How is this going to impact on all the other myriad of vulnerabilities that happen online?”

And… it’s been a bit of an awakening for those bodies as well, because many of the techies that are involved in these conversations are raising this as a serious issue. They’re saying they don’t want to develop a standard that’s going to damage kids, but inadvertently, because of the process that’s been in place for a number of years, that’s exactly what they’re doing.

So the pressure’s really been put on the Internet Engineering Task Force, the IETF, which manages all of these RFCs, to actually do something about it and to change that policy consideration.”

Will you, as a network operator and a key component of the online digital context, demand standards from the likes of the IETF and others that help safeguard children online? Will you build into your network services design and technical working practices a child-centric, holistic, child-safety-led approach to help combat the scourge of online child sex abuse?

It’s time to act.


Originally published in the Institute of Telecoms Professionals’ quarterly peer-reviewed journal, The Journal, and reproduced here with kind permission.

All of the podcasts mentioned in this blog post can be found on Apple Podcasts, Podcast Republic, Spotify and other podcast distribution networks, or directly from here: https://safetonetfoundation.libsyn.com
