Blocking isn’t the solution
By SafeToNet Foundation
Changing attitudes to communications technology:
1876
“An amazing invention but who would ever want to use one?” – US President Rutherford B. Hayes, referring to the recently invented telephone and not seeing the point.
1882
“The Americans have need of the telephone, but we [the British] do not. We have plenty of messenger boys.” – William Preece, Engineer-in-Chief of the British Post Office, unable to see the benefits of network-based person-to-person communications.
1910
“We would never get away from it … It’s bad enough as it is, but with the wireless telephone one could be called up at the opera, in church, in our beds. Where could one be free from interruption?” – Michael Idvorsky Pupin, New York Times, Sunday Magazine Section, 6. A prescient opinion of a possible future world which, of course, has now materialised.
2016
In resolution A/HRC/32/L.20, the United Nations (UN) declared that “online freedom” is a “human right”, and one that must be protected. Online communication is now considered so vital that the UN regards access to it as a human right.
Blocks, filters and risks
There has been a huge change in internet usage over the last decade, driven by the dramatic twin rise of social media and smartphones. As a consequence there has been a rise in online predatory behaviour and cyber-abuse, which in turn has led to significantly more media coverage of these issues, heightened public concern and growing calls for something to be done.
There is a range of responses available, some more effective than others:
- Do nothing – leave things as they are and let market forces and young people figure things out themselves
- Blocking – screen time restrictions, a seemingly obvious technical solution
- Network-level filtering – another technical solution which is a form of blocking that attempts to prevent undesirable content from being seen
- Smart apps that allow young people to fulfil their online lives while protecting them from extreme risk
The “Do nothing” option hasn’t worked – indeed, it’s exactly this laissez-faire approach that has led us to where we are today: at the extreme, predators have a direct access vector to young people through fake profiles on social media and anonymous browsing tools.
“Parental Controls” (or “Social Media Blocking” or “Parental Blocking”) and internet content filters are understandable but knee-jerk reactions to the risks of children going online. There are risks associated with going online, and parents, and society in general, have an obligation to protect children from them, but counter-intuitively blocking doesn’t seem to be the answer.
Going online is a protected, UN-defined human right. And not all exposure to risk is harmful in and of itself, or results in harm. It can be argued that exposure to risk is a good thing because it builds resilience, in much the same way that vaccines improve the resilience of the body’s immune system.
Keeping children safe online is a complex topic bounded by technical, legal and moral issues:
An example of this is sexual communication with a child. This has been illegal in the UK since 2017, but before this law was passed it was technically possible, legally permissible but morally reprehensible for an adult to engage with a minor on the topic of sex (with a view to perpetrating abuse). It is now both morally reprehensible and illegal, though technically still possible: there have been over 3,500 prosecutions for this new offence in less than a year.
According to Ofcom’s 2017 Children & Parents’ Media Use & Attitudes report, “going online can expose children to unwanted experiences:
- 17% of 8-11s and 29% of 12-15s who go online say they have ever seen something online that they have found worrying or nasty;
- 45% of 12-15s who go online say they have seen hateful content online in the last year, an increase since 2016;
- One in ten 12-15s have seen something online or on their phone of a sexual nature that made them feel uncomfortable;
- 12% of 12-15s say they have been bullied on social media, equal to the number who say they have been bullied face to face.”
Set against this background, and bearing in mind lurid headlines in the press, it’s a comforting thought that if access to the internet is blocked, then so too is cyber-abuse, and all will be well.
But this approach is simplistic. It deprives young people of their human right to be connected online and deprives them too of all the positives and opportunities that come with being online. The large-scale EU Kids Online report, based on a sample size of some 25,000 children, has this to say on the matter:
“Since risk increases as use increases, it might seem simple to call for restrictions on children’s use of the internet. But online opportunities and digital literacy also increase with use, so there is no simple solution. Rather, ways must be found to manage risk without unduly restricting opportunities.
As with riding a bike or crossing the road, everyday activities online carry a risk of harm, but this harm is far from inevitable – indeed, it is fairly rare. The EU Kids Online survey provides clear empirical support for policy efforts both to manage children’s encounters online so as to reduce harm (though not necessarily to reduce risk). This should be achieved both by designing the online environment to build in safety considerations and to increase children’s digital skills, coping and resilience.
Society has a responsibility to provide guidance and support for children facing online risks. But it is also important to support children’s capacity to cope themselves, thereby building resilience for digital citizens”.
So risk doesn’t necessarily result in harm, not all risk is harmful, and exposure to risk, together with developing strategies for dealing with it, is a good thing.
According to Ofcom’s 2017 Children & Parents’ Media Use & Attitudes report, “the majority of parents whose child goes online continue to agree that they trust their child to use the internet safely, and feel they [the parents] know enough to help their child to manage online risks. They are also more likely to agree than to disagree that the benefits of the internet outweigh the risks, although agreement has decreased since 2016 among parents of both 3-4s and 5-15s.”
But what are they all doing online that’s so important?
Before deciding, or even discussing, how to curtail children’s online activities, it’s important to understand what they use “the internet”, and specifically social media, for. The short answer is “everything”. As SafeToNet’s Young CEO, 15-year-old Emma, said:
“If we’re not online, we don’t exist.”
The SafeToNet Foundation has interviewed young people aged 12 to 16 about their social media usage; the main apps they use are Snapchat, Instagram, WhatsApp and Facebook. In addition, they use some (from their perspective) minority ones such as Houseparty, Musical.ly and Pinterest. They use these platforms for social chatter, discussions about homework, and the world in general, such as politics and following the news.
It’s not all just posting selfies.
Unsurprisingly this correlates with Ofcom’s research, summarised in the following graph:
Social media sites or apps used by children 12-15, 2010, 2013, 2016 and 2017 [Source: Ofcom]
What’s particularly striking here is the change in use of Facebook and Snapchat. Famously, Facebook started as a closed community for university students, and then high-school children. Once adults started to use it, it lost its cachet and young people began to migrate away from it to Snapchat. In the words of one young person [girl, 14] in the SafeToNet Foundation’s research:
“I’m not on Facebook because my Mum’s there.”
Ofcom also investigated what young people use these platforms for, and unsurprisingly their results are a close match for SafeToNet Foundation’s results:
- Interest in news rises to almost all (96%) 12-15s once they are asked to choose from a list of 11 types of news, on topics including music, celebrities, sports and ‘serious things going on in the UK’
- Four in ten (37%) 12-15s say that they actively look for news, rather than just coming across it, and a similar proportion (41%) say they either look for, or get, updates about any type of news they are interested in more often than weekly
- The most popular sources of news among 12-15s who are interested in any type of news are TV (64%), social media (56%) and friends or family (48%)
- Around one in five say they read paper copies of newspapers (17%) or magazines (14%) for news
- Apart from editing photos, making pictures or making videos, relatively few children say they have done any of the other online creative activities that we asked about
- Online civic participation (signing petitions, sharing news stories on social media, or writing comments or talking online about the news) is evident among a quarter (26%) of 12-15s and one in 20 (4%) of 8-11s
Other activities young people undertake online are detailed in this table (Ofcom):
Online creative activities ever undertaken, by age: 2017 [Source: Ofcom]
In their report “The New Normal: Parents, Teens, and Mobile Devices in the United Kingdom”[1], the authors found that:
Parents say …
Family travel is the activity that parents are most likely to report being helped by mobile devices. Almost half of parents (48 percent) say their own use of mobile devices helps family travel, and almost a third (31 percent) say their teens’ mobile devices help.
Teens say …
Teens agree that device use helps family travel and outings. Teens are less likely than their parents to say that any family activities are negatively affected by mobile devices.
Blocking, by definition, prevents young people from participating in any of the above activities, social and creative activities that are core to their lives today. It prevents them from expressing themselves and developing their sense of self, and it prevents them from finding like-minded people with whom they can more readily identify than with their real-world peers.
Looking in the mirror – what about parents?
We know that young people imitate the behaviour of the adults in their lives. Adults may well feel that the children they have responsibility for spend too much time online – but how much time do parents or caregivers spend online?
According to a report published by Psychology Today[2], children feel unimportant and have to compete with smartphones for their parents’ or caregivers’ attention.
In a study of six thousand eight- to thirteen-year-old children, 32% reported feeling “unimportant” when their parents use their cell phones during meals, conversations, or other family times. The children reported competing with technology for their parents’ attention. Over half of the children in the study said their parents spend too much time on their phones.
This correlates with data from USC Annenberg and Common Sense Media, the US’s best-known non-profit organisation for children’s media research, advocacy and parenting advice. Their recent research gives an insight into the ‘new normal’ for UK families in the digital age. Its survey of UK 13- to 17-year-olds and their parents, focusing on ‘screen time’ and the thorny question of ‘addiction’, shows that:
- Nearly half of parents (46%) describe themselves as ‘addicted’ to their mobile device, while a third of teens (35%) think their parents are ‘addicted’
- Half of parents (51%) say they get distracted by mobile devices at least once a day
- More parents in the UK feel the need to respond immediately to texts, messages and other notifications (57% vs. 48% in the US and 36% in Japan)
- Screen time conflicts are common in today’s families with children – ranking as the third most common source of conflict for parents after chores/helping around the house and bedtime/sleep, and ranking fourth for teens (after chores, sleep and homework conflicts)
Yet 86% of parents say their teen’s use of mobile devices has not harmed or has even helped their relationship; and 97% of teens say the same of their parents’ mobile use. Further, most UK families do not think mobile devices disrupt meal times, most parents allow their teens their privacy online, and most are optimistic about the benefits.
Relatedly, another Common Sense Media survey recently reported in the US that for most teens, their use of mobile devices reassures, connects and reduces tension more than the opposite. And Ofcom, which prefers the term ‘dependency’ to ‘addiction’[3], finds from its annual UK survey of parents and children that most parents say their child maintains a good balance between screen-based and other activities.
Children’s attitudes to screen time
Ofcom’s 2017 Children & Parents’ Media Use & Attitudes report shows what young people think of their use of screen time:
- As in 2016, the majority of children aged 12-15 think they have a good balance between screen time and doing other things
- As in 2016, around half of 12-15s disagree that they find it hard to control their screen time (53%) while slightly more than a quarter agree (27%)
- When asked about the balance between screen time and doing other things, two-thirds of 12-15s (67%) believe they have a good balance; this is also unchanged since last year.
Interesting, isn’t it?
Does network-level filtering work?
Research published in July 2018 in Cyberpsychology, Behavior, and Social Networking, Vol. 21, No. 7, contains some interesting, counterintuitive and perhaps disappointing conclusions about network-level filtering.
This research says: “In the context of Internet filtering, the number needed to filter (or NNF) can be calculated to indicate the number of households that must use filtering tools, for one additional child not to be exposed to online sexual content. A low NNF, between 1 and 10, is desirable as it minimizes the costs associated with treating many individuals to benefit just one.
Our results indicated that between seventeen and seventy-seven households would need to be filtered to prevent one young adolescent from encountering online sexual material. A protective effect lower than we would consider practically significant.
Following our analytic approach, we did not find confirmatory evidence that filters were effective for seeing nudity, private parts, people having sex, or any of the four types in line with our preregistered hypotheses. In fact, contrary to our predictions we found evidence in the direction opposite to what we hypothesized…: households reporting using filters were more, not less, likely to have an adolescent who reported having seen violent pornography in the past 6 months.
Our confirmatory analyses, based on 2018 data from the United Kingdom, provided a more rigorous test of filtering effects. This delivered conclusive evidence that filters were not effective for protecting young people from online sexual material”.
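To make the “number needed to filter” figure more concrete: like the “number needed to treat” measure it is modelled on, the NNF is the reciprocal of the absolute difference in exposure risk between unfiltered and filtered households. Here is a minimal sketch of that arithmetic; the exposure rates below are made-up illustrative figures, not the study’s data:

```python
def number_needed_to_filter(risk_unfiltered: float, risk_filtered: float) -> float:
    """NNF: the number of households that must use filtering for one additional
    child to avoid exposure, i.e. the reciprocal of the absolute risk reduction."""
    risk_reduction = risk_unfiltered - risk_filtered
    if risk_reduction <= 0:
        raise ValueError("No protective effect, so the NNF is undefined")
    return 1.0 / risk_reduction

# Illustrative (made-up) figures: if 20% of children in unfiltered households
# encounter sexual material online versus 18% in filtered households, then
# fifty households must be filtered to spare one additional child.
print(round(number_needed_to_filter(0.20, 0.18), 1))  # -> 50.0
```

On this reading, the study’s reported range of 17 to 77 households corresponds to an absolute reduction in exposure risk of only roughly one to six percentage points.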
What’s the problem with filtering?
This research into the effectiveness of filtering needs to be put into context. Filtering can in fact be an efficacious approach, as the work of the Internet Watch Foundation (IWF), a UK-based charity, shows. The IWF has been tremendously successful in reducing the amount of illegal pornography hosted in the UK from 18% to less than 1%, by identifying, classifying and distributing URL lists to ISPs and other filtering organisations.
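For readers unfamiliar with how list-based filtering works in principle, here is a minimal sketch of the kind of check an ISP-side filter might perform against a distributed URL list. The hostnames, the `is_blocked` helper and the list format are purely hypothetical; this is not the IWF’s actual distribution format or any ISP’s implementation:

```python
from urllib.parse import urlsplit

# Hypothetical blocklist entries, standing in for a URL list distributed to ISPs.
# In practice such lists are compiled, classified and kept up to date by
# organisations such as the IWF; these hostnames are made up for illustration.
BLOCKED_URLS = {
    "badhost.example/abuse/page1",
    "anotherhost.example/illegal/material",
}

def is_blocked(requested_url: str) -> bool:
    """Normalise a requested URL (lower-case, drop scheme and trailing slash)
    and check it against the blocklist."""
    parts = urlsplit(requested_url.lower())
    normalised = parts.netloc + parts.path.rstrip("/")
    return normalised in BLOCKED_URLS

print(is_blocked("https://BadHost.example/abuse/page1/"))    # True
print(is_blocked("https://safehost.example/homework-help"))  # False
```

Even a check this simple shows why the approach works well for known, centrally catalogued URLs: it can only ever block what has already been identified and classified.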
But the problem lies within the distributed, non-centralised and unmanaged structure of the internet. While this is its greatest strength, it is paradoxically its greatest weakness. Anyone anywhere can post anything online.
Other countries are still on the journey towards accepting that there is a problem, and recognising the problem is the first step towards solving it.
Distribution of child pornography and impact of IWF on UK [Source: IWF]
Conclusions
The “Do Nothing” approach has become ethically and morally unacceptable. Laws are being passed which make certain activities illegal, but in and of themselves they will not prevent cyber-abuse or predatory behaviour. Current technology-based approaches aren’t the solution: network-level filtering has been shown not to work, and simple blocking disenfranchises young people from all the positive uses of the internet and social media, and would seem to contravene their UN-defined human rights.
There needs to be another way, a method that allows our young people to benefit from the many positive aspects of being online while keeping them safe enough from risk that they develop resilience. Perhaps the answer lies with a different technology? Instagram hit the headlines this week with an announcement that it will use artificial intelligence in its efforts to safeguard children from the excesses of cyber-abuse.
While this is to be welcomed, Instagram’s solution reaches only Instagram users, and as our research shows, young people use much more than just Instagram. And they are fickle customers too – previously Facebook was the cool place to be.
Perhaps what’s needed is a different approach, a “smart app”: an AI-based solution that takes advantage of the power of the smartphone and of cloud-based AI and machine learning technologies; one that lies not within the social networks but with a truly independent provider, and that works across all social media platforms and all mobile device platforms.
And then maybe, just maybe, children and young people will be able to take full advantage of their human right to be online, without fear of extortion, abuse, or intimidation.
——
[1] USC Annenberg & Common Sense Media, in partnership with Sonia Livingstone, professor of social psychology in the department of media and communications at the London School of Economics and Political Science and editor of Parenting for a Digital Future
[2] https://www.psychologytoday.com/us/blog/going-beyond-intelligence/201711/turn-smartphone-mom-and-dad
[3] In this case, “addiction” is not a clinical diagnosis but a perception about the presence of mobile devices in their lives and their impact on everyday family life [source: The New Normal: Parents, Teens, and Mobile Devices in the United Kingdom, USC Annenberg & Common Sense Media in partnership with Sonia Livingstone, professor of social psychology in the department of media and communications at the London School of Economics and Political Science and editor of Parenting for a Digital Future].