Safeguarding podcast – Digital Ghosts and Blue Whales with SWGfL
By Neil Fairbrother
In this safeguarding podcast, David Wright, Director of the UK Safer Internet Centre at SWGfL, discusses digital ghost stories, Brexit, the ICO’s Age Appropriate Design Code, Age Verification and the “digital age of consent”, the Online Harms white paper, how safeguarding in the online digital context can be better integrated into the day-to-day operational activities of schools, and Safer Internet Day 2020.
http://traffic.libsyn.com/safetonetfoundation/SafeToNet_Foundation_podcast_-_Digital_Ghost_Stories_and_Blue_Whales_with_SWGfL.mp3
Below is a transcript, lightly edited for reader clarity, for those that can’t use podcasts or that simply prefer a good read.
Neil Fairbrother
Welcome to another edition of the SafeToNet Foundation’s safeguarding podcast where we talk about all things to do with safeguarding children in the online digital context.
Education, education, education, as someone once said. Schools are very much on the frontline of child safeguarding, but as we explore the online digital context, it’s clear that this is far from a straightforward and easy space to understand. So where do schools go for expert advice and guidance? One place might be SWGfL, the South West Grid for Learning, and today’s guest will help guide us through what SWGfL is and what it does. David Wright, welcome to the podcast.
David Wright
Hi Neil. It’s great to be here and thank you for the invitation.
Neil Fairbrother
Can you give us a brief resumé please of who you are, your background and who and what is SWGfL? What do you do?
David Wright
Yes. So, my name is David Wright and I am Director of the UK Safer Internet Centre at SWGfL. SWGfL is a charity, we’re nearly 20 years old, and we were initially established to connect schools to the Internet, the “schools’ ISP”. Fundamentally we come from being an educational technology charity, although very quickly that role became one around safeguarding children online. Going back to 2003, as we were connecting schools, and bearing in mind this was just six months or so after Soham and the tragic murders of Holly Wells and Jessica Chapman, one of our secondary schools in Weston-super-Mare suspected its caretaker of accessing child abuse content. That was a catalytic moment for us: we needed to do something about this.
Bear in mind that in 2003, clearly, technology was hugely different to what it is today. We were essentially taking out dial-up and ISDN lines in schools at the time and putting in a whole two megabit per second broadband connection, which was revolutionary at the time but sounds unbelievably woeful by today’s standards. Anyway, that work very quickly became one around online safety. And that is essentially our focus, and indeed as a charity our primary purpose: the health and wellbeing of everybody, but particularly children, in the digital space.
So we produce resources, primarily for schools and other agencies working with children. That’s our primary focus. In 2010, we became the UK Safer Internet Centre alongside our partners Childnet and the Internet Watch Foundation. In that context we’re one of 32 European National Safer Internet Centres, with similar obligations and responsibilities: managing child abuse content online; a helpline, which we contribute to the Centre, that supports the children’s workforce across the UK around online safety issues; and awareness raising, so things like Safer Internet Day, which in 2020 will be on the 11th of February.
Neil Fairbrother
Indeed. Now you’ve mentioned a couple of other organizations there, and you seem to be connected into a whole ecosystem of other organizations, and quite a complex one at that. First of all there’s SWGfL, and as you say there’s the UK Safer Internet Centre, which is itself a partnership with Childnet and the IWF, and they seem to be funded by the Connecting Europe Facility, and they in turn are connected to something called INEA. And then there’s the Insafe network, which connects to BIK, or Better Internet for Kids. It seems to be almost like an onion, there are so many layers to this. Could you clarify how all of this connects together, David?
David Wright
So you’re right, the UK Safer Internet Centre is a partnership of the three charities and we are co-funded by the European Commission in that role, as our 31 sister Centres across Europe are as well. Of the agencies you’re referring to, Better Internet for Kids is the current title of the European Commission’s funding programme, and INEA is the agency that manages the funding on behalf of the European Commission, so it’s part of the inner workings of the Commission. But fundamentally our activities are co-funded by Europe, as they are across all 32 Centres.
Neil Fairbrother
Okay. Now obviously at the moment we’re dealing with a small issue called Brexit, and so I’m duty bound to ask you if there’s a dependency on EU funding or funding from the European Commission at least: is there an impact of Brexit for you?
David Wright
So you’re absolutely right, and Brexit is looming large for all of us. Clearly our European Commission and European Union funding is dependent on eligibility, and there’s a whole series of unknowns. I would point out that there are 32 Centres and currently 28 member states, so the network is a little broader than merely the European Union. But that’s going to come down to the agreements between different countries and the European Commission about membership.
So there’s a lot of conversation going on with UK government and various different departments around this, whether that’s routed via the European Union or indeed around the public service and public good that we provide. Many of the activities, resources and services, the Helpline for example, are there to support the children’s workforce around online safety issues. So there are lots of unknowns, although I hope we get some clarity soon around what’s going to happen in the future.
Neil Fairbrother
Well, yes, indeed, don’t we all? Now there’s a very interesting blog entry on your website, which explores the Information Commissioner’s Office’s, or the ICO’s, Age Appropriate Design Code. What is your view of this code?
David Wright
So this was a particular, and actually unique, part of GDPR: an obligation within the UK that the ICO was given, and I think we’re great advocates of this particular aspect. Looking at design codes particularly for children, at the design of online media and online services for children, is particularly healthy. There are all sorts of issues associated with ages online, and so some focus, some attention, on that is a welcome and healthy thing.
Neil Fairbrother
You mentioned the issues around age there. Can you have an effective Age Appropriate Design Code without an effective Age Verification or Age Estimation process?
David Wright
So it’s a question that’s been asked going back decades. We’ve always lived with this minimum age of 13, which is clearly a consequence of US legislation, the COPPA legislation…
Neil Fairbrother
Which is itself up for review at the moment.
David Wright
Indeed. And when we’re talking to children, and to adults as well, particularly parents, we always try both to highlight this age of 13 but also, I suppose, to break some myths.
So it is not illegal for anyone under 13 to have an account, for example, on Facebook or WhatsApp or Snapchat, although for WhatsApp the minimum is 16. It’s merely a violation of their terms and conditions. The offense under COPPA arises when somebody is reported for being under 13 and the provider doesn’t take action; the provider is then the one committing the offense. So we try to provide some clarity around what this actually means. GDPR introduced what is essentially a “digital age of consent”, which ranges between 13 and 16 across Europe, depending on the decisions taken by each member state.
For the UK it’s 13, so anyone aged 13 or over can provide their own [digital] consent and verification, whereas in other countries, for example Germany, it’s 16. That’s why we saw the change WhatsApp introduced at the beginning of last year, increasing its minimum age requirement from 13 to 16. So it is a complicated picture, and any form of clarity, particularly the focus and support from the ICO, is always very welcome.
Neil Fairbrother
The digital age of consent is an interesting one because, first of all, the supply of various goods is age-restricted for children as they go through their journey from childhood to adulthood. And the IWF, amongst others, have very strong evidence showing that 13 seems to be the peak age for exploitation for sexual purposes. And yet 13 is the minimum age, the starting age, to be on social media in most countries. Should we not simply follow Germany’s example and raise the minimum age to 16?
David Wright
So there are a couple of points in there, I think. On raising the age, I think it’s still quite early to determine how that would be received and how it would be implemented. Has it stopped, for example, 14 and 15 year olds using particular services? I think it’s too early to tell.
Neil Fairbrother
Well, this is where you need a robust, non-porous, or as non-porous as possible, Age Verification or Age Estimation process.
David Wright
Indeed, and we were due the introduction of Age Verification for under/over 18s on July 15th, but due to some technicalities it’s been delayed by six months. This is the introduction of Age Verification to access legal adult pornography from commercial adult content hosts, that is, sites generating revenue by providing pornography. So I think that will be an issue when it gets implemented, and the government are clear that the delay is merely a delay rather than a change in policy. We do still expect it to be implemented in the early part of 2020. That, I think, will give us some early indications of the effectiveness of Age Verification mechanisms, particularly where children are concerned.
If it’s proved to be effective, I would probably expect the introduction of similar mechanisms around this age of 13, or at least the digital age of consent, too. We wait to see. Some of the numbers that the BBFC and DCMS are talking about around pornography consumption, particularly amongst children, are pretty significant, so we’ve already got a benchmark from that sort of research. We will therefore be able to measure the effectiveness of any Age Verification introduced as a consequence of the Digital Economy Act. So, early part of 2020.
Neil Fairbrother
Okay. So this Age Verification may well be extended to social media, depending on the results of the initial trial restricting access to legal adult pornography.
David Wright
Yes, I would expect that. That’s a personal opinion, but we would expect to see some of those mechanisms being applied once you’ve got the precedent of Age Verification, or age checking, online. Assuming the same mechanisms would be available for under/over 13s, one might assume it could be another thing that goes in to help protect particularly young children. Currently we see huge popularity of apps such as TikTok, particularly amongst under-10 year olds, so evidently Age Verification mechanisms, or where applicable verifiable parental consent mechanisms, may be suitable as well.
Neil Fairbrother
Okay. Now you’ve got a very interesting report on your site written by two researchers: Andy Phippen, Professor of Social Responsibility at the University of Plymouth, and Emma Bond, Professor of Socio-technical Research at the University of Suffolk. The report is called “Digital Ghost Stories: impact, risks and reasons”. What are digital ghost stories?
David Wright
So this, Neil, is some research that Andy and Emma conducted after the latest instance of “online challenges”, which we saw during the last week of February and early part of March [2019]. So that listeners can recognize it, I’ll use the term once and once only: the instance of Momo.
It was the latest round of warnings issued about online challenges or suicide challenges. People may recognize as well the “Blue Whale” challenge back in 2017, and in 2018 we saw the “Doki Doki Literature Club” warnings. These were all warnings issued about scary online content or, particularly, suicide content: content advocating and apparently causing children to take their own lives. We, in an alliance with the NSPCC, the Samaritans, Internet Matters and a lot of other NGOs, have a policy of not naming websites in these particular instances, because we think it is counterproductive.
Neil Fairbrother
In what way is it counterproductive?
David Wright
If you were to say to a teenager, “You see that big red button on that wall, whatever you do, do not press that big red button because bad things will happen”, all you do is raise curiosity, raise interest and raise focus on exactly that sort of thing, and that’s why we don’t name websites. The Internet is full of not nice content. I say full; it has a lot of not nice content. It doesn’t really need us to direct children towards it.
Now in all these cases, we undertake research to validate authenticity. For all three of these instances, at the time we undertook research across the European network together with partners, and in no case have we found any form of evidence that children are actually taking their own lives as a result.
A lot of this is, I think, what has been dubbed fake news, or fabricated stories. In the most recent case in particular, we saw adults evidently not understanding how media works and hysterically sharing content that essentially creates a moral panic. The research that Andy and Emma did looked at that particular 40-hour “eye of the storm”, if you like, taking search term data from two and a half thousand schools across the UK, and concluded that searches for, in this case, the site in question went up 45,000%, which to us is evidence that we merely drive people towards the very content we have concerns about. And that’s why we think it’s particularly counterproductive.
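[As a rough worked figure for scale, assuming the percentage is measured against the pre-panic baseline: a rise of p% takes a baseline search volume s0 to s0 × (1 + p/100), so a 45,000% increase means searches reached s0 × (1 + 450) = 451 × s0, roughly 450 times the level seen before the warnings circulated.]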
And I’m just going to come back as well to adults and to general media understanding, and the bit that is quite telling to us: evidently, people don’t apply any form of thought. Again, I’m generalizing, but typically on a media website you’ll see a headline, a couple of bullet points, and then directly underneath you have all the sharing buttons. If you don’t appreciate what’s at play here, those headlines elicit an emotional reaction, whether they’re true or not; you don’t have the opportunity, or perhaps you don’t want the opportunity, to investigate further and consider the authenticity of what’s being presented to you. And so you simply press “share”.
Neil Fairbrother
One of the conclusions the authors reach is that the Momo event “…raises the urgent need for more effective, critical digital literacy training for those in the Children’s workforce with an appreciation that some media and organizations are looking for broadcast popularity and social media recognition rather than putting children’s safeguarding as their number one priority.” I’ve got a headline here: “The new Momo death, teen death linked to sick WHATSAPP suicide game”, click here. And that’s in the Daily Star. So they’re almost treating these things like clickbait.
David Wright
Neil, that’s exactly what we’re suggesting. And as I said, I would very much concur with Andy and Emma’s conclusion there, primarily around adults… [For example] we heard stories about parents, and I’m shocked by this, who sat their children down and showed them the Momo character, or the alleged character, which is actually a Japanese sculpture taken out of context, and essentially traumatized them. The reactions included not being able to sleep and having nightmares.
So our primary concern, and again I come back to our policy of not naming these sites when these events happen, is for society’s most vulnerable children. Given a robust investigation and the conclusion that these are fake or fabricated stories, and using the evidence that Andy and Emma concluded from, we think that if we do drive the population towards these sites, it’s the small minority of children who perhaps don’t have the resilience who might react to that sort of content in a very different way.
That’s our uppermost concern. We saw posters doing the rounds, created for schools to share, and that swirled and further fuelled the hysteria. For people who don’t share our view and who do share and signpost this sort of content, there can’t be a genuine concern there for children’s welfare, particularly vulnerable children’s welfare.
Neil Fairbrother
So the journalists and media need to do their research and apply their professional critical thinking to this topic before going off and writing scaremongering, sensationalist headlines.
David Wright
Yes, and I would just add to that: schools and other agencies, law enforcement and CAMHS services should also think very carefully about issuing warnings, going through the same sorts of validation and authenticity checks before sharing and issuing them.
Neil Fairbrother
You mentioned traumatizing children. So the adults here are parents, concerned parents who believe they’re doing the right thing but end up traumatizing their children, impacting the child’s wellbeing. Which is a nice segue into my next question, because you have lots of reports on your website, and this one is all about young people, Internet use and wellbeing, and technology in the home. What is meant by wellbeing in the context of technology and digital?
David Wright
So wellbeing as a subject has very much emerged more recently, I think. If we go back 20 years, the subject was all around child protection online, children being exploited, which is clearly a harmful thing, although that still affects only a small percentage of children, I think. The more recent emergence of wellbeing is something many parents may well recognize through the amount of time spent online, or the moods that are perhaps amplified or exacerbated by a compulsion to be online. So it’s often associated with screen time.
We saw the Chief Medical Officers publish guidance in February for parents around managing screen-based activities. They made four primary recommendations for parents: that families should have screen-free meal times; that children shouldn’t have access to screens in the hour before bedtime; that children shouldn’t have screens in the bedroom overnight; and that you should take a break every two hours. I think these four things are primarily aimed at highlighting exactly this subject of wellbeing, of children’s wellbeing online.
It’s an emerging issue and one that affects the majority of children, so it’s about consideration of their health and wellbeing online, and we’ve seen, for example, Instagram and Apple create screen-time reports that try to highlight to you, as a user, how much time you’re spending online.
Another aspect to it may well be around body image and the kind of pressure to conform, as a teenager, from what you see online. So again, critical evaluation skills around [images]: on average, children and parents take 12 images, selfies, before they’ll be happy with one, and then there’s any editing or amending of that image… improving, I say improving, [but really] conforming to what is considered a more attractive image, perhaps. It all breeds pressure that will have an impact, particularly on teenagers’ wellbeing.
Neil Fairbrother
Okay. Now there’s a curious result in this report, or at least it’s curious to my mind, in that the younger the children are, the fewer parental controls are put in place, which seems counterintuitive. I’m looking at it now. There’s a graph showing “parental controls are set up” against “parental controls are not set up”, and that ratio shifts: for the under-fours, the four to six year olds and the seven to nine year olds, there are fewer parental controls set up than there are for the 13 to 18 year olds, which seems quite counterintuitive. Why is this?
David Wright
That is an interesting question. This is actually a series of different reports. When we do parents’ sessions in schools, and we do hundreds of these each year, we’ll invite the school to ask its children to undertake this kind of survey online. We gather some information that we can then play back to their parents, essentially.
And so there are, I think, three reports in this particular series covering various different aspects, one of which is around parental controls, where as the months have gone by we now have some 25,000 different responses to this particular survey. So its origins are exactly that: to inform parents’ sessions.
But then we step back and we see a national picture, some national trends, and trends over time as well. It’s difficult to determine why that is the case. All we’re doing here is highlighting what the data is actually saying, what Andy is concluding from it. I guess we issue these reports to highlight exactly those sorts of issues and then prompt the question, “Why might that be?”
Neil Fairbrother
It’d be interesting to explore that with Andy, perhaps. Now, parental controls and filters themselves don’t really address some of the key issues that children are exposed to. The top three concerns that parents have, according to this report, are bullying, grooming and access to pornography, and certainly in terms of bullying and grooming, filters don’t do a lot to help. Grooming can be for sexual exploitation or for radicalisation, and again, filters don’t really address grooming either. So what’s the solution for those?
David Wright
That’s another great question, and I think very squarely the answer will be around education, around conversation and around resilience. We cannot, indeed we should not, remove risk. Children need to understand, appreciate and be sensitive to risk. They will take risks, and that’s a positive thing.
Clearly our job as adults, our job as parents, our job as teachers, our job as whoever we are, whether we’re working with children, our job is to be sensitive and to understand and to prevent that risk migrating to harm. That’s the job that we have.
And clearly grooming and exploitation are harmful, and we should be preventing that through various different conversations, providing children with the ability to go and tell somebody, to report particular issues. That means reporting to whoever they feel most comfortable with, whether that’s their parents, another adult in their lives, the provider they’re using and, not least, their friends as well.
We always talk about “children’s horizons”. When we contributed towards various national policy and curriculum standards, for very young children, and here I’m talking about under-fives or perhaps five to seven year olds, what we would refer to in England as Key Stage One, infant age children, it’s all around: if there’s anything that makes you feel concerned, then you go and tell somebody. It’s an immediate reaction. It’s as simple as that. Anything and everything, just go and tell somebody. An adult, in that case.
When they get a bit older, so now we’re talking about eight to eleven year olds, junior age children, it’s all around their personal safety. So perhaps having a basic understanding of personal data, of who and where you’re communicating with, and again, the response would be to tell somebody. It’s having an awareness of yourself and your own situation.
By the time we get to 11 to 15 year olds, you need to have much more awareness not just of yourself but also of your friends; your horizons are broadening as you develop. So it’s not just yourself anymore, but yourself and your immediate friends, and by the time you get to 16 to 18 year olds, it’s you and your community. You shouldn’t be looking out just for yourself and your immediate friends, but also for your community, whatever that community means.
So understanding, appreciation and resilience are what we’re talking about, reflecting who you should be responsible for as a child and who you should look to. And we are great advocates, for example, of Peer Mentoring programs, through further research: if a child, particularly a teenager, were involved in what the media would know as a sexting incident, 74% of children would only ever talk to their friends. So at that particular age, the reliance on your community is a really important thing, and Peer Mentoring programs are great. We’re great advocates.
Neil Fairbrother
Okay. Now, we’ve been talking about harms, online harms in particular, and we now have a new Prime Minister, and typically new Prime Ministers usher in new policies, or different policy emphases at least. But let’s assume, as I think we said earlier, that the Government’s general approach will continue to be to make the UK the safest place to be online, and this means we can’t ignore the Online Harms white paper that has recently concluded its consultation period. One of the major proposals in the white paper is to create a statutory Duty of Care to make companies, such as social media companies, more responsible for the safety of their users and to tackle the harm caused by content or activity on their services. Is this statutory Duty of Care something that SWGfL agrees with?
David Wright
In essence, Neil, yes, we do. We do agree with a statutory Duty of Care. In actual fact, it’s something we’ve lived with in various different guises across the country for, what, 15 years? The catalyst was the tragic murders in Soham; in the wake of those we saw the introduction of the Children Act 2004, which placed a statutory Duty of Care on local authorities in exactly the same way. Now, while we’re an independent charity, we happen to be owned by local authorities, and each local authority in England and Wales has that statutory Duty of Care for the safety and wellbeing of children within its jurisdiction. Clearly law enforcement have a duty of care to the people within their jurisdiction, their particular areas, as well.
So as a term, I think it is very much one that we’re familiar with, and one we have precedents for that we can look to around what it means and how it’s managed.
Parents have a duty of care for their children as well, even if it might not be written in any form of legislation, short of negligence towards children, which is clearly an offense. So why shouldn’t we have this? If people are providing services that children are using, then shouldn’t they be subject to that as well?
The question that then comes is one of scope: who would be in scope of the statutory Duty of Care, and how might it be discharged? Those, I think, are the questions that ensue, and that the white paper aims to get to.
Neil Fairbrother
Yes. Because the white paper seems to treat everyone the same. What I mean by that is that children are a special case because they are children, but the Duty of Care that is proposed is broad brush; it applies to everyone. You just mentioned, I think, the Children Act, which imposes a Duty of Care as a result of the Soham tragedy. But in UK law, according to Graham Smith on his Cyberleagle blog anyway, “…private individuals and bodies generally owe no duty of care towards individuals to prevent them from being harmed by the conduct of a third party”. He’s talking about adults, not children; the Children Act is by definition talking about children. But the white paper doesn’t seem to treat children separately. It seems to lump everyone together. So when it comes to online, should children be treated as a special case? Should they be treated as a discrete group online? And if so, how could that be done?
David Wright
Okay, so there are lots of questions in that as well. I do think everybody has the right, or an entitlement, when using services to use them in a safe manner. For example, on behalf of the Home Office we operate a helpline that supports victims of what is dubbed revenge porn. By definition these are adults, adults who have had their intimate images shared without their consent, which again is an offense within England and Wales, introduced in April 2015. I think everybody should benefit from technology but should be free from harm. Clearly children are a different case because children have unique vulnerabilities. Children are children; children will take risks.
Children are programmed to take risks but need more protection and more support to prevent those risks migrating to harm. The same can be said of adults in some particular cases as well: adults who perhaps have learning difficulties or who are uniquely vulnerable in one form or another, and who might not be able to prevent that risk migrating to harm. They may not be aware of that harm, but everyone, I think, has an entitlement to be free from harm.
Neil Fairbrother
Okay. We’re running out of time, so very quickly, one last question if I may. Safeguarding in the online digital context is an arcane and dynamic space at the intersection of legal, technical and ethical or cultural issues. And we started the podcast with the “Blairism”: education, education, education. Now, SWGfL does provide a lot of information to teachers and schools, but other subject matter experts say that this topic isn’t institutionalized into the curriculum or into the operational aspects of schools. Do you agree with that? And if so, what could be done to make it so?
David Wright
We have seen a dramatic rise in the prominence and priority of online safety in recent years. SWGfL has been established for 20 years and has worked in this space for over 15, and particularly since 2013 we’ve seen a rising tide of priority and prominence around this subject. No day goes by without one issue or another appearing in various different guises, and that’s a healthy thing. Is it systematically embedded yet? No, it’s not, although we are making great strides. In England, from this September, online safety or internet safety is being introduced into the SRE curriculum, and it will become mandatory for all schools from September 2020.
We see it in the Welsh Government’s Digital Competence Framework (DCF), the introduction of digital competencies around online safety in Wales. Scotland too is making great efforts, with the Scottish Government’s action plan introducing a whole range of 28 different actions around online safety.
So it’s clearly becoming a Government priority, and I applaud that; we welcome it, we support it. But we do have quite a long way to go. Just take the data from 360 degree safe: each year we publish an annual assessment report of schools. I say we; Andy compiles the data from the fourteen and a half thousand schools using 360 degree safe across the UK. It’s a self-review tool, and it highlights what schools are strong at and what they’re weak at.
We know that schools are strong on filtering and policy. We know that, consistently, the weakest aspect is staff training. The data suggests that schools spend more time training parents than they do their own staff, despite it being a statutory obligation in England since September 2016 for schools to include online safety within safeguarding training. There’s a big gap there; at least, that’s what the data and the evidence are all pointing at.
So we do have quite a long way to go, although I do very much welcome the priority, evidenced merely by the fact that we have an Online Harms white paper. The Government is clearly taking this seriously. And from an education perspective, you say “education, education, education”; that’s where we need to focus, so we can create children who can benefit from technology. We now have this entitlement to be free from harm.
Neil Fairbrother
David, I think we’re going to have to leave it there. Thank you so much; it’s something I could talk to you about all day. This, I think, is absolutely fascinating. So thank you so, so much for making so much time available.
David Wright
It is my pleasure. And again, thank you. Thank you for the invitation to be able to talk; I could talk about this for quite a long time.
Neil Fairbrother
And one other thing, Safer Internet Day. When did you say it was going to be?
David Wright
Safer Internet Day 2020 is on the 11th of February. The title and the theme will be exactly the same, but I would very much encourage everybody to go to saferinternet.org.uk, the UK Safer Internet Centre website, and put the date in your calendar. We’re going to release content in terms of education packs and conversation starters, and we would encourage parents and other people to sign up as a supporter.
Neil Fairbrother
Fantastic. Okay, David, thank you.