Safeguarding podcast – The VoCo Manifesto & the Fox in the Hen Coop with Antonia Bayly of the DCMS

In this Safeguarding Podcast we discuss the VoCo Manifesto with Antonia Bayly, Head of Safety by Design and Age Assurance Technologies at DCMS: Age Assurance vs Age Verification vs Age Estimation, how it all works, and the implications for online child safety.

There’s a lightly edited transcript below for those that can’t use podcasts, or for those that simply prefer to read.

Welcome to another edition of the SafeToNet Foundation’s safeguarding podcast with Neil Fairbrother, exploring the law, culture and technology of safeguarding children online.

Neil Fairbrother

Fundamental to the discussion of online child safety is the question: “What is a child?”

The reason for this question is that there’s a hypothesis that says that if we can work out who children are online, then we can better provide a safe online environment for them. But what seems at first blush to be a simple solution is in fact rather complicated. To explore how this might be done, I am joined by Antonia Bayly, who works within the UK Government’s Department for Digital, Culture, Media and Sport as Head of Safety by Design and Age Assurance Technologies.

Welcome to the podcast Antonia. 

Antonia Bayly, DCMS

Thank you for having me. 

Neil Fairbrother

Can you give us a brief resumé please, so that our audience from around the world has an understanding of your background and expertise?

Antonia Bayly, DCMS

Sure. Well, as you said, I’m a UK civil servant. I’m a Policy Advisor in the Department for Digital, Culture, Media and Sport, or DCMS as you’re probably more familiar with it. I currently lead on Safety by Design, which is around how we can design and build safer platforms for users, and also on Age Assurance technologies. Prior to my current role, I worked on the development of the Online Harms white paper and proposals for the future regulatory framework for online harms. I also looked at the existing content liability regime for illegal content online. And before that, following the [Brexit] referendum, I was looking at the potential future relationship between the EU and the UK on digital.

Neil Fairbrother

Okay, well, there’s a whole podcast on that topic. Now you recently produced a report called the VoCo project, or the VoCo report. What is VoCo and what is its purpose?

Antonia Bayly, DCMS

So VoCo, or the Verification of Children Online, is a child safety research project. It responded to the challenge of knowing which online users are children. As you said earlier, it’s quite a complex and broad issue. It was motivated by the hypothesis that you set out at the start that children’s online safety and wellbeing will be improved if we have an internet that actively recognizes them and adapts the spaces that they use to make them safer. 

So this project set out to understand the challenges for making this a reality and in a way that works for everyone, for children, for parents, and also for platforms. As part of that, it looked in detail at Age Assurance. This subject matter is quite complex and far reaching and it touches on a wide range of stakeholder considerations. So the project took a holistic approach to exploring the issues, it combined multi-stakeholder engagement and collaboration with research, and also some technical prototyping as well. 

There were two phases to this project. The first one started in January 2019 and lasted for 12 weeks, and the report that was published at the end of last year was the summary of phase two, which ran over the Autumn of 2019 and into the start of 2020.

Neil Fairbrother

Some listeners may be intrigued to learn that GCHQ was involved. GCHQ being the UK’s famous spy organization. Should people be concerned that GCHQ were involved?

Antonia Bayly, DCMS

Not at all. I think a lot of the public image of GCHQ is around it being a spy agency, but that’s not the role that it plays in lots of really important parts of government policy. Protecting children online, and the specific challenge of knowing which users are children, is a priority for GCHQ because they play an important role in combating online grooming. And actually the team that was directly involved in this project was the counter-CSA team. So they have a huge amount of experience working with the Home Office and also with the National Crime Agency on that issue as well.

They were the initial driver behind this project from their work on countering online grooming. They recognized that simply blocking children from services wasn’t the right answer and had lots of unintended side effects. And they also recognized that if you can identify offenders, that’s hugely beneficial, but what is just as beneficial is being able to identify the children in the first place and provide them with adequate protection. I would also say the other thing is GCHQ has an important role to play in government in tackling complex internet problems, and identifying children online is a significant, complex internet problem. So it was only right that they were involved in the project.

Neil Fairbrother

Okay, thank you for that. Now when it comes to Age Verification, we were supposed to have an Age Verification system introduced about 12 or 13 months ago, which was designed to exclude children from legitimate adult pornography sites. Where are we with that project? Do you know?

Antonia Bayly, DCMS

I think you’re referring to the Digital Economy Act part three, which was placing Age Verification on commercial pornography sites. So firstly, I should say that that’s very separate from the VoCo project. The VoCo project was a research project and Digital Economy Act part three is policy.

The decision was made I think in late 2019, so Autumn time, to not implement part three of the Digital Economy Act and instead to pursue the objectives of preventing children from accessing online pornography through the future Online Harms regulatory framework. And the thinking behind that is the strongest protections in those Online Harms proposals that we’ve drawn up are for children. So all companies within scope, regardless of their size, will be required to assess whether children access their site and if so, to provide additional protections to those children.

So one of the criticisms of the Digital Economy Act was that its scope didn’t cover social media companies, and we know that that is where there is a lot of online pornography that children can access. So the government’s new approach will include social media companies within its scope, and all sites where user generated content can be widely shared, which includes commercial pornography sites as well.

So together we expect this approach to bring into scope more online pornography that is accessible to children than the Digital Economy Act part three would have done. So the intentions of the Digital Economy Act part three haven’t been dropped, I think there’s often a misunderstanding about that. They’re just simply being brought through in a different way that we consider to be more comprehensive and to provide a higher level of protection for children.

Neil Fairbrother

Okay, thank you for clarifying that Antonia. Now the VoCo report distinguishes, I think between Age Verification, Age Estimation, and Age Assurance. What are the differences between these three things?

Antonia Bayly, DCMS

So the terminology around those definitions is something that wasn’t necessarily an initial objective of VoCo, but it became apparent that it was necessary during the project. And it’s actually been a really valuable output defining those terms and having them commonly used. It’s been very beneficial in policy development and discussions between regulators. 

So Age Assurance is the term that we use as an umbrella term for technologies that assess a user’s age, and within that you have subcategories. And one of those sub-categories, or types of technologies, is Age Verification, which provides you with the highest level of accuracy about a user’s age. And that often refers to technologies that check a user’s age against officially provided data, so passport data for example.

I’m conscious that it’s been used as a shorthand by lots of people to mean Age Assurance. Whereas Age Estimation is everything that isn’t Age Verification, where you have a less accurate result, or less confidence in the age of the user, but there’s a whole spectrum of confidence levels that you can have within that. The point is it’s not as high as being verified.

Neil Fairbrother

Okay. Now the report also refers to some US legislation, which we’ve covered in various podcasts in the past, and that is COPPA, the Children’s Online Privacy Protection Act, which defines 13 as being the minimum age below which parents must give consent for children’s data to be collected by social media companies. And it’s used as a proxy by social media companies as their minimum age to be online. Is this sufficiently clear for parents, that actually this law, which isn’t even a British law, is about data collection rather than being a law about the minimum age to be on social media?

Antonia Bayly, DCMS

So I don’t think it is recognized by parents that that is the reason for it. I think it’s become conflated, you know, the intention around data protection and the idea that this is for safety. It’s really motivated by the fact that by saying they don’t have children under the age of 13 on their sites, they therefore don’t have to gain parental consent for acquiring their data and processing it.

One of the interesting things that we found during VoCo, we had a whole work stream that was dedicated to engaging with children and understanding their experience of the online environment, and understanding quite how they perceive the current age checks and what they think is suitable in terms of age bands for access to services. And it’s clear that that age 13 barrier just isn’t working. Children routinely lie to access those services. In fact, they just don’t see it as lying. They don’t see it as the same as telling a fib to Mum and Dad. They see it as a necessary step to gain access to the platforms that appeal to them, that all their friends are on, and that actually provide them in many ways with lots of really enriching entertainment and social activities. But also, because platforms aren’t acknowledging that child users [are on their platforms], it presents a high level of risk to them as well.

Neil Fairbrother

The EU Kids Online program that was run by Sonia Livingstone came up with the framework of Online Harms, and there are two really that impact age. One is Risk to the Person and the other is Commercial Risk. Are we conflating these two, or do both of these get resolved if there was a sufficiently robust Age Verification or Age Assurance system in place?

Antonia Bayly, DCMS

Whether Commercial Risk and Risk to the Individual are conflated – I think often actually the Commercial Risk probably gets forgotten about in discussions around online safety. So I would say that it’s not necessarily a case of conflation, it’s often a case of just not being given enough attention. I think the way to see Age Assurance technologies is as a really important tool to enable you to provide greater levels of safety. You know, the first step is knowing which users are children. Then you’ve got to actually create an environment that’s safe for them. So simply knowing which users are children isn’t going to enable them to be protected against harms to themselves or commercial harms, unless you take additional action on that. So the way to see it is it provides companies with crucial information to be able to place children in safer online environments.

Neil Fairbrother

One of the issues that social media companies have informed your report about is that they have concerns about their liability in respect of recognizing child users. What liability concerns have they raised, and doesn’t children’s safety trump any liability concerns they might have anyway?

Antonia Bayly, DCMS

Yeah. So what they were feeding back to us was actually around how, if they were to implement Age Assurance technologies or other safety measures, that would impact on their liability. So it’s very similar to the discussions that you might be aware of happening around the EU on content liability and proactive monitoring. So would their efforts be recognized by a regulator and would that impact on their liability? So for example, if they implement Age Assurance and other safety measures and harm does still occur to a child user, would the regulator take into consideration the effort that that platform has made when considering whether a breach of the duty of care, for example in the online harms legislation, has taken place?

And something that we need to recognize is that however much we might want to, we’re never going to eradicate all online harms. We can make things as good as we can, and the future Online Harms regulator will set out a code of practice or codes of practice, it’s for them to decide quite what it will look like, that will set out the best course of action for companies in fulfilling that Duty of Care and providing children in particular with an adequate level of protection. Key actions that companies take will be considered in terms of whether they are compliant and also whether enforcement action will be taken. So that’s what we were hearing from companies.

The other thing that we were hearing from companies during the project was a recognition that safety features, and in particular Age Assurance features, create disruption to the user experience, and that has commercial implications as well. So if you’re doing it and you’re not getting any assurance that this affects your liability, you might have questions about why you were doing it.

Neil Fairbrother

Well, yes. And therein lies another question. The commercialization of children – should that be even allowed?

Antonia Bayly, DCMS

I mean, that’s a very big question. It’s out of my policy area, but I think it takes it right back to the Age Appropriate Design Code that the ICO’s brought out. Ultimately the commercial incentive for these companies is largely data, and children’s data can be a huge revenue stream for them. At the moment, there are some of them that say that they can’t tell who are adults and who are children online. So the step of recognizing which users are children, and then putting in place the adequate data protections that are required, means that you don’t profit in the same way off children’s data. This is a step towards that.

Neil Fairbrother

Okay. Now you have within the VoCo report itself the “VoCo Manifesto for Change.” What is the Manifesto for Change?

Antonia Bayly, DCMS

So this was a key output of phase two, and it’s really looking beyond the project at the future, the kind of VoCo vision that we would like to see fulfilled; so that safer online environment, safer internet for children. And what it’s looking at is really moving beyond the technology, because we recognize that ultimately technology can only get us so far. A really crucial thing that we found during our research was that it was the relationships between children, parents, and platforms that need to change as well.

We refer in the VoCo report to “Digital Parents” as a recognition that there are lots of responsible adult roles around a child, not always a biological parent. But that relationship needs to change to the point where platforms’ relationship with their users changes, so they’re considering in a different way how to design and build their online platforms. Are child safety [factors] at the heart of those things? They’re considering how they acquire and process data differently. They’re considering how they implement technologies, and also how they engage with government, law enforcement and charities to bring about all those things in a better way.

We also feel, and what the Manifesto sets out, is that the relationship between children and parents online needs to change, because at the moment the relationship between parents and children is often undermined by the current internet system. Parents reported to us during VoCo that they feel really torn at the moment between allowing their children to have access to sites that provide them with lots of really beneficial opportunities, and actually during COVID it’s really shone a light on the benefits that can come from the internet, but at the same time exposing them to risks; or protecting them, but then restricting their access to really valuable resources. And they reported feeling overwhelmed and disempowered by the current situation. So we need to get to a place where that relationship is actually supported and reinforced by the online environment rather than undermined.

I should say that in the report we’ve structured it around a triangle, to show the need for that kind of strong relationship. But at the heart of this is trust. You’re only going to get a real change in the online space and in safety for children if those stakeholders start trusting each other more. So platforms trust that children won’t lie about their age. Children trust that if they do provide their age, they’re still going to be given access to a fun and valuable online environment. And parents trust that companies are doing what they can to adequately protect their children.

And the takeaway from this, and what we set out when we were doing VoCo, is not to create a reduced version of the internet for children. You know, as I said at the start around GCHQ’s motivation, blocking children isn’t the answer. It’s around still giving them a really rich and enlightening online environment, but providing them and their parents with the right tools to adequately protect themselves.

Neil Fairbrother

Okay. You mentioned in the Manifesto for Change that there is a difference between the platforms’ “intended audience” and “actual audience”. What is the difference between those two?

Antonia Bayly, DCMS

I mean, it goes back to your point around COPPA and children over the age of 13. At the moment, in lots of platforms’ terms of service, they say that their intended audience is 13 and above, or in the case of some, 16 and above, or in the case of, you know, commercial pornography sites, 18 and above. But their actual audience is not that. We know that children are on these sites. So there is a gap between their intended and their actual audience.

It’s around whether that gap needs to be entirely closed and you need to Age Gate an environment. There will be some commercial pornography sites, for example, where blocking is required because of the risk presented to children. But there are others where actually such a hard approach isn’t needed. Platforms should consider whether edits and changes can be made to the services that they’re offering for different age bands that still make them age appropriate, but allow a greater level of accessibility that’s age appropriate and safe.

I’d just add that’s also one of the things we were looking at with the Age Assurance aspect of the work: moving away from Age Verification, how can other Age Assurance solutions be used to establish age to different levels of confidence that mean that you can place children into these safer environments? And companies are already starting to do that. You know, they might not be using as robust Age Assurance technologies as we would like, but they’re already beginning to create more Age Appropriate environments.

TikTok, for example, has prevented direct messaging for user accounts under the age of 16. Other platforms are considering whether live streaming functions should be allowed for accounts under the age of 16. So the change is beginning and we’d like to see companies embrace that more.

Neil Fairbrother

Okay. But this is still based on a self-identification of age, isn’t it? There is no mechanism in place used by these companies; it’s down to the child, as you say, not to lie. But it’s in the child’s interest to lie. As far as the child is concerned, they are incentivized to lie to get at these additional features.

Antonia Bayly, DCMS

So, yeah, I mean, at the moment there’s an incentive to lie, because there are no alternatives; you know, the kind of platforms that they want to access don’t have an age appropriate alternative to them. What we are hoping to deliver through the Online Harms Regulation is an environment where, first, companies do start adequately assessing age and also start considering and implementing [age appropriate services] for children.

And in the workstream that looked at the child’s experience online during VoCo, what we did hear repeatedly from children is that they don’t necessarily want to be in online environments with adults. They don’t necessarily want to be exposed… in fact, they said they don’t want to be exposed to these harms, but they now see it as an unavoidable part of that online experience.

That was actually quite a sad insight, recognizing that online harms have become normalized for them because there aren’t any alternatives. So I think at the moment they are incentivized because there are no alternatives. We believe that if alternatives were offered that were just as valuable to them, but safe, children wouldn’t be incentivized to lie about their age.

Neil Fairbrother

Okay. Now paragraph 3.2, I think it is, of the VoCo Manifesto for Change suggests that “platforms should establish the likelihood of children accessing their platform”. How can a platform do that if it’s a new platform?

Antonia Bayly, DCMS

So I should say this aligns with what we’ve set out in the proposals for the future Online Harms Regulation. It’s also very similar to what the ICO is saying in its Age Appropriate Design Code too. You’ve highlighted the point around what if a platform doesn’t yet have any users that it can assess. Another way you can do that is to look at how much you think your platform will appeal to children. Does it have certain functions and features that are appealing? But the other thing as well is, if you’re going to implement Age Assurance solutions, you can prevent children from accessing that site anyway, if you feel that the risk is too high.

Neil Fairbrother

Okay. Paragraph 3.3 says that “…social media companies should establish the level of risk to a child on their platform.” Now, surely it will be in their interest to claim there is low risk, or lower risk than there actually is, or in fact no risk at all. Or is this something that Ofcom, the regulator, would be involved in, so they can be held to account for their own risk assessments?

Antonia Bayly, DCMS

Yeah, I think no one’s suggesting that companies should be allowed to self-assess without any oversight of that. A core part of the future Online Harms regulation will be some form of assessment of risk, and Ofcom will be considering how that’s done, what the risk assessment approach should look like, what risks are covered, and will be requiring companies to do that. And when it comes to a potential breach or enforcement, that risk assessment will be key to understanding how companies have understood the risk on their platforms and gone about mitigating it.

Neil Fairbrother

Okay, what is the Data Source Type Taxonomy or DSTT?

Antonia Bayly, DCMS

So this was an output of the phase two project. It was a really valuable piece of work. It looked at the different types of data sources that could be used by platforms or by children to assess age, and it categorized them and went through the considerations for their use. So for example, data protection considerations, accessibility, cost, and that sort of thing. It’s intended to be a live document, so it’s a sort of snapshot in time of current data sources.

What we found is the data sources that can be used largely break down into three categories. So Category One is “Officially Provided Data Sources”, which tend to be used for age verification. So stuff like credit card data, passport data.

Then there is “User Provided Data”, things like biometric data or parental consent or peer consent or peer approval.

And then lastly there is “Automatically Provided Data”, which is the stuff that we probably aren’t conscious we’re providing to platforms. So behavioral analytics and how you’ve used the service and that sort of thing.

Then another really key part of this work, within those considerations, was putting some estimation on the confidence levels that each of those data sources provides. On the one hand, you’ve got obviously officially provided data, which provides you with a really high level of confidence, but we assessed that other data provided lower levels of confidence, going down into the Age Estimation side. And an interesting piece of work that was planned was looking at how you might be able to combine these data sets to improve levels of confidence, and a sort of mathematical approach was taken to that.
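
To make that idea concrete, here is a minimal sketch of what combining data sources might look like, assuming the three DSTT categories described above, purely illustrative confidence values, and a simple independence ("noisy-OR") combination. The report does not publish its actual formula, so none of this should be read as the VoCo method.

```python
# Hypothetical sketch of combining age-signal confidence levels.
# The DSTT category names follow the taxonomy described above; the
# confidence values and the noisy-OR combination are illustrative
# assumptions, not the project's published method.

from dataclasses import dataclass

@dataclass
class AgeSignal:
    source: str        # e.g. "passport", "peer_approval", "behavioural"
    category: str      # DSTT category: official / user_provided / automatic
    confidence: float  # assumed probability (0..1) that the signal is correct

def combine_confidence(signals: list[AgeSignal]) -> float:
    """Combine independent signals: the chance that *all* of them are wrong
    shrinks as more corroborating signals are added (noisy-OR)."""
    p_all_wrong = 1.0
    for s in signals:
        p_all_wrong *= (1.0 - s.confidence)
    return 1.0 - p_all_wrong

signals = [
    AgeSignal("self_declaration", "user_provided", 0.30),  # illustrative values
    AgeSignal("peer_approval", "user_provided", 0.55),
    AgeSignal("behavioural_analytics", "automatic", 0.65),
]
print(f"Combined confidence: {combine_confidence(signals):.2f}")  # ~0.89
```

The intuition being illustrated is that several weaker Age Estimation signals which corroborate each other can, taken together, approach the confidence of a single stronger source.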

Neil Fairbrother

Yeah, I mean, there seems to be a little bit of a contradiction here, because the younger you are, the fewer officially provided, and therefore more reliable, identification documents you have. Children don’t have driving licenses or credit cards, or even passports perhaps, yet those are the things that provide a higher degree of certainty that someone is the person they claim to be. And yet it’s children that need the most protection, it seems to me, and we’re relying on user-reported information or automatically reported information, which may not be as robust.

Antonia Bayly, DCMS

So I think there’s certainly a point around how different Age Assurance solutions, and the data they rely on, present different exclusion risks. I think if a company is having to use Age Verification solutions, at least in the vision that we have in the Online Harms Regulation, that means that that site poses a very real, high level of risk to children and that children need to be age gated from it.

So the risk of not being protected is slightly different. I think there is absolutely an unintended risk, potentially, that children go to less safe sites because they can’t access the sites that they want to access. But that’s where strengthening the data sets for sites that don’t have to have such a high level of protection is really important, in enabling children to have a choice from other data that is accessible to them, so they aren’t restricted from accessing sites that are safe and appeal to them.

The exclusion point is a really important one and one we’re actively looking at at the moment. So we’re starting a piece of research that will look at whether exclusion risks are posed by different Age Assurance solutions, how they affect children, particularly vulnerable children, and what recommendations we might be able to make to Age Assurance providers to improve accessibility. 

I should also say, on your point around access to officially provided data, the same is true for adults as well, obviously in a different context, but not all adults have access to passport data either. There are teams at DCMS thinking about this issue, but more broadly in terms of Age Assessment or online ID and accessibility for people.

Neil Fairbrother

Okay. Now you do have a list of 10 different ways that could be used to infer age, one of which is self-declaration which may or may not be true. One of the interesting ones I thought was “peer group reference”. It says here that “I have contacted individuals in your peer group and one or more of them have confirmed your age”. 

So are you saying that a schoolchild in a particular class might have to nominate their colleagues, and for their colleagues to vote, almost, or confirm that the age of that particular child is the age the child says they are?

Antonia Bayly, DCMS

So this is one of the data sources that we identified, as you say. We’re not saying as government that these are the absolute ones that you must use; this was a piece of research. But what the peer piece does is go out to individuals who’ve already been age assessed, and considered to have had their age established to an accurate level, and get them to give the age of another user. Obviously it has a much lower level of confidence than using officially provided data or using parental consent.

And that is partly why that piece of work looking at confidence levels, and how you combine them to drive up levels of confidence, was being done. You might choose to use peer consent and biometric data, for example, and combined you feel that you have a high level of confidence. But you could have a situation where a user’s friends have said, yes, they are this age, and then the biometric data comes back much lower, and you have a higher level of confidence in the biometric data than you do in the peer piece.
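
That disagreement scenario can be sketched very simply; the signal names, age bands and confidence values below are assumptions made up for the example, not figures from the project.

```python
# Hypothetical resolution of conflicting age signals by confidence.
# Signal names, age bands and confidence values are illustrative assumptions.
signals = {
    "peer_approval":  {"age_band": "16+",   "confidence": 0.55},
    "biometric_scan": {"age_band": "13-15", "confidence": 0.80},
}

# Prefer the signal we have the most confidence in; here the biometric
# estimate overrides what the user's friends reported.
source, best = max(signals.items(), key=lambda item: item[1]["confidence"])
print(source, best["age_band"])  # biometric_scan 13-15
```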

Neil Fairbrother

Okay. Now the report also refers to PAS 1296:2018, which we covered in an earlier podcast with Rudd Apsey of the Digital Policy Alliance, and PAS 1296 introduces the concept of “Age Check Exchanges”. What are Age Check Exchanges? How would they work?

Antonia Bayly, DCMS

So Age Check Exchanges, which we’ve covered in the VoCo report and also looked at in a technical trial, deal with the issue of what happens if users are all providing slightly different types of data, and what happens when platforms all need to assure the same user, but in different ways. What an exchange allows you to do is to provide a piece of data about yourself, have that assured as being of a particular age, and then that can be turned into a token, or tokenised, which means there’s no other kind of information there; it’s just a “yes” or “no” about whether somebody is a particular age or in a particular age bracket, and that can be provided to platforms when they ask for it, when they ask for a verification of a child’s age.

So what it enables is Age Assurance to be done at scale in a way that protects children’s data, but also makes it a much more manageable process both for the user, whether the child or the parent, and also for the platform, because otherwise you have platforms all doing age checks for the same user over and over again.
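
As a rough sketch of that tokenisation idea: a user is assured once by the exchange, which issues a signed token carrying only an age-band assertion, and a platform can then check the token without ever seeing the underlying data. The token format, key handling and function names below are illustrative assumptions, not anything specified by PAS 1296 or the VoCo trial.

```python
# Minimal sketch of the Age Check Exchange idea described above.
# Illustrative only: a real exchange would use asymmetric keys and a
# proper trust framework rather than a shared secret.

import hmac, hashlib, json, base64

EXCHANGE_KEY = b"demo-secret"  # assumption for the example

def issue_token(user_ref: str, age_band: str) -> str:
    """Called by the exchange after it has assured the user's age."""
    claim = json.dumps({"ref": user_ref, "age_band": age_band}).encode()
    sig = hmac.new(EXCHANGE_KEY, claim, hashlib.sha256).hexdigest()
    return base64.b64encode(claim).decode() + "." + sig

def platform_checks(token: str, required_band: str) -> bool:
    """Called by a platform: it learns only whether the age band matches."""
    claim_b64, sig = token.split(".")
    claim = base64.b64decode(claim_b64)
    expected = hmac.new(EXCHANGE_KEY, claim, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return json.loads(claim)["age_band"] == required_band

token = issue_token("user-123", "13-15")
print(platform_checks(token, "13-15"))  # True: band confirmed, nothing else shared
```

The point of the design is that the platform only ever sees the age-band claim and the exchange's signature, never the passport or other source data behind it.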

Neil Fairbrother

So would this be the equivalent of an e-commerce website where you put in your credit card details? The credit card part is handled by a credit check exchange, so to speak, and not by the individual e-commerce site that you’re dealing with.

Antonia Bayly, DCMS

Yeah, exactly. It’s looking at an interoperable solution. This isn’t going to work for every company. There are companies that are going to choose to build their own proprietary Age Assurance technologies, because they might already have the data that they feel confident doing that with. And there are going to be other companies that prefer to work directly with one Age Assurance provider. But we were really interested in the interoperable solution because we see interoperability as being so key to something scaling.

Neil Fairbrother

Okay. One of the areas mentioned in the report are “Trust Frameworks”, and you refer to the trust network, or the Trust Framework, that the New Zealand government is creating, which they claim supports “digital identity services”. Is this moving towards some kind of online equivalent of an ID card?

Antonia Bayly, DCMS

No, so Trust Frameworks are very commonly used in the Digital Identity space, because it’s an area where you need higher level assurances, or trust, that people providing these services do have high levels of data protection, do have good cyber-security, and are following appropriate rules and standards. And age is an attribute of identity. They are different things, but it’s important to recognize the relationship to be able to make that distinction between them. So there are lots of things happening on Trust [Frameworks].

DCMS is currently developing a Trust Framework for Digital Identity providers that can benefit Age Assurance providers, because it is beneficial for them, and also for users of Age Assurance solutions, to know that they are following high standards and the rules that are being set out, particularly around aspects like data protection.

In terms of whether Age Assurance solutions are heading towards an online ID card, you know, you don’t need to establish identity when you are assessing age. In the case of Age Verification, you might be using the same data sources, so for example passport data, but there is no need for an Age Assurance provider to actually do a full ID check. That’s not necessary. And there might be some companies and some Age Assurance providers that do do that, but it isn’t necessary at all, particularly in the space of children’s data.

Neil Fairbrother

Okay. Now the underlying assumption is that by recognizing children as children, services and service delivery can be altered or more targeted perhaps to provide a safer space for children online, as we said at the outset, and that’s the kind of underlying hypothesis behind all of this.

In the VoCo report, there’s no mention of the practice of “catfishing”, where adults pretend to be children, and in a VoCo-enabled world, if an adult poses as a child and subsequently gets verified as a child, doesn’t that increase the risk of harm to other children? Isn’t this like the fox in the hen coop who has been verified as being a hen? So do we not also need to have Age Verification based on hard metrics, so to speak, for adults, to filter adults out, as well as Age Assurance or Age Estimation using perhaps a combination of softer metrics to filter children in?

Antonia Bayly, DCMS

Yeah. So I think where we say that we would like to see platforms Age Assuring users, they would have to Age Assure everyone to identify which users are the children, therefore users that are assessed as being adults shouldn’t be able to access the kind of environments that have been deemed safer for children.

And one thing we would like to see is platforms considering how they, as I said earlier, treat their features and functions, so that it’s much harder for maybe an anonymous user, maybe a clearly adult stranger, to contact a child. 

And I think there are two things here. Obviously there is a risk that a determined offender will try to trick the system and appear to be a child, and so is able to get around any additional safety measures that a platform has in place. Which is why you need robust Age Assurance checks at the start, if you’re providing features that do present that risk, but also why these checks need to be ongoing, so that you combine ongoing assessment of user behaviour with asking users, at key milestones, to once again provide evidence that they are the age they say they are.

And a lot of platforms, you know, some of the big platforms, are already doing elements of this. They do monitor suspicious behavior. They do identify when they think that an account isn’t actually an adult account and is actually a child, or vice versa.

During the VoCo research, when we were talking to children, one child reported that she put her age as, I can’t remember what it was now, but clearly not her real age, and I think she was under 10. And she kept being kicked off this large social media platform, because they were identifying that her behaviour on the platform was clearly not the behaviour of an adult; it was a child. And they do that quite a lot. And they do that also in terms of identifying suspicious behaviour that might be indicative of illegal activity.

Neil Fairbrother

Okay. Now we are rapidly running out of time, so the last two questions, Antonia. Section 4.1 refers to “…working towards a template standard for Age Assurance that is desirable, feasible and practical.” What is this template standard?

Antonia Bayly, DCMS

So when we say that we’re looking to work towards a template standard, what we’re looking to do is identify where there are existing standards that can be improved and updated, and also where there’s space for creating new standards, be it a Code of Practice or similar. And the aim is to get us into a position where Age Assurance providers have clear guidance in terms of how they can build and design appropriate Age Assurance solutions that provide a high level of assurance for platforms and for users that they are doing these checks in a highly effective way, but also in a way that preserves children’s privacy and their data protection.

And then similarly on the platform side, that they are implementing Age Assurance solutions in an appropriate way and are also taking steps to understand the risks that their platforms present.

So it’s broader than a single standard. And we’re doing that at the moment through an updated PAS 1296, which is that specific Age Assurance standard. We’re working very closely with the digital identity team at DCMS to see how the Trust Framework that they are producing for digital identity providers can support Age Assurance solution providers as well. And there’s also the work taking place as part of the future Online Harms regulatory framework, in terms of ensuring that regulated companies fully understand their requirements and how to assess risk.

Neil Fairbrother

Okay. Within the Template Standard there are three layers of impact of implementation on child safety: high, medium, and low. And I would just like to pick out one or two features in there if I may. Within the high-impact grouping it refers to a “Parental dashboard”, which “…works across platforms to enable parents to control and implement platform settings.” That’s going to require a significant amount of collaboration between these competing social media platforms, is it not?

Antonia Bayly, DCMS

So it’s something that we recognize would have a high impact on a parent’s ability to improve and control their children’s safety across different platforms. Obviously having that level of interoperability between different platforms is challenging. It will have high impact. Whether it’s feasible is something that is still being sort of assessed. It was something that was identified during the project. It’s not something necessarily that we’re looking to take forward into policy.

Neil Fairbrother

Okay. One final question, Antonia, what are the next steps?

Antonia Bayly, DCMS

So we’re doing a number of things, we’ve already started a range of things following on from the VoCo report. We’re trying to bring a lot of the findings now into policymaking. We always said that the research wasn’t a blueprint for government policy, and what we’ve done since the publication is review what insights are valuable and how we can progress those further. 

So key things that we’re now doing are the updates to PAS 1296. It’s around looking at how we can join up with the digital identity Trust Framework. We’re doing the piece of research that I mentioned earlier on exclusion risks for children with Age Assurance solutions, and ensuring that those solutions are as accessible as possible. There’s join up taking place between the relevant regulators, the ICO, Ofcom and also DCMS, to try to bring together the piece on data protection and child safety, particularly with an eye to Age Assurance as well. And then obviously we have published the government response to the Online Harms white paper that sets out our intentions for regulation in more detail.

So we’re taking the lessons learned from VoCo, the insights from that, and feeding them into the work around assessment of risk, around confidence levels of Age Assurance, and matching that to the risks posed by platforms.

Neil Fairbrother

Okay. Thank you so much. That was absolutely fascinating. A really interesting piece of work and a great insight into how Age Assurance can help protect children online.

Antonia Bayly, DCMS

Thank you so much for having me.
