Safeguarding Podcast – Levelling Up and the Online Safety Bill, Damian Collins MP

By Neil Fairbrother

In this Human Rights Safeguarding Podcast: the UK's Online Safety Bill with Damian Collins MP, the Bill's Scrutiny Committee's recommendations, the meaning of "harm", which offline laws apply online, if any, Section 230, Frances Haugen, End-to-End Encryption (E2EE), the Health and Safety at Work Act 1974, the role of Ofcom the Regulator and how it will ensure social media companies comply with this new legislation.

https://traffic.libsyn.com/secure/safetonetfoundation/SafeToNet_Foundation_podcast_-_Levelling_up_and_the_Online_Safety_Bill_with_Damian_Collins_MP.mp3

There’s a lightly edited for clarity transcript below for those that can’t use podcasts, or for those that simply prefer to read.

Welcome to another edition of the SafeToNet Foundation's safeguarding podcast with Neil Fairbrother, exploring the law, culture and technology of safeguarding children online.

Neil Fairbrother

In 2013, the then Prime Minister David Cameron made a speech in which he promised to make the UK the safest place in the world for children to go online. His successor, Theresa May, continued this policy and published the Online Harms White Paper. Her successor, our current PM Boris Johnson, last year published the draft Online Safety Bill. Needless to say it's complicated. To help guide us through it, I'm joined by Damian Collins, MP for Folkestone and Hythe, who has been chairing the Bill's Cross-Party Scrutiny Committee. Welcome to the podcast, Damian.

Damian Collins MP

Thank you. It’s great to be with you.

Neil Fairbrother

Thank you, Damian. Please can you provide us with a brief resumé, so that our audience from around the world has an appreciation of your background?

Damian Collins MP

Yes. So as you rightly said, I've been chairing the Joint Parliamentary Committee on the Online Safety Bill. This is unusual for the UK Parliament: it's a Committee of both Houses, both Lords and Commons. It's been set up as a pre-legislative Scrutiny Committee, which means that the Government had already published a draft bill on online safety and our job is to scrutinize that bill and suggest improvements, changes that could be made before it's introduced to Parliament. So this is a government bill, and later this year, in the spring, it will come before Parliament to create the Online Safety regime.

We produced a report in December setting out our recommendations on how we thought the bill could be strengthened. And in particular, as you rightly said, it is a complicated bill. So we wanted to bring more clarity to the bill, make it much clearer for users what their rights are, for children how they would be protected, for tech companies what their responsibilities would be, and to set out really clearly how these measures will be enforced.

And central to all of this is the role of Ofcom, our media regulator, who would be the regulator for this. This bill really marks the end of self-regulation by the tech companies in the UK. The companies would be asked to comply with Codes of Practice that will be mandatory, setting minimum standards, with a Regulator whose job is to make sure that they're complying and who would have the power to take action against them if they don't.

Neil Fairbrother

Okay. Well, thanks for that. We'll try to unpack some, if not all of it. Now one of the big differences between the Online Harms White Paper and the Online Safety Bill is that the Online Harms White Paper contained a pretty comprehensive list of harms, whereas the draft Online Safety Bill restricted itself to just two specifically named ones, namely terrorism and child sexual exploitation and abuse or CSEA, plus a generic definition of harm. Why were so many harms not included in the draft bill?

Damian Collins MP

Well, the Government presented the draft bill as it thought best, but I can tell you what we thought of it as a Committee and how we sought to change it, because what the draft bill does is say that criminal material has to be removed. Well, it does anyway. It does in UK law, it does in EU law. But the current problem with the way the law works is that the companies are under no proactive obligation to find it. They have to remove it if it's reported to them, and therein lies part of the problem. So we need to make these duties more proactive.

Now the other thing the draft bill does is say that, on top of illegal content, the Government can specify content that it believes is legal but harmful. So that could include, say, online bullying. But the Government would have to say, have to define, what the harmful content was, and then all the platforms would be required to do is say what their policy is to combat it.

If their policy is not to have a policy, then not very much would happen. Or their policy might be very loosely worded in their terms of service, and often the terms of service for these companies are full of competing obligations, which make it impossible to determine exactly what they should or shouldn't do in a particular situation. So we thought we need to simplify this a lot, and what the bill should do is translate existing offenses in law into the regulatory regime, so that if something is illegal offline, it should be regulated online.

And the job of the regulator is to demonstrate how you translate those offenses across. So you then end up with a regime, which is what we've recommended, based on existing laws, and the job of the regulator is to set the standards that we expect the companies to follow in enforcing it.

Because what we found throughout this process is, well, people assume that things that are illegal offline are enforceable online as well. Now for the worst criminal material, terrorist offenses, child abuse, then it’s very clear. You don’t need context to understand whether this is illegal or not, but when you consider things like racism, you know, religious hatred, offenses like that, the existing laws weren’t necessarily drafted to apply in a regulatory regime online.

So what we’ve proposed in the report is to say, well, let’s bring these offenses back, clearly onto the face of the bill. Let’s write them in on the face of the bill and say that the job of the regulator is to set the Codes of Practice and set out how we expect these laws to be enforced online.

Neil Fairbrother

Okay. All that sounds reasonable, we'll no doubt have a look at some of those laws shortly, I think. The draft Online Safety Bill defines three categories of content that are harmful to children: Primary Priority Content, Priority Content and Un-designated content. What is meant by those three different types of content, Damian?

Damian Collins MP

Well, the priority harms are set out throughout the bill in the way it has been drafted, but obviously we've spent six months of work trying to move the bill on from some of these labels, which aren't always that helpful. I think what we should expect to see from the bill is that some of the minimum design standards set out in the Age Appropriate Design Code, which applies to the way data is gathered and used and to the systems and tools that children engage with, we should expect the equivalent level of standard to apply here. And the regulator taking a view on Safety by Design, the way products have been designed, the way they could be used by children, whether that is consistent with the protections set out in the Age Appropriate Design Code. I think those things are really important.

The other thing we wanted to move on was Age Assurance, which is very, very important for protecting children. I've got a 14 year old daughter and a 12 year old son, so we're right in the sweet spot of this debate at home, and the draft bill at the moment only says that you have to explain to the Regulator what your policies are on Age Assurance. We want to go further than that and say, no, actually you've got to meet a minimum standard so you can demonstrate how you protect children from accessing adult content through your platform. What are you actually doing to stop that? And you have to have a policy in place. You can't just say that this is something you're working on or looking at. And that I think is one of the most important areas of the debate, where parents in particular have some of the greatest level of concern.

The other thing which, sadly, will be really important for younger users is how you bring into force the new offenses around promoting things like self-harm. Now, this is a particular problem for teenage girls, though it's a problem for all people, where a young person is engaged with content that promotes or glamorizes self-harm and, as a consequence, sees more and more of that content recommended to them. I think by bringing in these new offenses on self-harm, plus the higher standards that should apply to protecting children, the regulator can really start to establish quite powerful Codes, which will change the way the internet works for younger people in particular.

And as I said before, you'll notice throughout our report as well quite a strong emphasis on the principles around Safety by Design, which is that the systems themselves aren't created in such a way that they're likely to cause harm. And again, the standard there for protecting children should be higher.

Neil Fairbrother

Okay. Now in the Online Safety Bill content is deemed as being harmful “…if the provider of the service has reasonable grounds to believe that the nature of the content is such that there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on a child of ordinary sensibilities”, which is quite a mouthful and quite complicated, and I know that you’ve recommended that it should be changed to something a little bit different, but there are all sorts of questions around that and I think it’s a lawyer’s dream to have a proposition like that in a piece of legislation.

What is meant by reasonable grounds? Should it be left up to the service provider to decide this, after all, they do have a vested interest here? And how is consistency across platforms maintained? How do you prove content has a direct or indirect impact, adverse or otherwise? And what is meant by a child of ordinary sensibilities, and what happens, indeed, if you are not a child of ordinary sensibilities? So I wonder if you could unpack that a little bit for us.

Damian Collins MP

Yeah. So I think part of the problem with how it's been drafted is that it's language that is more appropriate for thinking about adults. When you start talking about an adult of ordinary sensibilities, that's language that is used in the existing Malicious Communications Act.

It is designed to apply to a person of ordinary sensibilities, someone who would reasonably be so shocked and offended by something they'd seen that it could cause a serious adverse psychological impact on them. Now that sort of suggests, when you apply that kind of language to children, that children have agency. But actually, children are much more vulnerable than that and, as you've rightly said, it shouldn't be for the platforms themselves to interpret that and determine when that threshold has been met. It should be the job of the Regulator to set the standards that we expect the companies to adhere to and to say, well, we think they've fallen below those standards, particularly when it comes to children.

That was something we changed in the recommendations of our report, because again, it suggests that children, like adults, have a degree of agency and choice in what they do, but children are much more vulnerable than that. And it shouldn't be for the companies to determine what they will do. The Regulator should have the power to set those minimum standards.

Neil Fairbrother

Yeah. So the changes you've recommended come from the Law Commission, because the Law Commission is also looking at various forms of online harm I think, and the definition they arrived at was "Offenses likely to cause harm to a likely audience", which you believe is a better fit. Why do you believe that's a better fit?

Damian Collins MP

Well, and again, that's not just for children, that would be for anyone. So serious harm is what the Law Commission was recommending, which is physical harm or serious psychological harm. The reason for "likely audience" is that the communication was intended for, or posted in such a way that there was, a clear audience for it. So you could direct offensive material that would cause harm at an individual, but it could be at a group of individuals. So if it was racially motivated abuse, or it was abuse around religious hatred, and you were deliberately targeting it at groups and organizations or individuals who are likely to be harmed by that message, then that's what the Law Commission means by "a likely audience", a defined audience.

Where it draws a line is to say, if someone posts something that is not aimed at anyone in particular, but someone sees it and is offended by it, that wouldn't be considered to be a "likely audience". It has to be a known, identifiable and deliberately targeted audience. So I think with a lot of this, you're looking at what the intent of the person posting the content was. Were they intending to cause harm? Who were they intending to cause harm to? Does it meet the threshold of being harm that is considered likely to cause physical harm, or serious psychological harm?

The big question within all of this, though, for the Regulator, based on what we are proposing and what the Law Commission is proposing, is: what are the thresholds to act? Now, the Law Commission created an offense and the Government has made it an offense in law, which under our recommendations the Regulator would have the right to impose on the companies.

What you would need to give guidance on is when you think that threshold has been met, because in the offline world, you could take someone to court and a court of law would determine the offense has been committed. What would be impractical and potentially quite harmful in the online world would be for the social media companies to say, "Well, until a court determines this is an illegal act, we're not going to do anything". What you want the Regulator to say is, "Okay, these are the thresholds where we believe these tests have been met and preemptive action should be taken to mitigate that content".

If it's really clear, then that content should be removed. It could be that there's a step before that, where you say, well, this content is highly problematic. What we'd expect from you as a company is that even if you're not going to remove it straight away, you are going to make sure that it's not promoted or shared with other people. It's not amplified by your systems. It's not recommended. That could be one of the first effective measures of mitigation that you bring in. But it's for the Regulator to work with the companies, to set out what it expects them to do in these different scenarios.

Neil Fairbrother

Okay. Now Damian, in your report you make seven specific recommendations for change, notably in paragraph 52. And the first one of those, Recommendation a), is "To comply with UK law and do not endanger public health or national security".

So the obvious question there is, well, which UK law? Now we've touched a little bit on this already, but when you look at cyberbullying, as far as I understand it, there is no legal definition of cyberbullying and therefore there's no law against it. But when you peel back that label of cyberbullying and look at the activities that take place, there are plenty of laws in place already that govern and limit what you are allowed to say. For example, the Offences Against the Person Act 1861 all the way through to some of the more recent things like the 2017 sexual communication with a child offence. Now, why is it that they cannot be applied online? Which is, I think, what you said earlier: they're not optimized for application to the online world, but why not?

Damian Collins MP

Well, the thing is they can be, and what we found throughout this process is that a lot of people say, well, what we should do is enforce the law online, we shouldn't go beyond the law. And many people ask, well, why isn't it enforced already?

The problem with some of these laws is that they were never really written with the internet in mind, or they're written on the basis that the offense is being caused by the person who is posting the content, not by the platform that's hosting it or even actively recommending that content to other people. So we're doing something new in law, which is to say the companies have a liability, even though they've not created the content themselves. And they have a liability because it's their systems that are being used to create an audience for it. And sometimes the platforms themselves actively promote extremist content or content that could cause harm.

You know, what we've heard, going back to the question about the promotion and glamorizing of self-harm, where young teenage girls in particular can be vulnerable, is that it's the recommendation algorithms of the companies promoting that content that are creating an audience for it, and therefore they're making the harm potentially a lot worse.

So a lot of these laws were never written with that in mind. So the job of the Regulator here is to say, okay, we have these existing offenses in law, Parliament has determined that these actions are illegal, and what the Online Safety Bill does is explain to the social media companies what we expect them to do to mitigate the spread of that content. And to say that they too have a liability: they may not have created that content, but they have a liability towards it because it's their systems that are being used to actively promote it.

Neil Fairbrother

Okay. I understand that. And we might revisit that shortly. So Recommendation b) is to "…provide a higher level of protection for children than for adults", presumably on the grounds, which you've already alluded to, that children are different from adults. They are going through adolescence, and perhaps there may even be pre-adolescent children on these platforms.

And there are a number of items that you've included. So for example, access to, or promotion of, age inappropriate materials such as pornography, gambling and violence, and material that promotes self-harm, eating disorders and suicide. Some of these you've already mentioned.

Now to do this, as a precursor you need to know who is a child. You've alluded to Age Verification, the DCMS produced a report, the VoCo Manifesto, Verification of Children Online, and you've said that we need to have some kind of Age Verification. We were supposed to have Age Verification introduced, I think, two years ago, just before the last general election in 2019. So where are we with Age Verification? Will it be included in the Online Safety Bill, or will it be treated separately?

Damian Collins MP

Well, what we've recommended in the report, and I know this is one of the areas the Government is actually looking at because the intention was that Age Verification would be addressed through the Online Safety Bill, is to say that the companies have to have Age Assurance schemes in place. We've not specified what sort; the Regulator could, I think, determine schemes that it thought were viable and valid. There are lots of different technologies out there, or it could simply be that a platform decides to have a different policy for adult content. I mean, in theory Facebook doesn't allow adult content, doesn't allow pornography on its services. Twitter does. So it could be that the platforms take a different approach towards porn material altogether.

But what we are saying is that the Regulator should say to the companies, you have to demonstrate to us, if children can access adult content through your service, what checks you put in place to make sure they're protected from it, and that you know who's a child and who isn't, so they can be protected from it. As the Bill is currently drafted there's no minimum standard the companies have to meet, but we say the Regulator should insist on that. It should be there. And that's one of the changes that we've recommended.

Something else which we discussed in the report, and discussed with Frances Haugen, the Facebook whistleblower, was the sort of data that the Regulator could get from the companies. The Regulator will have quite extensive auditing powers to demand access to data and information from the companies. One of the things it could ask for is the companies' assessment of the age of their users. Frances Haugen felt pretty confident the companies have got a very good idea of how old people actually are, not what they say they are when they log in, but based on the data they gather through that user using their service.

So then the question could be from the Regulator, "Well, if you know you've got a lot of children who are 11 and 12 using your service, but your policy says they can't use it, why aren't you intervening against those accounts? Why aren't you even shutting those accounts down, because you know that they're younger?"

The 5Rights Foundation Pathways report I think did a very good job in setting out how the companies are clearly targeting adverts at people they believe to be under 13 on their services as well. So often the tech companies, when they address this debate, say [things like], well, the problem is at the point of creating an account, we can't tell how old they are. If they lie, we've got no way of verifying, children don't have ID. And if you want to create children's ID, then we could use that, but that ID could be fake. So at the point of signing up, we just don't know.

But they do when they start to use the service, they get to know pretty quickly. And they get to know not just by how children use their service, but also data they gather from smart devices and other apps that children are using. They’ve got a pretty good idea. And I think this is one of the areas where the Regulator, I think, could start to put some pressure on the companies.

Neil Fairbrother

Well, yes, I mean, if the social media companies are saying that their software isn’t that smart, then there seems to be a contradiction between those statements and statements they may be making to their prime customers, which are of course the advertising community.

Damian Collins MP

Yes, absolutely right. And this has been an issue of concern for me for some time, something I raised with Frances Haugen when she gave evidence to the Committee. That's to say, all this stuff around gathering data about users… I heard someone saying when we published our report that we were calling for mass surveillance by tech platforms of their users in order to protect them from harm. But that mass surveillance takes place anyway. It takes place anyway to run ads, to target ads; they're gathering all this data all of the time in order to create advertising audiences.

And going back to my example earlier, if I said to Instagram, I'm a charity that works with vulnerable teenage girls, girls that have recovered from self-abuse, self-harm, and are active social media users. I've got a thousand 14, 15 and 16 year old girls in that category. I want to create an advertising audience based on the data of those users so I can reach 10,000 people that are like them, that are likely to be involved in the same stuff, just to reach out a helping hand. If I went to Facebook and said, could I do that with your advertising tools? They would gladly sell me an audience to do it. They would sell me that data on those users to target an ad at them.

If I said to them, from a safety point of view, can you identify those users and reach out to them yourself proactively, or monitor their accounts more closely, then they would say that was an invasion of privacy. Now you can't have it both ways. You can't gather data through mass surveillance to make money out of it but then, when you're asked to use that same data to do good, say it's not appropriate. And I think these are the sorts of issues the Regulator needs to be getting into.

Neil Fairbrother

Okay. Recommendation C is to "…identify and mitigate the risk of reasonably foreseeable harm arising from the operation and design of their platforms". Now this brings into play the duty of care, risk registers, transparency, and accountability. And there are people who say that safety legislation, which covers many aspects of other types of industry, shouldn't be applied to online services on the grounds that it would stifle innovation. But if you look at another technology industry, Tesla, they have achieved remarkable growth, I think they're the most valuable car manufacturer on the planet bar none now, through innovation with their electric cars. And they've had to comply with very stringent international and national safety standards from the get go. They couldn't have sold their first car without meeting those safety standards. So surely this argument that safety in the online community will stifle innovation is simply bogus? It's a red herring. That is a loaded question, I appreciate that.

Damian Collins MP

Yes, and you're absolutely right, it is bogus to say that. As you rightly alluded to with the motor industry, and in many other industries too, we would say in this country that our financial sector is one of the most dynamic and world-leading sectors of the UK economy, yet it is highly regulated and for good reason. So there's no reason why regulation should stifle innovation.

What regulation should do is make clear to big companies what their obligations are. It should take a view on risk, as other industries take a view; if it thinks that certain products, or the design of certain products, poses too great a risk to the users of those products, then it should have the ability to say so, and also understand what the companies themselves understand about the nature of risk. Part of the problem is that the companies probably do their own risk assessments, but they don't share them. And they decide what an appropriate level of risk is.

It was interesting to see, from the Facebook whistleblower files, the internal research done by Facebook showing that, in the UK, I think about 30% of girls who use Instagram felt heightened levels of anxiety and depression after using the service. Well, Facebook obviously determined that was okay. And the same piece of research says that even though they feel bad, they won't stop using it, because they're more frightened of missing out on what their friends are doing.

Now, there are safety concerns around something like that, but at the moment those debates take place in the dark. We're not allowed to know that stuff unless someone leaks it, but the Regulator could get access to that. Also, in some areas of the tech sector some of this regulation already applies: in Washington State in America, for facial recognition technologies, a company has to go through independent sandbox trials of a new product before it can launch. Now, I think some of these Safety by Design principles could work very well in the tech sector. And we're already starting to see that happen.

Neil Fairbrother

Okay. Recommendation D talks about "…recognizing and responding to the disproportionate levels of harm experienced by people on the basis of protected characteristics". So protected characteristics: the LGBTQ+ community in particular has a bit of a problem here, and you've highlighted this, or at least the LGBT Foundation highlighted this to you. They said that "LGBTQ people are also at risk of being harmed by the actions of the platforms themselves, with specific content that is personal to them and their lifestyles being erroneously blocked or removed at greater rates than other types of content".

Now, is that because the algorithms aren’t as smart as we think they are, or are they really smart and they just happen to contain human bias anyway, or is it because manual moderation is inconsistent and very difficult to apply?

Damian Collins MP

Well, that's quite a lot of questions there. So in terms of harms based on people's protected characteristics, those protected characteristics are really set out in existing equalities legislation. The principle behind our report, and the way we feel this should work, is that you take things like the Equality Act and you set out how it applies: what's protected, what could be considered, in terms of speech, something that runs against the principles set out in the Equality Act. So, you know, racial abuse, abuse directed at somebody based on their religion or their sexual orientation, that's set out in the equality legislation, so that's what's in scope.

In terms of how good is the AI, how good are the companies at dealing with this? They have to set out to the Regulator what systems they will use to solve this problem, and the Regulator might turn around and say, well, we don't think they're effective. So therefore you need to work out either how you're going to improve the AI, or how you're going to build in more human moderation.

But the job of the Regulator is not to say how to do it, but it’s to say what they expect the outcome to be. And if the outcome is not delivering, then the job of the Regulator is to intervene at that point and say, well, you’ve got to change the system because your system is clearly not good enough.

Neil Fairbrother

Okay. Recommendation E: "…apply the overarching principle that systems should be safe by design while complying with the Bill". Now, Safety by Design I think originated with our good friend, the eSafety Commissioner from Australia, Julie Inman Grant, but there is a lot of EU and UK work on that as well. The EU's Children Online: Research and Evidence classification from Sonia Livingstone, for example, has got the four Cs of Content, Contact, Conduct, and Contract. Should this, or something like this, be baked into the Online Safety Bill itself?

Damian Collins MP

Well, I think the principle here of Safety by Design is… and we see this in other areas of regulation at the moment. If you take gambling, for example, the Gambling Commission makes assessments based on the level of risk involved in a particular gambling product, whether it believes it's too risky. The Financial Conduct Authority does the same thing with financial products as well.

So some of these principles do exist elsewhere, and the job of the Regulator here will be to look at certain products that have been designed by tech companies to reach audiences and ask, are these problematic? Do we think there is a design problem here which makes it likely that this is going to cause problems in the future? And so the job of the Regulator is to challenge the companies and ask for information and reassurance on that. If ultimately the Regulator believes that it's actually design flaws that are making it hard for a company to comply with the Codes of Practice, then that will be a clear area for regulatory intervention. I think a systemic failure like that would be a clear area where you could see companies fined or further action taken against them.

Neil Fairbrother

Yeah. Now Safety by Design, from the child's perspective… the data from the Internet Watch Foundation suggests that the most dangerous place for a child to be alone with a smartphone is the locked family bathroom. That privacy lock on the bathroom door gives the illusion of security and safety. But of course they're using a smartphone and the smartphone uses a network. The Online Safety Bill includes social media companies and it includes search engines, but what it doesn't include is everything else that makes up the Online Digital Context that allows the child to get from their cell phone, whether they're in the bathroom or their bedroom, or in the street or at school, to the content that may well be affecting them.

So should the Online Safety Bill expand its scope to include devices, iPhones and Android based devices from manufacturers like Samsung? Should it include Content Delivery Networks? Should it include ISPs? Because if you take the network away, it doesn't matter what device you've got, you can't get to the content. If you take the device away, it doesn't matter what the networks are like, you can't get to the content. Should the Online Safety Bill be more encompassing of the entire Online Digital Context, rather than trying to load everything onto social media companies, which sounds a little bit unfair to them?

Damian Collins MP

Well, what you've got in those environments is device level settings that you can set. You can set your child's settings on their iPhone to restrict access to adult content.

Neil Fairbrother

You can, but that doesn’t stop people sending illegal images to children through FaceTime, for example, or through text messaging. It doesn’t stop grooming. It doesn’t stop cyberbullying.

Damian Collins MP

So you've got device level things there. You've got settings you can use for your home network that will block adult content as well. We've accepted the Law Commission's recommendation on things like cyber-flashing and the sending of indecent images, to make that an offense that the Regulator could take enforcement action on as well.

The question within that environment for the social media companies would be in terms of offering Age Assurance for children accessing adult content through their platforms. If despite this, as you said, someone is being directed towards adult or pornographic content, or is discovering it on those systems, if you've got a social media service where there is adult material, and that company can clearly see that younger users are accessing it through their platform and in doing so getting around all the restrictions that exist when going into a platform to access that content directly, then there should be some sort of responsibility placed on that company in that context.

So I think the way Safety by Design should work in this environment is that there are systems in place to make it as hard as possible for someone to access that sort of content, and that all the companies are doing what should reasonably be expected of them to put those systems in place. And if they're not doing it, then that's a matter for the Regulator.

Neil Fairbrother

Okay. Recommendation F: "…safeguard freedom of expression and privacy". Now, freedom of expression isn't an absolute right and freedom of speech does not mean you have a free for all, that you can say anything to anyone about anything at any time. The irony is that to have freedom of speech, you need to have regulated speech, but many organizations such as the Electronic Frontier Foundation and the Internet Society seem to believe that the current internet should not be tampered with at all. And they cite its remarkable growth, aided by Section 230 of the US Communications Decency Act, which is held by many as being the "26 words that created the internet", and we interfere with it at our peril. Can any change, can any regulation in the UK, be effective without change to Section 230 of the US Communications Decency Act?

Damian Collins MP

Yeah, so I think yes, of course it can, because the companies have to comply with legislation in the different countries where they do business. So when you think of content being accessed by users in the UK, then of course Section 230 doesn't apply, UK law applies in that context. So the companies will have to comply with UK law here.

The idea of the blanket exemptions from liability that Section 230 gives is, I think, false. I mean, when Section 230 was created, it was created on the basis that if someone was abusing someone else, their 1st Amendment rights didn't apply and the platform could take action against them and say, you know, you're in breach of our community rules and therefore we don't want you on here.

What it was never designed to be was the idea that the companies [would have] total [freedom from] liability for doing anything at all and in almost any context. Now, if say when you went on Facebook, your experience of using the service was simply an organic feed of accounts that you follow, if it said that you can't be signed up to be a member of a group unless you've proactively accepted an invitation to join it, then the experience of being on the platform would be somewhat different. You could say the user is more in control of the content that they engage with. But instead they, the users, are being profiled and content is being directed at them through the newsfeed. About 70% of what people watch on YouTube is played for them by the platform; they don't search for it.

So the idea that I think the experience on social media is people only getting content from people that they follow in the order in which it has been posted and only otherwise finding stuff that they actually search for is erroneous. Actually it is a system where content is being selected by the platform, directed to the user to hold their engagement for as long as possible and to get them to come back as often as possible. So in that curated environment, I believe the companies are absolutely liable for the user experience because it’s the companies that create it. It’s not the user.

Neil Fairbrother

It does sound like they are fulfilling the role of a publisher, doesn’t it?

Damian Collins MP

Yeah, they are. They are. They're not publishers in the traditional sense because they're not generating the content, but they are creating the experience. What you see, the order in which you see it, the prominence given to different things, those things are determined by the company and that's why they have a responsibility.

So the idea to say, well, if someone is engaged with extremist content or self-harm content, the fact that they're more likely to see more of it is a decision made by the hosting company itself. It's a decision made by the platform that someone who's engaged with conspiracy theories will overwhelmingly see more conspiracy theories. That's not a decision the user has made, that's a decision that the company has made to make money out of that user. I think when we're talking about systems the companies have built for themselves to make money for themselves, they should be regulated on the decisions that they make.

Neil Fairbrother

Well, indeed. The other part of this particular section or recommendation is privacy, and privacy is often afforded to us by end-to-end encryption, which is of course essential for certain services such as online banking. I don't think anybody would want their online banking details not to be secure and private. But bad people do bad things in private spaces. That is human nature. And a common grooming tactic is to take children from one platform onto a fully encrypted platform to further their child abuse in these private spaces. How can that circle be squared, Damian?

Damian Collins MP

Well, if you're talking about encrypted services, that is important, but difficult. There is certain data and information that the law enforcement agencies can gather even about encrypted services, and it can be very useful in terms of prosecution. We've also said that we feel, when you deal with things like anonymity online, that there should be traceability. There should be a presumption around traceability, a presumption that we can speedily and readily get data that may lead to the identification of someone who is harming others, particularly in the case of children.

And a constant complaint we heard throughout our inquiry, and reflected in the report, was that whilst the systems do exist, they can sometimes work very fast and sometimes very, very slowly, and it can take a very long time particularly to get data and information from an American company. So there's a lot we can do, I think, to improve that, to make offenses like that far more traceable.

There is a further question around the nature of encrypted services as a whole and what window there should be into those services themselves, and that's a separate review, which the Home Office has been leading on. I don't think that will be part of the Online Safety Bill, but it's clearly an issue that we'll need to revisit.

And also in general, to think of the general role of the social media companies in keeping their public square of discourse orderly, keeping it safe. And I think this is one of the future looking roles the Regulator's going to have to have, because new technologies will come along and the nature of the online experience will change, particularly when we look at things like the Metaverse; there is a danger that in the dark corners of that there will be some pretty dark and dangerous places. And the organizations that are best placed to know what's going on in those places are the companies themselves.

I mean, Facebook can still see what happens in a closed Facebook group, even if it's not open to the rest of the world. And I think there should be a general presumption that the companies are proactively looking for high risk areas where harm is likely to occur, particularly if that harm involves children, and that they act responsibly in that regard: they share information with other platforms, they share it with the Regulator.

I think understanding the harm that can exist even in private places is really important. On the other side of privacy, of course, especially with children, is making sure that companies aren't gathering more data than they need, and that they aren't using personal data to allow different messages to be targeted at people, maybe messages that are inappropriate. That would be an abuse of privacy: taking, say, someone's protected characteristics about their sexual orientation and using that data, that information, without their consent to target them with messages they may not want to receive, or allowing advertisers to do the same. And I think there's an important debate around privacy there, around the rights people have to protect their identity online.

Neil Fairbrother

Okay. The final recommendation you make, recommendation G, is to “…operate with transparency and accountability” and in respect of online safety the key part of that is the Duty of Care, which we’ve alluded to earlier. Now in your December report you talk about the Duty of Care and you say that the meaning of this seems to have changed from the Online Harms white paper to how it’s defined in the Online Safety Bill, where you say that “…the duties are things that providers are required to do to satisfy the Regulator, as opposed to duties to people who use the platforms” i.e. us, the users. But isn’t the whole point of this to have a Duty of Care to the users and not the Regulator as such?

Damian Collins MP

So what we've set out in our report is that if you have an overarching Duty of Care, the problem is around definition. I think when that was originally looked at, it was thought that an overarching Duty of Care would sort of futureproof the legislation. It would mean that even if something bad happened that wasn't defined in the Act, the Regulator could still intervene. The trouble with a very wide Duty of Care is that unless it's specific, it's not clear enough what it means either to the user or to the company itself. And that's why our approach was to say the Codes of Practice should be based on existing offenses in law. The Regulator sets out the Codes of Practice, what the companies are expected to do.

But I think there is this question around, are these just obligations to the Regulator, or are these obligations to the user? And we felt in our report, there should be user redress. And we suggested two areas where this could be actioned. One, ultimately through the creation of an Ombudsman service. So if you made a complaint to the platform about the content on the platform you’ve been targeted with, or some other complaint, if that complaint process has been exhausted unsatisfactorily, the Regulator might take a view. It might ultimately fine the company or do something like that. But if you want an individual redress, then there could be an Ombudsman service to create that.

But also, and I think potentially quite importantly, we've suggested creating the idea of civil redress through the courts. So you could take a company to court for its failure to meet its obligations as set out in the Act, the failure to meet its duties under the Codes of Practice as set out in the Act, and you could take a company to court on that basis. And that could potentially be open to class action cases, where multiple groups that have been affected in the same way could take a company to court.

This provision exists in Data Protection law, so you can take civil action against a company for failing to meet the obligations set out in the Data Protection Act. We think you should be able to do the same thing for online safety. And that would then be a clear example of the failure being not only to the Regulator, but to an individual who was significantly harmed and in whose favour the court could rule.

Neil Fairbrother

Okay. You mentioned class action there. When you look at the terms of service that many of these organizations apply, which you have to sign to use the service, if you disagree with the terms of service you don't get to use the service. So you have to agree to the terms of service. Many terms of service specifically say you shall not take class action against us. So does a private organization's terms of service that you have to agree to trump UK national law, or does UK national law trump those terms of service?

Damian Collins MP

Well, what we've said here is that the offense would be regarding a company's duties under the Act. And what we recommended in our report is that the pillars that underpin this are the Codes of Practice. So if the Act says the Codes of Practice are mandatory, then in a situation like this, a class action lawsuit, you could go to the courts and say, regardless of what the terms of service of the company say, the Regulator through its Codes of Practice has clearly set out obligations you have to meet. And if you fail to meet them, then you fail to meet your obligations under the Act. And I think in that situation, that would be a higher test than whatever the terms of service of the company say.

Neil Fairbrother

Okay. You also say that a possible model to help overcome this could be borrowed from the Financial Conduct Authority, the FCA, because they have what they call a consumer principle. So the FCA's model proposes an overarching principle that firms act in the best interest of consumers. Now that has a very close resonance with the UN CRC and General Comment 25, which says that people should act in the best interest of the child. But unfortunately, the US, where most of the social media companies are based, hasn't ratified the UN CRC, so does that present a problem for adopting a CRC-based approach?

Damian Collins MP

Well no, because ultimately it doesn't matter what American law says or America has done. We can legislate the standards that we expect the companies to meet when they're serving content to users in the UK. And that happens elsewhere in the world, in other areas, already. So we can determine through our own laws what we want the companies to do, and the Regulator will have the power to take enforcement action on that basis.

Neil Fairbrother

Okay. Now we're over time so I'll try and wrap up very quickly. In terms of sticks and carrots, incentives and punishments, what's available to Ofcom to make their regulations stick?

Damian Collins MP

Well, I'm not really sure the Regulator needs to be offering carrots, but what sticks do they have? How are we going to make them do this? Firstly, the good thing about a regulatory approach is you're not just looking at individual offenses, you're not saying to a company, okay, the code says this, you did that, you were wrong, therefore here's the fine.

It's really working with the companies to ask, is there a systemic failure? Are there multiple failures here? Are you clearly not only not complying with the code, but showing no intention of doing so? In that case, what happens? Well, the Regulator can levy fines of up to 10% of annual global revenues against the company, so potentially hugely significant fines. There would also be named Director liability: a Director would have to be named at board level, or a reporting Director to the board, who would be responsible for the company's safety regime and for the company's compliance. For a company that was in clear and flagrant breach of its obligations, that individual could face criminal sanctions as well. So the Regulator does have teeth. There are real things the Regulator can do if the companies fail to comply.

Neil Fairbrother

And that's mirrored in other industries with the Health and Safety at Work Act 1974. And that, interestingly, says that no one has to have been harmed for an offense to be committed under the Health and Safety at Work Act, there only has to be a risk of harm.

Damian Collins MP

I think the job of the Regulator is not necessarily to give arbitration on every single case, but to say, if something is clearly known about, if the Regulator has identified a clear level of risk that makes it likely things are going to go wrong, and things do start to go wrong, the Regulator I think is even more empowered to go and say, we told you this is a problem, you've done nothing about it, and the thing we predicted would happen is now happening. You've got to fix it now, or this is what we're going to do.

And I think that regulatory regime gives the chance for those sorts of discussions to take place. The Regulator can give guidance on how they de-risk some of the things that have been identified as being harmful. And if the companies are not willing to comply, then the Regulator has powers. And I think without both those powers and the ability to gather data and information that makes it quite clear the companies are not complying, independent regulation won't work. And those are, I think, two of the most important aspects of what's been recommended in this Bill.

Neil Fairbrother

And we're not the only country looking at this type of legislation, Damian. I know Australia is coming up fast with something. Are we in danger of having a regulatory splinternet?

Damian Collins MP

No, I think what we'll see is, if you like, a kind of leveling up of internet regulation. Countries will try different things, other countries will look at what they're doing. There's been a lot of international interest in the Online Safety Bill for exactly that reason. And I think what we'll see is that where it seems to be working and good systems are in place, other countries will look to copy them. We've already seen that with data regulation, where certain States in America have introduced data protection laws that are very similar to what you see in Europe. I think there's a lot of close cooperation, looking at what's happening in Australia or the EU or the US. And I think Europe and the UK and Australia and New Zealand will be ahead of America on bringing in some of these reforms.

But I think what we’ll see then is lawmakers in America looking at those changes and saying, well, why can’t we have some of those things here? You know, there are a lot of active ideas around how you start to reform Section 230 and end liability for certain known areas of harm and certain known areas of risk. So I think what we will see is this leveling up.

I would love to see a kind of global agreement on tech regulation, a kind of global version of COP, the climate change summit. But at the moment, I think the danger there would be that nothing will be done until everyone can agree. I think the best thing to do here is actually do what we think is right. We're road testing these ideas for other countries at the same time. And I think different countries will start to coalesce around what effective independent regulation looks like.

Neil Fairbrother

Okay, what’s the next step for the Online Safety Bill Damian?

Damian Collins MP

So, as you said, we published our report in December with our recommendations, the Government’s considering those recommendations now and the Government said it will introduce the Bill for second reading in Parliament before the end of this session. So that really would mean most probably in March or April this year. And then I would expect the Bill would complete its progress through both Houses of Parliament probably by the end of the year.

Neil Fairbrother

Okay. Thank you Damian. I really appreciate that. Good luck with everything and all eyes are on you, I think!

Damian Collins MP

<Laugh> Great. Good to talk to you. Thank you.