Safeguarding podcast – Innovation is illegal, with John Carr OBE

In this safeguarding podcast with John Carr OBE, we discuss how the EU went wrong with the GDPR, the EU's new ePrivacy regulation, and how it will make CSAM detection innovation illegal. We ask what WhoIs is and why it matters. We also discuss how social media companies have dodged the requirement for parental consent and ask, given that the UK is leaving the EU, why EU regulations that protect children online matter to the UK.

Welcome to another edition of the SafeToNet Foundation’s safeguarding podcast with Neil Fairbrother, exploring the law culture and technology of safeguarding children online.

Neil Fairbrother

As part of the European digital strategy, the European Commission earlier this year opened up a consultation period for a new Digital Services Act package, which they say will strengthen the single market for digital services and will foster innovation and competitiveness of the European online environment. Whilst the UK is currently in the transition period for leaving the EU, the proposed new Digital Services Act will impact the UK. Probably.

To guide us through this complex issue, I’m joined by John Carr OBE. Welcome back to the podcast, John. This is your second podcast with us.

John Carr OBE

Yes! I’ll have to speak to my agent about sorting out a regular contract!

Neil Fairbrother

Could you give us a brief resumé please of your background so our listeners from around the world understand a little bit more about your considerable experience.

John Carr OBE

Okay. Well, I am extremely old, so I've been around in this space for quite a long time, since about 1995 or '96. I was a techie-geeky type who used to do small network installations and management, that kind of thing. But in 1996 I started working for a big children's organization in Britain, advising them on the then new-fangled thing called the internet and its implications for children and young people. Subsequently I became Secretary of the British Children's Charities' Coalition on Internet Safety, which contains all of the big British children's groups. I'm a senior technical advisor to ECPAT International, a global children's NGO based in Bangkok in Thailand, and I act as an advisor to the Council of Europe on matters connected with the Lanzarote Convention. So yeah, I've been around the track a few times and I've done a lot of work on the international scene.

Neil Fairbrother

Okay. And the Lanzarote Convention, just in case folk aren't so familiar with it: that was all about the definition of sexual abuse images, or something along those lines, I think?

John Carr OBE

It’s about the protection of children across a range of things, but there is a particularly important bit that deals with child sex abuse images, online grooming, and a whole range of harms against children that the internet magnified, didn’t create, but magnified.

Neil Fairbrother

Now, since we last spoke in early March of 2019 the world has changed a little bit. Theresa May was our PM, the Online Harms white paper had still yet to be published, we could meet in the real world for coffee and Zoom was something that motorbikes did. It seems like a different age. How have the last 18 months or so affected you and what you do, John?

John Carr OBE

Well, I mean, the main thing about COVID for me is that, thankfully, none of my family have contracted it, although we've had a few scares. But the big change for me is that I haven't been on an aeroplane at all this year, and I can't remember the last year when that was true. And I like that. I mean, there is nothing to be said in favour of international travel on aeroplanes, other than getting there, meeting friends and colleagues and doing your stuff. But the whole business of flying is just awful. So doing everything by Zoom has been a kind of liberation in a way. It isn't the same as being with colleagues and friends and having more intense face-to-face discussions. But there you go. That's been my COVID price.

I live very near Hampstead Heath and I was saying to a friend the other day, if I was parachuted blindfolded at midnight onto Hampstead Heath and told to find my way back to where I live, I don't think I would have any trouble at all doing it. So there we are. And of course, as you say, we're now much further along with the Online Harms white paper in the United Kingdom, and we're a lot further on within Europe with the publication of the Digital Services Act. And in fact, not just that, but a couple of other measures which are coming along at the same time.

Neil Fairbrother

Okay, well, we’ll explore some of that in just a moment, but let’s deal with the elephant in this particular room. We are rapidly approaching the end of the transition period out of the EU, deal or no deal, so why should we in the UK be concerned about what the EU is doing with regard to the online safety of children?

John Carr OBE

Well, there are two principal things, one of which is immediate and the other of which is longer term. The immediate issue is this: because we don't actually, finally leave the EU until the 31st of December, any measures which the EU adopts up to that date become law in the United Kingdom as well, unless and until we change them. But given that there are so many things going on, we have to assume that any laws that come into effect from the EU before the 31st of December could potentially be with us for a very long time. So there's an immediate, ongoing concern about that aspect of Brexit.

And there's one measure in particular that could be a disaster. I don't think it will be, I think we'll deal with it, but let me just briefly explain. In 2018 a Directive was adopted which said that on the 20th of December 2020, in other words a few months from now, something called the European Electronic Communications Code will become law. What that Code did, amongst other things, was to make it illegal for messaging services to continue using PhotoDNA, or tools to detect grooming behaviour, or classifiers that flag images which haven't yet been confirmed as illegal child sex abuse material but are very likely to be, so that they can be referred for human review and a decision taken.

Neil Fairbrother

OK that can’t be a deliberate outcome of the legislation though, John this…

John Carr OBE

It was a mistake. When they were drafting it in 2018, I don't think they knew that that would be the effect of what they were doing, at least I'm rather hoping they didn't. But the point is that nobody anywhere in the Commission, or in any of the other European institutions, and none of the children's organizations, so this is a sort of mea culpa for me as well, nobody spotted it until about six months ago. So unless we stop that from happening, that's the position that we would be in too, in Great Britain.

But on the 10th of September, so just a few weeks ago, the Commission published a proposal, which if it becomes law will preserve the status quo at least until 2025 and the idea is that they would then in the meantime, bring forward a proposal for the permanent solution or a longer term solution to address this issue of how we allow proactive tools to work.

Neil Fairbrother

Okay, this is the “temporary derogation” I believe?

John Carr OBE

It's called the temporary derogation. In fact, the Commissioner has said they'll bring forward the measure next year, but the point is "belt and braces": if this measure is passed, the status quo will be preserved until at least 2025.

Neil Fairbrother

But it does more than that though, doesn't it? Because I think it's Section 11 on page eight or Section eight on page 11, I can't remember which, but it says "…the derogation provided for by this regulation will be limited to well-established technology that is regularly used by number-independent interpersonal communications services for the purpose of detecting and reporting CSA online and removing CSA material before the entry into force of this regulation". And it also says "…the use of the technology in question should therefore be common within the industry".

Now to my mind, that is blocking innovation. So you mentioned PhotoDNA, which is fantastic, and I spoke with Hany Farid, who is credited with the invention of that particular technology, and it's very good at what it does, but it almost by definition relies on the abuse having been committed. It is a retrospective analysis and tagging of published photographs and, with PhotoDNA for Video, videos. But there are new technologies emerging which can analyse in real time what the camera is seeing and can hash or block it out in real time. Now, this legislation would presumably prevent these new emerging technologies from being deployed?
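
For listeners unfamiliar with how hash-based detection of known material works, here is a minimal, purely illustrative sketch. PhotoDNA itself is a proprietary perceptual hash designed to survive resizing and re-encoding; the example below uses an ordinary cryptographic hash and an invented placeholder hash list simply to show the "match a file against a list of previously verified images" pattern being discussed, not the actual PhotoDNA algorithm.

```python
# Illustrative only: PhotoDNA is a proprietary perceptual hash; this sketch
# uses an exact cryptographic hash (SHA-256) just to show the pattern of
# matching files against a list of hashes of previously identified images.
import hashlib
from pathlib import Path

# Hypothetical, placeholder hash list of previously identified, human-verified images.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_image(path: Path) -> bool:
    """True if the file's digest appears in the known-image hash list."""
    return file_digest(path) in KNOWN_HASHES
```

The key point, as discussed above, is that this approach can only ever recognise material that has already been identified and added to the list; it cannot catch new abuse as it happens.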

John Carr OBE

You are absolutely right, and I've written a blog which says something rather similar to that. It's the first time that I can recall a law being passed which, in effect, says innovation is illegal, and innovation to protect children is being made illegal. Because you're absolutely right: it has to be well established, commonly accepted within the industry and so on, and it has to have an error rate… certainly when they spoke about it at the committee meeting where it was being discussed last week, they mentioned an error rate of less than one in ten billion, or one in a billion, or something. And I don't have a problem with that. You need very, very, very high accuracy rates. But the idea that you would make it illegal to innovate and come up with new solutions, or solutions which have not yet gained common acceptance within the industry or which are not yet well established, still seems a little bit bizarre.

So my suggestion to the Commission is that they adopt technology-neutral general principles and say that if a new tool emerges, or is currently emerging, that meets these criteria, it's okay for it to carry on being used or it's okay to introduce it for the first time. Because think about it: what Commission officials are actually doing at the moment is drawing up a list of all of the programs that are considered to be commonly accepted and well established, and they are bound to miss something. And what if one of those applications is upgraded, or there's a need for a fix? Does that make it a new application? Does it cease to be a commonly accepted one or a well-established one? It doesn't seem right to me. But either way, the fundamental point, the most important point, is that they certainly mustn't outlaw the tools which are currently being used. So to that extent, if all they do is preserve the status quo, that will at least be something.

Neil Fairbrother

Okay. Well, I'm sure that the Online Safety Tech Industry Association will have something to say about that, because there's clearly a direct impact on many of those companies. Now, as far as the Digital Services Act is concerned, you've put together some recommendations to the European Commission, five recommendations in fact, and the first one of those says that "…a duty of care needs to be made explicit". Now, we're expecting the Online Harms legislation early in 2021 to place a duty of care on electronic service providers within the UK. But how can this be enforced on an international basis? Many of the social media companies that our children use and are affected by operate under US law, well outside the jurisdiction of the UK.

John Carr OBE

Yeah, well, the same is true in relation to the GDPR and a number of other rules which the European Union has adopted. The fact that you're not legally domiciled within the EU, at least as far as the GDPR is concerned, is not material, it doesn't matter in principle, and it should be the same here. If a company wishes to operate within the boundaries of the European Union, it has to conform to European Union rules. We have the same thing with physical products. You notice this if you buy a new toaster or a new TV: there will be a little label attached to it with a little symbol on it, which indicates that it meets EU-specified technical and safety standards. It should be the same with online standards.

So yeah, I mean, it's a new area. It's a new thing. But it's not impossible, and we have analogies and precedents, and it's very, very important. In fact, of the five recommendations that you mentioned, if somebody put a gun to my head and said "You can only have one", that would be the one that I would choose, because if companies accept and acknowledge that they have a duty of care to their users, and that that's made explicit, then many, many other things follow. Shall I explain now?

Neil Fairbrother

By all means – I was going to ask the question, which was: if this duty of care was established in law and in principle, and could be demonstrated to work, would it help these electronic service providers preserve their legal immunity from civil or criminal liability for third party content hosted on their platforms?

John Carr OBE

You hit the nail on the head there. The rules of the internet were set in the United States in 1996 in the Communications Decency Act, probably the most inappropriately named piece of legislation in the history of legislation around the world. Right? And basically what it said was if you are an internet service provider and a third party does something on your platform that you have no knowledge of, you can have no legal liability for anything that that third party did.

And we in Europe, this certainly includes the UK, essentially copied that. We didn’t copy it exactly, but more or less. In other words, at the moment, all of the big platforms, all of the big companies that matter on the internet at the moment have no legal responsibility for stuff that goes on using their networks or services unless and until it’s drawn to their attention.

If the duty of care is accepted and established then that changes. What that would mean is the companies can still preserve their legal immunity, I don’t have a problem with that, but on one condition: they demonstrate, they show to the satisfaction of a court if necessary, that they had taken all reasonable and proportionate steps to implement, or honour their duty of care by trying to anticipate the risks to children in this case, that’s what we’re talking about, and do something to mitigate them. So using clever tools, using scanning, using different methods, if they can show that they tried their best to eliminate or reduce the possibility of harm being done to children, they can keep their immunity from liability. And if they can’t, they remain liable.

Neil Fairbrother

Okay. And is this what you mean by your second recommendation that transparency is essential? Because to establish the fact that they have done as much as they can to avoid these risks of harm to children, transparency is needed to understand how they’ve done it?

John Carr OBE

Up to now, we've essentially been working on the principle of self-regulation. Back in the early days of all of this stuff, essentially what big tech said to governments was: trust us, leave us alone, just stay out of our way, and any problems that come along, we'll deal with them. And the problem was that in those early days, so the late 1990s and early 2000s, very few people outside a rather narrow circle actually understood the technology at all. They were all dazzled by the economic benefits of it, by the manifest beneficial changes it was promising to bring about. So they basically said, okay guys, we will trust you. But what that meant was the companies were put under no obligations to report what they were actually doing. And they refuse. To this day they continue to refuse.

Now, some of them are beginning to publish these transparency reports, but they're making the decisions about what they want to be transparent about. They're the ones who are essentially saying, look, this is what's going on. But nobody else, no third party, is validating it. So we're still in a position where we're asked to trust tech, and after what's happened in the last 25 years or so, there is no basis for that.

So we have to establish a clear and transparent regime, so that members of the public, governments, children, everybody can see that if a company says it's doing something to protect children, or it's doing something to promote digital literacy or whatever, what it's saying is the truth and it's doing it in the best possible way. So transparency… let's say a gun is put to my head and I'm allowed two things: transparency would definitely be number two, because at the moment there is zero transparency.

Neil Fairbrother

Okay. Now your third recommendation that you made was to revisit the GDPR, and this area starts to get a little bit complicated in some respects. So let’s start with the basics. What is the GDPR?

John Carr OBE

The GDPR is the General Data Protection Regulation and it governs every EU member state, which still includes the UK, and even when we leave the EU it will carry on applying unless and until we change it, and it's unlikely that we will any time soon. But one of the things that happened when the GDPR was going through was that the politicians made an absolute and complete mess of dealing with the question of the age of consent for children.

Neil Fairbrother

Now, this is not the age of sexual consent, right? This is the age of consent for sharing data.

John Carr OBE

Politicians had to decide at what age a child should be considered competent to make a decision for themselves about whether or not to share personally identifiable information with internet services and app providers, who would then use that data to serve them with advertisements, in other words for commercial purposes.

The original proposal, before the actual draft regulation was published, was to make it 18. Only somebody who was very lazy would have come up with 18, because 18 is the general rule of consent for contracts, and they didn't think it through, but that is what they were proposing. Anyway, there was a big storm, they then changed it to 13, and when they published the document they simply said, we think it should be 13 because that's what the Americans already do and it's already in use in most of Europe at the moment.

Neil Fairbrother

And that’s based on the US COPPA…

John Carr OBE

Yeah, US Federal law, COPPA, the Children's Online Privacy Protection Act, adopted in 1998, right? Based on research done in 1997 and early '98, when no social media services existed at all. Mark Zuckerberg was still in high school, although it's possible that Friendster was either just being created or was about to be created. But the point was, that law was adopted when social media as we know it today simply did not exist, and it became operative in 2000.

Neil Fairbrother

Okay. So we've got the GDPR, and so far they started at 18 and then reduced the age by five years to 13, which at least had the benefit, I suppose, of being in line with the US COPPA law. What happened next?

John Carr OBE

When it finally went out to the politicians for decision, it was done in a thing called a trilogue, which technically is meant to be confidential, but I know what happened because I spoke to three people who were in the room. It was the French and the Germans who said to the Commission officials, why have you suggested 13? And they said, because that's what the Americans do, and it's already a commonly established standard in the European Union. So they said, hang on a minute, just because the Americans are doing something, that's not a convincing argument in and of itself without more evidence. So they said, let's make it 16. And by the way, that is pretty much it. That was the depth of the research and thinking that went into it: they just said, let's make it 16.

Okay. So they published the proposal the next day and it was in all of the newspapers: the EU's going to make 16 the rule for Facebook. All around Europe, newspapers, the BBC and so on were carrying stories saying the European Union is about to ban children under the age of 16 from Facebook and Instagram and all of these apps, bearing in mind 13 was the established rule. Okay?

So the next day they all came back into the same room together and they said, well, that didn't go down too well, did it? And then they said, what can we do instead? So they said, well, look, let's keep 16 as the rule, but let's give individual countries an option to adopt a different age if they want to, as long as it's not lower than 13 or higher than 16. And do you know what? That was just disgraceful.

They didn't talk about it, they didn't look for research, they didn't look for evidence of anything to do with children's competence, which is fundamentally what they should have been doing. And they didn't consider the implications of having, on the same platforms at the same time, children subject to different age limits and age regimes. And since parental consent is also required in some countries, in fact in most of them, they didn't consider the implications of children being on these platforms under different standards of verifying parental consent, where it was used at all. It was a shambles, it was a disgrace.

Neil Fairbrother

OK, but this is not really a law about being on social media at all, is it? This is, as you say, a law concerning the sharing of personal data, which the social media companies, certainly as far as the US COPPA is concerned, have adopted as their standard to determine who should be on their platforms according to their terms and conditions. It's a separate issue.

John Carr OBE

A completely separate issue. But the fundamental point we make in that document that you’re referring to is, nobody gave proper and thorough consideration to what was in the best interests of children. It was a political model, a political fix that was handled badly. And that’s not the only thing that they got wrong. The other ones are even more obscure and difficult to explain.

Neil Fairbrother

Well, yes, but nonetheless I think there were two other points which are worth exploring at least briefly if we can. One is that you say that "legitimate interest", or the use of "legitimate interest", has been manipulated in respect of children, the implication being by social media companies, to avoid this age issue. What is "legitimate interest"?

John Carr OBE

Okay. So your age only matters if the social media company or the app is using consent as the basis on which they collect personally identifiable information. So in Germany, for example, they’ve adopted 16. So if you’re 15, 14 or 13, and you want to go on an app, technically the company has to get your parents’ consent to you going on it.

But if the company says, no, we're not going to ask for your consent, we have a legitimate interest in collecting information about your behaviour and about what you do when you're using our app, so we're not going to ask for your consent, we're not going to ask for your parents' consent, we're just going to collect your data and use it because we have a legitimate interest in doing so.

Now, in one fell swoop that makes the whole issue of consent irrelevant. You can argue about whether consent is a good basis or a bad basis for collecting data; the fact is that's what happened. And Facebook in particular used the legitimate interest legal provision to create a whole new class of membership.

So to go back to Germany, where 16 is the minimum age: you can in fact open a Facebook account if you are below the age of 16, your parents don't have to give their consent, and Facebook justify that by saying, we're doing this on the basis of us having a legitimate interest in collecting the data. The service they provide isn't identical to the one that you would have got if you'd got parental consent, but I'm not absolutely sure what the serious differences are.

Neil Fairbrother

Okay. And do you know what the grounds are for their claim that it’s a legitimate interest?

John Carr OBE

Their claim is: we are a company that makes its money from advertising and from providing services to people which are otherwise free, and so we have a legitimate interest in not just allowing you to come and use our service and get nothing back for it; we have a legitimate interest in collecting your data so that we can keep our business model going. I mean, it's currently being investigated by the Irish Data Protection Authority…

Neil Fairbrother

Yeah. Well, we'll come back to Ireland in just a short while. So, very quickly, if we can deal with this last area, which is even more obscure: with the GDPR you refer to the "travesty of WhoIs". What is WhoIs, and what is the travesty? I know this gets rather obscure, John, but…

John Carr OBE

I'll try and give you the ten-second version. Okay. If you want to buy a website, say I want to create a new website called johncarrfunkydancing.com. Okay, first of all I have to check if anybody's already taken that name, almost certainly they haven't, then I have to buy it. So I would buy it through a registrar, such as GoDaddy. GoDaddy is the biggest and best known registrar in the world. What GoDaddy is meant to do is check that I am actually who I say I am, that I live at a certain place, that I can be contacted, and so on and so forth. That information then goes into a global directory, which is called "WhoIs". So in theory, if I start doing criminal stuff or bad stuff, it should be very easy for the cops, or say for you if you want to sue me, to get hold of me and claim your legal rights or enforce the law.

The problem is these companies are not checking everybody's details sufficiently, and the last time WhoIs was checked, by an English lawyer on behalf of ICANN on a global basis by the way, only 23% of the entries in the WhoIs directory were accurate in the way that they were meant to be. What that's meant is that all kinds of criminal abuse and scams are taking place that shouldn't be.

And the tragedy of the GDPR in the EU was that when the GDPR was being considered, they didn't even discuss it. They simply missed it out. What they should have done is said specifically: if you wish to maintain a database called WhoIs, then you must ensure that the data in it is accurate and up to date, and if you don't, the penalties will be extremely severe. Because if we had accurate information in WhoIs, who in their right mind would give their real name and address, knowing that it was on a database somewhere, and then publish child sex abuse material from that web address? Nobody. But that is what's happening. It was a scandal, and I'm afraid the EU came out of it rather badly.
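
As an aside for listeners who want to see it for themselves, WhoIs is a very simple protocol: anyone can query a registry's WhoIs server directly over TCP port 43 (RFC 3912) and read back the registration record John describes. The sketch below assumes the .com registry server operated by Verisign as its default; real tooling (the `whois` command-line utility, registrar lookup pages) adds referral handling between registries and registrars.

```python
# A minimal sketch of a raw WhoIs lookup over TCP port 43 (RFC 3912).
# The default server here is the .com registry WhoIs server; other TLDs
# use other servers, and full tools follow referrals automatically.
import socket

def whois_query(domain: str, server: str = "whois.verisign-grs.com") -> str:
    """Send a WhoIs query for `domain` and return the raw text response."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        # Per RFC 3912, the query is simply the domain name followed by CRLF.
        sock.sendall((domain + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# Example usage: print(whois_query("example.com"))
```

Whether the registrant details returned by such a query are accurate is, of course, exactly the problem John is describing.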

Neil Fairbrother

Okay. Well, maybe that part of the GDPR could be amended under some future provision or mechanism. Now, you mentioned also, in your fourth recommendation, that the operation of what's known as the Audio Visual Media Services Directive should be scrutinized. We don't have time to go into this in a lot of detail, but one of the points you make is that "…Member States should be explicitly empowered to limit access to their part of the internet, to any EU-based site, which fails to take adequate steps to restrict children's access to unsuitable content, for example, by introducing Age Verification". And Age Verification is of course a bit of a hot potato, certainly in the UK. Where are we with Age Verification?

John Carr OBE

Well, I think Age Verification will happen in the UK. You're right, there have been some political shenanigans and stuff going on, but it will happen, and the logic of the argument for Age Verification is to me completely clear: it's inevitable and desirable.

But the problem with the AVMSD, which is a Europe-wide rule, is that it doesn't apply to companies outside the EU, outside its jurisdiction. So all I was saying was that this isn't going to work, because most of… I mean, PornHub and most of the most graphic pornography sites are not in Europe at all. So what countries need to be able to do is introduce barriers to those companies based outside the EU in order to protect children within their boundaries, and at the moment they don't have the power to do that; it's not included in the powers of the AVMSD.

Neil Fairbrother

In your fifth and final recommendation you mentioned a number of structural changes, but one of the ones that stood out for me was that “…a new role should be created within the Commission directly reporting to the President and the post holder must have the authority to require that no further action is taken until the position of children has been properly addressed.” So this person would have a role with some significant clout. What role would that be?

John Carr OBE

Well, if we go back to the beginning of this conversation, when we were discussing the mess-up with the limits on automatic tools being used to protect children: that would never have happened if there had been somebody high enough, you know, kind of "nosebleed" high, within the Commission whose duty it was to ensure that every policy going through the European institutional machinery was properly assessed in relation to its impact on children.

The fact is nobody of that kind exists. There is somebody, and she does a great job, who's a kind of child rights coordinator within the Commission, but she's very low down the food chain. She's very talented, very skilled, highly dedicated and energetic, but she sits in one single Directorate, right? We need somebody reporting directly to the President who's got an explicit responsibility and oversight role, who can say: stop. That measure you're bringing out of DG Justice, or DG Home, or DG Connect, or whatever it might be, has the following implications for children, so don't let it go any further into the legislative process until you've considered that and come up with a satisfactory answer. Otherwise we're going to keep getting the kind of messes that we got earlier with that other regulation.

Neil Fairbrother

Okay. We talked about Ireland a short while ago, and Ireland's Data Protection Commission was in the news recently regarding what's known as a "preliminary order", which was handed down to "…stop the transfer of data about European customers to servers within the US (so outside of the EU and into the US) over concerns about US government surveillance of that data."

Now, in response to this preliminary order, Facebook Ireland's Head of Data Protection and Associate General Counsel is reported to have said that "…it is not clear to Facebook how, in these circumstances, it could continue to provide the Facebook and Instagram services in the EU." And that comment has been widely interpreted as a threat to pull out of the EU, for Facebook to end their service provision within the EU. Now, is that an empty threat? Is it Facebook brinkmanship, trying to gain public sympathy, or would they in fact up sticks and leave?

John Carr OBE

It's certainly bullying, that's for sure. There's a very good guy called John Naughton, Professor John Naughton at Cambridge University, who's also Irish by the way, who wrote a great piece about this, and he thinks that in ten years' time, you know, if he's still around, if we're all still around, this case will still be going through the courts in Ireland. And this is what big tech does all of the time. They've got deep pockets, they can bog you down, they can delay you in the courts almost indefinitely.

There are 140 people who work for the Irish Data Protection Authority, and they've got responsibilities for hundreds of different companies which have their headquarters in Ireland. Facebook has probably got more lawyers in a single department than the whole of the Data Protection Authority in Ireland has staff.

And remember, for Facebook and these big tech companies, delay is the same as money; if they can just delay having to make changes for five years, six years, that's a lot of money for them, and they would do it for that reason alone. I heard what Facebook said, I heard what you read out. It's definitely bullying. The question is whether or not the Irish will give in to it. Personally, I very much doubt, whatever the outcome of this is, that Facebook will cease operating in Europe. They make too much money from Europe to do that, but they will find a way to adjust or accommodate it.

Neil Fairbrother

Okay. Now, one final question, John, as we’re running out of time. Commissioner Johansson in a recent webinar on “Preventing and Combating Child Sexual Abuse and Exploitation; towards an EU response”, said that “…we must also deal with encryption. Military grade encryption that’s easy to use, but impossible to break, makes paedophiles invisible and hides evidence of their crimes from police”.

Now, we all know and I think we can all agree that strong encryption is absolutely necessary for legitimate daily activities and I think few would argue for “leaky” or weak encryption, back doors and the like, but encryption on its own while it provides private spaces, does not necessarily provide safe spaces online, especially for children. So how can safety be ensured? How can safety features be implemented in an encrypted and private world?

John Carr OBE

Well, there are forms of encryption that you can use on your service to keep communications private, if that is what you're worried about, that will also allow illegal images to be detected within the encrypted environment. The problem is that the methods available at the moment are not scalable; they will only work in small batches and on a small scale. Somebody said to me, if you tried to make this work on Facebook's platform, for example, the amount of computing power that would be needed would be greater than the current capacity of the world's systems to generate electricity, and would probably lead to the extinction of all human life on planet earth. Now, even I agree that is not a desirable outcome.

So my point is very, very straightforward. The increased use of strong encryption in large scale messaging environments and other places is a threat to the rule of law. It’s creating spaces where court decisions can be ignored, where police subpoenas and summonses have no effect.

So society must take a view. Are we okay with that? Are we okay with it? It's not just about children, although that's my particular concern. It's about drugs. It's about guns. It's about terrorism. It's about our whole set-up. There is no single legal instrument anywhere in the world, or international treaty, that has ever said that privacy is an absolute and unqualified right. It isn't, but strong encryption threatens to make it one. And private companies, private geeks, techies do not have the right to make a decision to create an environment of that kind. They don't. And so we've got to say, hang on, let's wait until we can find alternative forms of encryption that will allow the police and the courts and the rule of law to continue working as they have done up to now.

Neil Fairbrother

If you look at the UN Convention on the Rights of the Child, the UNCRC, Article 16 says "No child shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home or correspondence, nor to unlawful attacks on his or her honour and reputation." I think most, shall we say, privacy advocates or encryption advocates say, ah, the UNCRC gives the child a right to privacy, which is true. But, as I've said and as you said, the right to privacy is not absolute. The article also says the child shall not be subject to "unlawful attacks on his or her honour and reputation", but these enclosed, private, encrypted spaces, as we know from the IWF and the International Justice Mission, create spaces where those attacks on honour and reputation are in fact allowed to happen. Undetected.

John Carr OBE

The fundamental point about the UNCRC is that the overriding principle is that whatever is done should be in the best interests of the child, and all of the other articles have to be read in the context of that overriding principle. As you say, there's no way in which the use of strong encryption makes sense if all it means is that children can be raped and abused and their pictures circulated and swapped between paedophiles ad infinitum, because the courts and the police can't get in there. And similarly on the matter of reputation and so on: if nobody can scrutinize this behaviour, who knows where it will end and what damage will be done? Just because something is technically possible, just because it's technically possible to create environments that are impregnable, it doesn't mean we have to use them. It's just not acceptable.

Neil Fairbrother

Okay. On that note, John, I think we’re going to have to leave it. Thank you so much once again, for your time, it’s been a fascinating and interesting discussion as ever with you and good luck with your non-flying around the world.

John Carr OBE

Cheers! See you!
