Safeguarding Podcast: 2021 Review – What a Year!

By Neil Fairbrother

Our 2021 review of Human Rights Safeguarding Podcasts includes many of this year’s amazing guests. Topics include Addiction, Apple’s NeuralHash CSAM, Age Verification, John Doe vs Twitter, WhoIs, County Lines, General Comment 25, pack-hunting predators, and the TriChan Takedown:

https://traffic.libsyn.com/secure/safetonetfoundation/SafeToNet_Foundation_podcast_-_2021_review.mp3

Welcome to another edition of the SafeToNet Foundation’s safeguarding podcast with Neil Fairbrother, exploring the law, culture and technology of safeguarding children online.

Neil Fairbrother

2021 turns out to have been a significant year for online child safety. The UK’s draft Online Safety Bill was published. We had at least two Facebook whistleblowers. We’ve had the publication of General Comment 25 from the United Nations. We’ve had continuous COVID lockdowns, and Apple announced child safety features for iOS. We’ll be bringing you a podcast all about the Online Safety Bill in January, but here’s a review of some of the other topics that we’ve covered this year.

I’d like to thank SafeToNet for their continued support of these podcasts, and I’d also like to thank our guests for their amazing generosity of time and knowledge, and thanks also to our substantial and growing international audience for taking the time and trouble to listen, download and provide feedback. None of this would’ve been possible without you all.

We started 2021 discussing Internet Gaming Disorder with Dutch neuroscientist, Michiel Smit, who explained the complex structures of the brain and the impact of dopamine:

What is internet gaming disorder?

Michiel Smit

Players who play so excessively that it is now categorized as an addiction. That means that they have lost control over their game play: despite adverse consequences, they give up many of life’s other domains in favour of playing video games as much as they can. And they are classified mostly according to the DSM-5 criteria for gambling addiction, which borrow quite a bit from other, more traditional addictions. They classify as being addicted to video games.

I’m spending a bit more time on the reward system because it’s the system that mostly underlies addiction. What all addictions have in common, be it substance abuse or a behavioral addiction like gambling or now gaming, is that they all hijack the activity in the reward system. Substance abuse does this because the substance you take forces out dopamine, which is basically the currency in the reward system, the currency it uses to assign valence; it forces the dopamine to flow out almost like a fire hose.

The easiest way to envision this is like when someone snorts cocaine: the cocaine is a stimulant and it forces the dopamine to be released in the reward system, so you get a huge peak in dopaminergic activity within the reward system, which for the brain is a signal like, wow, what I’m doing right now, this feels amazing! Give me more of this, because this is really relevant!

So traditional drugs can hijack the reward system and give the user a feeling that they’re doing something highly significant. I mean, that’s basically just the signal that evolved for the brain to learn, like, hey, more of this please. And then that starts a whole cascade of processes, like enhanced neuroplasticity, more synaptic outgrowth, the building of habits, sensitization, and yeah, a whole circus starts to spin around this really high dopamine signal.

Now the question is, can games do the same thing? We saw this already in the previous millennium: in 1989 there was seminal research done with a simple 2D tank game using positron emission tomography, which is a different imaging tool, but they already saw back then, 32 years ago, that playing a simple tank game could boost dopaminergic levels in the reward system to levels comparable to injecting amphetamines.

Neil Fairbrother

We discussed the impact of COVID lockdowns on gambling and gaming with Lucy Gardner and Kevin Cleland of YGAM:

Now you mentioned COVID there, and I think we can’t really have a podcast about online gambling and gaming without talking a little bit about COVID. Have you noticed any difference in addiction rates or behaviors during the COVID lockdowns?

Lucy Gardner YGAM

Yeah, there’s been a little bit of research done by Ipsos Mori looking at gaming and gambling attitudes, which has produced some findings. Overall, levels of gambling did go down, but when we look at it in the broader context that’s not really surprising, because betting shops were shut and most live sport was canceled. So lots of the forms of gambling which people might have traditionally done, they weren’t able to do. But online gambling did go up: online slot machines, for example, went up by 25%, online poker was up by 38%, and the amount of money lost within that lockdown period also increased a lot. Now we’re not saying that suddenly, you know, everyone was gambling because it was something to do, but people who traditionally gambled, or who were potentially at-risk gamblers, started gambling more than they had done originally.

And research has been looking at gaming as well. COVID had a big impact on gaming and the amount of time that young people are spending online. At the end of 2019, children aged 10 to 16 were spending an average of about 11.6 to 12 hours a week on gaming; that increased to about 13.6 hours by the middle of last year. Obviously, you know, children were at home, so it was something to do, so it’s not really surprising that it increased.

But there were also reports of children spending a little bit more money online, on things like loot boxes, which I know you’ve mentioned earlier. There’s a really interesting piece of research done by the University of Glasgow which also highlighted the importance that gaming has had. When parents were questioned in this research, they said that gaming has had a really positive impact as well, both, you know, helping their children stay interested, stay chatting to their friends and stay connected to their communities, but also helping with education. So there have been some potential negatives, but also some real positives as a result of COVID.

Neil Fairbrother

And here we have an honest and frank Tony Kelly talking about the impact that gambling had on his professional footballing career:

Okay. So we discussed something called Online Gaming Disorder in quite some detail in a previous podcast and, in particular, the impact of dopamine on the reward network inside the brain. Are there similarities, do you think, between that and gambling, whether online or offline, and even goal scoring? Can goal scoring become an addictive sensation?

Tony Kelly

Interesting. Yeah, because dopamine’s a powerful chemical, and when we’re talking about the convergence of gaming and gambling, there is, without a shadow of a doubt in my opinion and in my organization’s opinion, a path that can lead, and does lead, young people on to adult gambling.

The nature of gaming has risk and reward elements. So if we talk about loot boxes, for instance, that risk and reward element is the same: the repetitive behavior, the excitement, you know, the frustration of losing or winning, the frustration of not getting the item that you might want, whether it’s a loot box in Fortnite or, in FIFA, not getting the best player in the packs. All these elements are geared towards a similar outcome, in terms of where you could be as a 13 year old by the time you get to 18, because you have built up this familiarization with gambling and gaming. So I think the convergence is actually huge. Anyway, in the UK, I think we’ve now got five hundred thousand child gamers.

Neil Fairbrother

Elizabeth Letourneau took us through some of the unintended consequences of badly thought-through laws and the harmful results of child sexual offender registers:

Indeed. So to get some idea of scale, I believe there are around 40 states in the US that put children on registries, but in 19 of those 40 there’s no minimum age, meaning that even prepubescent children may well be on the same list as adult offenders. And I believe a lawyer, Eric Berkowitz, wrote in the New York Times that it appears as many as 24,000 of the nation’s 800,000 registered sex offenders are juveniles, and that about 16% of that population, just under 4,000, are younger than 12 years old, with more than one third being 12 to 14. Those numbers seem extraordinarily high. Is that the case, in your view, in your experience?

Elizabeth Letourneau

So I’ve been doing research on the impact of subjecting children to registration since about 2003. And one thing is that states and national government really don’t advertise how many people are on registries for behavior they engaged in when they were children. In fact, I’ve seen federal-level government officials really try to minimize that number, I think because, you know, it’s horrifying that we put any child on a registry, particularly now that we’ve got good science showing that these registries do nothing to improve public safety, and they’re associated with among the worst types of harm for the children to whom these policies are applied. So we’ve got good data out there now, about 18 years of publications, all very consistent; there’s no wiggle room in the findings. We find no helpful effects, no public safety effects, and very, very harmful effects on the children they’re applied to.

So there’s not good information on how many kids are on registries. We’ve had access to registry data for multiple states, and we know it’s in the thousands. It would not at all surprise me if it was 25,000 people; it could be higher, but those data are not publicly available. Anytime a child has engaged in harmful sexual behavior and it comes to light, it has to be reported in the United States. And so there is often a very robust juvenile justice response, and you know, kids can and often do go to prison.

I’ve seen prison sentences of five years, and some can even be committed to sex offender civil commitment facilities in the US. Definitely. And we know of people who have been committed to sex offender civil commitment facilities. These are maximum security facilities run in 20 states and by the federal government. We know of many dozens of cases where people have gone in and never come out; they’ve been there for decades, for offenses committed as children.

Neil Fairbrother

Age verification is one of the most frequent topics that we discuss. Antonia Bayly of the UK’s DCMS explained the VoCo Manifesto and how this will have consequences for all of us, not just for children:

Okay. Now the underlying assumption is that by recognising children as children, services and service delivery can be altered, or more targeted perhaps, to provide a safer space for children online. As we said at the outset, that’s the underlying hypothesis behind all of this. In the VoCo report, there’s no mention of the practice of catfishing, where adults pretend to be children, and in a VoCo-enabled world, if an adult poses as a child and subsequently gets verified as a child, doesn’t that increase the risk of harm to other children? Isn’t this like the fox in the hen coop who has been verified as being a hen? So do we also need age verification based on hard metrics, so to speak, for adults, to filter adults out, as well as age assurance or age estimation using perhaps a combination of softer metrics to filter children in?

Antonia Bayly

Yes. So I think where we say that we would like to see platforms age-assuring users, they would have to age-assure everyone to identify which users are children; therefore users that are assessed as being adults shouldn’t be able to access the kind of environments that have been deemed safe for children. And one thing we would like to see is platforms considering how they, as I said earlier, tweak their features and functions so that it is much harder for maybe an anonymous user, maybe a clearly adult stranger, to contact a child.

And I think there are two things here. Obviously there is a risk that a determined offender will try to trick the system and appear to be a child, so as to get around any additional safety measures that a platform has in place, which is why you need robust age assurance checks at the start if you are providing features that do present that risk. But these checks should also be ongoing: you combine ongoing assessment of user behavior, or you have key milestones for users to once again provide evidence that they are the age they say they are.

And some of the big platforms are already doing elements of this. They do monitor suspicious behavior. They do identify when they think that an account isn’t actually an adult account and is actually a child, or vice versa. During the VoCo research, when we were talking to children, one child reported that they had put their age as, I can’t remember what it was now, but clearly not a child’s age, and I think she was under 10. And she kept on being kicked off this large social media platform because they were identifying that her behavior on the platform was clearly not the behavior of an adult; it was a child, and they do that quite a lot. And they do that also in terms of identifying suspicious behavior that might be indicative of illegal activity.

Neil Fairbrother

Second-time contributor Stephen Balkam, CEO of the Family Online Safety Institute or FOSI, took us through their report “Tools for Digital Parents” and explained the different attitudes that three different generations of parents have to online child safety:

So that then places some responsibility on parents, and one of the interesting points that came out from the report is the generational impact: whether parents are Millennials, Gen Xers or Boomers will affect their attitudes and practices towards keeping their children’s digital lives safe. Could you define for us what a Boomer is, a Gen Xer is, and a Millennial is, and what are their different attitudes when it comes to keeping their children safe online?

Stephen Balkam

This was probably the most fascinating part of the research. We asked them two different questions. One was, who has most responsibility for keeping kids safe online? And the other was, what are the top concerns? So on the responsibility one, Boomer parents, and what are Boomers, what, 1946 to 1962, I guess, the older folks <laugh> if you will, the older parents: 57% said that parents had the most responsibility for keeping their kids safe online. With Gen X it drops down to 43%. But dramatically, among Millennials only 30% said that parents had the most responsibility; they were half as likely to say that as Boomers. They thought that the tech industry and government, and even schools, shared responsibility for online safety. So that was one question.

The other one was, what are your top concerns? Boomers overwhelmingly talked about outside threats, particularly predators; there was a real, you know, fixation with the idea of the boogieman coming to get your kids. Gen X was more about harmful content like adult porn. But Millennials, interestingly enough, focused on bad behavior, including their own kids possibly being the perpetrators of cyberbullying or sexting or, you know, intimidating others or whatever.

And I think that’s fascinating because of course, Millennials are the ones who’ve grown up with this technology and may themselves have participated in some of this behavior, whereas Boomers, you know, and I speak as one, the internet happened halfway through my life, so I didn’t grow up with it as a teen. Millennials see it from a very, very different perspective.

Neil Fairbrother

The Dirty Dozen report from NCOSE named and shamed 12 online service providers. In this extract, NCOSE’s Haley McNamara takes Amazon to task:

Just before we leave Amazon, I would just like to spend a couple of minutes talking about the Amazon store that everybody is familiar with. Now you referenced in your report, the Creeper Act. What is the Creeper Act? And what’s that got to do with Amazon and child abuse?

Haley McNamara

So the Creeper Act is a bill that’s been proposed in the United States. It has not yet passed, but it’s essentially to outlaw childlike sex dolls, or, a better term for them, child sex abuse dolls. And that ties into Amazon because we have, over a number of years, found that they’ve been selling childlike sex dolls and adult sex dolls on their site. Even more, they’ve been hosting a number of incest-themed works of written pornography, sometimes called erotica, which are really, really disturbing. You can often find this just by searching “stepdaughter stepdad”, and on the first page there’s a number of incest-themed, eroticized, sexualized stories, which is especially problematic considering incest child abuse is a really serious problem plaguing our society right now. So: sex dolls, child sexual abuse dolls, and they even host some books by pimps instructing people on how to groom and coerce women in the sex trade.

Neil Fairbrother

Okay. Now these sex dolls, these childlike sex dolls, are manufactured in Japan by an organization called Shin Takagi, I think is the correct pronunciation, and they reportedly argue that these dolls help prevent pedophilia towards actual children. Is that the case, do you know?

Haley McNamara

It’s really not, unfortunately. You know, there’s this old phrase, “neurons that fire together, wire together”; basically, it’s the idea that you can’t practice a desire away. So when pedophiles are practicing sexual acts on a doll, they’re actually encouraging that behavior. And there’s been a number of individuals who’ve spoken out on this. Many people who’ve been caught with child sexual abuse images, or child pornography, or who have been caught sexually abusing a child, are found in possession of childlike sex dolls. So in reality, it’s just not true that practicing sex abuse on a doll will somehow discourage you from doing it in real life.

Neil Fairbrother

Twitter’s CEO may have stepped down, but the legal case, John Doe versus Twitter, continues, as described by Lisa Haba, partner at The Haba Law Firm, and Peter Gentala, senior legal counsel for NCOSE:

Okay. And on learning of the resurgence, or re-emergence, of this material that John Doe had assumed was consigned to history, what impact has that had on him?

Peter Gentala

Well, it was a drastic impact. And you know, perhaps the best expression of it is the way John Doe’s mother found out that the materials were being circulated on Twitter. She found out because a friend called and said, are you aware that your son is talking about taking his own life? Up to that point she had no idea of the situation he was in, and then she learned that these images of John Doe were circulating throughout the school community and that many people knew about it. And John Doe was enduring just tremendous anguish, as you can imagine. He was not being treated well by some of his peers in his community; he was subjected to ridicule and bullying, and it was a very difficult situation.

But this is also where the story takes an encouraging turn, because the family rallied together. John Doe’s mother, and really the entire family, supported him, and they resolved together that they were going to find someone, anyone, who could help them. And so they began a very concerted, focused, committed campaign to get these images taken down, and what it was, Neil, was a series of images that were spliced together and placed into a compilation video. So that’s what was being circulated on Twitter, by at least two users. And so the family worked together to try to get Twitter to remove these images.

Neil Fairbrother

Okay. Now Jane Doe, John Doe’s mother, as you rightly say, got involved, and both of them corresponded with Twitter on a number of occasions through this month-long period from the 25th of December, 2019 through to the 30th of January, 2020. The replies that they received from Twitter read like simple, automated replies; they don’t appear to have been written by a caring, thoughtful person on the other end who was planning on taking action. Now, I don’t know if they were automated or not. Were they, or was someone actually behind those responses from Twitter?

Lisa Haba

No, Neil, I don’t know that we can answer that question right now without going through discovery. Obviously we’re going to have to seek the answers to those very questions you just asked. We have been asking as well, and we’re waiting to find out the answer. So I don’t know that we can answer that question right now.

Neil Fairbrother

Okay, well, that’s fair enough. But one of them does say, if there’s a problem with a potential copyright infringement, please start a new report. Why would they be talking about a copyright infringement issue if someone had actually bothered to look at the complaints raised and the material being complained about?

Lisa Haba

Well, I feel like you were in some of our discussions! I’ve asked the same questions. <Laugh> You know, honestly, at this point, without knowing, I can’t delve into the minds of Twitter and their employees and what they, you know, intended at the time, but hopefully through the discovery process we’ll get some answers to these very important questions.

Neil Fairbrother

The internet is a complex thing, but one area that should be straightforward is the WhoIs database. Rick Lane of Iggy Ventures explains why it isn’t:

So what is WhoIs, and what relationship does WhoIs have with your domain name that you’ve bought from your local domain name registrar?

Rick Lane

Sure. There had already been in existence a database to help manage who had actually signed up for different domain names, for .com for example. People like to call it a phone book; I disagree with that analogy. I look at it as more like land records: who actually has control over that land and manages that land on behalf of an entity. So the idea back in 1998 was to have the WhoIs database. The issue of WhoIs then became critical for trademark owners and cybersecurity experts, as well as child protection groups, and the question was, should it be dark or should it be accessible?

Neil Fairbrother

What do you mean by dark?

Rick Lane

Should it be accessible by anybody on the internet to find out who basically is on the other side of the screen, who is the entity that has control of a website or a domain name?

Neil Fairbrother

Okay. If I in the UK wanted to buy Neilfairbrother.com, the theory is that all of my details, name, address and so on, would go into the WhoIs database, so that people could track down the owner of the website Neilfairbrother.com?

Rick Lane

Correct.

Neil Fairbrother

Okay. That sounds like a reasonable plan. What happened?

Rick Lane

Well, it does. And the idea behind it was really consumer protection. From a consumer protection side, the WhoIs database is critical to knowing who is on the other side of a website that is collecting your information. You wouldn’t go into a store that has no markings at all. You always want to know who owns the store that you’re going into, because if you have a problem with that store, who are you going to contact?

And especially here in the United States, you know, if you have a business you have to file with the state, you have to file with your, you know, local governments, you have to file with the IRS for tax purposes. If I buy land here in the United States, there’s an ownership record; I can go to the land records and they have all the information about who owns the land. And those are all consumer protection tools used, here in the States, by the Federal Trade Commission and consumer protection agencies. It’s used by the Justice Department, it’s used by cybersecurity experts. So WhoIs is really a consumer protection mechanism, to ensure that you’re not being fleeced by someone who you can never track down.

Neil Fairbrother

Okay. What is the problem then with WhoIs?

Rick Lane

So the problem started from the very beginning. There has always been a debate about accessibility and accuracy; you’re supposed to have accurate information in the WhoIs database. So there was a controversy driven by so-called privacy advocates and the registrars. First, it’s a cost center to maintain the WhoIs database, and they don’t like that. Second, it allows their competitors to see who their customers are, and they don’t like that either, though in fact in 1998 that was considered a consumer benefit, so you can have competition in the domain name space. And the other is that it creates potential liability for them, because if they don’t know who is on the other side, who the entity creating harm online is, then they don’t have any responsibility to take action: they will say to you, we don’t know who it is, we can’t do anything, go to law enforcement. And then law enforcement tries to get the information, and they can’t get it, because WhoIs is dark. So it’s a very vicious circle.
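
To make Rick’s “phone book” concrete: WHOIS is one of the internet’s simplest protocols, defined in RFC 3912. A client opens TCP port 43 on a WHOIS server, sends a domain name, and reads back a plain-text record. The sketch below is illustrative only; the server shown is Verisign’s registry WHOIS server for .com, and for a “dark” registration the reply simply comes back with its contact fields redacted.

```python
import socket

def whois_query(domain: str, server: str = "whois.verisign-grs.com") -> str:
    """Minimal WHOIS lookup per RFC 3912: send the domain name over
    TCP port 43, then read the plain-text reply until the server
    closes the connection."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(f"{domain}\r\n".encode("ascii"))
        reply = b""
        while chunk := sock.recv(4096):
            reply += chunk
    return reply.decode("utf-8", errors="replace")

# A "dark" record shows redacted registrant fields in this output.
print(whois_query("neilfairbrother.com"))
```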

Neil Fairbrother

Glen Pounder, Chief Operating Officer of the Child Rescue Coalition, took us behind the scenes to look at some technology they provide to law enforcement, which finds CSAM users even in encrypted environments:

Okay. Now, talking about law enforcement, I believe that your technology has aided in the arrest of something like 12,000 predators and rescued more than 2,700 abused children over the last six years. Is that correct? Is that an accurate assessment?

Glen Pounder

Yes, it’s something over 13,000 now, and over 3,000 children. And you know, I think people who listen to your podcast probably know this already, but this is not some, you know, dirty guy sitting in a basement. We’ve had pediatricians, you know, teachers of very young children arrested, who’ve been consuming this horrific material, and also, you know, direct victims who were being raped in the home. And again, this is something that’s not regularly talked about, but most studies show that most children who are being physically sexually abused are abused by somebody they already know, right?

Neil Fairbrother

Yeah. And another trend recently reported by the IWF was the self-production or self-generation of intimate images by children themselves, often in the locked family bathroom.

Glen Pounder

Yeah. I mean, what a huge challenge. And again, that’s not a law enforcement challenge, that’s a society challenge, right? Even in some of those self-generated images, parents can be heard shouting in the background. I mean, really, you know, the children are effectively taking horrendous risks online, sometimes while the parents are just downstairs. So children are experimenting in ways that just weren’t possible even 20 years ago, and then effectively sharing some of this material with somebody they might suspect is the same age as them but, as we’ve seen in many cases, sometimes is not. That can lead to the material being shared on some of the networks we monitor, and we’ve seen that. It can lead to the child being forced to produce ever more horrific material, and even, in some instances, to that child being compelled to abuse a younger sibling.

Neil Fairbrother

Amanda Lenhart, of the Data and Society Research Institute, discussed how teens are an afterthought in the product design practices of social media companies:

Okay. The third key finding is that many negative health impacts stem from what companies choose not to know about their users, and you call this “strategic ignorance”, which sounds very concerning indeed. And you say that there are three main ways that ignorance can be strategically designed into company structures. And the first is through not collecting data. What do you mean by that?

Amanda Lenhart

So some of the companies we talked to, often under the guise of privacy protection for users, do not collect data about their users. They specifically describe that they don’t collect particular data. And I think it’s particularly salient around issues like age, where for users under a particular age, you have decided that they’re not on your platform. You’ve created age gates that are basically ineffective and allow young people to easily work around them. So you have this very fictional set of information that you’ve collected about some of your users.

So we heard from a number of platforms: well, we don’t know if we have kids on our platform, because we don’t collect that data; or, we don’t know, we expect we probably do, but we don’t know that we do. So there’s this sense of, if we don’t explicitly collect information, then we won’t have knowledge that these people are on our platform, or we won’t be obligated, either morally or legally, to engage with them and create a better experience for them.

And so we did hear that from a number of platforms. Now, some platforms don’t do that. What’s also, you know, very interesting in contrast is that even on these platforms that don’t collect a lot of data about users, there is sometimes also a collection of sort of experiential data from a person’s profile, to deliver ads to them. So there’s a real tension, right? We both collect some data, based on your use of the platform, that we can use to deliver ads to you, and we also say that we don’t collect other kinds of data about you, to preserve your privacy. Some of this is different platforms with different practices, but sometimes it’s on the same platform.

Neil Fairbrother

In a discussion about Apple’s NeuralHash CSAM proposals, Homeland Security Investigations Special Agent Austin Berrier described how predatory pedophiles hunt for their prey in organized packs:

The feature of child sexual abuse through online streaming platforms that I found most intriguing and really quite frightening was crowdsourcing. The three different roles that members of a crowdsourcing child sexual exploitation team have are hunters, talkers, and loopers. Could you go into some detail there? What does a hunter do? What does a talker do? And what does a looper do?

Austin Berrier

Some of these roles, of course, can be mixed and matched; somebody might have multiple roles, they can rotate, and stuff like that. But for example, the hunter: their job is to go out and find children at risk. One of the ways they can do that is on live streaming platforms. They may be out on, pick your platform, Bigo, Periscope, LiveMe, and they’re just looking for kids that are engaging in at-risk behavior. And what’s at-risk behavior? If you have some young girls that are maybe doing their cheerleading or dance routine, and their mother or father are also in the frame, and they’re in the family room and everybody can see what they’re doing, perhaps that’s not at risk, right, because there’s adult supervision. But if you take those same young girls and you have them in their bedroom behind closed doors, and maybe they’re doing that routine in their underwear, now that’s a child that’s at risk, right? They’re doing something that perhaps their parents wouldn’t approve of, and they’re willing to take risks. So that’s the hunter’s role, to go out and find those kids. It almost sounds horrible, but they’re harvesting, right? They’re harvesting children that are ripe for the picking. And that sounds terrible, but that’s the truth.

The talkers, or the chatters, that’s what they do. They’re the glib ones, right? The silver-tongued devils. It’s their job to figure out the “in” with that kid. In my experience, you catch more flies with honey than vinegar, right? So if they can befriend that child or be kind to that child, perhaps they are the ear that child can share their secrets with, they can be the shoulder to cry on, they can be that understanding friend. Or perhaps they build friendship through video games or dance routines or music. But they build rapport with that child victim, because when that exploitative behavior is requested or, you know, comes up in conversation, we do things for our friends that we wouldn’t do for strangers. And at this point, you know, that child may feel that the person they’ve been talking to for days, weeks or months is a friend.

And finally, the loopers: their job is to desensitize children. So one of the things they’ll do, while they’re talking to the kids, or while somebody else is talking to the children, is stream adult pornography. They may stream even something that’s not pornography, just suggestive, some type of erotica, you know, movies that are, again, while legal, full of sexual innuendo. And then eventually, you know, they may start streaming child sexual abuse material, to show the child that, look, hey, other kids are doing this, you know, you’re not any different than anybody else, everybody’s doing it.

Neil Fairbrother

Dr. Grace Robinson took us through the online and offline impact of COVID on County Lines:

Are County Lines the 21st century equivalent of bootlegging from the prohibition era, and if so, what lessons can we learn from prohibition, if any?

Dr. Grace Robinson

I’ve never thought of that before; no one’s ever put that to me before. But I guess, yeah, absolutely, it is the same methodology, isn’t it? And there’s violence and exploitation, so yeah, absolutely you could say that. I think if we can learn any lessons from it, it’s that prohibition doesn’t work, and I’m a real advocate for decriminalization. I think if we are to tackle County Lines, we need to really provide support networks for the customers, for the drug users. We focus so much on tackling supply, I mean, why not tackle demand? If there’s no demand, there’s hardly any supply, right? And there are so many other countries doing amazing things: decriminalizing drugs, having safe spaces for drug users to take the drugs that they need, less stigmatization and, you know, really supporting people instead of alienating and disenfranchising them.

Neil Fairbrother

Thomas Mueller of ECPAT International discussed how the aftermath of the Vietnam war shaped child sex trafficking in the travel and tourism industries:

What’s ECPAT?

Thomas Mueller

Well, ECPAT is a global movement of organizations that address the sexual exploitation of children; in the nineties, very much the sexual exploitation of children in the context of travel and tourism. There was a very, you know, obvious problem in Thailand and parts of Southeast Asia where, as a result of the Vietnam war, a rather big sex industry had become established; soldiers deployed in Vietnam and Laos and Cambodia came over here to entertain themselves, and after the war ended, the industry was established and stayed.

And then gradually tourists came in from around the world with their hard currency, and had opportunities over here that were unthinkable for them back home. So they took advantage of it, and this industry developed more and more towards the travel and tourism sector. And that became so obvious in the late eighties and early nineties that a number of organizations started addressing it.

ECPAT in those days stood for End Child Prostitution in Asian Tourism. We are still here with the secretariat in Bangkok, but the issue of the sexual exploitation of children has evolved quite a bit over the years. It has gone well beyond the travel and tourism sector, and ECPAT is now dealing also with trafficking of children for sexual purposes, child, early and forced marriage, and obviously the online environment and sexual exploitation in that context.

Neil Fairbrother

Howard Taylor, Executive Director of the Global Partnership to End Violence Against Children, interpreted General Comment 25 for us:

Okay, thank you for that, Howard. So when it comes to General Comment 25 then, what is General Comment 25 and why is it so important?

Howard Taylor

Yeah. So General Comment 25, it’s a bit of jargon, isn’t it? Let’s be frank: it’s not going to be obvious to the person in the street what General Comment 25 is about. So let me just back upstream a little bit to speak briefly about the UN Convention on the Rights of the Child, which is the most ratified human rights treaty globally. And what’s exciting about General Comment 25 is that, for the first time, it embeds children’s rights online into that global UN children’s rights treaty. And the reason why we, and others who are champions and advocates for children’s online safety, see that as such an important development is both the signal that it sends and the substance that’s embedded into General Comment 25.

So the signal is that for the first time, this very important treaty, this very important convention, is taking account of the digital era, the digital world, and the fact that children are engaging digitally and they are at risk digitally. So that signal is just fundamentally important across governments, across regional bodies, et cetera, globally.

And then there’s what the comment substantively embodies, which includes an anticipation of future trends, because as I’ve already mentioned a couple of times, this is a fast-evolving space. So what General Comment 25 tries to do is to anticipate future trends, but also to underline the criticality of safe, empowering online environments for children, leading towards preventing violence and abuse against children online before it happens. So whilst response and justice and healing are absolutely vital for children who have experienced online abuse and exploitation, you know, coming upstream and trying to prevent it happening is of course the goal.

And then there’s reconfiguring the internet with children in mind, because the internet wasn’t created with children in mind, and General Comment 25 also leans towards that reconfiguring of the internet, which in other spaces people refer to as safety by design: making sure that digital services, digital platforms, the internet, are made safe for children. So we see it as really exciting. As I say, General Comment 25 is what it says on the tin, and you’d scratch your head unless you’re an expert; but open the tin and dig beneath, and actually it’s a very exciting development.

Neil Fairbrother

Hany Farid took us through Apple’s CSAM hash-matching technology, NeuralHash:

This seems to imply that Apple’s NeuralHash technology that’s looking for the hashes of CSAM material won’t be triggered unless you have 30 or more matches that you’ve tried to upload to iCloud?

Hany Farid

That’s correct.

Neil Fairbrother

Apple, I think, justify this as a safety margin, but here’s the thing. One image is the digital image of an offline crime scene and is illegal. And if these images have already been hashed, and it’s the hashes that they’re looking for, why is Apple saying that you’ve got to have at least 30 files to give less than a one-in-a-trillion error rate? It makes no sense.

Hany Farid

Yeah, I found this quite offensive. So let’s talk about what the concern is. The concern is that no technology is perfect. Legitimate emails sometimes get caught in spam filters, attachments sometimes get ripped out. Nothing’s perfect. And so you need to put safeguards in place to make sure we’re not making mistakes, and the way you do that is to put humans in the loop. So with PhotoDNA, every single report is eventually seen by a human.

People are saying, well, what happens when somebody goes to jail because of this technology? That is just the most patently absurd argument, because the number of human beings that have to look at this content before anybody is in fact contacted, those are the safeguards we have in place.

Now here’s my guess; I have no internal knowledge of how Apple did this. They built yet another hashing algorithm, NeuralHash; it’s a variant of PhotoDNA, I suspect, or at least in principle. And, you know, it’s not perfect. It has a one in, pick a number between 1 billion and a hundred billion, false alarm rate. And if you say, well, one match in a billion might be a mistake, but if I get two matches and those two images are different, well maybe that’s now, you know, one in a billion times a billion, right? So that’s a really small number. And what they did is they set this really high threshold of 30 images, which I think is preposterously high. My guess is they did that to appease some of the privacy folks who were jumping up and down on their heads saying, you are invading privacy. And this was one of the safeguards.
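
The arithmetic Hany sketches here is easy to check. As a minimal illustration, assuming each image match is an independent event with a per-image false alarm probability p (a hypothetical figure; Apple’s actual rate is not public), the chance of k simultaneous false matches is p to the power of k:

```python
# Illustration of the argument above: with an assumed per-image false
# alarm rate p, k independent false matches occur with probability p**k.
p = 1e-9  # hypothetical "one in a billion" per-image false alarm rate

for k in (1, 2, 30):
    print(f"{k} match(es): {p ** k:.0e}")

# 1 match(es): 1e-09    -> one in a billion
# 2 match(es): 1e-18    -> "a billion times a billion"
# 30 match(es): 1e-270  -> far below the one-in-a-trillion (1e-12) target
```

On these assumptions, even two or three independent matches would already beat the quoted one-in-a-trillion figure, which is Hany’s point about a 30-image threshold being preposterously high.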

They didn’t have to do that, because what they can do is simply, every time there’s a hit, or maybe make it two images, have a human review the image. And if in fact that image is CSAM, then you make the report to NCMEC, as you are obligated to by law. But that does mean a human being has to look at an image, and if the mistakes are on the order of one in a thousand, that means there’s a lot of humans putting eyes on your photos, and some people will find that invasive. If the error rate is one in a thousand, you shouldn’t be using the technology.

So I would argue that if the error rate is one in a billion or 10 billion, and occasionally somebody at Apple or NCMEC has to look at an image, that is a fair price to pay to protect children online. So I think they went overboard in their caution with one in a trillion, 30 images; I think they could have done better than that. And this is part of the reason why I don’t think we should celebrate Apple. First of all, we could have developed and deployed this technology a decade ago. Second of all, it only deals with images and not video, so it’s only half the problem. And not for nothing, you can just circumvent it by turning off iCloud syncing.
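
To make the mechanism under discussion concrete, here is a minimal sketch of threshold-based hash matching. It is not Apple’s implementation: it uses an exact SHA-256 digest as a stand-in for a perceptual hash such as NeuralHash or PhotoDNA (which also match resized or re-encoded copies), and the known-hash set is a placeholder. What it shows is the structure described above: match uploads against a list of known hashes, and escalate to human review only past a threshold.

```python
import hashlib
from pathlib import Path
from typing import Iterable, Set

MATCH_THRESHOLD = 30  # Apple's published reporting threshold

def image_hash(path: Path) -> str:
    # Stand-in: an exact cryptographic hash. Real systems use a
    # perceptual hash (PhotoDNA, NeuralHash) so that near-duplicate
    # copies of the same image still match.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def count_matches(photos: Iterable[Path], known_hashes: Set[str]) -> int:
    # Number of uploads whose hash appears in the known-hash list.
    return sum(1 for p in photos if image_hash(p) in known_hashes)

def needs_human_review(photos: Iterable[Path], known_hashes: Set[str]) -> bool:
    # Nothing is reported on a single hit; the account is escalated to
    # human review only once the match count crosses the threshold,
    # the safeguard Hany argues is set far higher than it needs to be.
    return count_matches(photos, known_hashes) >= MATCH_THRESHOLD
```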

Neil Fairbrother

Professor Michael Salter from Sydney and the Canadian Center for Child Protection’s CTO, Lloyd Richardson, took us through the TriChan Takedown:

Okay. Now I think at one point one of the service providers in the Netherlands where this stuff was being held, referred to as service provider P5 in your paper, came back and said that they weren’t hosting any of the content, they were only acting as a proxy, which sounds like a similar argument to the CDN argument?

Lloyd Richardson

Yeah, and that was exactly it with this particular provider: they were reverse proxying the content. It’s a similar argument, and maybe has a little more ground, but you’re knowingly doing something that you’re not supposed to be doing. So if you’re going to reverse proxy something like that, and you’ve been told that you’re reverse proxying child sexual abuse material, you have sort of two options. Number one, you stop doing it, which is the preferable one, instead of just saying, you know, I can’t be involved, I’m just a reverse proxy. Or B, you let us know what the backend IP address is; if you’re reverse proxying it, you can tell me where the server’s located. So there was some back and forth, and eventually they disclosed the backend server IP address to us. So we were actually able to find out who the hosting provider was, and it was another provider in the Netherlands, closely related to the original provider they were sitting on at the time. More fun and games.
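
For readers unfamiliar with the term: a reverse proxy sits in front of a hidden “backend” server, fetching content from it and relaying it to visitors, so the public only ever sees the proxy’s IP address. That is exactly why the provider could be pressed to disclose the backend address. A toy sketch, using a documentation-range placeholder for the backend:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical hidden backend; 192.0.2.0/24 is a documentation range.
BACKEND = "http://192.0.2.10:8080"

class ReverseProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Fetch the requested path from the backend and relay it.
        # Visitors see only this proxy's IP address; only the proxy
        # operator knows where the backend actually lives.
        with urlopen(BACKEND + self.path) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8000), ReverseProxy).serve_forever()
```

A CDN makes the same argument at scale: it relays and caches content it does not originate, which is why Lloyd calls the two claims similar.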

Neil Fairbrother

Okay. Now, Michael, I just want to clarify something here. This content is illegal. There are no privacy carve-outs for hosting this stuff. There is no freedom of speech carve-out. There is no First Amendment carve-out. There is no constitutional carve-out. There is, I believe, no excuse whatsoever for anyone to be legitimately hosting this stuff. Is that correct?

Michael Salter

I agree with you in principle, Neil, but it’s not correct in practice. You know, the bad network that Lloyd’s describing was clustered in the Netherlands. The Netherlands has a notoriously laissez-faire approach to internet governance that prioritizes freedom of speech, but effectively what freedom of speech means in this context is that it prioritizes the financial and corporate interests of the technology sector over public safety and child protection.

The Canadian Center for Child Protection this year released a major report looking at the jurisdictions in which the child sexual abuse material that Arachnid has detected is hosted, and the Netherlands was found to be hosting 50%, so half, of all child sexual abuse material detected by Arachnid over the last couple of years. And so the Netherlands in theory does not tolerate this material, but in practice what we’ve found, and certainly what this case study illustrates, is that there are certain jurisdictions where bad actors know that they can operate, and they are essentially operating with impunity.

Now, law reform efforts are underway in the Netherlands. But we need to be very clear that there are jurisdictions in Europe and around the world where child sexual abuse material can be hosted. There are also corners of the technology sector, and private sector actors in the internet infrastructure stack delivering services and so on, who knowingly collude in the trade of child sexual abuse material. Lloyd referred earlier to premium file downloading services, where CSAM offenders can store large amounts of CSAM and essentially sell access to that material.

There’s clear evidence that some of those companies know that they have cornered the market in pedophilia, and they choose not to proactively screen the content that sits on their service. So, you know, we have a technology sector that has been largely unregulated now for decades, and CSAM offenders have become parasitic on the internet to such an extent that, you know, we are now facing a global child protection catastrophe, I think.

Neil Fairbrother

And we end this 2021 review with a contribution from Finland’s Protect Children, who researched perpetrators on the dark web:

Now, another reaction that I get when talking about this topic, particularly on social media platforms, is that most abusers of children are already known to the child. But if any abuser anywhere can contact any child at any time, is that actually the case, or is it a common misconception?

Anna Ovaska

Well, we do see the link between, for example, CSAM use and direct contact offending. One of the risk factors for moving from simply viewing CSAM to contact offending is actually having access to children, so this does increase the likelihood of an offender directly contacting and directly offending against children. But I would say that having this sort of direct access to children, which could mean knowing the child personally, is just one factor; I wouldn’t say it’s the sole factor. And there are definitely many, many cases of abuse where strangers are contacting children.

Tegan Insoll

And I think it’s increasingly prevalent now with the development of technology, you know. Children are on these platforms, and unfortunately we know that where the children are, the offenders are as well. And through social media platforms, through all of these very popular platforms, we know that even total strangers can contact children and abuse them. So it’s definitely the case that physical proximity to a child is no longer necessary.

Neil Fairbrother

Thanks once again to SafeToNet for their continued support, to our guests for their continued contributions, and also to our global audience for their continued listening. If you would like to support our podcasts, please email me: neil.fairbrother@safetonet.com.