Safeguarding Podcast – OSTIA & the Strategic Inflection Point with Ian Stevenson

By Neil Fairbrother

In this safeguarding podcast we discuss the fledgling Online SafetyTech Industry Association, OSTIA, with its Chair Ian Stevenson. Can a UK tech collective effectively tackle the Online Harms that US social media companies can’t, or won’t? Has “the internet” reached a Strategic Inflection Point and what would that mean? Is the UK Government breaching State Aid rules in its support of OSTIA? Will the next British Unicorn be an OSTIA member?

https://traffic.libsyn.com/secure/safetonetfoundation/SafeToNet_Foundation_podcast_-_OSTIA_and_the_Strategic_Inflection_Point_.mp3

There’s a transcript below, lightly edited for legibility, for those who can’t use podcasts or who simply prefer to read.

Neil Fairbrother

Welcome to another edition of the SafeToNet Foundation’s safeguarding podcast, hosted by me, Neil Fairbrother, where we talk about all things to do with safeguarding children in the online digital context.

Child safeguarding in the online digital context sits at the intersection of technology, law, ethics and culture, and it encompasses all stakeholders between a child using a smartphone and the content or person online that they are interacting with.

Too often it seems the debate about online safety is focused on what cannot be done, what is technically impossible, and the conflict that technology sets up with other rights, such as privacy and freedom of speech. Joining me today is the Chair of a new industry organization that has been set up, they say, to provide an alternative voice. Ian Stevenson, Chair of the UK’s Online SafetyTech Industry Association, or OSTIA, welcome to the podcast. Can you provide us with a brief resumé so that our listeners from around the world have an appreciation of your background?

Ian Stevenson, OSTIA

Yeah, absolutely. I’ve been working in the tech industry for about 25 years now. I started out writing software and then slowly got more and more interested in not just building the technology, but in how people actually use it and making sure you build the right technology. So a lot of my career has been spent around startups that are bringing a new technology to market. A few years ago, I went to look at a university project developing some research to see whether it had commercialization potential. I found I liked the team, I liked the tech, and I helped to spin the company out, and that became Cyan Forensics.

Neil Fairbrother

What does Cyan Forensics do, Ian?

Ian Stevenson, OSTIA

So we help law enforcement, social media and cloud companies find and block harmful content from paedophiles and terrorists. Most of our early work has been directly with law enforcement, but we’re increasingly moving into the online space now.

Neil Fairbrother

As well as being CEO of Cyan Forensics Ian, you are also as I said in the introduction Chair of the UK’s Online SafetyTech Industry Association. What is OSTIA?

Ian Stevenson, OSTIA

So there are a significant number of companies in the UK that build technology to help make the internet a safer place for its users and until recently there wasn’t a place where we could come together to share knowledge, share experience and help educate the wider world about the work we do. OSTIA exists to serve that purpose.

Neil Fairbrother

How did OSTIA come about? What’s the history behind it? You say that there wasn’t a place for you all to come together. What have you done to resolve that?

Ian Stevenson, OSTIA

So I’ve worked in a lot of industries over the years, and usually there are events or organizations or conferences where people come together, and you just get conversations with lots of other people working in the same space as you and build efficient networks. And when Cyan Forensics started taking an interest in the online safety world, I found that I was having lots of really good quality one-to-one conversations with people that actually really ought to be happening much more collectively.

So we worked with our friends at Public in London to put together a round table to bring some people together from industry and other stakeholders, from government and charities, to have a conversation about online safety. And, you know, the outcome of that was we decided that there should be an industry association to help with these collective conversations.

Neil Fairbrother

And who typically are the members? Obviously Cyan Forensics is one of them, but I think you have 70 or so at the moment. We can’t go through them all, but can you give us a flavour of who your members are?

Ian Stevenson, OSTIA

Sure. I think we’ve got 17 members at the moment. 70 is the number of online SafetyTech companies that DCMS has identified in the UK, and we’re aspiring to build to that number. Our members range from companies that do textual analysis to identify grooming, to companies that produce applications that parents can install on their kids’ smartphones to protect them, right through to companies that help other companies improve their own online safety, or companies that produce social media platforms for use in schools or in other environments where children congregate that have safety by design built in. So really the common factor is we’re all trying to make the internet a safer place for its users.

Neil Fairbrother

Okay. If we look at the range of online harms, the things on the internet your members are trying to protect people from, they all boil down to communication, it seems to me. Communication between a child and another child, for example in the case of cyberbullying; between a child and an adult, whether the adult is disguised as a child or not, in the case of grooming; or between a child and perhaps an online game where the game developer is trying to influence the child to buy something. Are your members in essence, in their specialist fields, simply trying to disrupt conversations?

Ian Stevenson, OSTIA

So that’s a big part of it, but it’s important to recognize that that isn’t the whole of it. Disrupting conversations that are happening, especially if you’ve got an adult grooming a child, for example, is probably a good thing to do. But there are a couple of other important parts of that whole experience. So firstly, why would an adult and a child be communicating with each other on a social media platform in the first place? Should that platform have been designed to make sure that those conversations were taking place in public, or were taking place between people who knew each other, or people whose identities were known, or people of similar age groups? A lot of these conversations take place in games that are aimed at children so, you know, why are adults allowed into that space at all? Or why are children and adults not kept separate?

So there’s a piece about safety-by-design, which tries to prevent inappropriate conversations from happening in the first place and then there’s also, you know, stopping the conversation or disrupting the conversation. That’s an important step, but in some cases you also need to think about, well, hang on, should this conversation be reported to law enforcement or should this user be blocked from the platform if they’re a persistent offender?

So there’s a whole process, from designing platforms in so far as possible to minimize or eliminate harm, through to detecting ongoing harm and disrupting it, and then deciding whether there are follow-up actions that need to be taken.

Neil Fairbrother

Okay, now the 2019 Online Harms white paper referred to a significant number of different online harms, one of which was cyberbullying, which it simply classified as “a less well defined harm”. It didn’t take the opportunity to define cyberbullying. If a harm isn’t defined, how can tech solve the problem?

Ian Stevenson, OSTIA

So the simple answer to that is that tech can’t solve problems that aren’t well-defined, but there are more general answers to that. So just because the white paper didn’t define cyberbullying, it doesn’t mean that there aren’t some common sense definitions out there. And again, it depends what you’re doing to disrupt conversations. So one of our members, for example, produces an application that you can put on a child’s mobile phone that will detect things that might be associated with bullying, so abusive language, certain tones of voice, certain types of sentiment. It doesn’t actually block that message from being sent, but it does flag it up to the child that it might be aggressive in tone or using inappropriate language so that they can make a better decision about what to do. And if they’re consistently using language that might be construed as bullying, and there’s plenty of academic research on the nature of bullying, then that might be reported to their parents.
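To make that concrete, here is a minimal sketch of the kind of pre-send flagging check Ian describes: warning the sender rather than blocking the message. It is illustrative only and not taken from any member’s product; real tools use trained models for tone and sentiment, and the word list and threshold here are entirely hypothetical.

```python
# Hypothetical sketch of a pre-send bullying check: warn the sender if a
# draft message contains abusive terms. Real products score tone and
# sentiment with trained models; this word list and threshold are
# illustrative only.

ABUSIVE_TERMS = {"idiot", "loser", "stupid", "ugly"}  # hypothetical list

def flag_if_abusive(message: str, threshold: int = 1) -> bool:
    """Return True if the draft should be flagged back to the sender."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return len(words & ABUSIVE_TERMS) >= threshold

if __name__ == "__main__":
    draft = "You're such a loser, nobody likes you"
    if flag_if_abusive(draft):
        print("This message may come across as hurtful. Send anyway?")
```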

So, you know, online safety is partly about the UK legislation. And I think there are many who would view the white paper that came out as being world-leading in its reach, although it certainly doesn’t solve every problem in this space. But it’s not all about legislation and it’s not all about the UK. So it depends on context how tight a definition you need. If you’re a regulator trying to regulate a social media site, then you need a pretty good definition, but if you’re producing technology to try and make conversation more civil in certain environments, you don’t necessarily need a white paper definition on your side.

Neil Fairbrother

Okay. Now you mentioned the SafetyTech report from DCMS. What is SafetyTech?

Ian Stevenson, OSTIA

So DCMS has a specific view of what SafetyTech is, and it’s all around preventing harms that occur to the user, rather than to their computer or their bank account. So we all talk about cyber-security. That’s largely about stopping computers from interfering with each other, interfering with data, stealing data and so on. Cyber-crime extends to online fraud and so on, but these are all really tangible harms that we’ve already got good ways of classifying and dealing with. The Online Harms white paper and the sector report that followed it are specifically about harms that might come to the people who sit behind the computer, and as such they’re fundamentally person-to-person crimes rather than technical or financial crimes.

Neil Fairbrother

Well, they may not be person-to-person, because an online harm might be caused by an online game, for example, that is influencing a young child to spend money in a nefarious way. They have these loot boxes, for example, which offer no guarantee of what the child will receive and can cost up to 70 quid per loot box.

Ian Stevenson, OSTIA

Yeah. So I agree with that. And that fits into the broader definition of online harms. It also fits into the safety-by-design agenda, because that clearly is encouraging unsafe behaviour. We don’t see a lot of companies in the online safety sector specifically targeting that, because really it sits within those individual applications. And I think it’s likely to be the subject of separate regulation at some point.

Neil Fairbrother

Do you think that credit card payment companies have a role to play in that particular instance?

Ian Stevenson, OSTIA

I think that the games designers, the app stores and the credit card payment companies all need to really think about whether the users understand what they’re buying and whether that transaction has been properly authorized.

Neil Fairbrother

Okay. The SafetyTech report had some interesting stats in it. Most social media companies used in the West are US-based, typically California-based, and they’re absolutely huge. Facebook, as everyone knows, is massive. And yet the online SafetyTech industry in the UK accounts for approximately 25% of the global market share. How is that possible?

Ian Stevenson, OSTIA

So the simple answer is that most of the big social media companies you’re describing aren’t classified as online safety companies. They’re primarily social media companies, or really in most cases advertising companies, that simply have an obligation to make their experiences safe. So the sector study was looking at companies whose primary focus is building technology that can make the internet safer. That tends to exclude social media companies, and it tends to exclude some of the moderation companies that provide manpower for safety. This is specifically looking at companies that are producing safety technology.

Neil Fairbrother

Okay. What is the SafetyTech Innovation Network Ian?

Ian Stevenson, OSTIA

So the SafetyTech Innovation Network has only just been announced, but it will be a network that helps connect innovators, whether those are academic researchers or people in businesses of all sizes, with the problems that need to be solved in online SafetyTech. And we hope ultimately there will be funding to help with some of that necessary innovation.

Neil Fairbrother

COVID-19 has impacted all of us, and many of my podcasts, in fact all of my podcasts, are now being recorded over Zoom rather than face-to-face. And one of the things that I think OSTIA was planning to run was a SafetyTech Expo. Is that still going to go ahead? Is it going to go online virtually, or has it been pushed back until people can move around more safely?

Ian Stevenson, OSTIA

So it will happen. It’s planned for the tail end of this year, the tail end of 2020. I think we’re still waiting to see what the world is going to look like in the latter part of this year. It will certainly happen online; whether it has a physical presence as well remains to be seen.

Neil Fairbrother

We’ve been talking about DCMS, the Department for Digital, Culture, Media and Sport, and they have quite publicly pledged to support the UK SafetyTech industry, and as you say, a large proportion of the SafetyTech industry are OSTIA members already. How can DCMS, the government department, provide support without contravening state aid rules?

Ian Stevenson, OSTIA

That’s a great question and I’d love to give you a really sophisticated answer to that, but state aid rules are a mystery to most mortal human beings! There are a variety of approved schemes that allow governments to invest in strategic areas, whether that’s by supporting investment activity where the market isn’t yet meeting a need, or by supporting early-stage innovation or R&D activities. So there are a significant number of schemes already that the government can use to support businesses in different areas. A lot of those are administered through Innovate UK, grants for innovation and research and development activity, for example, and those can take place without contravening state aid rules.

So I think DCMS is going to seek to influence various other parts of government, including those who decide what projects go out through Innovate UK. So we hope to see some online safety focused calls for projects from Innovate UK and also to look at how the investment market is functioning for SafetyTech in particular, and look at whether there are things that can be done to encourage or support investment there, much in the same way as it has in the past with FinTech or cybersecurity.

Neil Fairbrother

Okay. And for those who are not familiar with it, what is Innovate UK?

Ian Stevenson, OSTIA

Innovate UK is one of the key mechanisms government uses to provide grants for innovation, research and development. It’s a government organization that is guided by strategic thinking on what R&D the UK economy needs, and it puts out calls that companies can respond to with proposals to do development projects.

Neil Fairbrother

On your website you refer to a working group within OSTIA, which comprises a number of your members: there’s Cyan Forensics, Crisp, SafeToNet, Securium, and Yoti. What is the purpose of the working group that OSTIA has?

Ian Stevenson, OSTIA

So the organizations that have formed the working group are really the organizations that founded OSTIA. At the moment most of the volunteer work that’s going in to make OSTIA happen is coming from these companies, and the working group also serves in place of a Board. We’re just in the process of formally incorporating the Association; it’s operating on an informal basis just now. Once it’s incorporated, the working group will become the Board, and at some point in the future we will open up the Board to election from other members as well.

Neil Fairbrother

Okay. Now on your website there’s a launch video in which you mention that public trust in tech companies has slipped. Is there a danger that OSTIA members, who by definition are tech companies, also won’t be trusted? Will they be tarred with the same brush as the big tech companies that you’re trying to safeguard people from?

Ian Stevenson, OSTIA

So I’m not sure we’re trying to safeguard people from the big tech companies, although we are trying to safeguard people from other people who are using their platforms. And I think when people think of the big tech companies, they’re really very often thinking of the big social media companies, and that’s where trust is at an all-time low.

I think already we can see the public rewarding companies that demonstrate they’re trustworthy. So there are some search engines, DuckDuckGo for example, that don’t do the same advertising and tracking and that are increasing in popularity. That’s not in the online safety space, and a lot of companies in the online safety space are not yet well-known names, but I think the public is quite good at discerning which companies are making positive efforts towards their privacy and safety online, and which are not.

Neil Fairbrother

In this video you said that you think we’re in the midst of a “strategic inflection point”. What did you mean by that? It sounds very interesting.

Ian Stevenson, OSTIA

Yeah. So there’s a little bit of business speak creeping in there. The term “strategic inflection point” was coined by Andy Grove, one of the founders of Intel and a fairly well-known figure within the tech world. It’s the idea that whilst things nearly always change slowly, often taking many years to move from one state to another, a strategic inflection point characterizes the sort of change where, in hindsight, there’s very clearly a before and after, even if it wasn’t necessarily completely clear at the time. So I think COVID, to come back to that topic, will represent a strategic inflection point. I think the world will be different afterwards, and people will talk about events as happening before or after COVID.

And I think that’s what we’re seeing now in online safety. So three or four years ago there really wasn’t much public discussion of online safety, and, you know, social media companies were very often founded on very liberal ideals, very positive perspectives on human nature; they thought we could create an online world that had none of the problems of society at large. And unfortunately that’s proved not to be the case. Social media, the internet, has brought out some of the best in human nature, but it has also brought out some of the worst. And I think in the last few years there has been a fundamental change, and people are starting to hold social media and internet companies to account for the harm that they’re enabling, even if they’re not causing it. I think we’ve seen that in governments starting to legislate, we’ve already mentioned the Online Harms white paper in the UK, and I think we’re seeing that in public opinion, especially in response to things like Cambridge Analytica.

At the World Economic Forum in Davos this year, a large group of advertisers said, hey, you know, our money is funding most of these big social platforms, and some of them aren’t really doing enough to keep their users safe, and we shouldn’t be spending our money on enabling dangerous things to happen to users, so we’re going to expect better of the social media companies.

And the investment community is increasingly looking to be associated with companies that are doing good rather than companies that are enabling harms. And I think that if we look back four or five years from now, we’ll see the period that’s come before now as the sort of idealistic Wild West era of the internet, and we’ll see where we’ve got to as the internet having grown up, been regulated and taken responsibility for the harm it causes as well as the good it does, in much the same way that most maturing industries end up being regulated at some point.

You can’t buy a car that doesn’t have seatbelts, you can’t buy an electrical appliance that hasn’t been safety tested, and one day you won’t be able to use an internet service that doesn’t meet certain standards for safety. So I think at the moment we’re in the midst of that strategic inflection point, from the idealistic Wild West to a slightly more grown-up, safer internet.

Neil Fairbrother

The social media lobby is extraordinarily powerful, and at the moment they have legal exemptions from liability for what is posted on their platforms. And there are moves to change that one way or another, either by changing Section 230, as it’s known in the States, directly, or by adding additional laws or acts such as the EARN IT Act, under which they would have to earn their exemption from liability by demonstrating that they are running best practices. Is this a feasible proposition, do you think, or is it simply not possible to get the social media companies to be accountable for all of the billions of user-generated posts that take place on a daily basis?

Ian Stevenson, OSTIA

It’s not possible to eliminate every possible harm from that volume of user-generated content. But there are some things, child abuse material for example, that are relatively easy to detect, and there’s simply no excuse for those being shared on the internet. So I don’t think there’s a one-size-fits-all solution to the problem.
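As an aside, the reason known child abuse material is “relatively easy to detect” is that uploads can be matched against hash lists of previously identified content, such as those maintained by the IWF. The sketch below shows only the lookup pattern and is purely hypothetical; production systems use perceptual hashes (Microsoft’s PhotoDNA, for instance) so that resized or re-encoded copies still match, rather than the plain cryptographic hash used here.

```python
# Illustrative sketch of hash-list matching for known content. A plain
# SHA-256 digest is used here only to show the lookup pattern; real
# systems use perceptual hashes so altered copies still match.
import hashlib

# Hypothetical hash list; in practice supplied by bodies such as the IWF.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_known_content(data: bytes) -> bool:
    """Return True if the upload matches an entry in the hash list."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

if __name__ == "__main__":
    upload = b"test"  # this test payload hashes to the entry above
    print("Blocked" if is_known_content(upload) else "Allowed")
```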

What I would say is that this exemption the social media companies have enjoyed is essentially built on the idea that they are not publishers of content, they’re merely pipes. So they serve the same sort of function as a telephone or a fax machine, and nobody would expect the telephone company to be responsible for the fact that a fax was used for fraudulent purposes or contained some harmful content. Some of the purest forms of social network can perhaps still claim that they’re simply a platform enabling conversation between people, but that’s not where the money is.

So if you look at most modern social platforms, they invest a huge amount in building engines to show content to users that will drive engagement. And in driving engagement, they make more advertising revenue. That’s the business model. And I think there’s a very strong argument that if they are selecting, whether in an automated way or not, content to put in front of people, and suppressing content that might otherwise get in front of people because it has low advertising or engagement value, then they’re not actually like a fax machine, they’re not a neutral means of communication. They are in fact editing the content that they show to a particular user; they’re curating it and they’re publishing it. And if they’re doing that, then they have a responsibility not to show, not to curate, not to amplify content that’s harmful. So I think the debate on this is moving.

In the early days of the internet, everybody was very optimistic and desperate to encourage innovation, and, you know, it’s really important that we don’t suppress innovation. But if you are a platform operating at large scale that is editing content for commercial purposes to promote engagement, then really you should be taking some responsibility for what content you’re promoting.

Neil Fairbrother

The DCMS SafetyTech report included some non-UK companies, such as Sweden’s NetClean run by Anna Borgstrom. Do you see a space in OSTIA for non-UK companies, so you can have a global voice? Would that give you more power, so to speak, more oomph behind your message?

Ian Stevenson, OSTIA

Yeah, absolutely. We’re still figuring out exactly how to do that right. In the first instance we focused on UK companies simply because you have to start somewhere, and there are some logistical issues around how you bring international companies into the fold without complicating things. For example, we’re working with the Department for International Trade on how they can help us to export; that’s clearly a service that is focused on UK companies, not international ones. So we’ll probably end up having another tier of membership so that we can bring international companies into the conversation without complicating some of these domestic logistical things.

Neil Fairbrother

Okay. Now there is another acronym I’d like to ask you about, and we’ve kind of touched on it without naming it, and that is FAMGA. What is it? And how can you prevent its influence on what you do as an organization?

Ian Stevenson, OSTIA

FAMGA is one of a number of different acronyms that are used to talk about the giants of technology, in this case it’s Facebook, Apple, Microsoft, Google, and Amazon. And really these are companies that for most people on a day-to-day basis define their experience of technology and the internet. They are not online SafetyTech companies, so they’re not going to be members of OSTIA, but clearly they’re pretty important stakeholders when it comes to any conversation about online safety and we’ll look to engage with them.

But I think it’s also interesting to look beyond some of those big companies. They may dominate our experiences, but, for example, right now there will be lots of kids who are online. We’ve already talked about games from the point of view of loot boxes, but a lot of these games also have messaging services. Now, these need to be safe as well.

And to some extent the FAMGAs of this world have the ability to build their own online safety technology, although we’d like to think that industry in the UK and elsewhere already has a lot to offer them. But a lot of these smaller companies, for example games producers, or companies producing tools specifically for education, you know, they don’t have the resources of FAMGA.

So yeah, absolutely, we’ll seek to engage with FAMGA. We would like them to contribute to the conversation about what technology is needed, what technology is available and how it can be effectively deployed. But there are literally hundreds of thousands of companies around the world that are providing interactive user experiences with user-generated content to huge numbers of users on a day-to-day basis, collectively probably more than FAMGA. And many of them don’t have the resources of FAMGA to deal with problems. So they’re really interesting collectively to OSTIA as well.

Neil Fairbrother

Okay. Now you say on your website that you want to work closely with regulators, and in the UK that would appear to be Ofcom, from an online harms perspective. How do you see that relationship working?

Ian Stevenson, OSTIA

I think it’s really important to recognize that regulators can only regulate where there’s a practical solution. So, you know, if the UK regulator turned round and said to Facebook, right, all of your Directors are going to prison if we find one piece of child abuse content on the site, that would be pretty unreasonable. If you think about the scale at which Facebook operates and the technical challenges, that would clearly be an unreasonable thing to do. However, very often these conversations get complicated, and, you know, quite rightly, social media companies point out that it is really hard to solve the problem of child abuse material appearing on your site, for example.

What we want to do is make sure that the regulator is well-informed about what is practical. So rather than having a generic conversation about the whole problem and risking either unreasonable regulation or ineffective regulation, let’s have a conversation about what we can do. For example, there are good technologies for blocking known child abuse material, and there are good technologies for detecting grooming conversations in chat. They don’t solve the whole problem and they’re not infallible, but since the regulation is going to be based around meeting a duty of care, that duty of care implies that you’re achieving some level of best practice. And we want to make sure that the regulator understands what best practice might look like and what technology is available today.
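For a flavour of how grooming detection in chat might work, here is a deliberately simple sketch: score a sliding window of recent messages against a list of risk phrases and escalate for human review past a threshold. Real systems use trained language models and behavioural signals across whole conversations; the phrases and threshold here are hypothetical.

```python
# Hypothetical sketch of grooming-risk scoring over recent chat messages.
# Real systems use trained language models; these phrases and the
# threshold are illustrative only.
from collections import deque

RISK_PHRASES = ["our secret", "don't tell your parents",
                "how old are you", "send me a photo"]  # hypothetical

def conversation_risk(messages, window=20):
    """Count risk-phrase hits in the most recent `window` messages."""
    recent = deque(messages, maxlen=window)
    return sum(1 for m in recent for p in RISK_PHRASES if p in m.lower())

if __name__ == "__main__":
    chat = ["hi!", "how old are you?", "this is our secret, ok?"]
    if conversation_risk(chat) >= 2:
        print("Escalate this conversation for human review")
```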

Neil Fairbrother

OSTIA as we said stands for the Online SafetyTech Industry Association, but you do have some non-technology members, for example the NSPCC, the WeProtect Global Alliance and I think also the IWF, the Internet Watch Foundation. Why do you have some non-technology organizations, these NGOs, in OSTIA?

Ian Stevenson, OSTIA

So I think as we grow, we’ll probably have more than one category of membership. That’s something we’re working on right now. We certainly see the NSPCC, WeProtect and the IWF as key stakeholders in this space. Some of them actually are technology providers in their own right; the Internet Watch Foundation provides some technologies for online safety. But we think they’re an extremely important part of the conversation and they support our work, so we’ll have a means of recognizing that.

Neil Fairbrother

We are supposed to be seeing some legislation coming out of the responses to the Online Harms white paper that was published last year. What legislation would you like to see implemented in the UK out of the Online Harms white paper that would have a global impact?

Ian Stevenson, OSTIA

I think we’d like to see regulation that follows the good pattern set by GDPR of requiring that people live up to a standard of best practice and take responsibility for what they’re doing on their platform. GDPR, I think, is fundamentally good legislation because it’s not prescriptive about how you achieve data protection, and nor should regulation for online harms be prescriptive about how you prevent harms, but it should require organizations to demonstrate that they’re taking reasonable steps. And if the regulator identifies organizations that are failing to take reasonable and proportionate steps, given their size as an organization, given the number of users, given the demographics of their users, then that regulation should have teeth, whether that’s the power to impose fines, whether that is making Directors individually liable, or potentially criminally liable for some of the worst possible infringements, or whether that is the ability to block access to some of these sites within the UK.

Neil Fairbrother

Online harms are called online harms because of the damage they can do, particularly to children’s mental wellbeing and even their lives. We’ve seen tragic cases where even very young children have taken their own lives because of the harms they’ve been subjected to online. Does the responsibility for trying to address these problems weigh on you as an individual?

Ian Stevenson, OSTIA

So firstly, I think children are a hugely important part of this space, but there are all sorts of other vulnerable groups who are particularly impacted by online harms. And, you know, as many high-profile figures have discovered, if you for whatever reason provoke the ire of the internet, you can become the subject of some truly horrible treatment. And that can affect literally anyone. So children are hugely important, but it’s about more than children.

I think from a personal point of view it’s really important to me that, as a business and as an industry association, we’re working on a problem where, when we get it right, we make the world a better place. We can’t solve all of the problems, but this is why we think it’s so important that the story of good safety technology is told, and we do our best to make sure that it’s deployed and used widely to protect as many people as possible.

Neil Fairbrother

We are unfortunately, as usual, over time; we’ve run out of time. But I would like to ask you one final question, if I may, Ian, and that is: the SafetyTech industry report that we’ve spoken about said that the next UK unicorn, a billion-pound or billion-dollar company, could come from the SafetyTech industry, which means it could be one of your members. Will it be your company, Cyan Forensics?

Ian Stevenson, OSTIA

I’d like to think Cyan Forensics will achieve that one day, but there are quite a few companies in OSTIA that are considerably ahead of us on that journey and I look forward to them leading the way to online safety unicorn status.

Neil Fairbrother

Ian, thank you so much for your time. It’s been fascinating to talk to you about OSTIA. I wish you all the very best of luck with your industry association, and good luck with the SafetyTech Expo later this year.

Ian Stevenson, OSTIA

Thank you very much.