Safeguarding Podcast – Lydia Grace of the Samaritans
By Neil Fairbrother
In this safeguarding podcast we discuss with Lydia Grace of the Samaritans their industry guidelines for managing self-harm and suicide content on their websites and how to keep vulnerable people safe online. What was the motivation behind these guidelines? Is self-harming an effective coping strategy? What’s the difference between “helpful” and “harmful content”? Is AI useful in this context? Why are children worried about reporting content they’ve seen and why does platform transparency about prevalence rates cause a problem?
https://traffic.libsyn.com/secure/safetonetfoundation/SafeToNet_Foundation_podcast_-_Lydia_Grace_The_Samaritans.mp3
There’s a lightly edited transcript of the podcast below for those that can’t use podcasts, or for those that simply prefer to read.
Welcome to another edition of the SafeToNet Foundation safeguarding podcast with Neil Fairbrother, exploring the law, culture and technology of safeguarding children online.
Neil Fairbrother
This episode is dedicated to the late Paul Vodden who was a tireless campaigner against bullying, whose own son Ben took his life at just 11 years old.
On World Suicide Prevention Day, Thursday, the 10th of September, the Samaritans published some guidelines to support sites and platforms hosting user generated content on how to safely manage self-harm and suicide content and keep vulnerable people safe online. To discuss this topic and explain the guidelines I’m joined by Lydia Grace, the Samaritans’ Policy and Research Program Manager for Online Harms. I should say at this point that if you, or someone you know, is affected by this podcast, then please seek help from a trusted friend or organization.
Welcome to the podcast Lydia.
Lydia Grace, the Samaritans
Hi Neil really great to be here. Thank you for inviting me.
Neil Fairbrother
My absolute pleasure. How has COVID treated you?
Lydia Grace, the Samaritans
Yes, it’s been a challenge for us all, hasn’t it? But it just emphasizes the importance of the online environment, making it so much easier for people to access support online, but also highlighting how important it is to create those safe spaces where people can find support online.
Neil Fairbrother
Indeed, indeed. Okay. Could you give us a brief resumé please of your background so that our listeners from around the world understand a little bit more about you and indeed explain who the Samaritans are for those who may not be familiar with the organization?
Lydia Grace, the Samaritans
Definitely. So to start off with I started out as a researcher looking at kind of memories and how those positive and negative events from your past influence your sense of self and wellbeing. But I really wanted to just get closer to people with lived experience and make more of an impact, so [I] then moved over to the charity sector working for a really big mental health organization. But as part of that, I worked on an online peer support community. So really started to understand a lot about the benefits of sharing your experiences online, but also those kinds of potential dangers also associated with this.
And then last year I moved over to the Samaritans to lead on their Online Harms program. So we’re working in collaboration with government and tech platforms such as Facebook, Instagram, Google, for example, creating this hub of excellence around suicide prevention. So trying to create those industry guidelines that you spoke about, but also helping users to have the tools that they need to post safely and find support online.
Neil Fairbrother
Okay. And the Samaritans as an organization does what?
Lydia Grace, the Samaritans
So we’ve got a helpline, we offer support 24/7 to anybody who needs us. We’re always there to listen. So if ever you’re struggling, you can just pick up the phone. We also have a dedicated email address, jo@samaritans.org, and our helpline number is 116 123. As I said, we’re open 24/7 for anyone needing support.
Neil Fairbrother
Okay. You’re very much almost a fourth emergency service.
Lydia Grace, the Samaritans
It’s really important to just be there for that person in the time that they need support the most.
Neil Fairbrother
What was the motivation behind the creation of these guidelines? You’ve somewhat alluded to that already, but perhaps you’d go into a little bit more detail.
Lydia Grace, the Samaritans
Yes, definitely. So the online space is just so important and we know that it’s such a hugely vital source of support for individuals experiencing self-harm and suicidal thoughts or feelings. We do want to make the internet a safer place. So making sure that people can access those benefits, but whilst also being protected from harm. So the guidelines are really a way of helping platforms to minimize access to that harmful content, but also increase access to the content that could be really helpful and supportive.
You know, we are noticing that platforms are making steps in the right direction, but there is still a long way to go. So the guidelines really for us were just a great way of bringing experts in the field together, whether that’s academics, government, tech platforms, third sector organizations, and people with lived experience, to develop these best practice industry guidelines, making sure that they reflect the latest evidence base, emerging issues and the latest in platform technologies.
Neil Fairbrother
I believe that your guidelines were informed by work that you did with the University of Bristol. Is that correct?
Lydia Grace, the Samaritans
Yes, definitely, and some of the work that we did showed that users were using the internet in relation to self-harm and suicide attempts, and this was particularly young people. Perhaps that’s not surprising given that the internet now is just part and parcel of our everyday life. You know, we all go online if we want to find something out, so it’s not surprising that they’re also using the internet in this way. But we also found that the way you use the internet might differ depending on your level of distress.
So for example, if you’re experiencing low levels of distress, you might be browsing around trying to find support opportunities or looking for people with similar experiences, and if you’re given signposting and supportive content, you’ll be really pleased about that and will interact with that content a lot. But those individuals with higher levels of distress were actually really purposeful in their browsing, and it was much harder to distract them out of that with signposting and options for support.
So we really need to develop our understanding of what interventions can help users, and when, and I think the guidelines are based on this evidence and also try and explore the latest evidence in what is working and what users are finding helpful.
Neil Fairbrother
Okay, well, let’s explore the guidelines. I believe there are 10 in total and the first of them is “Understanding self-harm and suicide content online.” Now you define self-harm here. What is your definition of self-harm?
Lydia Grace, the Samaritans
So I think the definition we use is kind of any deliberate act of self-poisoning or self-injury without suicidal intent, so that excludes things like accidents, substance misuse, or eating disorders. But we do recognize that there are different definitions of self-harm and also there are many reasons why someone might self-harm. So [for] many people, it won’t be with the intent to die, it’s just a way of coping with really difficult experiences or feelings. But the problem is that the evidence doesn’t really show that that is an effective or healthy coping strategy so it’s really important that we’re able to provide support to those individuals.
Neil Fairbrother
So to the individual that is self-harming, it might seem like a valid strategy to cope, a coping mechanism, but in actual fact it isn’t. Is that correct?
Lydia Grace, the Samaritans
Definitely. People can find it a real help as a coping strategy, but yes, long-term actually, it’s probably quite unhelpful. And we need to make sure that those people are finding support at an early stage.
Neil Fairbrother
Now you say that the research conducted by the Samaritans and the University of Bristol found that 26% of young people who had presented to hospital for self-harm or indeed a suicide attempt, had used the internet in relation to this act, and you also drew a distinction between helpful and harmful content, which you, again, alluded to earlier. What’s the difference between helpful and harmful content in this context?
Lydia Grace, the Samaritans
I think it is a complex area. When we’re talking about helpful content, we’re typically referring to content which helps the user better understand their experiences, see that there is hope and that things will get better, and helps them find support. So it could be, for example, messages around hope and recovery or messages encouraging the user to seek help.
But when we’re thinking around harmful content, it’s that which encourages somebody to hurt themselves, or graphic depictions of self-harm or suicide, or information around methods, for example. But then what we also find is that there is this huge grey area of content, where we’re really not sure about the negative and positive effects. So that includes things like quotes about self-harm and suicide and drawings, for example, and this is the kind of content where we really need to develop our understanding and really chip away at what makes that content harmful and to whom.
Neil Fairbrother
Yes, it must be quite a difficult balancing act. I believe that under the 1961 Suicide Act it’s an offense to encourage or assist the suicide or attempted suicide of another person. So those websites and services that are providing what they think is helpful content must be absolutely sure that it is helpful content and can’t be construed as harmful.
Lydia Grace, the Samaritans
Yes, definitely. And I think when we’re talking about that kind of illegal content, we’re talking specifically about posts that are encouraging or assisting suicide. So for example, this might be through online challenges in games for example.
Neil Fairbrother
The second guideline is “Establishing accountability and structure.” Now, how can companies set about doing that?
Lydia Grace, the Samaritans
Well, I think it’s about ensuring that that responsibility is embedded throughout the whole organization, but also that accountability is held at a really senior level. The platforms do need to have this really good understanding of their safeguarding responsibilities, but also of when they’re required to pass information on to emergency services. Companies really do need to be taking this seriously and making a commitment to protecting those vulnerable users from harmful content. But not only is it important to put these responsibilities and policies in place, but also to constantly review and update them to make sure that they reflect the latest evidence base, but also the evolving environment that we’re in.
Neil Fairbrother
Okay. Now in this guideline, you identify three types of users who may require additional safeguarding and I’d just like to pick out one of those if I may, and that is children, those under 18. How do you identify or how do platforms identify who those people are, without some form of age verification?
Lydia Grace, the Samaritans
Yeah, I think it’s a really difficult area and there are lots of different age verification processes that they can put in place, but I think what’s really important as well is having really clear guidance on sign up about the age restrictions, but also why that’s the case, and also making sure that the platform is signposting younger users to more suitable platforms in the cases where they are signing up.
Neil Fairbrother
Okay. The third guideline is “Developing and implementing self-harm and suicide content policies.” Now this could be quite a lengthy answer, but what considerations should be factored into such a policy?
Lydia Grace, the Samaritans
I think, like you say, there are so many. I think the key things from our side are the functions of the site, so for example whether that’s instant messaging or live streaming, and the particular risks that might be associated with those specific functions. Platforms also need to be thinking a lot about the vulnerability of their users, so for example children, as we just spoke about. It’s really important that the more vulnerable the audience is, the higher the thresholds are for those safety policies and the more support they should be providing.
[It’s] also really important to be thinking about the capacity and the resources of the platform, because it’s all very well having an incredible policy around responding to this type of content, but it’s just so important that they’ve also got the resources such as the moderators, for example, to make sure that that policy is implemented really effectively.
Neil Fairbrother
Yes, it needs to become woven into the fabric of an organization rather than just a tick list item. Guideline number four was “Implementing user-friendly reporting processes for self-harm and suicide content.” Now on the face of it, this sounds pretty straightforward, but you do sound a note of caution about the word reporting. Why do you have a concern about the word reporting when you’re using it yourself in your own guideline?
Lydia Grace, the Samaritans
Yes, I think in our guidelines we refer to reporting because it’s commonly understood within the industry, but we do hear from so many young people that they do worry about reporting somebody online, because “report” can sound really negative and the person might worry that they’ll get that user into trouble. So as a result, they might not report the content that they see and try and deal with it on their own, which can then put this massive amount of pressure on them and feelings of responsibility. So I think other wording that we’ve suggested is things like flagging content instead, so that users can still be encouraged to report that content without it sounding so negative.
Neil Fairbrother
I’ve been a user of Facebook for more years than I care to think about and I’ve never seen such a reporting function. Is it actually there, is it hidden away? Should it be more visible and obvious?
Lydia Grace, the Samaritans
Yes. So I think on most platforms you can report content, but even the fact that you’re saying that raises the big issue that platforms need to make their processes around reporting much easier to understand and much more obvious, so that all users, when they go on there, know exactly how to report something. It’s just so important that users know how to report, and that it’s made really visible when people sign up.
Neil Fairbrother
Guideline number five then was “Implementing effective moderation for self-harm and suicide content”. And here you say that AI, artificial intelligence and presumably other tools, can be used to help, but shouldn’t be relied on exclusively. Why is that?
Lydia Grace, the Samaritans
Yes, definitely. I think it can be a really fantastic tool for detecting and responding to really high volumes of content, especially during those anti-social hours, but we can’t underestimate the importance of human moderators and they’re just so crucial in this area of work. Human moderators understand the nuance in language around self-harm and suicide, which AI can often fail to detect. They also understand the context of a situation so much more and can therefore provide much more of a personalized response and support to users, which can be really effective. But of course we do need AI for those larger platforms to identify and respond to that content really quickly and effectively. I think the balance is getting AI and human moderation working in combination.
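To illustrate the kind of AI-plus-human combination Lydia describes, here is a minimal sketch in Python. The risk-scoring model, thresholds and queue names are hypothetical and purely illustrative; they are not any particular platform’s system, and a real deployment would be designed with subject-matter experts.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Post:
    post_id: str
    text: str

@dataclass
class ModerationQueues:
    urgent_human_review: List[Post] = field(default_factory=list)
    routine_human_review: List[Post] = field(default_factory=list)
    no_action: List[Post] = field(default_factory=list)

def triage(post: Post, risk_score: Callable[[str], float], queues: ModerationQueues) -> None:
    """Route a post using an automated risk score, leaving decisions to humans.

    The model only prioritises the queues; trained human moderators review
    flagged content, because automated detection can miss nuance and context
    in language around self-harm and suicide.
    """
    score = risk_score(post.text)  # hypothetical classifier returning 0.0-1.0
    if score >= 0.8:
        queues.urgent_human_review.append(post)   # reviewed first, including out of hours
    elif score >= 0.4:
        queues.routine_human_review.append(post)  # still seen by a person
    else:
        queues.no_action.append(post)             # sampled periodically for quality checks

# Example with a placeholder scoring function
queues = ModerationQueues()
triage(Post("1", "example post text"), lambda text: 0.9, queues)
print(len(queues.urgent_human_review))  # 1
```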
Neil Fairbrother
Well, the benefit of using technology for this is that people aren’t in fact involved, you know, moderators are not involved, and dealing with this kind of content must take its toll on those moderators themselves. So who looks after the human moderators? There must be an impact there, surely?
Lydia Grace, the Samaritans
I think that’s such an important question. They’re obviously viewing a lot of potentially harmful content on a really regular basis so it’s so important that they do have support available to them. The content itself might be distressing, but it might also become quite normalized for them after a while if they’re viewing such high quantities. So I think emotional support is definitely such an important factor here, but also reflective practice sessions and specialist training in mental health awareness and suicide prevention, not only so that they can support others, but so that they also have the tools and knowledge to look after themselves.
Neil Fairbrother
And presumably that would also be reflected in the response that moderators give in terms of it being consistent?
Lydia Grace, the Samaritans
Yes, definitely. So the hope with training is that that consistency is maximized. And it’s so important to have really clear boundaries as well with users, to make sure that those moderators are providing consistent support. And also the moderation should be reviewed regularly to show that they are implementing those policies correctly and also have the support that they need to do that.
Neil Fairbrother
Okay. Now one of the purposes of moderation in this context is to alert perhaps the emergency services, but if the identity and location of an at-risk user aren’t known, how can moderators effect that?
Lydia Grace, the Samaritans
Yeah, I think just encouraging that user to reach out for support from somebody they trust, whether that’s a friend, a family member, their doctor, or a trusted support organization, and definitely encouraging them to contact emergency services if they do need urgent help, or to go to their nearest A&E department. But also asking them what support they think would help right now, or asking them to reflect on what support has helped them in times when they’ve felt similar.
Neil Fairbrother
So guideline number six was to “Reduce access to harmful self-harm and suicide content online”. One of the points you raise here is to ensure that site algorithms don’t push self-harm and suicide content towards users, which is exactly what happened in the case of Molly Russell. Now we may not be able to comment on the Molly Russell case here, I appreciate that, but in the general case, should children be exempt from algorithmically driven content, and did social media sites make any changes at all after that most tragic event?
Lydia Grace, the Samaritans
Yes, I think we definitely saw that platforms did make positive steps in the right direction, but as I said before, there is still a long way to go with this, and we hope that the Online Harms regulation along with our guidelines will push them even further. Platforms do need to be constantly reviewing how their algorithms are working, making sure that they’re not pushing any harmful content towards users, but instead understanding how they can promote positive or helpful resources instead.
But there’s also a lot of other things that they should be thinking about in terms of how they can reduce access to that kind of content. So for example, exploring how it appears in search results and making sure that the helpful content appears first and also trying to implement kind of user functions where they can have more control over the content they see, so potentially filtering out any content containing certain themes, which might be triggering or distressing for them.
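As a rough illustration of the two ideas mentioned here, ranking supportive content first in search results and letting users filter out themes they find distressing, here is a short Python sketch. The marker terms and keyword matching are deliberately simplistic and hypothetical; a real system would need far more nuance than keyword matching and would be developed with subject-matter experts.

```python
from typing import Iterable, List, Set

# Hypothetical marker terms, for illustration only.
SUPPORTIVE_MARKERS = {"helpline", "support", "recovery", "get help"}

def apply_user_filter(results: Iterable[str], muted_themes: Set[str]) -> List[str]:
    """Drop results containing themes the user has chosen to filter out."""
    return [r for r in results if not any(t in r.lower() for t in muted_themes)]

def rank_support_first(results: Iterable[str]) -> List[str]:
    """Order results so signposting and supportive content appears first."""
    return sorted(results, key=lambda r: 0 if any(m in r.lower() for m in SUPPORTIVE_MARKERS) else 1)

results = ["a post about feeling low", "Samaritans helpline 116 123", "a recovery story"]
print(rank_support_first(apply_user_filter(results, muted_themes=set())))
# ['Samaritans helpline 116 123', 'a recovery story', 'a post about feeling low']
```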
Neil Fairbrother
Okay. Now another point you’ve raised is using age or sensitivity content warnings, which could include alerts over content, either checking your users’ age or warning users that content may be distressing because it mentions self-harm or suicide. And again, this seems to reflect or reinforce the argument for Age Verification.
Lydia Grace, the Samaritans
I think Age Verification is a really important thing. We know that age and sensitivity content warnings can be really helpful in making potentially harmful content harder to find. Sensitivity warnings are really important in giving the user more of an informed choice about the content they see. But we do need more research on understanding how effective this is at reducing access to self-harm and suicide content. And we also have to be incredibly mindful about how we censor content, ensuring that we do it in a really sensitive way and avoid censoring people’s experiences in a way that leaves them feeling stigmatized.
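For what a sensitivity screen of this kind might look like behind the scenes, here is a small hypothetical Python sketch: sensitive content sits behind a warning the user must click through, and accounts known to be under 18 are signposted to support instead. The age threshold and policy choices are assumptions for illustration, not the guidelines’ prescription.

```python
from dataclasses import dataclass
from typing import Optional

SIGNPOST = "If you're struggling, Samaritans are available 24/7 on 116 123 or jo@samaritans.org."

@dataclass
class Viewer:
    age: Optional[int]       # None if the platform has no verified age
    accepted_warning: bool   # has the user clicked through the sensitivity screen?

def render(content: str, is_sensitive: bool, viewer: Viewer) -> str:
    """Return what this viewer should see for a given piece of content."""
    if not is_sensitive:
        return content
    if viewer.age is not None and viewer.age < 18:
        # Hypothetical policy choice: younger users are signposted to support instead.
        return SIGNPOST
    if not viewer.accepted_warning:
        return "This post mentions self-harm or suicide. View anyway? " + SIGNPOST
    return content

print(render("example sensitive post", True, Viewer(age=None, accepted_warning=False)))
```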
Neil Fairbrother
Now you do refer to “nudge techniques” and you seem to imply that nudge techniques are not a good thing. What are nudge techniques?
Lydia Grace, the Samaritans
I think nudge techniques are just reminders or prompts that can appear when you’re online. So like a really common example of this would be like reminders when you’re online shopping that you’ve got items still in your shopping basket, for example. So when you’re thinking about online harms we’re thinking about pop-up messages when users are actively seeking self-harm and suicide content, for example encouraging them to just pause and read about the support services and click again in order to continue browsing, or it could be things like popups when you’ve been viewing content for a long period of time and prompts to view another piece of content. So actually they can be really helpful encouraging positive behaviours online.
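A nudge of the kind Lydia describes might look something like the sketch below; the trigger check, session threshold and wording are hypothetical, purely to show the shape of the mechanism.

```python
import time
from typing import Optional

# Hypothetical threshold for a "long" viewing session.
LONG_SESSION_SECONDS = 30 * 60

def nudge_for_search(seeking_harmful_content: bool) -> Optional[str]:
    """Pause an active search with support options; the user must click again to continue."""
    if seeking_harmful_content:
        return ("It looks like you might be going through a difficult time. "
                "Samaritans are available 24/7 on 116 123. "
                "[View support] [Continue browsing]")
    return None

def nudge_for_long_session(session_start: float, now: Optional[float] = None) -> Optional[str]:
    """Prompt a break or a different piece of content after a long viewing session."""
    now = time.time() if now is None else now
    if now - session_start >= LONG_SESSION_SECONDS:
        return "You've been viewing this for a while. Would you like to see something different?"
    return None

print(nudge_for_search(True))
print(nudge_for_long_session(session_start=0.0, now=float(LONG_SESSION_SECONDS)))
```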
Neil Fairbrother
So guideline seven is “Supporting the well-being of users online.” What do you mean by wellbeing?
Lydia Grace, the Samaritans
Different things affect our mental health and wellbeing at different times, just as they do our physical health. So thinking about our wellbeing in relation to online activity is really important: being really conscious of what you’re spending your time doing online, the things you’re looking at and how you’re interacting with others. And platforms can do a lot to support the wellbeing of users.
So whether that’s signposting users to support, providing information on self-care or how to stay safe online, whether that’s how to support friends who are struggling or how they can post safely. There are also lots of different options around providing in-app support for users, so for example instant chat functions, and it’s just so important as well that sites are finding ways to promote that kind of positive content, whether that’s for example wellbeing or help-seeking campaigns.
Neil Fairbrother
Okay. Now you’ve mentioned the word “signposting” a few times now, and you also refer to something in this particular guideline as “gold standard” signposting. What do you mean by signposting? And what’s the difference between regular signposting as it were and gold standard signposting?
Lydia Grace, the Samaritans
I think all platforms should be directing people in distress to support. So that could be a helpline like Samaritans, for example, or emergency services if they need urgent help. But when we’re talking about gold standard signposting, we’re really talking about kind of a more personalized approach. So personalized to the specific issues that that individual is facing.
Also signposting to support in a variety of different formats. So for example, somebody might just not feel comfortable picking the phone up, so it’s so important that they’ve got other options there as well. And also just being careful about how many signposts you provide to a user in one go; when you’re in a lot of distress or going through a difficult time, it’d be really hard to be bombarded with lots of different options. So being as targeted as you can with that support is really helpful, and providing that kind of step-by-step support.
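As a loose sketch of that “gold standard” approach, the snippet below picks a small, targeted set of signposts matched to the person’s issue and offered in different formats, rather than presenting every option at once. The directory entries and matching logic are hypothetical examples, not an actual directory.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Signpost:
    name: str
    issue: str    # what the service is suited to
    channel: str  # "phone", "email", "webchat", "in person", ...

# Hypothetical directory; a real one would be curated with partner organisations.
DIRECTORY = [
    Signpost("Samaritans, 116 123", "general distress", "phone"),
    Signpost("Samaritans, jo@samaritans.org", "general distress", "email"),
    Signpost("Emergency services / nearest A&E", "immediate danger", "in person"),
]

def choose_signposts(issue: str, max_options: int = 3) -> List[Signpost]:
    """Return a few targeted signposts in different formats, avoiding a wall of options."""
    matches = [s for s in DIRECTORY if s.issue == issue] or list(DIRECTORY)
    chosen, seen_channels = [], set()
    for s in matches:
        if s.channel not in seen_channels:  # offer a variety of contact formats
            chosen.append(s)
            seen_channels.add(s.channel)
        if len(chosen) == max_options:
            break
    return chosen

print([s.name for s in choose_signposts("general distress", max_options=2)])
```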
Neil Fairbrother
Guideline eight is “Communicating sensitively with users in distress”. And here you say, “sites should ensure that they are using safe and empathetic approaches, remembering that the user could be experiencing high levels of distress and is in need of support.” How can this be done at the industrial scale that’s needed by these massive sites? Is it simply to just employ armies of moderators?
Lydia Grace, the Samaritans
No, I think this kind of sensitive and empathetic approach needs to be embedded in everything, right from the policy through to how you message users and communicate with them. So it needs to be driven by experts in the field, including people with lived experience. A classic example would be the removal of content, for example, that breaks community guidelines. So platforms get in touch with a user to say their content has been removed, but it’s so important that they do that in a really safe and sensitive way. It could be the first time that user has reached out for support and therefore it’s so crucial that they are really supportive, explain why the content has broken the rules and also help them try to post again in a safer way so that they can continue to get support from the community.
Neil Fairbrother
So the people designing the service really need to be aware of these issues and factor [them] into their flow charts and diagrams of how the service works?
Lydia Grace, the Samaritans
Exactly, which is why it’s just so important that they’re working with experts from the start in designing the processes and the platform to make sure that it is safe for people, but also working with people with lived experience is just so crucial to make sure that it is safe and also reflective of their experience.
Neil Fairbrother
Okay. Now smaller sites and smaller companies may not have the resources to do this. Is this simply a necessary cost of doing social media business, for example, which would mean that if your business model can’t cover these costs, then you simply don’t have a business?
Lydia Grace, the Samaritans
I think it’s important for platforms of all sizes. I think in some ways the smaller platforms can do it slightly better, in the sense that smaller platforms often provide more human moderation, for example, and therefore can provide those more personalized responses. So in a way, sometimes they have the ability to be more personalized and sensitive rather than sending automated responses. But again, it’s so important that they are getting the right training and being informed by subject matter experts in order to ensure that they’re doing it in those safe and sensitive ways.
Neil Fairbrother
Okay. The penultimate guideline, guideline nine is “Promoting online excellence in suicide prevention, through collaboration and transparency” and here you say that the industry should collaborate and share learning. These organizations are quite often competing against each other, so are you saying they should put their competitive differences aside for this specific purpose?
Lydia Grace, the Samaritans
Yes, definitely. I think we understand that platforms are competing against each other, but when it comes to online suicide prevention the industry really does need to come together to tackle this. It’s so important that they’re sharing insights with each other, especially those more established companies, sharing those insights with newer platforms, with less resources. It’s just so important that the platforms aren’t continuing to make the same mistakes and that they’re sharing best practice about what’s worked well and what’s not worked. And users ultimately are using multiple platforms so it’s beneficial to work in this collaborative way to help users understand what is and isn’t accepted on these platforms and also what they should expect from that platform.
Neil Fairbrother
And that would have the benefit of providing the same user experience for this kind of content across multiple different platforms.
Lydia Grace, the Samaritans
Exactly. Which paints a much clearer picture for the user about what kinds of content they should report, for example, and also what is and what isn’t acceptable within those community guidelines.
Neil Fairbrother
Now, I think you have almost a footnote in this section that says that “…transparency by reporting prevalence rates of self-harm and suicide content has a risky side”. What is that risky side? What is the downside to this transparency?
Lydia Grace, the Samaritans
I guess it is so important to be transparent, but you just need to be very cautious about who you’re being transparent to. For example it’s so important for platforms to be transparent with users about the practices and the way in which they respond to self-harm and suicide content, but also be transparent more broadly with people with a legitimate interest about the prevalence of content on their platform.
So what we wouldn’t want for example, is a platform to be really open about having really high levels of self-harm and suicide content that’s harmful and really low levels of removal, because if that became really public knowledge, the worry is that it might attract users to the site to try and find this harmful content. So of course platforms do need to be transparent, but they just need to do it in a way that’s safe.
Neil Fairbrother
Okay. And the final guideline, guideline 10, is “Supporting the well-being of staff working with self-harm and suicide content”. We’ve sort of touched on this before but here you talk about promoting a positive wellbeing culture in the workplace. How do you set about doing that, particularly when we are now all working from home?
Lydia Grace, the Samaritans
I think it’s just about looking out for one another, making sure that you’re supportive of each other and just checking in with someone if you do think they’re struggling. I think the most important message is that it’s okay to ask for help, and I’ve seen some really good examples of this recently, throughout COVID, especially senior leaders setting a really good example, making sure that it’s clear within the organization that it’s okay to ask for help, but also making sure that there’s that support available from the organization when people do actually ask for that help.
Neil Fairbrother
Okay. What has been the industry reaction Lydia to the publication of these guidelines?
Lydia Grace, the Samaritans
I think it’s been really positive in all honesty, and we’ve had a huge amount of engagement since with a lot of different platforms. And I think what’s really clear is that platforms do want to improve their policies around this and they do understand that it’s such an important area of work. And it’s really exciting to be thinking that we can work with the platforms to make the internet a safer place for vulnerable users.
Neil Fairbrother
Okay, Lydia thank you so much. I think we’re going to have to call it a day there, really interesting insights into these guidelines. Thank you so much for your time and I wish you all the very best of luck with the guidelines.
Lydia Grace, the Samaritans
Thanks so much Neil, it’s been really great talking to you.