Safeguarding Podcast – The Unseen Teen with Amanda Lenhart, Data and Society Research Institute

In this Safeguarding Podcast with Amanda Lenhart of Data & Society Research Institute we discuss how teens are an afterthought in social media product design, despite adolescents needing special attention. We discuss “screen time” and what's wrong with it, imaginary users, the impact legal teams have on child-oriented product design, Strategic Ignorance, and the role that journalists and trade unions can have in adolescent digital wellbeing (whatever that is).

There’s a lightly edited transcript of the podcast below for those that can’t use podcasts, or for those that simply prefer to read.

Welcome to another edition of the SafeToNet Foundation’s safeguarding podcast with Neil Fairbrother, exploring the law, culture and technology of safeguarding children online.

Neil Fairbrother

Digital wellbeing is often touted as being a good thing, especially for online children, but what is it and how can tech products help deliver it? Is the legal profession inadvertently creating barriers to achieving it and are service designers and product managers equipped to design it? Are journalists complicit in obfuscation by reporting on screen time and what could trade unions do to help create it?

To help unpick all of this and to take us through their latest report, The Unseen Teen, I’m joined by Amanda Lenhart, Program Director for Health and Data at the Data and Society Research Institute.

Welcome to the podcast, Amanda.

Amanda Lenhart, Data and Society Research Institute

Great! Thanks for having me, delighted to be here.

Neil Fairbrother

Thank you. It’s a pleasure to have you as a guest. Please, could you give us a brief resumé, Amanda, so that our audience from around the world has an appreciation of your experience and background?

Amanda Lenhart, Data and Society Research Institute

Sure, I’m delighted to do so. So as you mentioned, I’m the Program Director for Health and Data at the Data and Society Research Institute. It’s sort of an alternative academic research center based in New York City. I started my career at the Pew Research Center, which many of you may have heard of. It’s a US-based think tank that focuses on doing impartial, mostly survey research, in the public interest. And I was a part of the internet team, in fact, one of the founders of the internet team there. And I spent about 16 years at the Pew Research Center.

And then I branched out after that. I came initially to the Data and Society Research Institute and did a piece of research there. My background is in survey design research, but I have branched into mixed methods. I love good qualitative as well as quantitative research. I’ve also worked at a research centre that is predominantly funded by government grants, in a collaboration with the Associated Press in the United States. And then eventually I returned to Data and Society. So yeah, hopefully that’s enough for your listeners!

Neil Fairbrother

Amanda, we’re here to discuss your latest report, which is called The Unseen Teen. What is the purpose of the report?

Amanda Lenhart, Data and Society Research Institute

So the report is the culmination of almost two years’ worth of research. And the point, the reason we undertook the work was we wanted to understand what was happening within tech companies. We’ve done tons of work talking to users specifically, we’ve talked to adolescents, we’ve talked to their parents, we’ve talked to individuals. But there hasn’t been a lot of work done figuring out and talking to the people who make the products that we use to try to understand what they’re trying to do when they build these products for us.

And so we decided to reach out to about 25 workers in tech companies, companies that you’ve heard of, though I can’t disclose which ones as we offered full anonymity to our participants. And with that anonymity, we had participation that was, I think, really quite wonderful.

We asked them particularly about this idea of digital wellbeing, trying to understand how people think about harm to users and user benefits when they’re starting to build their products. How much do they think about it? Do they think about youth in particular? Do they think about other, perhaps minoritized, populations? And so we started out asking a series of questions and conducting some lengthy interviews with folks to try to find out answers to these questions. And that is what motivated us, and sort of what underpins this report today.

Neil Fairbrother

Okay. Now you’re very much focusing on the digital wellbeing of the adolescent. How do you define adolescent?

Amanda Lenhart, Data and Society Research Institute

For the purposes of this research, we focused on young people between the ages of 13 and 17. That’s how we define it. However, I will say in our interviews, we wanted to be open to lots of different definitions of adolescent. So some of the people that we talked to would, in their work, have defined adolescent as anything from 10 to 24. Other people use a narrower definition. And so it really did vary. Some people, when they were thinking about young people, thought about youth or young people, which is a much broader category. But for our purposes, we focused on 13 to 17 year-olds.

Neil Fairbrother

Okay. And presumably 13 was chosen because that’s typically the age of digital consent, where children can join social media networks without necessarily having their parents’ permission to do so?

Amanda Lenhart, Data and Society Research Institute

Exactly. It’s because of the law in the United States, in particular the COPPA regulation, that has really codified 13 as sort of a watershed year in a person’s life around their ability to consent to access digital platforms. So, yeah, that’s where we focused.

Neil Fairbrother

Okay. Now we’ve got a nice definition there of the term “adolescent” for the purposes of your survey. What about digital wellbeing? You say in your report that “…digital wellbeing is hard to define, and it won’t mean the same thing to every adolescent or population”. Isn’t this really the nub of the matter? If you don’t have an agreed definition, how can there be a consistent design for it, or indeed any kind of design for it at all?

Amanda Lenhart, Data and Society Research Institute

Well, that’s in fact one of the things that was so difficult about this project as a whole. When we went in, we wanted to hear what the definitions were. And what we found, both in the literature that we reviewed prior to beginning the work and in the process of doing the interviews with the individuals, is that there was no consistency in the definitions.

Definitions range widely. Some focus exclusively on mental health. Some of the definitions focus on a wide range of sort of impacts around self-esteem as well as mental and physical health. Some are thinking about exposure to ideas as a factor of digital wellbeing. And then we even had participants in our study who questioned whether there was such a thing as digital wellbeing that was separate in any way from just wellbeing.

But this lack of consistency in fact becomes an important point in our project. Partly because it means that the companies themselves, when trying to think about digital wellbeing (and it’s a term that gets thrown around in Silicon Valley all of the time as though everybody knows what we’re talking about), try to focus on the thing that they can measure, right? Companies are often asked to quantify what they’re doing, especially for people who are giving them money or who are supporting them. And what is the thing that you can easily quantify, that people often point to around digital wellbeing? It’s screen time. And so what ends up happening is that screen time becomes the default measurement of digital wellbeing. Whereas I think, oh, go ahead…

Neil Fairbrother

I was going to say, well, we’ll come on to screen time in a short while, although I do understand it’s used as a proxy for other things.

Amanda Lenhart, Data and Society Research Institute

Absolutely. And you know, I think what ends up happening is, you know, the question is: what is screen time? What is your screen time about, right? Are you engaged with other people? Are you positively engaged with those people? Are you using your screen time in a support group that makes you feel better? Or are you using screen time where you’re interacting with other people and you’re fighting about a political topic, or you’re fighting about something in a way that diminishes your sense of wellbeing or your sense of happiness?

Are you passively engaging with content? Are you actively engaging with a game? Are you stimulated and happy? Are you sort of wallowing in the things about your life that you don’t like? So there are so many ways that screen time can be variable. And so ascribing screen time and creating it as a monolith really does a disservice to all of us, because there’s so much more going on there than just something that can be captured with a number.

Neil Fairbrother

Indeed. Now you do have your own definition of digital wellbeing which is “a sense that use of an online space can be a positive force in a person’s mental, social and physical life and that the person has agency to manage their use of a platform or feature”. And to me, there is a striking similarity between that and the definition of harm as used by Her Majesty’s Government here in the UK, in their draft Online Safety Bill, which says that “harm is a significant adverse physical or psychological impact on a child of ordinary sensibilities”. So you’ve kind of got two sides of the same coin here, but are we really talking about the same thing where harm is the lack of wellbeing?

Amanda Lenhart, Data and Society Research Institute

I think it’s more complicated than that, because I think, you know, there’s probably a neutral state in which you are not harmed, but in which you are not perhaps well. I think some of wellbeing is about aiming for a more optimal set of experiences. But I will be truthful in that we created that definition partly because we felt we had to have one for the purposes of the report, but it is one where I think we are still considering whether that is in fact exactly the right way to define it.

I think because there’s so little agreement in sort of any part of these conversations, we felt we had to plant a flag. But I feel like that’s a part of the report where I’m interested in continuing the conversation about whether that’s in fact the exact right way to be defining this.

Neil Fairbrother

Okay. Now you have a number of key findings, five in fact, which I think are well worthwhile exploring, so I’d like to spend a little bit of time on those. And the first key finding that you have is that “Adolescents are an afterthought at most social media platform companies”. What do they mean? Or what do you mean, I should say, by an afterthought?

Amanda Lenhart, Data and Society Research Institute

So, you know, we talked to people across a variety of roles in these companies. We talked to some people whose job it is, in fact, to think about teenagers. But we also talked to people who are not specifically deputized within the organization to do that, but who build products. They build products that are used by adolescents, used in fact by a majority of adolescents in the world, but they do not necessarily think about adolescents at all when they are building these products.

And so I think the reason we ended up saying that was partly because we first allowed our interviews to kind of unfold without specifically asking about adolescents at all, sort of waiting to see if it would come up. And then when it didn’t come up, which was the case in almost every single interview, we specifically asked about it and had people tell us, well, you know, we don’t really think about them. We’re mostly focused on adult users.

Because I think the thought is, and we can talk more about this, that that’s somebody else’s problem. Particularly in large companies, where there are a few people who are usually asked to be specifically thinking about young people, it allows other people to escape having to think about including young people in the sort of group of people that they’re designing for. And unless a platform specifically defined the bulk of its audience as adolescents or young adults, there just wasn’t a sense that this was a group that they needed to make some, you know, design specifications for specifically.

Neil Fairbrother

You do make the point in your report that “adolescents deserve special attention” in social platform product design. Why do you say that they need special attention, and doesn’t this totally go against the key finding that adolescents are an afterthought?

Amanda Lenhart, Data and Society Research Institute

So I think they are an afterthought, but I think that needs to change. So I would say that there are, as I said, some companies that think about these individuals and have people who are deputized to think about them, particularly some of the largest companies that we talked to. But those people don’t have a lot of power in the company. They can do lots of great work, write memos, but often they’re invited into product design processes very, very late in the process. And so they come in to a fully formed product and they sort of wave their hands and yell and scream and essentially say, hey, this product is going to hurt children, right? This product is going to be damaging in some way. This product is going to be really problematic. This product is going to be used in horrible ways by users, and you haven’t designed fixes into this product for that reason.

And so often in these conversations, those people are not listened to, and products roll out and the harms blow back and the company ends up having to sort of patch things up after the fact. But because of the focus, and we can talk more about this, on business models that are really about exponential growth and about revenue growth, anything that is perceived as contradicting any kind of, you know, massive growth is something that gets basically pushed to the side. And it means that these folks, even though they often have, I think, very important moral and sort of ethical things to say, simply aren’t paid attention to.

Neil Fairbrother

Okay. Now the second key finding that you have in your report is that “Companies design for their imagined average user, which leaves out youth and others”. And by others, I assume you mean minority users perhaps. Now a key part of modern product and service design is what’s called a persona, and it’s also part of marketing in fact. A persona is a detailed profile of a fictitious person, which describes the characteristics of the target market. So is this approach fundamentally flawed?

Amanda Lenhart, Data and Society Research Institute

So I think, you know, a well done persona can get you part of the way there. Though, interestingly, we were talking about this on my team, about this idea of personas, and many personas are built to try to incorporate the thinking and the sort of needs of some of these more edge-case users who we’re talking about, whether it’s people with a disability, people who have a trans identity, people of colour. And often, as my colleague Joan pointed out, you know, you’re creating an imagined marginalized user. And I think what our report is trying to say is that by focusing on the middle, by focusing even on an aggregate person, you’re missing the details that are happening to people on the edge; you have to talk to them directly. Creating a fictitious version of those people isn’t going to get you all the way there.

It is definitely an important first step, and including personas which incorporate characteristics of some of the more marginalized or minoritized parts of your population is, I think, definitely an important step. But in general, when people are starting to envision these products, at least some of the folks that we talked to, they talked about designing for people like them. And in many cases, that is people who love technology and are really, really connected.

So we’re missing a whole bunch of people who are not particularly tech savvy and people who have low connectivity. But they then often would talk about being disappointed when they take their products out to test whether people like them, and they wouldn’t kind of land as well. Because they were really designing for people like them and not, you know, as one person told us, people like my Dad, right?

So I think there is this sense that you use the enthusiasm that you have and the things that you know to build these products. And when you have a workforce that’s not very diverse, it means that you don’t have those kinds of thinking and those experiences baked in from the very beginning. The personas, as I said, get you part of the way there, but they don’t get you all the way there, either in terms of the people doing the designing being diverse, or in terms of doing research as a part of the process with those specific humans who have those particular characteristics.

Neil Fairbrother

Okay. And in this section, there’s a quote from one of your interviewees that really reflects, I think some of the thinking that’s going on within these organizations and what this interviewee says is that “…anything that deals with children is really the legal team’s purview”. Now the legal team is a constant theme throughout your report, and they often appear to be a blocker to anything remotely aimed at adolescents, especially at the younger end of that age range. Is that the case?

Amanda Lenhart, Data and Society Research Institute

So for us, I think legal teams are a part of it, but there are also teams within tech companies called “Trust and Safety”, which are sometimes underneath the legal umbrella and sometimes not. Trust and Safety usually has a slightly more holistic remit than legal, but is often a part of that as well.

Legal teams are approaching this from a perspective of risk, right? They want to understand where the risk lies. But they know that when they’re called on to talk with a product team, they are perceived as blockers, and they don’t want to be; that is a bad thing in the cultures of these companies.

And so you can see repeatedly in the research that we did, in the quotes that we gleaned from a number of different people across the spectrum in lots of these sort of Trust and Safety type roles, that they really worked hard not to be seen as blockers, to find ways to compromise and work with people and work around people and figure out motivations. So “How can I present the information that I know, that I think is so important to help children? How can I present that in a way that this person will understand how important this is?”, or “How can I lay out a menu of choices?”

And a legal team also, you know, is obviously thinking about this from a perspective of risk, and a perspective of the financial risks to the company based on judgements against the company for illegal activity. And so often the threat of that, the threat of regulation, the threat of legal judgement, is the card that legal teams can play to block something within a company that is really harmful. But often they’re not listened to, and I think that’s a real frustration that people in these jobs have and expressed to us.

Neil Fairbrother

Okay. The third key finding is that “Many negative health impacts stem from what companies choose not to know about their users”. And you call this “Strategic Ignorance”, which sounds very concerning indeed. And you say that there are three main ways that ignorance can be strategically designed into company structures, and the first is through not collecting data. What do you mean by that?

Amanda Lenhart, Data and Society Research Institute

So some of the companies we talked to, often under the guise of privacy protection for users, do not collect data about their users. They specifically described particular data that they don’t collect. And I think it’s particularly salient around issues like age, where for users under a particular age, you have decided that they’re not on your platform. You’ve created age gates that are basically ineffective and allow young people to easily work around them, so you have this very fictional set of information that you’ve collected about some of your users.

So we heard from a number of platforms: “Well, we don’t know if we have kids on our platform because we don’t collect that data”, or “We don’t know; we expect we probably do, but we don’t know that we do”. So there’s this sense of, if we don’t explicitly collect information, then we won’t have knowledge that these people are on our platform, or we won’t be sort of obligated, either morally or legally, to engage with them and create a better experience for them. And we did hear that from a number of platforms.

Now, some platforms don’t do that. What’s also very interesting, in contrast, is that even on these platforms that don’t collect a lot of data about users, there is sometimes a collection of experiential data from a person’s profile to deliver ads to them. So there’s a real tension, right, in that we collect some data based on your use of the platform that we can use to deliver ads to you, but we also say that we don’t collect other kinds of data about you to preserve your privacy. Some of this is different platforms with different practices, but sometimes it’s happening on the same platform.

Neil Fairbrother

And a related topic to all of this is age verification, of course, which I don’t think we have time to go into. The second tactic for strategic ignorance that you’ve identified is collecting too much content. How does that impact strategic ignorance?

Amanda Lenhart, Data and Society Research Institute

That points to sort of another challenge that tech companies have right now, which is that they have enormous amounts of content being placed on their platforms every single second of every single day. And it’s nearly impossible, given the state of machine learning as well as the ability to have large enough staffs, for any of these companies to have oversight of all of this content, right? It’s extremely difficult. They’re trying machine learning; it’s not perfect, but they’re working on it. However, one thing that doesn’t, I think, often get talked about is that it is a distinct choice by the companies to allow content to be placed on the platform in a seamless way that doesn’t have friction.

Previously there have been other platforms where content had to be checked: somebody had to look at it, it wasn’t something that happened quickly. And that’s one of the innovations of many of these platforms, that you can just put content up no matter what. But I think what we really wanted to point out with this is that that’s a choice, right? A company has made a choice to allow all that content to be placed on the platform.

Now they’re working to try to remediate some of those issues, but by making it so easy to share so much, they have also created a situation where they can’t possibly know what’s there, or where it’s very, very difficult to know what’s there. And that gives them the ability to say, “Hey, we can’t possibly moderate all this content because there’s too much”.

Neil Fairbrother

Okay. And the third strategy for developing or protecting strategic ignorance is through unclear lines of responsibility for knowledge, which is an organizational thing, I guess?

Amanda Lenhart, Data and Society Research Institute

Yes. And you know, quickly on that, this goes back to where there’s a particular person who is deputized to specifically think about an issue, but that also frees up everybody else within the company to sort of say, that’s not my responsibility. I’m going to leave it to the Trust and Safety people to tell me when I need to do something, as opposed to having everyone in the organisation internalize, to some degree, that “I need to be designing for the wellbeing of my users, and if I have a question I have a team I can escalate to and ask questions.”

But that’s not how it tends to work. In some companies that’s the practice, but in many others the sort of idea of protecting kids, thinking about children, and thinking about issues of harm to users is really siloed on particular teams and is not something that is spread across the organisation. And that allows people to not have to think about it.

Neil Fairbrother

Okay. Your fourth key finding you’ve sort of touched on already really, but the fourth key finding is “Tech workers who explicitly focus on user wellbeing often have little power to change product design and processes”. And this really is talking about, I guess, the Trust and Safety teams that you mentioned earlier?

Amanda Lenhart, Data and Society Research Institute

Trust and Safety, and to a certain degree legal teams. There’s tension when a change to a product would marginally, or even greatly, improve the experience of users, but might drop the number of users using the product, make it harder for people to sign up, or reduce the amount of time people spend on the platform. All of those things are seen as tremendous negatives and can often lead to a company choosing not to make the change the team is recommending, even though it would benefit the wellbeing of users.

Neil Fairbrother

So it’s revenue at the expense of adolescent wellbeing?

Amanda Lenhart, Data and Society Research Institute

I think that’s one way to put it, and it’s not just adolescents; it’s the wellbeing of, I think, many users that is put on the line when it comes to revenue. But these are companies, and they have shareholders that have enormous expectations about return, or they are funded by venture capitalists who also have enormous expectations of return. Either one of those economic dynamics puts enormous pressure on founders and leaders in these companies to deliver this revenue and growth. It also is what pushes people towards ad-supported business models, which are not particularly optimal for wellbeing either. And so all of these things are, I think, pushing companies to make these decisions. These are rational decisions, but they just aren’t particularly beneficial for users.

Neil Fairbrother

Particularly in the case of adolescents and their wellbeing, it seems. The fifth and final key finding that you’ve made is that “platform self-concept influences the choices that organizations make about how to build and manage their online spaces”. What is “platform self-concept”, Amanda?

Amanda Lenhart, Data and Society Research Institute

So one of the things we learned across all the different types of organizations that we talked to is that there are some organizations that really think of themselves as digital commons, as places of free speech. They view themselves as a place for political discourse and for, you know, citizenry to come together.

There’s a whole other subset of platforms that view themselves as spaces for pleasure. They view themselves as a place for joy, for sharing the sort of fun things in life. But they don’t see themselves as a digital commons. And the two different types of platforms really often have very different approaches to regulating content.

Those that are focused on the digital commons are much less likely to remove or restrict content. Whereas the ones that are more focused on pleasure don’t have as many qualms about restricting different kinds of content, and are happy to go in and remove, or in some ways make it more difficult to find, content that is deemed objectionable, harmful, or frankly diminishing of the pleasurable experience of others on the platform.

So these two different approaches were really interesting in terms of the willingness of the platform to really deeply shape the kind of content that users could post, the sort of impacts, and the way they thought about their responsibility towards user wellbeing. Often the fun platforms wanted users to have a more positive experience and were willing to take more steps to shape the space so that users would have that more positive experience.

Neil Fairbrother

Okay. But even the service providers who regard themselves as being more of a digital commons kind of entity, where they tend to privilege free speech, have a bit of a problem, because speech is not free. First Amendment notwithstanding, in the States you still have libel laws and slander laws and other restrictions on what you can say. And in the UK we have a rather lovely law from the middle of the 1800s, the Offences Against the Person Act 1861, which applies to online communications. So free speech is regulated speech. It is not the same as unregulated speech.

Are these platforms conflating the two? Unregulated speech where anyone can say anything versus regulated speech, which is actually what happens in the offline world?

Amanda Lenhart, Data and Society Research Institute

Yeah, I think what’s happening is that the platforms’ understanding of this, and their actions in this regard, are really evolving. So we collected this data from April to September of 2020, right? So almost a year ago now, or more than a year ago now. And I would say that over the time since we’ve collected the data, some of these platforms, some of these big digital commons platforms who shall remain nameless, have had to change, and have started to think about the different ways in which they are in fact already regulating speech, and about how they need to potentially regulate it more in order to create the commons that they think that they’re creating.

Some of the comments made by some of our respondents talked about how there is, in some ways, a sort of shallow understanding of free speech and what it means in Silicon Valley. And so it’s often deployed in a very ham-handed manner. And it’s only now that some of these companies, in these very complex environments, are thinking about the former President of the United States and the speech that he engaged in. And I think these companies are now being forced to reckon with more nuance on some of these stances that they were taking.

Neil Fairbrother

Indeed. And speaking about shallow interpretations and understandings, you do mention in your report the role of the press in all of this. You say in a section called Negative Publicity and Tragedy can Force Company Changes that “journalists and other public commentators should take a more nuanced stance in their reporting of the impact of technology on health”. What do you mean by that?

Amanda Lenhart, Data and Society Research Institute

So one of the things we wanted to understand with this report is: what moves companies? What influences them to change, both inside and outside? The thing we heard that influences companies from the outside is external pressure, and that’s applied in a couple of ways. It can be applied through regulators, but it’s also often applied through the media. And sometimes it’s applied through the media because something terrible has happened. And we heard from a number of the individuals we talked to that real tragedies, often involving children, in relation to the platforms that they work for made real change inside the platforms. People felt terrible; horrible, bad things had happened, they felt responsible, and they wanted to make change and did make change based on some of these tragic experiences. And so the media, and media coverage of these tragedies, and media pushback on decisions or on new product roll-outs does have an impact.

But we also heard from companies about moments where they felt that they were sort of being unfairly held accountable for things that they couldn’t talk about for privacy reasons and that they also wanted basically fairness in coverage.

And I think the other part of the nuance that we’re asking for is to push journalists to think beyond screen time, to think beyond some of the narratives that [are] the well-worn paths that we’ve had around children and technology. We’ve spent a lot of time talking about fear and worry, and how can we create narratives that are more about digital wellbeing? Where is this done well? Where is this not working? Having nuance in the reporting, I think, is definitely beneficial, and I think the companies will listen more to more nuanced reporting, in many ways, than they will to reporting that just feels like a “gotcha”.

Neil Fairbrother

Okay. Now time is running short. You do have a number of best practices that you have recommended that these companies should follow. So if we could very quickly address these. The first one is “…to create targeted approaches and policies for adolescent users”.

Amanda Lenhart, Data and Society Research Institute

Yeah. And that is essentially, you know, let’s focus on them. Let’s think about them both internally, in your internal policies, but also in external public policies that focus on youth, and we need to do it well. I think one of the things that we heard in our study was that some of the current laws like COPPA are not especially effective. They create regulatory zones where nobody wants to build any technology for young people, because the regulations are so burdensome, but at the same time it’s not actually stopping kids from accessing these platforms. So it’s not really working as intended in any way.

So we’re exhorting regulators in particular to think very carefully and to consult with lots of different people who understand how the technology works as they’re building the laws to regulate it, because poorly done regulation ends up having, I think, a bunch of negative downstream effects and doesn’t end up fixing the problem you’re trying to fix.

Neil Fairbrother

Well indeed we have that in the UK now, as the government has recently published the first draft of the Online Safety Bill, which introduces a whole pile of regulation, so there’s a huge amount of debate about that at the moment. Another best practice that you’ve recommended is “…to remind product teams of the humans using the platform”.

Amanda Lenhart, Data and Society Research Institute

Yeah, that was something that I think flows from a lot of different parts of what we heard, but it’s about the moments when you forget that you’re building for real people. And we did in fact hear people talking about how, to try to ameliorate this problem, we talk all the time about the people in our lives who will be using these products. We try to humanize the experience of who’s going to be the user and remind ourselves of our relationship to them. I would just say that that needs to be expanded beyond just the networks of the people who are doing the building, again given the lack of diversity in those spaces, and they need to be thinking about the real humans who don’t look like them and aren’t connected to them who are also using these products.

Neil Fairbrother

Well, yeah. I mean, talking about diversity, another best practice that you recommend was “…to hire, retain and empower a diverse workforce”. And something that struck me as being particularly interesting here was that perhaps they should “nurture unions”, trade unions. Another way to build this diverse internal infrastructure is to encourage unionization, and famously Silicon Valley is most definitely not pro-union.

Amanda Lenhart, Data and Society Research Institute

No, and I mean, in the United States the unions are quite weak, and I think we are seeing some movement towards unionizing. And I think we see that as being about empowering workers: more empowered people have abilities and ways and modes to speak up about things they see as inequitable or problematic within the companies. And it would give people who see problematic products being developed new ways to talk about them and speak about them without fear of retaliation, because of the ways in which unions often protect workers who speak up about things like this. And so I think for us, that’s the connection that we see. Allowing and empowering unions can potentially also help to retain a more diverse workforce, which would also, I think, measurably improve some of the product development.

Neil Fairbrother

Okay. And the final best practice I would just like to touch on is that these companies should require training of all of their employees and indeed management, I guess, in ethics and the humanities.

Amanda Lenhart, Data and Society Research Institute

I would say that, you know, one of the things we heard from a lot of folks is that many of the employees in Silicon Valley come out of computer science. They don’t have to engage much beyond that; the computer science curricula in the United States are pretty focused on computer science, and there’s not a lot of room for taking courses outside of that realm.

And it means that people end up in these companies without being exposed to ideas of philosophy, to thinking about society and societal norms. Not having that grounding means that you don’t have that in your toolbox when you’re thinking about the things that you build. People with a little bit more of a grounding in philosophy and ethics, society, anthropology, English literature… I mean, this is in some ways a plea for the liberal arts, but I think it would ultimately create more people with a broader sense of the world and the other people in it, and perhaps a different toolkit to use when they’re starting to develop their products.

Neil Fairbrother

Okay. Brilliant. Thank you for that, Amanda. Now what is the plan for the rest of the year? What are you up to for the rest of the year, Amanda?

Amanda Lenhart, Data and Society Research Institute

We’re continuing to talk about this report in a variety of venues, at conferences and events. Obviously we’re going to continue to do some writing about this report as well, often through articles and blog posts. And you know, I think we’re still figuring out what’s next on this particular topic. We have some other projects we’re working on, but we don’t necessarily have a next step for this project. I think we’re hoping to do a little social media work on this project; we’ll see if we can get that off the ground. But those are our big plans.

Neil Fairbrother

Okay. And where can people find the report?

Amanda Lenhart, Data and Society Research Institute

You can find it on our website at datasociety.net. You should be able to find it in the Health and Data team project area.

Neil Fairbrother

Well, thank you very much, Amanda, absolutely fascinating insight into the way that social media companies in particular work and their prioritization and how they look at wellbeing particularly of adolescents.

Amanda Lenhart, Data and Society Research Institute

Thank you so much. It’s been a pleasure to be here and chat with you about the report.

 
