In this episode we explore the world of trust and safety, those teams at digital platforms and other tech service providers entrusted with ensuring services are safe for users.
These teams have grown in size and influence over the past decade, in response to the growth in the use of social media as well as the emergence of challenges including mis- and disinformation and increasing hate speech.
But they’ve also come under fire – from ideological opponents who see them as biased censors, as well as being the target of layoffs.
We unpack these issues and also explore the evolution and challenges associated with the profession.
Transcript
This is an automatically generated transcript. We make our best efforts to check that it is an accurate reflection of the episode, but it may contain some errors and unedited content.
Arj
Welcome to this week in Digital Trust, elevenM’s regular conversation about all things tech policy, privacy, AI and cybersecurity. I’m Arj joining you today from Awabakal country.
Jordan
And I’m Jordan joining you from Wurundjeri country in Melbourne. And welcome to a new year, Arj. New year, new us, new topics, new chats. Looking forward to it.
Arj
Yeah, yeah. Another year of tech policy and AI and, yeah, can’t wait. It’s gonna be good. Like we did the recap obviously at the end of last year and it was nice to see the breadth of topics we got through, and I’m hoping this year is just as varied and different.
Jordan
Exactly. So getting right into it, we’re gonna start with trust and safety for our first episode of the year. Trust and safety, the teams in big tech companies that probably not a lot of people know about, but that really shape what you see online.
Arj
Yeah, I’m in some ways surprised we haven’t spoken about trust and safety as a dedicated topic already because, as you say, these teams are quite pivotal. But yeah, there’s a few reasons we wanted to get into it. One, because they’re pivotal, and there have been a couple of developments over the last year. But just to kick it off, just to give a very basic overview of what we’re talking about.
Trust and safety is, you know, it’s in Wikipedia. So that means it’s a thing. And Wikipedia describes trust and safety as the policies, practices, products, and teams dedicated to ensuring that users can trust and feel safe while using a service or participating in an online community. So that’s a little bit wordy. I mean, I think your colloquial description is better. It’s basically that there are these teams of people at big tech platforms like Meta, you know, for Facebook, and Twitter, and their job is essentially to make sure that the platforms are safe. And they do things like content moderation, taking down explicit content. They develop policies and community guidelines and enforce those policies. And then some of these teams can go a bit broader, almost out into some of the fields we typically talk about, like privacy, cybersecurity, identity. So that’s kind of the teams that we’re going to be talking about.
Jordan
Yeah. And they’re really interesting functions because they’ve evolved quite a bit over the last, say, 20 years, where in the early 2000s they were really focused on, you know, just basically crime and abuse, right? Like cracking down on fraud or scams – eBay had one of the very early trust and safety teams and, yeah, they were really focused on fraud and scams and stuff.
But as social media has become more of a thing, trust and safety has become a much bigger industry, much more central, because the experience, the content you see – whether it’s abusive, whether people are being marginalized or able to participate or being hounded by people on platforms, or even just seeing content that they’re not interested in or that upsets them –
that’s become this core part of what social media is, right? A lot of the product of social media companies is really content moderation, or the type of content, the types of communities that they build. So those trust and safety teams have become really central to those experiences and to those products.
And then even more recently, since say 2016, we’ve become more and more concerned about how our platforms influence political and social movements. So 2016 was the US election, Trump getting in, and the UK Brexit referendum and various other things. People became more concerned about this stuff, combating misinformation and disinformation, so the teams have these roles there as well. And as those trust and safety teams get involved in mis- and disinformation, we’re having these conversations about, well, are these teams biased? What speech are they promoting or clamping down on? Are they curbing free speech? Are they curbing certain political views that have a right to be heard? And then even more recently, mass layoffs – as, you know, interest rates go up and there’s less cash, there’s been a lot of downsizing in those big tech companies, and trust and safety is a cost center. It’s been hit quite hard by those layoffs. So there’s a bit of history and context there to shape where trust and safety is today, right?
Arj
Yeah. What I liked about that is it paints a picture of the broadening of the impact of these teams. So, you know, that idea of just moderating content and taking down explicit content is very much at the individual user level – they want users of the platform to have a nice experience.
But then you can see that as you start to talk about content moderation more broadly, and misinformation and disinformation, you’re getting into this much bigger question: how are you shaping the public square? Who are you to shape the public square? All that kind of ideological stuff starts to come out.
And then as these teams have grown, cost becomes more of a factor. So when there’s an economic downturn, you know, we’ve now got these teams of hundreds of people and, you know, if they’re not making money for the company, can we get rid of them? And so yeah, you really see that.
Sorry, I just very briefly wanted to say there was a New Scientist article in November with the headline “Trust and safety: the most important tech job you’ve never heard of”. And I thought that was quite a nice way to capture where these teams and this role have gotten to in the eyes of a lot of observers.
Jordan
Right on. So let’s dig in a bit, starting with – I mean, we’ve started talking about what trust and safety is, but it’s probably worthwhile digging into what that means. So they shape the content, they do content moderation: in practice, what does that mean? And you’ve got a good kind of breakdown or summary of the role from a trust and safety researcher. In framing that, I’ve got an even more general one, which is that there are two aspects to what they do, right? One is political, one is operational. They set the terms, they set the rules – what speech, what content, what activity is and isn’t permitted on the platform, what are the rules? And then they operationalize those rules. And they’re quite different roles, right? The rule setting is kind of political and product design. And, you know, you’re going to upset people, you’re going to anger governments, you’re going to do all this stuff. What do we do, in a policy sense, in response to a request to take certain content down, or whatever? And then in practice, how do we operationalize it? Are we using AI? Are we using automated mechanisms? Are we outsourcing stuff to some other company?
Do we have, you know, a platoon of lawyers who are ready to fight the fight in court if someone’s objecting – all of this operational, practical stuff. But there are fundamentally two different things: set the rules, enforce the rules.
Arj
So that description is just a bit of a simplified breakdown. I mean, one of the things I became very aware of in preparing for this conversation was just the effort that’s going into defining and professionalizing this group of people. So the trust and safety community now has a professional association. They’ve got an annual conference called TrustCon, and they’re doing a lot of work to say, like, this is why this role exists, we need people to join this community, this industry. And so they talk a lot about what it means to be a trust and safety professional.
So Christine LeHain, I hope I pronounced that right, is a prominent trust and safety researcher, and she breaks it down as: trust and safety is basically identifying the safety needs and pain points of users, developing the policies that dictate what behavior is and isn’t allowed, understanding how to prevent abuse and proactively identifying risks, and then thinking about how to build trust with users and society. So that’s again that bigger picture view, which I think is a nice breakdown of what they’re trying to do.
Jordan
Yeah, that professionalization point, I think, is really interesting because it’s something that privacy, I think, is a little further along the line with in terms of defining the role and defining a new profession and so on, where maybe 20 years ago there wasn’t that. There were a lot of people who drifted into privacy from a legal or an operational or a risk background.
Um, and there’s been this real effort – the International Association of Privacy Professionals is one of the key players there – to professionalize, to communicate about what the job is, and to build a body of expertise around it. And exactly the same thing’s happening in trust and safety.
Arj
And there’s a similar analogy I’ve seen made to cyber as well. People talk about cybersecurity 20 or 30 years ago, you know, as a sort of nascent field: there’s a problem space, we need professionals, we need people to know it’s a career. Um, you know, it’s a cost center, but it’s important. All of these similar sorts of things come up.
Jordan
Yeah. I feel like cyber kind of is a little ahead of privacy. You know, cyber’s been doing it for 30 years, 40 years, privacy is like a little bit newer and trust and safety is even newer.
Arj
It’s probably worth just reflecting on why platforms, why tech companies, why organizations feel the need to have these teams. You know, what is the benefit? And we’ve sort of touched on it already. There’s that general societal view, which is that these platforms are getting bigger, they’re more pervasive in all of our lives, they have an impact on individuals at a global scale. Therefore, you know, we need to make sure we manage the risks around safety and we have to have a dedicated focus.
Jordan
There’s a good quote from Julie Inman Grant, who’s the Australian eSafety Commissioner, on basically that, right, where she’s saying – it’s pretty bland, but – companies have a fundamental responsibility to ensure that their platforms are safe. And she does what we have done a lot in the privacy context, which is compare it to other industries, right? We expect car manufacturers to embed seatbelts, we have food standards, technology should be no different.
You know, if you’re making a product that people are using, it’s got to be safe to use. That’s a pretty basic social expectation.
Arj
Yeah. And so that’s a very benevolent, user-centric view, if you like: we build these platforms, we should do right by the people that use them. But there are actually self-interest reasons why platforms should have these teams in place. One is the commercial picture. So as you’ve said, the product is a moderated, nice-to-use community, and a lot of these platforms have advertisers that advertise on their platforms, and that’s how they make their money. And without a well-moderated, well-managed community that’s safe and where the content is appropriate, you’re not gonna have advertisers putting their brand up on those platforms. So there’s a commercial imperative there.
Jordan
Yeah, which we’ve seen quite dramatically with Twitter, or X now, right? It’s never been the most successful social media platform, but it was thriving, relatively influential, and Elon Musk takes over, and basically the main change he’s made is changes to moderation: what kind of content, what kind of signals there are on the platform, you know, the changes to the authenticated badges, the verified blue check marks, right? The changes to those blue check marks, so it’s harder to identify who’s an authoritative voice, and all of this. Those are changes to trust and safety and content moderation.
Advertisers see it as a less safe place, users see it as a less enjoyable, less safe place, and revenue tanks as a result.
Arj
Yeah. So Tech Policy Press has a figure of advertising revenue decreasing 55% or more since Musk’s acquisition and his laying off of all the trust and safety experts.
Jordan
Yeah. And it’s because of how advertisers see the platform. You know, if you’re Coca-Cola or some big brand and there’s right-wing extremist or violent content on a platform, you do not want your brand to appear right next to that kind of content in my scrolling, because I’ll associate it. Um, so yeah, brand safety is a real issue there. There’s also, from a platform point of view, just increasing regulatory pressure.
So there’s a lot of movement in a lot of jurisdictions – the US, Australia, the EU in particular, but India and a lot of other places as well – to push responsibilities for safety and content moderation onto platforms: managing hate speech, having take-down powers, the ability to block content or take stuff down really quickly as well. And there are really significant fines. For example, in the EU with the Digital Services Act and the Digital Markets Act, there are responsibilities on platforms to take steps to make their platform safe and to do various other things in that regard, and it’s like 6% of global turnover or something that they can be fined. So, yeah, there’s increasingly this really strong regulatory stick as well.
Arj
Yeah, it’s interesting to see how truly global that is as well. As you said, you know, there’s the FTC in the US, the Digital Markets Act and DSA in the EU, and our eSafety Commissioner has really ramped up over the last few years, along with the online safety legislation that’s coming through.
Jordan
I mean, one of the things I was reading that I thought was really interesting is – you know, we’ve just focused on the US, EU and Australia just now, right? There is increasing pressure from the rest of the world, which is the majority of the world population-wise, to push more trust and safety and content moderation requirements that are culturally sensitive and relevant to their particular context. So in the past, trust and safety teams at these platforms have focused on their home market, the US predominantly, and on wealthy markets like the EU – large and wealthy markets – and they’ll meet those requirements. And, like, if you’re any other country, wherever you are, stuff you. But increasingly the rest of the world is coming to the party and expecting culturally, contextually and language sensitive moderation in their contexts.
Arj
Yeah. There’s a quote from an anonymous Twitter employee in an NBC article, which I thought represented that really well. It talked about how the US teams were very well staffed, but they didn’t have the staffing in places like India, where Twitter usage is going up but which is really fraught with complicated religious and ethnic divisions and turmoil. And to be able to manage that content you need teams that understand the language, understand the context, and they don’t necessarily have the resources and the language skills to do that.
Jordan
And coming back to that initial policy and operations divide, that’s really hard on both of those counts, right? Coming to a set of policy rules for content that apply globally, across different contexts, different cultural requirements and so on, is incredibly difficult. And then there’s the operational challenge of having people on the ground who speak the language, who understand the cultural context, or systems that are actually able to apply some kind of global rule set about speech. You know, it’s a wildly difficult job.
Arj
One of the things that’s been really interesting recently, I think, is just seeing the tension around these teams as they’ve become more influential. We’ve seen the shift, as we were sort of talking about, from focusing on user safety at the individual level through to, oh, I run a platform and I need to be managing geopolitical tensions and democracy in the public square. And there’s this ideological tension, which is embodied by Elon Musk essentially, and the way he’s taken an axe to the teams at Twitter, saying – and this is not his words, but this is the sort of sentiment – we want free speech, we don’t want a narrow cohort of people with left of center views, lawyers and, you know, lefties with feelings managing the platform; we want it to be free and unencumbered.
And it’s also a mindset that you see comes out of Silicon Valley. So, you know, again, we quote him a lot on the podcast – well, I do anyway – but Marc Andreessen, who’s the founder of the Andreessen Horowitz VC firm, he wrote an essay in October titled “The Techno-Optimist Manifesto” and basically labeled trust and safety teams by name as an enemy.
“These are our enemies.” And so it’s interesting to see that there’s been this kind of upswell, particularly in parts of the tech sphere, against these teams and the value and the purpose that they bring.
Jordan
Yeah. Which is really interesting because you hear from the trust and safety industry and from academic research on this that there is a profit motive to these teams. So, you know, it’s weird to me that people are anti trust and safety, because these teams build long-term value, they build trust, they’re part of the product. And again, you see with Twitter that you take the trust and safety away from a platform that’s otherwise bubbling along fine and the innards fall apart. The revenue, the attention, the value of the product just goes away.
Arj
The value, the commercial value.
Jordan
It seems a strange position. I mean, as a privacy person, it’s interesting looking into the trust and safety space, because in privacy you always have difficulty: if I want to limit the use of data by a big internet platform, it usually goes directly contrary to their business model, right? They’re about collating, collecting, aggregating, tracking. But in trust and safety, there’s that alignment – getting trust and safety right produces a nicer environment, particularly for advertisers.
One of the tensions there, though, in that profit motive, is that advertisers and platforms have a motivation to appeal to the broadest possible audience. If you’re a platform, if you’re an advertiser, you want broad audiences. So one of the tensions that trust and safety has to manage is: in moderating towards the broadest possible user preferences, are they marginalizing certain subgroups? So for example, on Facebook and Instagram in particular, you’re not allowed to show nipples, right? It’s been notorious forever on Instagram, which means, you know, if you’re a breastfeeding mother, your content gets banned for nudity or for being a bit too risqué, or queer communities get their content disproportionately moderated out. And so that commercial pressure to appeal to advertisers does have a tendency to disproportionately impact certain groups and moderate their content out to a greater degree than it does others.
Arj
Yeah. Which I think is why we’re also seeing a strong push and call for more transparency in these teams and how they operate – these kinds of reports on, you know, why content is taken down, what the policies are. So I think that’s a good thing, and I think it’s a fair thing in view of what you’re saying, but also, I guess, as a bulwark against these accusations of, you know, some sort of bias being the driver of these teams.
You know, one of the interesting things I thought was the Twitter files, which was when Elon gave a whole bunch of internal documents to some handpicked journos to talk about how the old regime and its trust and safety teams had been overtly censorious around COVID and around political decisions like taking Trump off the platform.
I think for me, what I took away from a lot of those documents was that what these teams do is actually really hard – making these decisions and these judgment calls around, like, is this content inciting? If someone uses a phrase like “locked and loaded”, is that a colloquial expression? But if it’s the president and there’s a mob that’s angry about an election result, is that inciting, and therefore should we take that content down or, you know, ban the person? The Twitter files to me revealed a lot of tough decision making and nuanced, difficult conversations. And I think that transparency is good, because I think it’s important to understand that, yes, these teams do have an impact, particularly on marginalized communities who use the platform, as you’ve said. But also, it starts to expose the fact that some of these decisions are hard and it’s the least bad decision you’re trying to make.
Jordan
Yeah. And there is a bit of convergence on the kind of high level principles here. There’s a thing called the Santa Clara Principles on Transparency and Accountability in Content Moderation that a lot of the big tech companies signed on to. And they’re about the kinds of things that you’re saying, right? Transparency, explainability, human rights and cultural competence – you know, awareness of those dog whistles or terms or things that have particular meanings. Um, but yeah, having clear rules and applying them consistently is the real challenge. So yeah, it’s hard. I feel sorry for them.
Arj
I feel sorry for them in the sense that I think it is a difficult task and I think it’s thankless. But, you know, I think there are a lot of parallels to privacy and cyber, where there’s sort of an internal passion that a lot of these teams bring to wanting to improve their products, and, um, yeah, more power to them.
Jordan
Yeah, there’s a great – oh, we’ve got to wrap, but I highly recommend this read, I think I’ve mentioned it on the podcast before – there’s an article in The Verge called “Welcome to hell, Elon” by the editor, Nilay Patel, that they published right when Elon finalized the purchase of Twitter. And it really is just digging into how difficult being in charge of content moderation and trust and safety is, and that everything you do is going to upset people and there’s never any right answer. It’s just impossible. So I revisit that every time I’m a bit tired of privacy and thinking I should get out – it’s like, no, no, I’m in the right place. I’m good.
Arj
I’ll add to that, and I’ll put this in the notes as well. There’s an article with Del Harvey, who was the former head of trust and safety at Twitter and left a couple of years ago, but was there for a decade or so before that, and it goes through her journey and how difficult it was.
There’s a quote where she basically says, you don’t go into trust and safety because you’re like, “I enjoy getting praise for my work.” So I think that’s a nice one, but it’s a really good read and it gives quite a historical perspective on it all. Yeah, I’ll chuck that in the show notes as well. But yeah, I think we’re out of time, Jordan.
Jordan
Brilliant. Yeah.
Arj
First one for the year and a good topic. Indeed. Here’s to many more.
Jordan
Have a good week, Arj.
Arj
You too. See you.
Jordan
Catch you later.
