This week in digital trust » Episode 84

#84 Shame! The fragile power of social license

24 October 2023

This week we deconstruct the idea of social license in tech, starting with the story of a technology that Google and Facebook didn’t dare release, but which is now available to everyone.

Originally coined in the context of mining and extractive industries, ‘social license’ refers to community acceptance of a company’s business practices. For some companies, maintaining social license can be an effective check on behaviour, but for tech startups like Clearview AI and PimEyes, well, not so much.

Listen now

Transcript

This is an automatically generated transcript. We make our best efforts to check that it is an accurate reflection of the episode, but it may contain some errors and unedited content.

ARJ
Welcome to This Week in Digital Trust, elevenM’s regular conversation about all things tech policy, privacy and cybersecurity. I’m Arj, joining you from Awabakal country.

JORDAN
And I’m Jordan, joining you from Wurundjeri country in Melbourne. And Arj, today we’re talking about getting away with stuff. Social license, what you can, what you can sneak through, what people will accept, what you can get away with.

ARJ
Hahaha

ARJ
Okay, so you’ve put on the cynical hat, like right off the bat, you’ve just gone straight into… straight into cynicism. This is a wonderful term, social license. Yeah, yeah.

JORDAN
Yes! Immediately cynicism. Yep. It’s funny, it’s, yeah, cynicism. I mean, it’s something we often talk to clients and people about, right? This kind of idea of social license and… It’s funny now you mention it, when I’m talking to businesses, it’s often with a cynic hat on, you know, you’ve got a profit motive.

What can you do with data that will still bring people along and maintain their trust? Often from a government point of view, it’s with the public interest hat on: there’s this really important research you want to do, or a really important objective you want to get to. Digital identity is an example that we’ve talked about a lot, where you need people to trust you enough to do the thing. You need that social license.

ARJ
Yep.

JORDAN
you’re acting in the public interest, but you need that social licence to do the thing for people to trust that you’ll do it in a way that’s not harmful. So yeah, it can be, I don’t know, I wear a cynic or an optimist hat depending on who I’m talking about, I think.

ARJ
Yeah, it comes…

ARJ
Which is actually, I think, you’ve gotten to the heart of the issues that we’ll probably end up talking about, because it does have that dual-purpose thing. And there’s probably a fair discussion to have about what it actually is, because, yeah, you’re right, particularly in the tech context you start to hear about it in this ‘what can we get away with’ kind of way. But the reason that’s come up for us as a topic, the reason we wanted to talk about it, was…

JORDAN
Hmm.

ARJ
There was this piece last month, I think, in the New York Times by the tech reporter Kashmir Hill, on facial recognition.

And her piece describes how the tech giants like Meta and Google had been developing this facial recognition tech, particularly in this kind of augmented reality context, where you can point your camera, or, say, your augmented reality glasses, at someone and facial recognition might tell you who that person is.

ARJ
companies like Meta had been playing with this stuff from 2017. They’d developed it, but then, you know, here we are six years later and they’ve decided: actually, we’re not putting it out there. And Google similarly have worked on technology like this going back as far as 2011. And there’s a quote in this article from Google chairman Eric Schmidt saying, you know, as far as I know, it’s the only technology that Google built and, after looking at it, we decided to stop. And it tells this story of these tech giants

building this tech, but for reasons which we’re going to describe as social license, you know, a lack of social license, felt like they couldn’t put it out there. Which, when you think about these companies and how quickly they develop and deploy technologies, is something worth thinking about. They were given pause about

this particular technology, but under this kind of, I guess, rubric of, you know, social license being the thing that held them back.

JORDAN
Yeah, yeah, we’ve relied on these big tech companies’ sense of ethics, fairness, privacy, reputation risk, I suppose, is another part of it to hold back that technology. It’s a really interesting piece. It’s from a…

a book I think that Kashmir Hill has recently written, called Your Face Belongs to Us: A Secretive Startup’s Quest to End Privacy as We Know It. And the book’s really about Clearview AI, right, and other startups. I think there’s another one called PimEyes that’s, you know, in some ways even worse, that are building these facial recognition databases based on publicly available photos that they’ve scraped from social media websites and so on.

and building a face search algorithm so that you can identify people. And her point in recounting these cases, Meta and Google deciding not to release this functionality, is that sure, that was fine, we could trust them; they made responsible choices in this case. But…

As the technology develops, it’s now available to smaller startups with fewer scruples, and they can make different decisions. And so you’ve got your Clearview AIs and your PimEyes of the world just smashing through those taboos and just going for it, releasing the things. So yeah, the other reason I think it’s a really interesting conversation to have right now

is that I feel like we’ve just seen exactly this happen with ChatGPT and large language models. Before OpenAI released ChatGPT in November last year, almost all of the big tech companies had some version of a large language model in internal testing, playing around.

JORDAN
Google especially had such things and had decided not to put them out yet because of all of the problems that we’re very familiar with now, right, in terms of hallucination and supercharging disinformation and damaging the information environment. There’s all these concerns that we’ve had. But then OpenAI publishes ChatGPT. And over the last six months…

Everybody, you know, it’s been a free-for-all, right? Everyone’s gone wild, because there’s such consumer demand to play with these things. And so I think that’s a really interesting example of the moral position, or the ethical position, shifting because one party, one startup, has decided to, you know, ignore the taboo.

And that shifted expectations, or exposed that there’s maybe a consumer demand for this thing. And that’s really changed the ethical calculus. And so, yeah, I think that facial recognition example that Kashmir Hill’s pointing to and the large language models are two really good examples of potentially shifting ethical positions, or just how, you know, the large companies’ view of

ARJ
Mm. Ahem.

JORDAN
what’s responsible is upset by some startup that just goes for it.

ARJ
Yeah, shifting ethical positions, or, you know, I guess bringing it in for closer scrutiny: what do we actually mean by social license? Like, is it just…

JORDAN
Mm.

ARJ
a sense of the taboo, you know, like people aren’t ready for this yet. Or does it mean something more meaningful? Um… I mean, just to quickly talk about the term and where it came from, ’cause I’m familiar with it in this tech context, but I was curious to understand a little bit more about where it came from. And it seems like it has greatest currency in the mining industry, or these extractive industries.

JORDAN
Hmm.

ARJ
this idea that you can get a mining licence, but you shouldn’t do anything with it until you also have the social licence to operate that mine. You know, you should think about the potential impacts on the environment, on local communities. And unless you as an operator of a mine go and get all of that licence from those community groups, that social licence, it’s fraught with danger to go ahead and do the thing. And that seems to be the

sort of place where the terminology was coined, in the 1990s apparently, and it seems to have now become a thing we talk about in tech. And yeah, to your comments earlier about whether we put our cynic’s hat on: it does feel like a bit of a branding and reputation thing, in the way that it’s used in the tech context.

I was even reading that the former federal MP Dave Sharma now works in an advisory company, helping advise companies on how to win social license and build narratives to get politicians on side and soften the ground for new tech companies. And when you talk about it in that way, it does feel like it’s very much a branding exercise.

JORDAN
I think the cynical view is that it’s a branding exercise, right? That you’re not actually worried about generating value for people. You’re not actually worried about the product or the activity being a net positive influence on people’s lives. You just care about the level of public pushback you’re going to get and the level of damage to the brand, and that’s what you focus on. I think…

Done well, or viewed optimistically, social license is a lot more than just brand and reputation, right? It’s about genuinely presenting a value proposition to all of the people who are affected by your product. I mean, we’ve been doing more and more work around kind of AI ethics assessments and AI governance, and

ARJ
Mm.

JORDAN
in that context I think there’s a real argument for that, you know, again, done well, those kinds of assessments take into account all of the potential impacts on all of the potential users and people affected and even the ecosystem around that and how the technology works to benefit and disadvantage your various different stakeholders. And if you build social license by

addressing the harms, getting out ahead of them, and bringing everyone along in that project so that everybody benefits, then I think it can be a really beneficial concept, right? But yeah, like you say, I think for some it’s a cynical exercise of just: what can we get away with?

ARJ
Well, yeah.

ARJ
Yeah, well, I mean, it also serves a useful role in the sense that, in practice, it’s commonly treated as the thing you need to do beyond legal and regulatory compliance. So, you know, it accepts this reality, particularly in the tech sphere, that there are things we can do and get away with that are legal,

JORDAN
Yeah.

ARJ
but should we do them? And the social license is really that second part of the question. It’s like: yes, you can legally do this, but do you have the license to do this? Is it the right thing to do? And I think, to your point, AI and facial recognition, which we sort of started the conversation with, is another great example. It seems the legal frameworks aren’t quite right in terms of…

JORDAN
Mm.

JORDAN
Mm.

ARJ
mapping onto what we’re comfortable with. So we have these examples, like the use of facial recognition in retail settings, where there’s been a great kind of adverse community reaction; people aren’t comfortable, the social license doesn’t exist,

but the laws don’t quite get us to that point. Like, you slap a kind of notice on the wall and you say, well, this is how I’m getting consent, but it doesn’t account for how people feel uncomfortable with, you know, being identified in a public space and their biometrics being used and all of that sort of stuff. And so in that sense, I think the social license plays a useful role to get organizations to think: okay, yes, I’ve ticked all the legal boxes, but is there

JORDAN
Hmm.

JORDAN
Mm.

ARJ
something more over and above this that I need to investigate, you know, about how the community feels.

JORDAN
Yeah, I think that’s a really good example of where the community expectation does not match the strict legal obligation as well. You look at, for example, the Office of the Australian Information Commissioner’s recent Australian Community Attitudes to Privacy Survey, and the levels of discomfort with things like automated decision making, facial recognition, data trading.

All of these things you can technically do legally, but yeah, as you say, when you present a technically compliant approach to people, it’s really strongly rejected. They’re really interesting examples of where I would argue that the law has lagged behind and that we’ve relied on social license,

ARJ
Yep.

JORDAN
we’ve relied on public reactions to things as a way of policing behaviours that as a community we don’t really like. We don’t really like facial recognition, people react really badly to that in a stadium or in the front of a store, but it is technically lawful if you do it right.

ARJ
Which, I was just gonna say, is where you can start to have some sympathy for organizations around novel technologies. Because how do you really assess what the public expectation is around something that’s very new, or not been applied in exactly that context in exactly that way? You can get general barometers for how people feel about certain technologies, but…

not necessarily in your context, for the thing that you want to do. You kind of need that customer feedback, that data, to be able to make those calls.

JORDAN
Yeah, yeah, you do. And it’s something it’s something that’s kind of baked into the Privacy Act as it is, right? There’s all these standards in the Privacy Act that you take reasonable steps to protect information or to provide someone with notice about a collection that you do things that are fair or not unreasonably intrusive.

that you abide by people’s reasonable expectations. All of these things are kind of language that’s actually in the Privacy Act. And so the requirements oftentimes are like pegged to this general idea of public expectations. And it puts companies and governments in this often quite difficult position where you actually need to guess or I mean…

Better not guess, and actually go and ask people: consult and engage on exactly what people expect you to be doing with their data, what people understood from the advertising you showed them or the notice you provided them. The ACCC, the Australian Competition and Consumer Commission, is increasingly getting involved in this as well, when a company promises to do one thing with data and then ends up doing something quite different. They’ll get…

ARJ
Hmm.

JORDAN
done for misleading and deceptive conduct. And that’s kind of another way of policing it: you set the expectation, sure, but it’s policing compliance with social expectations. So there’s a lot of ways, I think, where that social license is kind of encoded into laws, in these requirements to handle personal information in a way that’s consistent with people’s

expectations.

ARJ
Hmm.

JORDAN
Yeah, sorry. I didn’t… yeah. Did you want to jump in or not?

ARJ
I was going to move on.

ARJ
I was just going to take it to something else. But if you did…

JORDAN
Yeah, yeah, go for it. Nah.

ARJ
Yeah, I think that’s right. I think, I mean, one of the interesting…

ARJ
aspects of describing social license in this way is that, in that kind of mining example, it seems to me the social license is almost about an externality. It’s not necessarily…

You know, when we talk about it in that technology context, it tends to be about people’s expectations in relation to how their own information is going to be handled. And the law says you can go up to this point, but there’s actually a further point you need to go beyond, which is the social license: the person’s expectations, the customer’s expectations, about how they will be treated. And I think that’s one way of thinking about social license: what, over and

above do you need to do for me, beyond what’s legally required? But then when you think about that mining context, it’s often not their customers, or even necessarily their direct stakeholders, that the mining companies are thinking about when they talk about social license. It’s this kind of externality on the environment, on local communities that might be affected by the mining activity, and having to think about that and get license for that. And that’s where I also think some of that AI, particularly large language model, stuff is interesting:

you’re starting to kind of, you know, talk about.

you know, the extracting and repurposing of public data, or the uses of automated decision-making in ways that are unfair in a very broad political sense: applications of social policy. Is this the way we want our country to work? Is this the way we want to police welfare fraud, and so forth? And so some of these technologies, I think, are also starting to broaden out what

JORDAN
Hmm.

ARJ
social license means in that technology context, beyond just: does an individual want more from me than what I’m legally required to give them?

JORDAN
Hmm.

JORDAN
Mm-hmm. Yeah, I think so. I think it’s also broader in a temporal sense. One of the things I’ve often talked with government folks about is the need to consider social license as a long-term proposition, tied to trust. You know, if you

JORDAN
provide a service where you’re exerting power over people, they have to trust you. You need to have already given them reason to trust you. We’ve talked about digital identity, we’ve talked about law enforcement and facial recognition, or the tax office that goes and collects all this data about you in order to pre-fill your tax return. You get a benefit from that; you trust them.

But every time they screw up, it makes you more likely to ask: hang on a minute, should I trust you with all my data? I don’t actually want you to pre-fill my tax return, because I don’t trust that you’ll get it right, right? But if they get it right every year, and it’s easier every year, then for the next step, the next innovation, the next improvement, they’ve already built the social license, and

JORDAN
you let them do it. Whereas, you know, you look at the big tech companies and it’s exactly the opposite, right? We look at them, rightly I think, with quite a degree of scepticism, because it’s rare that they implement things altruistically in order to benefit me. It’s always with a profit motive. They often play down the negative externalities.

ARJ
Yep.

JORDAN
they’ve lost that trust and so they’ve become a subject of suspicion and a punching bag from a public policy point of view.

ARJ
One of the things I was surprised to see: it sounds like a fluffy concept, social license, but,

given its history in mining, there’s actually been a whole body of research and work on what it actually means and how you get it. And you’re starting to see that in tech as well: how do you get social license for AI? Boston Consulting Group have got this kind of model for how you get social license, and it mirrors effectively what you just said. So, you know, you probably should go knock on their door and say, where’s my royalties? But they basically say there are three elements.

JORDAN
Where’s my royalties?

ARJ
One is a sense of responsibility. So, you know, you’re publishing principles around fairness and how you’ll be transparent, so that you can be seen to be responsible. But the other one was around…

that social contract and trust. It’s like, how do you, over a long period of time, demonstrate that you can be trusted to do that thing, to use that technology? And that’s exactly what you’re saying: it’s a track record. It’s not a one-off statement on your website. It’s a long-earned, hard-earned thing, that trust and that social contract. And then the third thing, which you also spoke about, is the benefit. It’s like, if I do this, the benefits accrue to all of us,

ARJ
all stakeholders. So in your example, there’s a convenience factor from pre-filling my information. So that social license is kind of derivative of the fact that people trust it, that you’re getting a benefit, and that there’s some transparency. And I think for organizations it’s almost like a…

JORDAN
Hmm.

ARJ
very big shift towards being much more open and transparent about the way they do things, to get to this point, to achieve all these things. Like, think about all the stuff that we do with organizations, the work that they do to…

think about the design of systems or new initiatives, and run these rigorous assessments, the privacy impact assessments or the ethical impact assessments. We’re only just starting to see it, but getting to the point where you can be much more open and proactive about the fact that you’re doing those, and about the results, I think that’s where you’re going to earn the social license: being able to tell some of that story externally and say,

ARJ
we’re doing this stuff where we know that there’s risks, we’re looking at them, we’re assessing them, we’re mitigating them in this way, we’re trying to balance trade-offs this way, there are benefits over here, there are benefits over there. All of that needs to, I feel like be much more kind of public and out there in order for that social license to start to be earned.

JORDAN
Mm.

JORDAN
Yeah, I think so. And I particularly like the idea you just mentioned about sharing benefits, or emphasizing communal benefits. Privacy and cybersecurity are often framed in terms of risk, right? And so we’re often talking about downsides and managing those risks and downsides.

ARJ
Yeah.

JORDAN
Around AI, but also around privacy impact assessments and things like that, I think there’s a real opportunity, if we’re doing these kinds of cost-benefit analyses, to be more front-footed about doing things in a way that actually benefits people, or including in a project an element that’s not just

profit motivated. We’ve done it in this particular way because it benefits other people, or it benefits the community. And yeah, that wasn’t a necessary part of the project, but we’ve added it on because we care and we’re socially responsible, and so on. The last thing I wanted to say about social license ties back to that Kashmir Hill piece in the New York Times,

JORDAN
just about how weak, or how fallible, a mode of regulation it is. Relying on ethics and social license works for large organizations who want an ongoing or long-term relationship with their customers. It doesn’t work for brand-new tech startups, for example. And so

JORDAN
what we’ve seen again in the facial recognition context, in the large language models context, and what we see quite often in the tech context more broadly, is that the big responsible players make responsible choices, and then the new startup just wheels on in and gets stuff done, in a way that can be quite shocking or quite problematic. And so I just wanted to…

emphasize that social license alone, relying on companies doing the right thing, is obviously not the way to ensure that we, say, avoid global facial recognition databases. There’s also a need, when we get to the point where we all agree that we don’t want a particular technology, to just write it down and

ARJ
Yeah.

JORDAN
make it a regulatory requirement, right? Rather than just relying on companies to do the right thing. If we can all agree on what the right thing is, well then let’s codify it.

ARJ
Yeah.

ARJ
Yeah. Yeah, it’s an inherently frail concept. It’s almost like a

kind of adoption curve for the principles an organization abides by. When they’re small and nimble startups, it’s like, well, our main principle is move fast and break things, because we’re all about customer acquisition and the next round of funding from VC or whatever. And then when they get big and publicly listed, it’s like, okay, now we’ve got a bit more of a reputation to manage, and maybe social license comes into play. And then, you know, I was even reading that…

ARJ
getting big can actually start to be a detriment to social license. It’s often talked about in the banking sector that banks tend not to feel the same need for social license, because of the monopolistic factors involved; it’s hard to switch and leave. So it’s got many challenges, and, you know, I’m kind of with you that…

ARJ
Like, I was thinking: if we had a really well-executed fair and reasonable test in the law, and it was well enforced, would we need to appeal to a sense of social license? Or would it be covered? Like, we’ve got it, it’s in there: you’ve got to do that thing, you’ve got to make those tests.

JORDAN
It’s a really interesting proposition, right? ’Cause again, when you’re pegging the legal requirement to those kinds of notions of community expectation, you get to a pretty similar point. That proposed fair and reasonable test is almost a legislative requirement that you have social license to do the thing you’re doing, right? That people, the general public, the community, are okay with it.

ARJ
Yeah.

JORDAN
And I mean, one thing I’m fascinated to see, if and when we get that test into law, which will probably be a little while, but never mind, is whether or not that reasonableness test will shift based on who’s doing it. So I can imagine a future where the Australian public thinks it’s fair and reasonable for

government or law enforcement, the easy example, to use facial recognition in order to solve certain crimes and not others; but then to think that Facebook shouldn’t be using the same technology, and that a retail store shouldn’t be using facial recognition. Or you could imagine a situation where a beloved brand has facial recognition in their retail stores,

and a hated brand has exactly the same facial recognition. And the community sentiment, the community reaction to those two things might be very different based on the social license that each brand has built in order to do that thing. And so you might be in a position where one is fair and reasonable because we trust you and the other is not fair and reasonable because we don’t trust you.

ARJ
It’s… yeah, that’s fascinating.

Right, well, yeah.

It’s so fascinating, because, I mean, it makes me remember when we talked about facial recognition some episodes ago, in the use in stadiums.

The social license that we feel in Australia around using facial recognition tends to be that it’s okay for safety reasons, for law enforcement, but not for commercial convenience, to sell stuff. And when we looked at it in a previous episode, it’s exactly the reverse in America: you have social license if you’re a stadium operator to use facial recognition for convenience in the

ARJ
stadium or whatever, but using it for law enforcement? No, you can’t do it, shouldn’t do it, never, not on the cards. And so, I mean, exactly: what you’re saying is fascinating. And the other thing is, I think there are competing social licenses, in a way. One of the arguments made around Robodebt was that there was a stigma around

social welfare recipients, and a sense of public outrage around fraud. I’m not saying that was right, or that it was universal, but there was certainly an element where the government felt there’s a social license for us to do things to address welfare fraud, and so we’re going to push really hard at that. But in doing so, they violated this other social license that they’ve now come to realize existed as well, around the automated,

ARJ
unfair use of this technology and automation.

JORDAN
Yeah, there was an attempt to build that social license for the enforcement through that narrative of fraud. Which, I can’t resist mentioning, the actual incidence of welfare fraud is tiny and doesn’t warrant that kind of intervention. But yeah.

ARJ
Right.

ARJ
No, exactly. It doesn’t.

But it goes to the subjectiveness of social license at some level. Like, if you’ve got a misinformed public, or, you know…

JORDAN
Yeah, exactly, and the stories that we tell about the technologies, and how they work, and their reliability, all of these things. I think there’s a really interesting connection there between, yeah, a concrete legal requirement in fair and reasonable, which we’re gonna have in the future, through to the…

ARJ
Yep.

JORDAN
trust and relationships we have with the particular companies through to the stories that people tell and the way that we understand the way technologies work. I think there’s, yeah, there’s this really interesting connection there that we’re going to have to kind of continue to tease apart.

ARJ
Okay, well, on that very solid legislative note, which is not where I thought a chat about social license would get to, but I’m glad it did. It felt like we hardened it up. But yeah, on that note, that’s it for us.

JORDAN
Yeah, we’ve gone from a bit of an airy-fairy to quite a clear connection there, so yeah, no, good fun. Yeah, it’s a super interesting topic. Yeah, fun to talk about. Chat to you next week. Thanks, Arj. See ya.

ARJ
Good luck.

ARJ
All right, thanks Jordan, bye.