This week in digital trust » Episode 96

#96 Make it till you fake it – the growing problem of synthetic media

14 February 2024

With more than 60 elections taking place around the world this year, the risks posed by deepfakes were already a bubbling concern – until Taylor Swift took the issue stratospheric.

Deepfake explicit images of the US musician created by generative AI went viral on X, prompting officials and analysts to start to properly reckon with the problems posed by the technology.

We explore the scenarios in which synthetic media created by generative AI is posing problems, how companies like OpenAI are responding, emerging regulatory responses, and where responsibility for solving this issue truly lies.

Listen now


This is an automatically generated transcript. We make our best efforts to check that it is an accurate reflection of the episode, but it may contain some errors and unedited content.

Welcome to This Week in Digital Trust, elevenM’s regular conversation about all things tech policy, privacy, AI and cybersecurity. I’m Arj, joining you today from Awabakal country.

And I’m Jordan, joining you from Wurundjeri country in Melbourne.
And Arj, most of the world, almost half of the world actually, is going to an election this year. So my question for you this week is: what was the last election you participated in? Have you ever run for office?

I’ve run for office in my school days. I put my hand up for the Student Representative Council and for school captain. And I was successful in both. So I got to be the school captain.

Nice, well done. Did anyone circulate synthetic media about you as part of the election campaign?

If you were to see photos of my hair back in those days, you will realize that synthetic media was not necessary to represent me in an unflattering way. There was plenty of damage caused just from authentic photos of me. So, quite in contrast to my bald, clean shaven look, I had quite an Afro sort of vibe going back in year 11, year 12.

Good times.
The topic for today being kind of synthetic media and how deep fakes and AI might feed into this. I like to imagine these future school elections where, you know, kids are using some web app to make a little, you know, recording of their opponents saying, you know, controversial things that they’re going to, you know, close the tuck shop or something.

If you’re one of these governments who wanted to sort of understand what the threat model looked like and, you know, how to preserve democracy, I’d be going straight out to schools and seeing what the kids are doing with AI. What are they creating? What deep fakes are they creating?

So yeah, we’re skirting around the topic for today, but the topic is elections and synthetic media, or people and synthetic media, I suppose. I flagged it at the start, but globally, more voters than ever in history head to the polls this year.
At least 64 countries, including some of the largest countries by population, are going to the polls. That’s 49% of the people on the planet voting for their governments this year: India, the EU, the US, Indonesia, Pakistan, Bangladesh, Russia, Mexico, Iran, the UK, and a whole bunch of others.
And at the same time as we’re staring down this set of really significant elections, it is easier than ever to generate convincing synthetic content: deepfakes, fake videos or media of people doing or saying things that they didn’t actually do or say.
There are a lot of examples of that, which we’ll get into, but, you know, just those things together are a pretty scary prospect.

So those countries you mentioned have massive populations, obviously: India, a billion plus; the US, 300 million. So yeah, big stakes. And the other thing that really blasted this generative AI deepfake conversation into the mainstream was the Taylor Swift deepfakes that hit the news a couple of weeks ago.
A bunch of sexually explicit AI-generated images of Taylor Swift were just all over X. I think they started on Telegram in some sort of private group, and then sure enough they were soon being distributed on X. At the height of it, one of the posts attracted more than 45 million views before X eventually started to take some measures, bring down posts and also block searches.
But if safeguarding democracy wasn’t enough, the pop culture impact of Taylor Swift has really brought this front and center, and you’ve got regulators and governments saying, okay, you’ve got our attention. People have been talking about this for a while, but maybe we want to do some stuff to protect democracy and protect Taylor Swift.

Yeah, absolutely right. I love that Taylor Swift example, because it’s such a good case study: here’s one of the most popular people on the planet, very sophisticated, armies of lawyers, armies of very internet-savvy fans reacting to this stuff being shared. And you’ve also got perhaps the clearest example of synthetic content that no one believes is real.
Everyone can agree that it’s abusive, awful stuff to be shared without consent. So, you know, such a clear-cut case against such a powerful person. And still, like you say, some of these got incredible circulation, seen by a lot of people. Twitter, well, X, was relatively slow to respond, which is unsurprising, I suppose, given their approach to content moderation these days.
That’s been pretty hard to get a handle on and tamp down, although that seems to be done now. Imagine, though, a less powerful person, or a more nuanced, misleading representation of the truth, something that’s not immediately obviously synthetic or fake, or something the person would never do. Imagine it the day before an election poll. If we struggle to deal with such a clear-cut case as Taylor Swift’s, how are we going to deal with the other stuff?

Yeah. I mean, the two examples are kind of neat because they sort of spotlight the different types of things we’re worried about. In the past, if someone said, AI, deep fake, you’d kind of go, oh, someone’s going to impersonate someone and that’s a problem. But what you can see here is on the one hand, in the election case, we’re worried someone might believe something. And then that has an impact, you know, they might go and vote on the basis of believing something that wasn’t true.
And in the Taylor Swift case, it’s like, no, no one believes those images are real, but it’s harmful. It’s harmful and humiliating to her, and everything else, for those images to exist.
So it’s like, okay, there are a few different things we need to grapple with here. It wasn’t about the Taylor Swift one, but in the coverage of it, Alex Cranz, who writes for The Verge, had written this article about seeing a deepfake of Timothée Chalamet sitting on Leonardo DiCaprio’s lap, and their immediate thought was: if this stupid video is so good, imagine how bad election misinformation is gonna be.
I just like the way pop culture focuses the mind on what really matters. I think that’s kind of cool.

We could go through so many examples. There’s a great TikTok account called Deep Tom Cruise, which I encourage people to go check out. It’s a collaboration between an AI visual-effects person and a Tom Cruise impersonator. The impersonator films himself doing stuff as Tom Cruise, and then they deepfake Tom Cruise’s face onto it. And it is so good, really quite convincing and really fantastic.
But yeah, there are a million examples out there, including a whole bunch of political misinformation and disinformation that’s been deepfaked. A lot of stuff people might be familiar with.
One of the first frights after DALL·E and Midjourney and those image-generating AI systems became popular in the last few years was these images of Donald Trump in a prison jumpsuit or getting arrested and so on, which were visually really convincing but absolutely not real. There was also a fake video of President Joe Biden announcing a military draft, and a whole bunch of other examples.

Yeah, very recently, in the last few weeks, with the primary elections underway over in the States, there was deepfake audio of Joe Biden telling New Hampshire primary voters that they don’t need to vote in the primary election, just save your vote for the general election. So stuff like this.
It’s interesting to see that in India, Narendra Modi is putting out a lot of warnings about deepfakes, because I think their election is coming up and there are a few examples. He’s called deepfakes one of the biggest threats, which is interesting because the BJP, his party, and its ministers have been involved, according to several reports, in promoting disinformation in previous campaigns. So it’s interesting that now that the shoe is on the other foot, he’s seeing it might not be so good.
But yeah, a lot of political examples abounding.

And I mean, it’s so easy and common that it’s even allegedly happening by mistake. So locally, I think last week, Nine News published a photo of the Victorian Animal Justice Party MP Georgie Purcell as part of a bulletin on duck hunting in Victoria. And it very much appears they didn’t just publish the original image; they altered it to expose part of her midriff and make her breasts look bigger.
First of all, imagine publishing an article where you’ve photoshopped a state MP to make her breasts look bigger. Like, how mortifying is that? Nine News says they didn’t intend to; they blame the product, saying Adobe Photoshop magically did it. There’s debate online about whether that’s a plausible explanation.
People who know Photoshop say it’s not particularly believable, but whatever. They say it was accidental. But this kind of alteration of what’s real in media is just completely pervasive. It’s been going on intentionally from the day Photoshop arrived, or probably before digital technology even, right? You could always fake stuff, but it’s more and more pervasive, easier to access, and easier to make completely new stuff like this deepfake material.
You can make a completely new video of someone, or put a head onto a video, in a way that previously wasn’t possible.

Yeah. And even if you were to accept the Nine explanation, it gives an insight into where the harms can be introduced, beyond people who deliberately set out to use AI to create this kind of explicit or humiliating imagery.
Even if Nine’s explanation were plausible, all it tells us is that this can also happen because the tools themselves are trained in ways that reproduce things that are biased and stereotypical and harmful. So it’s not an explanation that makes us feel any better, in the sense that the problem is either inherent in the technology or in the people using it. There’s a real challenge there.
But that example maybe drifts a little away from political disinformation towards an individual, who happens to be a politician, being humiliated and having her image used in a way that’s not favorable.
And then you can kind of broaden that out to, you know, a growing concern around child sexual abuse material being kind of created synthetically through AI. And that’s a growing focus for governments around the world as well.

Even short of the child sexual abuse stuff, there’s the abusive sharing of intimate imagery, synthetic intimate imagery, right? It is pretty easy, if you are so inclined, to go on the internet and find these tools, little groups where you can turn a handful of images into an intimate or sexual image of someone. So the scope for abuse, the scope for harm there, is spectacular.

Yeah. Tech Policy Press had an article during the week about a 550% increase in deepfake videos relating to pornographic content.
The last category we haven’t mentioned, slightly tangential but still around this deepfake stuff, is fraud: the use of deepfake voices and images to perpetrate fraud.
And there’s this story that just broke in Hong Kong: a finance worker was tricked into paying out $25 million to fraudsters because he was called into a video call with his chief financial officer, and it was a deepfake video, not just of the chief financial officer, but of a bunch of people in his team.
Like, he was on a conference call with more than one person, and all of them were deepfake images. And he thought, well, you know, I’ve got to make this payment. It was a meeting.
So just, just a sort of little kind of extra thing there on top of political disinformation on top of, you know, individual abuse, there’s kind of this fraud, which, you know, is a massive problem for organizations. So lots to, lots to respond to.

Yeah, for sure. And so we’ve got a response in the last few weeks from OpenAI, just on the election bit. So, you know, OpenAI make ChatGPT, and that’s also the technology underneath a lot of Microsoft’s AI offerings. And they make DALL·E, which is one of the popular image generators.
And so they’ve recently published a blog post and changed some of their policies in anticipation of this big year of elections. What they’re saying there is that they really recognize the scope for their tools to be used in misinformation and disinformation. So they’re putting some limitations on the way ChatGPT can be used by lobbyists and political activists.
So there’s a blanket ban on using ChatGPT for political campaigning or lobbying, and a ban on chatbots that pretend to be real people or institutions. There was an example of a US politician who’d made a “chat with Bill” kind of chatbot as part of their election platform. Not allowed, that’s banned, so that got ditched. There’s also a ban, pretty obviously, on applications that deter participation in democratic processes. So if you’re trying to make a chatbot that provides false information about how to vote or where to vote, that’s not allowed.
So they’re shifting their policies, or explicitly saying in their usage policies, that on the chatbot side they’re trying to basically stay away from any kind of use in politics, which I think is wise.

The other stuff they’ve talked about is things like watermarking: digital watermarks, basically, so that people can verify and know that something is synthetic. There are various digital credential systems that they’re playing with, and that’s something we’ve heard mentioned in various regulatory contexts as well.

Yeah, can I just add to that? There are two sides to that, right? One is making DALL·E or other image generators watermark their output, encoding “this was made by this system” in the image. But there’s also the detection side, right? If some other AI makes an image, is there a way of detecting it? And that’s an open question, an area of research: how do you detect stuff that’s artificially generated if it’s not properly watermarked?

The other stuff is, I guess, less technology driven and more about process and information. So they’re redoubling their efforts to steer ChatGPT users towards reliable sources of information about elections and democracy.
So if you want to know how to vote or something, they’ll point you towards official sources rather than, you know, landing you somewhere that might be spouting falsehoods. And then the final thing is around improving their processes for allowing users to notify them of problematic content, and for them to respond to that stuff.

Yeah, and that’s one of the criticisms, which maybe we’ll talk about: this is just policy stuff, right? It’s not stuff that they’re obviously enforcing up front. Particularly with the ban on using a chatbot for a political cause, it’s stuff that they’re largely relying on users to flag with them, and then they react, rather than preventing it in the first place. And that’s not the most effective regulation.
Also, on the individual abuse side, they already have some controls: they try to prevent people from making sexually explicit images and so on.
There are also some regulatory or legal responses to this stuff, in Australia and the US. But in Australia in particular, it’s actually pretty hard to regulate the election use of misinformation and disinformation, especially when it’s coming from diffuse actors.
You know, it’s really hard to police the truth of stuff in an election. There’s a bias in favor of freedom of political communication; you tend not to clamp down too hard. We do have electoral laws about lying to people about how to cast their votes and so on, so you can get in trouble for that kind of thing.
But yeah, it’s a pretty fraught area, enforcing much more than that.

Yeah. As we’ve already seen in the social media context, where you put guardrails around disinformation, the debate immediately becomes highly partisan, all about censorship and so forth. And there are generative AI tools called things like FreedomGPT, built in opposition to the idea that something like ChatGPT might put guardrails around this sort of stuff in the interest of political integrity.
So that’s on the disinformation and election front. On the abuse and deepfake porn front, we’ve seen a lot of noise from different governments, particularly on the back of the Taylor Swift stuff. In the US, the White House press secretary has called on Congress to create legislation, and there is a proposed law which I think you’re a fan of the acronym for?

I love it. It’s called the DEFIANCE Act, right? Which stands for the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024, spelling out “defiance”. I just love the US propensity for acronyms and single-word acts.
But also in Australia, actually on the individual abuse side, we’re pretty good.
You know, maybe it shouldn’t surprise me, but basically every state except Tasmania has laws explicitly outlawing non-consensual sharing of intimate images. And those laws are intentionally drafted to include synthetic, edited, or generated images, as well as actual real images. So in every state except Tassie, whether the image is generated or real, it’s an offense to produce those images, to distribute them, or to threaten to distribute them without a person’s consent.
And the criminal offenses are actually quite serious; you can go to jail for a couple of years. Also, importantly, and I mention this because I think these are great laws, “intimate” doesn’t need to mean pornographic.
Intimate could be someone sitting on the toilet. It could be a person wearing, or not wearing, religious dress that they usually wear in public. So if you usually wear a hijab or a turban in public and someone creates or takes photos of you not wearing it in private and then shares them publicly, that would fall under the scope of these laws.
The last thing to mention on Australian law is that the eSafety Commissioner, under the Online Safety Act I think, has their own powers to fine people for doing exactly this, and powers to engage with the platforms and get stuff taken down. So, PSA: if this happens to you, or someone you know asks you about it, you can report it to the police and you can report it to the eSafety Commissioner. The police may prosecute. The eSafety Commissioner has case workers who will talk you through the issue, support you with what you can do, and get this stuff taken down from the platforms. It’s a really great service. So yeah, a PSA on what to do there.

One of the things I wanted to move us onto, and it comes off some of what you were discussing around what these laws propose to do and ban, is: where do we actually take action to stop this stuff?
Because we’re already in this conversation of: you can go after the people creating this stuff, and then there are the tools, these generative AI models, that allow these kinds of images to be created.
And then there’s the distribution of this content on social networks, reaching mass audiences. So there are different points at which we’re having this conversation.
And it’s interesting because the tools often aren’t made for this purpose; they’re being misused to create this sort of imagery. So someone who makes DALL·E, or in the Taylor Swift case I believe it was a Microsoft tool, would potentially argue: look, if someone misuses the tool to create a non-consensual porn image, we made the tool, but someone misused it.
And then if we think about the Taylor Swift case, the focus of the commentary was largely on Twitter, or X, and how quickly they acted to stop the distribution on their platform. So there’s an interesting conversation there: even the best models can be used for unsavory purposes, so is it fair to expect the model makers to take responsibility? Or should we be pushing this onto the social media networks that are facilitating the distribution?

I think the Australian laws on non-consensual intimate imagery are great. I think they’re an important part of the puzzle, partly because you put the restriction on the individuals creating and using the tools.
But you also signal as a society that this is unacceptable, right? Like there’s a really important values signaling that, you know, no, no, I don’t care that it’s not a real image. This stuff is abusive. Doing that without consent is violating and awful and it’s unacceptable. And then you just, you work your way up the chain and you expect everybody in that chain from the tools to the distribution to do what they can to prevent the harm as well.
So, you know, Microsoft and all of the tool creators need to have their own controls to make sure their tools aren’t used in abusive ways, or used in elections in damaging ways.
And then the distribution channels have their own responsibilities as well. I think we rightly put the expectation at each level because it’s the security defense-in-depth idea as well, right? Each control at each stage is imperfect. The tool creators can only do so much; there are always going to be bad actors, and there are always going to be tools, like your FreedomGPT, created by people who aren’t going to be accountable to regulation and who are intentionally building tools without those controls. And so, yeah, you need the controls at all stages.

I love that defense-in-depth analogy, I think that’s great. And I saw a quote from Satya Nadella, the CEO of Microsoft, which I thought was really interesting, because they’re in the frame a little on this issue: one, because of their partnership with OpenAI, and also because the Microsoft Designer tool, I think, has been at the heart of some of these issues, particularly with Georgie Purcell, the Australian politician, and I think even the Taylor Swift case.
He had this quote where he says, I think about our responsibility, which is the guardrails we need to place around the technology so there’s more safe content being produced. So he’s talking about their own responsibility. But then he also says there’s a lot to be done, and a lot being done, around global societal norms, law and law enforcement, and the platforms, and that there’s a lot more governance we can do than we think.
And I think that’s what you’re saying, which I agree with: you look at that holistic picture and there’s governance you can put all across the chain. And you kind of need to, because that’s the way this problem manifests; it manifests at so many different points.

Yeah. And if you can get sensible controls from the big platforms and the big tech providers, that’s not perfect, it’s not all of it, but it covers a very large chunk of the mainstream of this stuff. So yeah, I think that’s right.
The other point I wanted to draw out here is the role of journalists and journalism. I read one article, which I really loved actually and will put in the show notes; it’s in POPSUGAR, about the Taylor Swift deepfakes, and it makes the point that a lot of this is about the quality of the information ecosystem, and that’s what journalists are for, right?
Good, robust, funded, paid-for journalism is the way we combat a lot of these election interference issues, and it’s a key part of the health of the information ecosystem. If we allow journalism to die, or have the money sucked out of it by the big platforms or new business models, then we don’t have those voices, that discipline of checking and investigating and quickly calling out things that are not real.

Okay, well, on that holistic note, I think we can bring it to an end. It starts and ends with what’s in the news: the Taylor Swift stuff brings this to the fore, but there’s that responsibility across the whole information ecosystem. I like that.

Yeah, yeah, absolutely. So on that happy note, let’s shuffle off into the information ecosystem and, you know, spend our week there and come back and have a chat next week.

Good one. Thanks, Jordan.

Thanks, Arj.