This week it’s the next instalment of the Australian Government’s consultation on Safe and Responsible AI.
Arj and Jordan break down the government’s ‘interim response’ to the consultation, and the case for regulation that it puts forward.
Though the detail is yet to come, we evaluate the government’s proposals – new mandatory guardrails, updates to existing laws, international engagement and domestic investment. What will they mean for AI adoption in Australia, and is our legal system already falling behind?
Listen now
Transcript
This is an automatically generated transcript. We make our best efforts to check that it is an accurate reflection of the episode, but it may contain some errors and unedited content.
Arj
Welcome to This Week in Digital Trust, elevenM’s regular conversation about all things tech policy, privacy, AI and cybersecurity. I’m Arj, joining you today from Awabakal country.
Jordan
And I’m Jordan, joining you from Wurundjeri country in Melbourne. And Arj, happy birthday – it’s your birthday on the day we’re recording, and I have brought you a special gift.
Arj
Oh, I can’t wait. What is it?
Jordan
It is the Australian Government’s interim response to the Safe and Responsible AI in Australia consultation.
Arj
Oh wow.
Jordan
It’s 25 pages. It’s all for you on this special day.
Arj
Amazing. You know, that is wonderful and heartwarming, and also a little bit depressing. Because last week, when we had a chat about data brokers, you said you’d bring me something – you know, a conversation starter about what my data says about me – and where that took you is to give me a government report for my birthday. What does my profile say? Apparently that’s the kind of thing that gets me happy.
Jordan
It says you’re into AI. It’s not hard for the data brokers to look through our back catalog and see just how regularly we come back to this.
Arj
We are discussing the Australian government’s latest nudge forward on AI regulation. I think it is a nudge forward.
Jordan
Nudge is the right word, I think. Yeah, they’re calling it an interim response. It’s really a wrap-up of the consultation that they did last year and a bit of a pointer in the direction they’re going, which is super useful, but it’s not the answer or the end product.
Arj
No, no, it’s not a ‘stop what you’re doing’ kind of thing. But yeah, an interim response. So for those that haven’t followed, the government opened a consultation into safe and responsible AI in the middle of last year, June 2023. This was kind of in the middle of last year’s frenzy about, you know, AI going gangbusters, people a bit worried about certain things, what are governments gonna do?
So the Australian government said, okay, we’re gonna have an industry and community consultation. They opened that up in June and closed submissions in August. We did an episode on This Week in Digital Trust, number 76, if you wanna check it out, looking at the responses – AI regulation in Australia, the ideas are in – where we did a bit of a wrap of what industry, community groups and academia were saying.
And of course now we’re at the point where, as of last week – the 17th of January, so a couple of weeks ago by the time you listen to this – the government has published this interim response.
Jordan
Yeah, and it’s worth noting that the actual consultation was mid-year last year, and this stuff’s moving so fast, right? A lot has happened since then globally in terms of regulation, some of which we’ve talked about on the pod late last year. The EU has finalized its Artificial Intelligence Act, which is sweeping, risk-based regulation targeted at artificial intelligence. The US president published an executive order on AI that’s coordinating a whole lot of actions across the US government on regulating and guiding safe and responsible AI.
And there was the global AI safety summit that we’ve also talked about on the pod, with a kind of multi-country declaration and plan for safe global AI governance. So all of that has happened since submissions closed in August, in the second half of last year.
And so the interim response acknowledges that, and in the context of that and of the responses they got, kind of sets out the government’s plan.
Speaking of the consultation itself, it was pretty popular. There were over 500 submissions, and almost 20% of them were from individuals, which I think is really interesting. There’s clearly a lot of individual care and interest in AI – it indicates how this stuff cuts through to the mainstream.
They also ran a number of town halls and direct participation events. I turned up to one of the expert round tables they held in Melbourne, which was super interesting. So the views are in.
Arj
Yeah. And I think that comment you make is what makes this regulatory challenge interesting – the fact that there’s such engagement from different parts of the community, with 20% of submissions coming from individuals, plus academia and industry, and everyone coming at it from their own place. There’s an industry element around the AI opportunity, wanting to protect and safeguard the ability to get all the benefits. And then at an individual level, I think people are obviously a little more worried about certain harms that have been well publicized.
And obviously academia is potentially on both sides of that coin – what’s really cool and forward-thinking in terms of what AI could do, but also that academic angle of, how do we think carefully about risk and harm? So it’s interesting to see how this process parses all that.
Jordan
Yeah, that’s exactly right. The interim response does a good job of laying out that argument for regulation. We might step through it here, but it’s really what you just said, right – there are plenty of benefits. AI can do all sorts of great stuff, from medical imaging to forecasting emergencies to automating jobs that people don’t want to do, and so on. But there are barriers to adoption.
So you know, some of those are like skills and IT infrastructure and stuff. Some of those are like harms and risks from AI deployments that we’ve talked about a bunch on this podcast.
And so the question is, how do you get to those benefits whilst understanding, managing and engaging with those risks and harms? We could dig into exactly which risks, harms and benefits came out of the consultation – we’ve done that in the past on the pod – but it all builds to the case for these broad calls for government to do more to harness the opportunity.
We need government action here to harness that kind of great opportunity of AI whilst managing the kind of sets of things that can go wrong and the capacity constraints as well. It’s not just risks, it’s skills and capacity constraints in deploying this stuff.
Arj
Yeah. One of the things I wanted to say is about the identification of the harms. As you said, we’ve talked about them at length, so we probably don’t want to go into the details of all of them, but there are a couple of different perspectives or prisms through which they present the harms. One is this idea of the life cycle of AI – so not just looking at, say, the people developing ChatGPT, but at the entire life cycle, from early-stage development and the collection and training of data through to how it’s deployed and used, and thinking about externalities like environmental costs and labor and all of that. So there’s a big life cycle there. You can probably go through our back catalog to see more detailed discussions, but I like the fact that they presented it that way.
Jordan
I love that they did that. You know, we pushed that in our submission and we’ve also talked about it on the pod before. There’s also a great little diagram on page 10 of the report, if you want to check it out, which I think they pulled from the ARC Centre of Excellence for Automated Decision-Making and Society. Their submission had this really neat picture that highlights where different risks appear across, you know, early development, deployment and use of AI.
Arj
Yeah. So that’s looking at the life cycle, sort of left to right, if you like. But then there’s also this categorization of different kinds of harms that can happen. The report talks about technical limitations – so that’s things like bad design, just technical failures of the AI. The fact that it’s unpredictable and opaque, the black-box kind of nature – so if you’re using it in ways that have meaningful impact on people’s lives and you don’t know why it’s doing what it’s doing, that’s a problem.
And then looking at different domains, like using it to exacerbate misinformation, the systemic risks of it becoming so embedded in our society and what that means, and then just the unforeseen nature of AI, given it’s moving so quickly. This was a slightly different categorization of harms than what I’ve seen in other reports. Like when we talked about the UTS Human Technology Institute – their State of AI Governance report had a map of different ways of categorizing harms, as system failures, malicious and misleading use, and overuse.
So there are these different taxonomies of harms, which can be a bit confusing to wrap your head around. But to me, it just means we’re getting a much more evolved understanding of the risk profile. It’s not just saying low risk to high risk – which it does do – it’s looking at the life cycle, it’s looking at these different ways the harms manifest.
Part of me thinks we’re still in that phase where slicing and dicing it in different ways is really giving us a sense of the problem space, before we jump into, oh, this is the regulatory model.
Jordan
Yeah, for sure. And it’s really relevant to the regulatory model, right? Because the way you want to regulate depends on the things that you can see going wrong. And I think this taxonomy helps you to distinguish whether the problem is just that the model sucks and will give bad outcomes, say. That is the case sometimes. We talk about this for facial recognition, right? The way that facial recognition systems fail often disadvantages particular minorities. So that’s a problem to do with the technical limitations of the thing.
But that’s not the only problem, right? Even if you could fix that problem and make facial recognition perfect and reliable, you have other systemic problems that flow from it. Like, are we okay with people being identified without their consent, without their engagement? Are we okay with the surveillance risks? So breaking the harms apart – is it a problem with the technical system, is it a problem with the way it’s deployed, is it some other problem – really helps you think about the kinds of things that we need to worry about.
Arj
The other thing I’d say, which I think I said when the discussion paper first came out and the consultation opened, is that there wasn’t a reference to existential risk and sentient AI in that opening consultation paper, and there isn’t in this report either. I know we’ve probably passed that point, but I feel like it’s still important to acknowledge that this is grounded in the quite material and substantial things we’re facing now, and doesn’t get distracted by, you know, the existential kind of scenarios.
Jordan
Yeah, yeah, for sure. They kind of reluctantly gesture at that stuff in the context of the Bletchley Declaration and the global AI Safety Summit – a lot of that safety summit vibe was the existential risk. So they say the Australian government will participate in that global approach, part of which is about systemic and existential risk. But you’re right, the focus is where we think it should be, which is back down on the actual ways these things can cause harm today.
Arj
Yeah. The closest it gets to that, from what I saw, is a sharper focus on frontier models – in the sense that this is an additional category of risk that we really need to be looking at, and that might require targeted attention and, you know, international collaboration.
Jordan
Moving on a bit, we’ve just been talking about the problem statement really, essentially, that the interim response kind of sets out, says, look, this is what we think and this is what we heard in the consultation, that there are benefits, there are barriers, there are harms, there’s a real case for government intervention here. And so, what are we going to do?
And so the government response, which is the interesting bit, I guess, is divided between three things, maybe four. There’s a commitment to exploring regulation, and a couple of different types of regulation within that: new guardrails for safety in high-risk contexts – we’ll get to that in detail in a sec – and updates to existing laws as well, so not just a new AI Act or new AI requirements, but an examination of existing laws and how they need to be updated. Then there’s the stream of international engagement I just mentioned, the Bletchley Declaration and global AI governance participation. And finally domestic investment – committing to building capability and fixing those capability bottlenecks.
So there are kind of four streams: new regulation, updates to existing regulation, international engagement, and domestic investment.
Arj
To me, it also reflected back the consensus of the submissions in a way, which is that there seemed to be agreement across all the groups – industry, consumer groups, academia – that voluntary approaches alone are inadequate, particularly when it comes to establishing guardrails for higher-risk systems. But then when you look at how to regulate, that’s where it starts to split a little bit.
And it seemed to be that industry groups were saying, well, maybe we can regulate using existing approaches, and it was more from consumer groups and academia that you started to hear, maybe we need specific, bespoke AI regulation.
Jordan
Yeah, that was a really interesting divide, right? Industry wants to just work within existing frameworks – they don’t want another thing to track – and the academics and the consumer groups are like, we want a dedicated law.
Arj
Yeah, which I guess is no surprise, in the sense that, you know, there’s a compliance burden with a new set of laws.
Jordan
So, new regulatory guardrails. The basic approach here, which the government is proposing to investigate – they’re not even committing to do it, but there’s a commitment to look at it – is essentially some rules for AI in high-risk contexts. What does ‘high-risk context’ mean, you ask?
TBD, that’s on the to-do list for the government as well, right? What exactly does high risk mean?
But there’s some language in there about high risk meaning situations where harms are difficult or impossible to reverse – things like where it will harm a person, or where there are, I think the EU uses the term in the GDPR, legal or similarly significant impacts. So there would be some set of regulatory guardrails for high-risk deployments, and then, importantly, no regulatory requirements unless you’re in a high-risk context – the government’s avoiding that for the moment.
Arj
Yeah. And that’s where the distinction is with something like the EU AI Act, which applies much more broadly across all levels of risk. Whereas this is saying, if you’re in a high-risk context, we’re going to look at some guardrails – potentially mandatory guardrails – for those systems. But if you’re not, then there’s a bunch of voluntary stuff we’re going to work on and develop, but there won’t be any mandatory guardrails there.
And the guardrails focus on testing, transparency and accountability. There are some nice parallels there, I think – that’s where you can see the government tapping into some of that international work as well. Both the US executive order and the UK summit had announcements around testing and how you can promote testing of high-risk systems. So you can see there’s an opportunity there to connect in and build on that.
I actually like the fact that they haven’t tried to bog down the process at this point by trying to define high risk. It just says there will be something to work on to define what high risk means in the Australian context, which I think is important.
And so these guardrails will apply to that. But at this point, let’s not get distracted on agreeing on what high risk is or not. Let’s just agree that for high risk systems, these guardrails need to be in place.
Jordan
Yeah, which is what we were saying at the top – it sets a direction, right? We’re going to have requirements for high risk; we’ll work out what those requirements are and what high risk means later, through further engagement, further consultation.
Yeah, and you mentioned a couple of informal or non-mandatory things around that. So for the lower-risk or broader applications, they’re looking at developing some industry standards, exploring watermarking for generative AI – a way of indicating that something an AI system has produced is actually artificial – and establishing an advisory group to lead some of this standards and law reform work.
Arj
The AI safety standard I thought was interesting. The premise is that there are a bunch of AI principles, guidelines and frameworks out there, and for organizations it’s hard to turn that into something practical – like, how do I actually apply these principles? Which is something we’re staring into all the time with our clients, with organizations saying, okay, we know we have to do something around AI, we’re seeing these principles, how do we do this?
Jordan
So what’s interesting to me is turning, you know, ‘your system must be fair’ into, okay, well, what is that? Like, great, that’s an objective – what do I do?
Arj
Yeah. And so the National AI Centre has been tasked with working with industry to build a best-practice, up-to-date, voluntary, risk-based AI safety framework.
On the one hand, I’m kind of like, that could be cool – you think about the work that something like the NIST Cybersecurity Framework does in cyber, a sort of industry best-practice framework that organizations around the world have aligned to and use. But then the other part of me is, well, NIST also has a risk management framework for AI. So is it duplicative and redundant to have our National AI Centre try to build one as well?
Jordan
And there are about a thousand frameworks from various bodies around the world, right? That’s one of the real challenges we’re seeing in advising clients in this space – there’s no shortage of frameworks and standards, but tying them back to a particular project, what does that actually mean in practice?
What are the steps I can put on my project plan to deploy this AI thing safely? That’s really difficult, and that’s where the gap is at the moment. So the proof will be in the pudding with these kinds of standards and guidelines – whether they actually help with that problem. So that’s the guardrails, right? There are some formal regulatory, legal guardrails for high risk, and some voluntary industry work on the lower-risk or broader things.
Another direction the government is going is updates to existing laws in this space. There’s a long list in the interim response of crossover areas where law reform is already going on and there’s going to be an AI impact. You’ve got the privacy law reform, you’ve got the cyber security strategy and cyber security law reforms, you’ve got law reform around mis- and disinformation and big platforms and how we manage that. You’ve got IP and copyright – how does that feed into the training data sets for generative AI, and does AI get copyright in its outputs?
You’ve got the ongoing ACCC digital platforms work. You’ve got a bunch of work around generative AI and education. So they’re kind of pointing to all of these areas where there is current law reform and saying, well, the dedicated law will deal with the high risk, but there’s a stack of things that AI is going to challenge or change in these other areas. And we’re going to need to look at that on a case by case basis.
Arj
The report says there are at least 10 legislative frameworks that may require amendments to respond to applications of AI. I think we touched on this in our summary episode on the consultation, number 76 – there’s a lot of work that can get done just by moving some of these regulatory reforms forward. Privacy is obviously one close to our heart, and we’ve been pushing it hard and laboriously for some time, but there are some core practices and principles that will come through in that reform and do a lot of work in this space. It’s almost a bit cart before the horse to think about whether we need something new from an AI perspective if we’ve got some of these things in train. But I think this is also where I keep coming back to that political question, which is, how does the government manage the fact that it’s got a backlog of reforms it needs to move forward?
And if this is now another one – a new approach to AI – is it scheduled in behind those, and where does the political will, emphasis and prioritization sit? So we’re sort of putting this into that bucket.
Jordan
Yeah, totally. And one of my concerns, when we say we’re going to deal with these risks by updating the specific law for each particular thing, is that there’s a risk of fragmentation, a risk of having different standards for the same thing applied in different contexts.
This is something we resist in privacy law reform as well, actually – there is a Privacy Act and we want to keep all of the privacy rules in that Privacy Act. What we don’t want is a privacy-in-banking law and a privacy-in-technology law and a privacy-in-medical-records law, although we mostly do have separate medical records laws.
So I think one challenge is managing that coordination of standards and requirements, so that if you’re building an AI system, you don’t have to build it completely differently depending on what industry it’s getting deployed in.
Arj
The last two pieces in the paper are around international engagement and domestic investment. On international engagement – unsurprisingly, this is an international conversation, and some governments around the world have gone a little further and quicker than Australia in terms of draft legislation, announcements, summits and so forth. There’s certainly a recognition of that in the paper, and that the Australian government should keep engaging with those international counterparts to shape that AI governance. And frankly, as a matter of pragmatism, some of those conversations – around frontier models, for example – are probably more likely to happen through governments like the US than Australia. So it’s about being aware of that and trying to connect in and shape those conversations. If you’re gonna influence OpenAI, it’s probably gonna be done through that international conversation.
Jordan
It’s a recognition that probably for the most part, we’re going to be consumers of these technologies that are developed out of the US or the EU or elsewhere, rather than Australian-grown things.
Arj
And that final directional piece around domestic investment, again, just keeping in mind the opportunity side and the capability side of AI, that there’s still work to be done in terms of funding skills growth and research and adoption. And so there’s some programs that are listed in the paper around promoting that.
So what, Jordan?
Jordan
Yeah, so what? It’s a good question. It’s gotten pretty good reviews, honestly, from commentators. I think that just the responses to this that I’ve seen around the traps have been relatively positive. It seems to me like a pretty reasonable initial response. It’s useful to kind of point the way.
I mean, it is slightly slow in my view. Like, you’ve got the European AI Act – the Europeans have legislated – a few other countries like Canada have guidelines and policies in place touching generative AI, you’ve got the US executive order, and we’re just now setting a direction.
So that’s a potential criticism, but honestly, I don’t think you want to rush this stuff. It’s in such a state of flux at the moment, and it is useful to fast-follow, to see how the international conversation is developing and make considered choices for the Australian context. So, broadly, I’m positive.
Arj
I mean, the most you can ask for is that it’s balanced and reasoned, and I do share your sentiment. If there was to be a criticism from me, it’s that I don’t feel like the government’s really showing its hand.
Like, you can’t really see what it wants to do here. It’s sort of, on one hand this, on the other hand that; we could consider mandatory, but also voluntary. So we don’t get a real sense of where it wants to go. But I also totally get that. Again, the political context is that if you signpost AI legislation towards the back end of a term of government, the question becomes, okay, so when are you going to get that done by, when there’s a backlog of other things that have been committed to and not yet commenced as legislation? So I understand why that’s the case. And I think there’s enough there, as you say, to keep the conversation moving forward, and that’s probably all we can ask for at this point.
Jordan
Well, that’s a good place to leave it, I guess. I don’t know exactly what the next steps are – the government hasn’t committed to a final discussion paper or report on this consultation as far as I’m aware, other than implying the existence of a final response by calling this one interim. So we’ll just track the government response and see how it develops.
But Arj, happy birthday, mate. I hope you enjoyed your report.
Arj
I loved it. Thank you.
Jordan
Hopefully we don’t have to wait for your next birthday for the next instalment.
Arj
Sure. Yeah. Okay.
Thanks, Jordan.
Jordan
I’ll catch you next time, mate.
Arj
Bye.
