elevenM has published its submission to the Australian Government’s consultation on introducing mandatory guardrails for AI in high-risk settings.
Australians don’t trust AI. According to the Australian Community Attitudes to Privacy Survey 2023, only one in five Australians say that they are comfortable with government agencies using AI to make decisions about them, and even fewer Australians (one in six) are comfortable with businesses using AI to make decisions about them.
If we want AI adoption to proceed, we must address the trust deficit through tangible, outcome-driven practices. So far, businesses have been slow to move. Although the majority of organisations (78%) agree on the importance of responsible AI outcomes, only a small minority (29%) have taken concrete action to implement responsible AI practices, according to the Australian Responsible AI index 2024.
It is in this context that the Australian Government recently ran a consultation on introducing mandatory guardrails that would apply to AI in high-risk settings.
The proposal builds on Australia’s Voluntary AI Safety Standard and would impose 10 mandatory guardrails on organisations throughout the AI supply chain, covering a range of responsible AI practices from transparency and accountability requirements to performance testing, auditing and risk management.
We think AI safety guardrails could go a long way towards reducing the trust deficit around AI, but they must be comprehensive, flexible and designed to engage the public (much like a consumer protection regime). In other words, in addition to being effective in protecting people from harm and prioritising societal wellbeing, our regulatory framework must be seen and understood by the public as achieving those objectives.
In our submission to the Government, we outline the following positions:
- It is critical that the scope of application for any mandatory guardrails is clear and understandable – for developers, deployers, end users and those affected by AI systems. A clear threshold for ‘high risk’ must be set, ideally by designating in advance the categories of systems and use cases that qualify.
- Targeted bans should be applied to use cases or technologies that present unacceptable risks to human rights or public interests, such as social scoring systems and manipulative AI. Such bans should be narrow and specific, and may be timebound and subject to review.
- The mandatory guardrails should include an obligation for developers and/or deployers to take reasonable steps to respond when things go wrong with a high-risk AI system that they are accountable for – that is, where there are grounds to suspect that a high-risk AI system has caused or could cause serious harm.
- Voluntary guardrail 10 (stakeholder engagement) recognises the importance of stakeholder engagement both in developing an organisation-level AI strategy and approach and in individual AI system deployments. This is even more critical for high-risk systems. We recommend that the mandatory guardrails include an equivalent of voluntary guardrail 10.
- We are pleased that the government has developed the guardrails in alignment with international standards and the approaches of comparable jurisdictions, and we hope that development continues in this way to ensure international interoperability.
- We support the whole-of-economy approach with oversight by a central authority, to consolidate AI expertise, provide a single set of standards for consistency across industries, and limit complexity and duplication of effort.
Download our submission to read our responses to specific questions posed by the Government.
Contact us
If you’re interested in learning more about implementing AI governance in your organisation, please contact us.