elevenM’s Jordan Wilson-Otto and Arjun Ramachandran explain how the Federal Government’s new Voluntary AI Safety Standard will help Australian businesses operationalise fair and ethical AI.
One of the encouraging things about the hype around AI in recent years has been how quickly “AI safety” has been included in the broader AI conversation alongside commentary on the promises of the technology.
This seems at least partly a response to general unease in the community about AI’s many potential harms, but it likely also reflects lessons learned from the reactive approach to risks arising from previous technology shifts, such as social media and the emergence of the data-driven economy.
To help shape the business community’s approach towards safer AI adoption specifically, we’ve seen the Federal Government, regulators, researchers and the likes of the Australian Institute of Company Directors all emphasising the importance of businesses proactively managing AI risks.
In response, it’s fair to say that the Australian business community is looking for, and in need of, clear advice and guidance on how to do just that. The recently released Australian Responsible AI Index 2024 highlights a yawning gap between businesses being aware of or agreeing with AI ethics principles and actually implementing the practices of responsible AI.
Happily, we’re now at something of a milestone moment to help bridge this gap, with the Federal Government having released a new Voluntary AI Safety Standard as part of its “safe and responsible AI” agenda.
In this post, we share our thoughts on the Standard and break down its key elements.
What is the Voluntary AI Safety Standard and where does it fit?
The Voluntary AI Safety Standard aims to give Australian organisations practical guidance on how to use AI safely and responsibly. This first iteration of the standard is aimed primarily at deployers of AI systems, as there are many more Australian businesses using AI systems than building them. More detailed and technical guidance for developers is promised in the next version of the standard.
The standard lays out 10 guardrails (which we break down further below) that will guide organisations towards safe and responsible AI use. The guardrails cover organisational controls (such as governance, training, strategy and risk management) as well as system-level controls (such as testing, monitoring, impact assessment and human oversight).
It’s important to note that this is a voluntary standard, though the Government has signalled its intent to introduce a similar set of mandatory guardrails for ‘high risk’ AI systems. As we write this, the Government is consulting on mandatory guardrails, including how to define systems that are ‘high risk’. (For more information or to make a submission, visit the Government’s consultation hub. Submissions close October 4, 2024).
By introducing mandatory requirements for higher risk systems, the Government would be following the lead of the European Union, whose AI Act classifies AI systems according to risk, bans systems deemed to be an unacceptable risk, and mandates a range of safety measures for acceptable-but-still-high-risk systems. The Australian standard was intentionally drafted to be consistent with the EU framework, as well as other international standards on AI management systems, such as AS ISO/IEC 42001:2023 and the US National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF).
It’s also clear that in releasing this standard, the Government is trying to create a default “best-practice” framework for Australian organisations, as its introduction to the standard states: “While there are examples of good practice through Australia, approaches are inconsistent. This is causing confusion for organisations and making it difficult for them to understand what they need to do to develop and use AI in a safe and responsible way. The standard establishes a consistent practice for organisations.”
Do we need another standard?
Source: xkcd
Actually, yes.
The proliferation of competing standards, frameworks and principles has left Australian enterprises struggling to keep up. This has led to what the Responsible AI Index describes as an ‘action gap’ – where the vast majority of executives (78%) agree on the importance of responsible AI outcomes, but only a small minority (29%) have taken concrete action to implement responsible AI practices.
The new voluntary standard provides an authoritative statement on what specific steps Australian businesses should be taking to manage AI risk, endorsed by the Federal Government’s Department of Industry, Science and Resources, the National AI Centre and the CSIRO.
In doing so, it cuts through the noise and tells Australian businesses what they actually need to do.
Additionally, it signals what Australian regulators, industry and consumers can now expect when it comes to best practice AI governance. We could well see, for example:
- Procurement officers querying compliance with these standards as part of standard due diligence for AI suppliers.
- APRA, ASIC or company shareholders referencing the standards in a dispute about whether a board or company director has adequately managed AI risk.
- The Office of the Australian Information Commissioner referencing the standards when considering whether an entity has taken reasonable steps to ensure compliance with the Australian Privacy Principles, or in assessing whether a use or disclosure of personal information is ‘fair and reasonable’ (if such a requirement is enacted as part of proposed privacy reforms).
Building trust around the use of AI
Beyond compliance, we see a lot in the standard that will constructively help Australian organisations that want to proactively manage and govern AI risks (and digital risks more broadly) in a way that builds trust with their customers, employees and other stakeholders.
Some elements of the standard that we particularly like are:
- The standard addresses the whole supply chain, and in particular emphasises transparency to help organisations access sufficient information about data, models and systems to effectively assess and address risks.
- The standard cross-references the Australian AI ethics principles as well as parallel standards AS ISO/IEC 42001:2023 and the NIST AI RMF.
- The standard draws in existing and familiar governance practices across privacy, data governance and cyber security, recognising that these disciplines play an important role in managing AI risks.
- The standard is Australia-specific, notably referencing local requirements and regimes including the Australian Privacy Principles, Notifiable Data Breaches scheme and Essential Eight.
- There is a clear intention for the voluntary standard to align closely with any future mandatory standard for high risk AI systems, so that there will be consistent expectations and a clear path for organisations moving from lower to higher risk applications over time.
Ok, so what’s in it?
In this section we’ll briefly cover the 10 guardrails that make up the voluntary standard, and what’s in them. There’s a lot of detail, so this won’t be an exhaustive summary – our goal is to provide a general sense of what is covered, and what implementation might look like in practice.
Before we dive in, it’s worth highlighting that the guardrails broadly span elements common to many governance and risk frameworks, and that they comprise both organisational and system-specific measures. The guardrails are also not prescriptive on how governance outcomes should be achieved – for instance, whether it’s by updating existing policies and processes or establishing a new and separate AI governance domain.
Guardrail 1: Establish, implement, and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.
Why is this important?
This is about establishing the foundational elements for AI governance. As we know from other governance areas (privacy, cyber security), formal ownership and a strategic-level approach to risk management will be critical enablers for organisations to effectively and systematically deal with AI risks.
What might this look like in practice?
- Formal RACIs specifying roles and responsibilities for AI governance
- A policy framework and organisational strategy that addresses AI
- Documented processes for governing AI in accordance with the rest of the guardrails
- Training and awareness activities.
Guardrail 2: Establish and implement a risk management process to identify and mitigate risks.
Why is this important?
Like guardrail 1, this is a foundational piece for establishing an AI governance program, ensuring that AI risk is managed within an organisation’s broader risk management framework. The guardrail spells out the importance of developing both organisation-level risk management processes and system-level risk assessments and management.
One of the things the standard isn’t explicit on is the actual set of harms and risks organisations need to manage in relation to AI. Organisations are expected to assess their own risks through stakeholder impact assessments (see guardrail 10). This guardrail does the heavy lifting of translating potential harms and stakeholder concerns into documented risks, which are rated, treated or accepted based on the organisation’s appetite or tolerance for those risks.
What might this look like in practice?
- A review and, if necessary, updates to the existing organisational risk management framework to ensure that it addresses AI risks.
- Development of risk assessment processes and artefacts that cover AI-specific risks (either as an independent process or incorporated into existing privacy/cyber/compliance reviews); see the sketch of a risk register entry after this list.
- Organisation- and system-level AI risk reviews.
- Identification of controls to mitigate risks and harms.
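
To make this more concrete, below is a minimal sketch of what a system-level entry in an AI risk register might capture, written in Python purely for illustration. The structure, field names and the example system are our assumptions rather than anything prescribed by the standard.

```python
from dataclasses import dataclass, field
from enum import Enum


class Rating(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


class Treatment(Enum):
    MITIGATE = "mitigate"
    ACCEPT = "accept"
    AVOID = "avoid"      # eg: don't deploy the system for this use case
    TRANSFER = "transfer"


@dataclass
class AIRiskEntry:
    """One row of a system-level AI risk register (illustrative only)."""
    system: str                       # which AI system the risk relates to
    description: str                  # the potential harm, in plain language
    affected_stakeholders: list[str]  # who could be harmed (links to guardrail 10)
    likelihood: Rating
    impact: Rating
    treatment: Treatment              # based on the organisation's risk appetite
    controls: list[str] = field(default_factory=list)  # mitigations (guardrails 3 to 8)
    owner: str = ""                   # accountable person (guardrail 5)


# Hypothetical example: a resume-screening tool
risk = AIRiskEntry(
    system="resume-screening-assistant",
    description="Model ranks candidates lower based on attributes correlated with gender",
    affected_stakeholders=["job applicants", "hiring managers"],
    likelihood=Rating.MEDIUM,
    impact=Rating.HIGH,
    treatment=Treatment.MITIGATE,
    controls=["pre-deployment bias testing", "human review of all rejected applications"],
    owner="Head of Talent Acquisition",
)
```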
Guardrail 3: Protect AI systems, and implement data governance measures to manage data quality and provenance.
Why is this important?
AI risks are closely tied to privacy, cyber security and data risks. For instance, not knowing the provenance of data used to train an AI system is likely to amplify risks like bias or other unexpected outcomes from the system.
So it makes sense to have a guardrail explicitly identifying the need for appropriate data governance, privacy and cyber security measures to support the management of AI risks. This guardrail also underscores why organisations with mature cyber security, privacy and data governance practices may have a competitive advantage in relation to AI, as these capabilities are likely to better position them to manage AI risks.
Given the longer and more opaque nature of AI supply chains, addressing requirements around data governance and data provenance is likely to be particularly challenging. Poor governance of training data can lead to a range of risks, from bias and poor system performance to infringement of intellectual property rights, Indigenous data sovereignty issues, and breaches of privacy, confidentiality and contractual rights. If suppliers are not transparent about training data, organisations should consider whether this presents an unacceptable risk in their circumstances. See guardrail 8 for more on transparency across the AI supply chain.
What might this look like in practice?
- Review and ensure that existing cyber security, privacy and data governance approaches are sufficient to address the novel risks and requirements presented by AI systems.
- Define, document and manage data used for training or fine-tuning (see the sketch of a provenance record after this list).
- Document and enforce data usage rights for AI systems.
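
As a rough illustration of the documentation this guardrail calls for, here is a minimal sketch of a provenance record for a dataset used in training or fine-tuning. The fields and values are hypothetical and not prescribed by the standard.

```python
# Illustrative only: a minimal provenance record for a dataset used to
# fine-tune an AI system. Field names and values are assumptions, not
# something the standard prescribes.
dataset_record = {
    "name": "customer-support-transcripts-2023",
    "source": "internal CRM export",                 # where the data came from
    "collected": "2023-01-01 to 2023-12-31",
    "contains_personal_information": True,           # flags the need for privacy review
    "usage_rights": "internal model fine-tuning only, per customer terms",
    "known_quality_issues": ["agent names not redacted in a small share of records"],
    "approved_uses": ["fine-tuning internal support chatbot"],
    "approver": "Data Governance Lead",
    "next_review_due": "2025-06-30",
}
```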
Guardrail 4: Test AI models and systems to evaluate model performance and monitor the system once deployed.
Why is this important?
Establishing processes for testing AI not only helps validate the robustness of AI solutions before deployment, but the very act of doing so also helps organisations think through which outcomes and behaviours of these systems are acceptable in their context.
This guardrail’s prescription for continuous monitoring and evaluation is also critical considering the dynamic and evolving nature of AI systems, which require more than just pre-deployment testing.
What might this look like in practice?
- IT change management processes that include mandatory pre-deployment testing, ongoing system evaluation and monitoring, and regular audit for higher risk systems.
- Pre-deployment testing should be against well-defined acceptance criteria which cover AI-specific risks (eg: bias) to ensure that they are adequately controlled (see the sketch after this list).
- Ongoing monitoring should include clear and accessible feedback channels for reporting problems.
- Regular system audit should be based on risk.
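
To show what testing against acceptance criteria could look like in practice, here is a minimal sketch of a pre-deployment check for one AI-specific risk (group bias in approval decisions). The functions, threshold and dummy data are illustrative assumptions on our part; real acceptance criteria should be defined per system and informed by the risk assessment under guardrail 2.

```python
# A minimal sketch of a pre-deployment acceptance check for group bias,
# using an illustrative "four-fifths" style threshold. Not a substitute
# for a properly scoped fairness evaluation.

def selection_rates(predictions, groups):
    """Positive-outcome rate for each group in a test set."""
    rates = {}
    for group in set(groups):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates


def passes_bias_acceptance(predictions, groups, min_ratio=0.8):
    """Acceptance criterion: the worst-off group's selection rate must be
    at least min_ratio of the best-off group's rate."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values()) >= min_ratio


# Dummy predictions from a hypothetical model (1 = approved, 0 = declined)
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
groups =      ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

# Group A is approved at 0.6, group B at 0.2, so the criterion fails
assert passes_bias_acceptance(predictions, groups) is False
```

A check like this would typically sit inside the organisation’s change management pipeline, so that a failing acceptance criterion blocks deployment until the risk is addressed.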
Guardrail 5: Enable human control or intervention in an AI system to achieve meaningful human oversight.
Why is this important?
Trustworthy AI relies heavily on keeping humans accountable for outcomes produced by AI systems. This will be increasingly important as AI systems become more capable and more complex tasks can be fully automated. This is one of three guardrails that help ensure individuals continue to play a central role.
What might this look like in practice?
- Designating a formally accountable individual for AI systems
- Establishing oversight processes covering intervention requirements, monitoring requirements, and training needs for end users and those operating AI systems.
Guardrail 6: Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content.
Why is this important?
This guardrail directly tackles the critical task of building user trust in AI by addressing the common concern that we may be on the receiving end of decisions by AI systems, often in ways we aren’t aware of. The guardrail does this by spelling out requirements around disclosure and transparency.
What might this look like in practice?
- Requirements to disclose use of AI for decisions or to facilitate interactions with any impacted parties.
- A transparency strategy for each AI system, depending on its risk level and context. Different levels of technical detail will be appropriate to effectively explain the use of AI to different stakeholder groups.
- Consideration of the required level of transparency when selecting an AI system (it may be inappropriate to use an opaque system for decisions that have a significant impact on individuals or groups).
Guardrail 7: Establish processes for people impacted by AI systems to challenge use or outcomes.
Why is this important?
This is the third guardrail that seeks to empower individuals, by enabling them to challenge AI decisions. It aligns with the ethical principle of “contestability” and ensures that end-user concerns are not met with a simple ‘computer says no’.
What might this look like in practice?
- Building organisational processes for people to raise concerns about AI decisions and to evaluate those concerns. This may be built into existing complaint and feedback mechanisms.
Guardrail 8: Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.
Why is this important?
To a large extent, this guardrail recognises the challenge faced by deployers of AI systems in seeking to understand how AI systems work, so that they can better identify and manage the risks.
Organisations upstream in the AI supply chain must provide information to those downstream so they can understand the components of an AI system, how it was built, and how to manage the risks of using it. In turn, deployers must share information about their use cases and the performance of AI systems to enable ongoing risk management and continuous improvement.
What might this look like in practice?
- For developers of AI systems, documenting and sharing detailed information about the AI systems that they provide (covering capabilities, source data, architecture, performance and known issues, privacy and security, etc), for example via ‘Model Cards’ or other transparency and documentation approaches (see the sketch after this list).
- For deployers of AI systems, procurement processes that demand adequate information from suppliers, and two-way information sharing that feeds information about use, performance and faults back to developers.
- Clear delineation of responsibility and accountability between developers and deployers for monitoring and evaluation of system performance, oversight and issue management.
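
As an illustration of the kind of information a developer might share downstream, here is a sketch of a ‘Model Card’-style record. The fields and values are hypothetical; the standard does not mandate a particular format.

```python
# Illustrative only: the kinds of fields a developer might share with
# deployers via a 'Model Card' or similar document.
model_card = {
    "model": "doc-summariser",
    "version": "2.1.0",
    "intended_use": "summarising internal policy documents for staff",
    "out_of_scope_uses": ["legal advice", "decisions about individuals"],
    "training_data": "publicly licensed text corpora; no customer data",
    "evaluation": {"summary_quality_score": 0.41, "hallucination_rate": "3% on an internal benchmark"},
    "known_limitations": ["performance degrades on documents over 50 pages"],
    "privacy_and_security": "inputs are not retained or used for retraining",
    "contact": "ai-support@vendor.example",
}
```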
Guardrail 9: Keep and maintain records to allow third parties to assess compliance with guardrails.
Why is this important?
This is about bringing together the information and insights generated by applying all the guardrails and documenting them in order to demonstrate compliance. This requirement strengthens organisational accountability by placing the onus on organisations to demonstrate their compliance with the guardrails.
As the legal and regulatory landscape continues to evolve, and with new laws potentially on the horizon, establishing this kind of record trail for AI governance will become increasingly critical. For example, the Government’s proposed mandatory guardrails for AI in high-risk settings includes a requirement to undertake conformity assessments to demonstrate and certify compliance with the guardrails.
What might this look like in practice?
- Development and maintenance of an AI systems inventory, including key information about each system as well as evidence of compliance with the guardrails (eg: risk assessments (guardrail 2), privacy, cyber and data governance reviews and documentation (guardrail 3), testing results and audit outcomes (guardrail 4), accountable persons (guardrail 5), transparency and challenge processes (guardrails 6, 7 and 8)).
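
To give a sense of what such an inventory might capture, here is a minimal sketch of a single inventory entry that cross-references evidence against the guardrails. The structure, field names and document references are illustrative assumptions only.

```python
# Illustrative only: one entry in an AI systems inventory, cross-referencing
# evidence of compliance against the guardrails.
inventory_entry = {
    "system": "resume-screening-assistant",
    "owner": "Head of Talent Acquisition",                          # guardrail 5
    "risk_level": "high",
    "risk_assessments": ["ai-risk-register-2024-07"],               # guardrail 2
    "privacy_cyber_data_reviews": ["PIA-2024-112", "data-provenance-review-09"],     # guardrail 3
    "testing_and_audit": ["pre-deployment-test-report-v3", "audit-2024-Q4"],         # guardrail 4
    "transparency_measures": "candidate-facing AI disclosure notice",                # guardrail 6
    "challenge_process": "HR complaints channel, response within 10 business days",  # guardrail 7
    "supplier_documentation": "vendor model card v2.1",                              # guardrail 8
    "stakeholder_engagement": "annual candidate and hiring-manager consultation",    # guardrail 10
    "last_reviewed": "2024-10-01",
}
```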
Guardrail 10: Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.
Why is this important?
Given the broad impact of AI systems, this guardrail encourages deployers to better understand likely harms through engaging their stakeholders and identifying potential impacts. This understanding should inform the application of all the other guardrails, but especially risk and impact assessments (guardrail 2). Stakeholder engagement should take place at both an organisational and system level.
What might this look like in practice?
- Mapping and establishing ongoing engagement with key stakeholders that may be affected by the organisation’s use of AI.
- Incorporating stakeholder needs into organisational-level AI policies and risk and impact assessments.
- Documenting interactions with users and potential harms for each AI system, including impacts on diversity, inclusion and fairness.
In conclusion
We see this as a very encouraging contribution from the Government, and we are already having constructive conversations with clients and other organisations about how it can support them to govern AI risks.
As with any standard, working out how to apply it most meaningfully to an organisation’s specific context and maturity will be a key challenge, and one we look forward to being involved in.
Contact us
If you’re interested in learning more about the Voluntary AI Safety Standard or AI governance more broadly, contact us at hello@elevenM.com.au or on 1300 003 922.