3 June 2025

What’s in an AI risk assessment?

Piotr Debowski
Manager

elevenM’s Piotr Debowski breaks down what is involved in an AI risk assessment, where it fits within an overall AI governance framework and why privacy professionals might find it more familiar than they first thought.

Last year saw a flurry of activity in Australian AI guidance with the release of the Australian Government’s Voluntary AI Safety Standard in September and further consultations on introducing mandatory guardrails for AI in high-risk settings. The Australian Government also adopted a Policy for the responsible use of AI in government, which requires all Commonwealth departments and agencies to appoint an accountable official and publish a transparency statement about their use of AI, and we saw Federal, State and Territory governments sign up to a national framework for the assurance of artificial intelligence in government.

These developments provided welcome guidance for Australian agencies and organisations looking to reap the benefits and capabilities AI promises, and helped organisations identify how to establish good AI governance. Assigning appropriate responsibilities, ensuring appropriate training is provided and establishing an AI impact assessment tool (among other things) will stand an organisation in good stead for establishing AI governance in the long run.

But, drilling down below the enterprise-wide governance layer, what are the considerations organisations need to address when adopting specific AI systems? What are the questions you need to ask the AI system vendor? How do you know it’s a reliable system that operates in the way the vendor said it would? Enter the AI risk assessment.

AI governance frameworks v assessments — what’s the difference?

First things first, there’s a difference between a governance ‘framework’ and a risk or impact ‘assessment’.

AI governance frameworks are aimed at helping you establish a process to manage or govern risks associated with the deployment or development of various AI systems generally. Governance frameworks are about establishing consistent and uniform practices, policies and procedures to be applied across the organisation as a whole. For privacy professionals, we can liken these to your full privacy program, including the development of key policies, establishing roles, responsibilities and who might have oversight, complaint escalation pathways, training and awareness, assurance and so on. For example, ISO/IEC 42001:2023 talks about implementing appropriate leadership, planning, support, operation, performance evaluation, and improvement to manage AI risks. Similarly, the Australian government’s Voluntary AI Safety Standard suggests implementing guardrails such as accountability processes, data governance, testing and evaluation etc.

By contrast, AI risk assessments are a structured analysis aimed at identifying, evaluating, and managing risks specific to the deployment of a particular AI system (e.g., the adoption of ChatGPT, Copilot, or some other AI chatbot service). For privacy professionals, we can liken these to a privacy impact assessment (PIA). A governance framework might include a requirement to conduct AI risk assessments for all new AI deployments, whereas the AI risk assessment itself comprises a series of questions that prompt you to think critically about the AI system itself; examples of these questions are set out in the sections below.

Choosing an AI risk assessment approach

Once you’ve determined that an AI risk assessment is what you need, how do you determine which approach or tool is most suited to your needs? Well, AI risk assessments tend to group their content according to responsible AI principles, but there is no international consensus on these principles, so the groupings can vary between tools.

My advice for Australian practitioners is to use or develop an AI risk assessment that groups its evaluation around the Australian AI Ethics Principles. The CSIRO created a question bank to assist with this, drawing questions from five leading AI governance frameworks and assessments and mapping them to the Australian AI Ethics Principles. A concept map showing their results is below:

[Concept map: questions from five leading AI governance frameworks and assessments mapped to the Australian AI Ethics Principles]
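To make the idea of principle-grouped questions concrete, here is a minimal, hypothetical sketch (in Python) of how a question bank keyed to the Australian AI Ethics Principles could be organised. The questions shown are illustrative examples drawn from later in this article, not the CSIRO question bank itself.

```python
# Hypothetical sketch only: a question bank grouped by the Australian AI
# Ethics Principles. The questions are illustrative, not the CSIRO bank.
QUESTION_BANK = {
    "Human, societal and environmental wellbeing": [
        "How does the AI system impact individuals, society, or the environment?",
    ],
    "Fairness": [
        "What processes are in place to evaluate outputs for fairness, selection and measurement bias?",
    ],
    "Transparency and explainability": [
        "How easy is it to understand and explain how the AI system arrives at its outputs?",
    ],
    "Accountability": [
        "What mechanisms are in place to facilitate the AI system's auditability (e.g., logging)?",
    ],
}

# A tool built on this structure can simply walk the bank to drive a
# structured questionnaire or workshop agenda.
for principle, questions in QUESTION_BANK.items():
    print(principle)
    for question in questions:
        print(f"  - {question}")
```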

What to assess in an AI risk assessment?

As I’ve said, all assessments (whether a PIA or an AI risk assessment) are structured in a similar way. The following is a summary of the key components that are relevant to AI systems. I’ve also included some questions that could act as prompts; these aren’t based on any one particular established AI risk assessment and aren’t intended to be an exhaustive list.

  1. Gather background information
  2. Identify and document risks
  3. Address or manage risks
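As a rough illustration of how these three steps fit together, the sketch below (hypothetical, in Python) models a single assessment record with a section for each step. The field names and scales are placeholders, not a prescribed template.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an AI risk assessment record, with one section per
# step: background information, identified risks and the treatment decision.
@dataclass
class Risk:
    principle: str       # e.g. "Fairness"
    description: str     # what could go wrong
    likelihood: str      # e.g. "Possible" -- use your enterprise risk scale
    severity: str        # e.g. "Major"
    treatment: str = ""  # mitigation, acceptance or rejection

@dataclass
class AIRiskAssessment:
    system_name: str
    background: dict = field(default_factory=dict)    # step 1
    risks: list[Risk] = field(default_factory=list)   # step 2
    decision: str = ""                                 # step 3: accept / mitigate / reject
```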

Gather background information

Start by developing an understanding of, and documenting, the AI system and the broader environment it will operate in. Focus on the following (an illustrative example follows this list):

  • The model itself — What type of model is being used (e.g., is it a large language model (LLM), a deep learning model, or is it based on simpler techniques such as decision trees or linear regression)? How has the model been developed? How has the model performed in the real world? Does the chosen model have known strengths or shortcomings?
  • The intended uses — What is the purpose that the AI system will be deployed for? What features does the AI system have that enable it to achieve these? How will it interact with other systems, processes, or stakeholders?
  • The anticipated benefits — What benefits does the AI system promise over conventional methods? How will these be measured and evaluated?
  • The data — What data has the model been trained on? What data will be fed into the AI system to produce outcomes? Where has this data been sourced from?
  • The people — Who is involved and what is their role across the lifecycle of the AI system (e.g., who created it, is responsible for deploying it, who will use it)? What level of knowledge and familiarity do they have with the AI system?
  • Geographic area and languages — Where was the model and dataset developed and is this different to where the AI system will be deployed? What cultural or language differences arise?
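To show how the answers to these prompts might be captured, here is a short, purely illustrative example for a hypothetical customer-service chatbot deployment; none of the details are drawn from a real system.

```python
# Illustrative background section for a hypothetical deployment; the values
# are invented placeholders, keyed to the prompts above.
background = {
    "model": "Vendor-hosted large language model, fine-tuned on historical support tickets",
    "intended_uses": "Drafting first-response emails for the customer service team",
    "anticipated_benefits": "Faster response times, measured against existing SLA reporting",
    "data": "Historical support tickets; sourced internally, personal information redacted",
    "people": "Vendor develops and hosts the model; team leads review outputs before sending",
    "geography_and_languages": "Model trained largely on US English data; deployed in Australia",
}
```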

Identify and document risks

Now comes the fun part: identifying all of the different risks that the deployment of the AI system presents. This is similar to how, in a PIA, you identify all of the privacy risks the activity presents by looking at the Australian Privacy Principles (APPs). The difference is that, to identify AI risks, we’re going to use the responsible AI principles instead of the APPs.

Australian practitioners may find it helpful to use the Australian AI Ethics Principles, framing each ethical principle as a goal and thinking about ways the deployment of the AI system could detract from that goal (an example of how the resulting risks might be recorded follows this list):

  • Human, societal and environmental wellbeing — How does the AI system impact individuals, society, or the environment?
  • Human-centred values — Will the AI system be involved in automated decision-making (ADM) and, if so, have you evaluated quality assurance processes and human oversight as part of this ADM? Could the AI system affect human autonomy by encouraging over-reliance by users?
  • Fairness — What processes are in place to evaluate outputs for fairness, selection and measurement bias?
  • Privacy protection and security — Have your organisation’s privacy and cyber security policies and processes been applied, including a PIA and cyber security assessment?
  • Reliability and safety — What safety risks arise from the deployment of the AI system? What data quality issues may arise as a result of the model being trained on poor or foreign-sourced data? How often are repeatability assessments, counterfactual testing and exception-handling testing carried out? Have you considered ways in which the AI system could be misused or exploited by a malicious actor?
  • Transparency and explainability — How easy is it to understand and explain to others how the AI system operates and arrives at any outputs? What processes are in place to provide these explanations to users?
  • Contestability — Have you established a process that allows users to provide feedback and complain about the use of the AI system or any outputs? If the AI system is involved in ADM, is there a process for review of a decision by a human?
  • Accountability — What mechanisms are in place that facilitate the AI system’s auditability (e.g., logging)? What are the roles and responsibilities for those involved in the AI system’s lifecycle? Have those stakeholders been given adequate training on the AI system / their interaction with it?
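Working through these principles for the hypothetical chatbot example above might surface entries like the following. This is a sketch only; the risks and ratings are invented for illustration and should come from your own analysis.

```python
# Illustrative risk register entries, recorded against the principles above.
# Ratings should follow whatever scale your enterprise risk matrix uses.
identified_risks = [
    {
        "principle": "Fairness",
        "risk": "Training data under-represents some customer groups, so drafted responses may be biased",
        "likelihood": "Possible",
        "severity": "Major",
    },
    {
        "principle": "Transparency and explainability",
        "risk": "Staff cannot explain to customers how a generated response was produced",
        "likelihood": "Likely",
        "severity": "Moderate",
    },
    {
        "principle": "Accountability",
        "risk": "Prompts and outputs are not logged, limiting auditability",
        "likelihood": "Likely",
        "severity": "Major",
    },
]
```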

Address or manage risks

It’s now time to assess the risks you’ve identified, prioritising them by severity and the likelihood of them eventuating. If your organisation has an enterprise risk policy and matrix, these may be useful here. Common risks for an organisation, as in the privacy space, include legal, financial and reputational risks. Some risks can be managed through typical third-party risk measures, such as including specific contractual clauses or limiting the types of use cases for the AI system. In other cases, having stepped through the assessment, you may find the AI system or tool presents too high a risk to be deployed, and alternative tools may need to be sought.
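One common way to prioritise is a simple likelihood-by-severity matrix, as in the sketch below. The scales, thresholds and bands are placeholders; if your organisation has its own enterprise risk matrix, use that instead.

```python
# Hypothetical sketch of prioritising risks with a likelihood x severity
# matrix. Replace the scales and thresholds with your organisation's own.
LIKELIHOOD = {"Rare": 1, "Unlikely": 2, "Possible": 3, "Likely": 4, "Almost certain": 5}
SEVERITY = {"Minor": 1, "Moderate": 2, "Major": 3, "Severe": 4}

def risk_rating(likelihood: str, severity: str) -> str:
    """Combine likelihood and severity scores into a coarse priority band."""
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 12:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"

# Example: the fairness risk identified above.
print(risk_rating("Possible", "Major"))  # 3 x 3 = 9, so "Medium"
```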

Finally, just as a PIA would, your AI risk assessment should document the organisation’s decision to accept the risks or implement risk mitigation strategies (including what these are and any residual risk that remains).

Contact us

If you’re deploying an AI tool or solution and need help with AI governance or assessments, get in touch.