20 March 2023

How to manage the risks of AI and maximise its value

Piotr Debowski
Manager

elevenM’s Piotr Debowski explores a new framework to help organisations manage AI risks – NIST’s AI risk management framework. He describes how it works and when you should consider using it to manage AI risks.

Technology has come a long way since the 1950s when the first artificial intelligence (AI) systems were created. Today, AI systems can predict weather patterns to assist with disaster management, recognise faces to assist law enforcement, and even hold conversations that appear human. The emergence of ChatGPT shows that the race is on to make the most of AI.

At the same time, there has been growing public and regulatory concern about the use and impact of AI systems on society and how to manage AI risks. Australians in particular are concerned about “surveillance and loss of privacy, alongside the misuse of AI technology by governments and companies with malintent.”

Around the world, standards developers and legislatures have been scrambling to address such concerns. For example, the European Union has drafted and is seeking to introduce an AI Act to regulate AI systems, whilst Australia has set out its AI ethics principles in the AAIE Framework.

More recently, the National Institute of Standards and Technology (NIST) in the US has released version one of its AI Risk Management Framework (RMF) and accompanying Playbook. The AI RMF aims to help organisations “minimize anticipated negative impacts of AI systems and identify opportunities to maximize positive impacts.” NIST has previously developed well-recognised and widely used frameworks for cybersecurity and privacy, so the AI RMF is a welcome contribution to this emerging area.

But just how does the AI RMF tackle these issues? How does it compare with other standards and why might it be a good standard to implement in order to manage AI risks? Read on to find out.

Why should I consider the NIST AI RMF to manage AI risks?

The community concerns mentioned above reflect many of the inherent risks of AI systems. These include a lack of transparency in how AI systems operate, decisions by AI systems that propagate bias, and the vast amounts of data that AI systems are often fed in order to operate.

These risks mean that businesses and government institutions stand to lose a great deal should the design or deployment of an AI system go wrong, as we have recently seen in the Australian Government’s recall of its automated welfare debt recovery process. Without mitigating AI risk, it will be impossible to make the most of AI.

There are a variety of standards and regulations out there (depending on which jurisdiction you operate in) that will help you make the most of AI. But so far the NIST AI RMF is leading the charge, for a variety of reasons:

  • Clear and simple process and actions – unlike the AAIE Framework, which is principles-based, the NIST AI RMF provides a clear four-stage process for organisations to follow. This is accompanied by suggested actions in the Playbook and, hopefully one day soon, use-case or industry-specific risk profiles to assist with implementation.
  • Non-sector specific – the NIST AI RMF is non-sector specific, meaning it is useful both for those involved in designing and developing AI systems and for those responsible for deploying them within their organisation.
  • Voluntary and flexible – unlike traditional legislative approaches, which are often limited by targeting only certain use-cases, kinds of AI systems, or jurisdictions, the NIST AI RMF is flexible and accommodates a range of scenarios. It’s also voluntary, which means there’s hope that widespread uptake will reinforce ‘carrot’ cultures rather than ‘stick’ compliance.

Accordingly, you should be considering the NIST AI RMF where:

  • You’re a developer and your AI system will handle personal information or may impact people’s privacy or other human rights.
  • You’re an organisation about to deploy an AI system.
  • You’re an organisation with an advanced data analytics capability and are interested in improved strategies to manage AI risks.

How do I use the NIST AI RMF?

The NIST AI RMF itself is divided into two parts: Part 1 explains the key risks associated with the use of AI and the framework’s focus on developing trustworthiness, while Part 2 provides a four-step process for organisations to identify and manage those risks.

The four steps mirror conventional risk management approaches, namely to ensure the right culture, policies and management practices are in place (Govern), to understand the organisational context in which risks arise (Map), to effectively identify and measure risks (Measure) and to prioritise and action those risks (Manage).

To assist organisations in implementing the four-step process, NIST has also released a Playbook which provides a series of suggested actions and further practical information.

Let’s explore these steps in more detail.

Step 1: Govern

Govern is a ‘cross-cutting function’, meaning it underpins the other three stages. It involves developing processes and documents that connect the technical aspects of an AI system to your organisation’s values and regulatory framework.

Key outcomes might include:

  • Developing policies and procedures to outline the business justification for the use of the AI system, identify risk appetite, and plan for deployment and subsequent change management.
  • Developing training on legal or ethical considerations that may impact the AI system’s design or deployment, for organisational staff involved in those stages.
  • Implementing accountability structures so that roles and responsibilities are clearly identified for the next three stages (one simple way to record these outcomes is sketched below).
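
To make these outcomes concrete, here is a minimal sketch in Python of how an organisation might record its Govern decisions for each AI system in a structured, auditable form. It is an illustration only – the field names and values are our own assumptions, not anything prescribed by NIST:

from dataclasses import dataclass
from enum import Enum


class RiskAppetite(Enum):
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"


@dataclass
class GovernanceRecord:
    """One entry in an AI system inventory, capturing Govern outcomes."""
    system_name: str
    business_justification: str        # why the AI system is being used
    risk_appetite: RiskAppetite        # the appetite approved for this system
    accountable_owner: str             # who is responsible for Map, Measure and Manage
    deployment_plan: str               # link to the deployment/change management plan
    staff_training_completed: bool = False


record = GovernanceRecord(
    system_name="customer-support-chatbot",
    business_justification="Reduce first-response times for support enquiries",
    risk_appetite=RiskAppetite.LOW,
    accountable_owner="Head of Data & Analytics",
    deployment_plan="https://intranet.example/plans/chatbot-rollout",  # hypothetical link
)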

Step 2: Map

Map involves establishing the context within which risks related to the use of an AI system can arise. This is achieved by looking at an AI system’s life cycle and its interactions within your organisation.

Key outcomes might include:

  • Documenting information about how the AI system operates (e.g. settings, environment, data). This will help your staff make informed decisions when interacting with the AI system.
  • Documenting identified risks (achieved by reviewing documentation that accompanies an AI system, e.g. the end user licence agreement), impacts (both beneficial and harmful), and their likelihood of occurrence – one way to structure such a risk register is sketched after this list.
  • Documenting the limitations of the AI system and establishing procedures for human oversight or intervention.
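
Again as an illustration only (the structure and scales below are our own assumptions, not NIST’s), a Map-stage risk register might look something like this in Python:

from dataclasses import dataclass
from enum import IntEnum


class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class MappedRisk:
    """One row of an AI risk register produced during the Map stage."""
    description: str
    source: str                 # e.g. "EULA review", "stakeholder workshop"
    likelihood: Level           # how likely the risk is to occur
    impact: Level               # severity of the negative impact if it does
    human_oversight: str = ""   # procedure for human intervention, if any


risk_register = [
    MappedRisk(
        description="Model propagates bias in automated decisions",
        source="Review of the vendor's end user licence agreement and model card",
        likelihood=Level.MEDIUM,
        impact=Level.HIGH,
        human_oversight="Adverse decisions are escalated to a human reviewer",
    ),
]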

Step 3: Measure

Measure involves employing a variety of tools and methods to analyse and monitor the risks mapped in the earlier stage.

Key outcomes might include:

  • Establishing and routinely utilising appropriate methods to evaluate an AI system against risks and impacts – a simple example of one such method follows this list.
  • Establishing and routinely engaging with feedback processes to continually improve the AI system.
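
The AI RMF doesn’t mandate any particular evaluation method, but quantitative metrics are one common approach. As a hedged example, here is how you might compute demographic parity difference – a widely used bias measure – for a binary classifier, using plain Python and NumPy:

import numpy as np


def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups.

    A value near 0 suggests the model selects members of each group at
    similar rates; larger values may indicate disparate impact worth
    investigating further during the Measure stage.
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(float(rate_a) - float(rate_b))


# Toy data: predictions for eight applicants across two demographic groups.
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 1])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.5 – worth a closer look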

Step 4: Manage

Manage is about regularly allocating resources to the risks mapped and measured in the earlier stages, to decrease their likelihood and negative impacts.

Key outcomes might include:

  • A decision is made about whether to deploy the AI system by weighing the negative risks and impacts identified against the organisational benefits – one simple way to support that decision is sketched below.
  • Procedures are followed to respond to and recover from a risk when it occurs.
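
To illustrate how Manage can build on the earlier stages, here is a hedged sketch that ranks mapped risks by a simple likelihood × impact score and flags any that exceed a declared risk appetite before deployment. The scoring scheme and threshold are our own assumptions, not part of the framework:

# Risks carried forward from the Map and Measure stages:
# (description, likelihood, impact), each scored 1 (low) to 3 (high).
risks = [
    ("Model propagates bias in automated decisions", 2, 3),
    ("Training data includes unvetted personal information", 3, 2),
    ("Responses occasionally cite fabricated sources", 2, 2),
]

RISK_APPETITE = 4  # maximum tolerable likelihood x impact score (assumed policy)

# Prioritise the highest combined scores first, so resources go to the worst risks.
prioritised = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for description, likelihood, impact in prioritised:
    score = likelihood * impact
    status = "mitigate before deployment" if score > RISK_APPETITE else "monitor"
    print(f"[{score}] {description}: {status}")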

Key considerations to make the most of AI

Despite the NIST AI RMF’s novel approach to managing AI risks, there are a few considerations for organisations seeking to employ the RMF to keep in mind:

  • Still largely descriptive and subject to further change – in developing the AI RMF, NIST engaged with a wide range of stakeholders, bringing together subject matter expertise and special interest groups. However, this has left much of the RMF itself descriptive in places where opinions varied on how prescriptive to be. Although the Playbook attempts to bridge this gap, both are subject to further changes and updates. Reva Schwartz from NIST has said that she hopes the AI RMF itself will remain stable for the next 3-5 years, and expects the Playbook to be updated once or twice a year.
  • Challenges in approaching MAP and MEASURE – there is currently (as NIST itself recognises) a lack of verifiable and robust measurement methods. Organisations may therefore find it difficult to achieve key outcomes within the MAP and MEASURE stages.
  • No risk profiles, yet – given that the AI RMF is non-sector specific, NIST has also opened the door to the development and use of risk profiles. These are templated applications of the AI RMF that capture insights into how use-case or industry-specific AI risks arise and can be managed. Whilst the hope is that industries will come together and build risk profiles relevant to their field, widespread uptake and the utility of this approach have yet to be seen.

You can find the AI RMF, the Playbook, and other material by visiting NIST’s webpage. 

If you’re interested in learning more about the AI RMF, or in how our consultants can help you integrate trustworthiness with your AI system’s design or deployment and make the most of AI, contact us.