96% of Australians want conditions in place before organisations use artificial intelligence to make decisions that might affect them.
Organisations face increasing pressure to adopt AI to realise a wide range of benefits, including improved productivity, efficiency and customer service. But getting AI wrong can harm staff and customers and lead to long-term reputational damage and financial losses.
AI systems present a novel and heightened risk profile compared to traditional software. They tend to be less predictable and less transparent, and because of the transformative nature of the technology, AI projects are more prone to legal and ethical concerns.
Our AI Risk Assessment methodology brings together evidence-based best practice from leading research organisations and standards bodies, tailored to the Australian context and scaled to your risk appetite. With a structured, scalable assessment process to manage AI risk, your organisation can move quickly and confidently with AI.
elevenM’s AI Risk Assessment, or AIRA, is a structured, scalable process to identify, assess and mitigate risks relating to an AI project or initiative.
Our methodology is informed by global best practice and draws on tools and resources from the Commonwealth Scientific and Industrial Research Organisation (CSIRO) and the US National Institute of Standards and Technology (NIST), among others.
An AIRA is similar in methodology to a Privacy Impact Assessment, but engages more deeply with the specific dynamics of AI systems and takes a broader scope. An AIRA will help you:
Support projects to make smart risk decisions
Identify and manage your project's AI risks
Build long-term trust in your brand among customers, staff and others
Access the benefits of AI with confidence
elevenM will work with you to customise our risk assessment process for your organisation’s specific circumstances and needs.
Our team of consultants tailor every engagement to meet the unique circumstances of every client.
You will gain a comprehensive assessment of the potential implications of your AI project, along with specific, prioritised remediation activities to uplift controls relevant to the assessed project, practice or technology.
Throughout the engagement, we use specialised tools to keep the assessment process clear, consistent and well documented.
Together, we define what you want to assess, what types of issues you would like covered and how you would like our findings reported to you.
Starting with a kick-off meeting, we consult with stakeholders to collect the information we need for the assessment, including mapping the context, characterising AI systems, documenting goals and understanding impacts.
We carry out a detailed assessment, identifying compliance issues, AI risks and opportunities for improvement. This may include identifying relevant characteristics and metrics for AI system trustworthiness, as well as controls for managing AI risks pre- and post-deployment.
We consult with you on our findings and provide you with a detailed report.
We work with you to identify and document practical actions you can take to manage any issues identified in our assessment.
While your organisation may already have risk assessment processes covering risks like privacy and cyber security, AI systems can introduce risks not covered by these processes (for instance, the introduction of bias). An AI risk assessment takes a broader view of the risks AI could introduce.
We work with every organisation individually to tailor our work to your needs.