Over the last two decades, digital innovation has driven both a proliferation of artificial intelligence (AI) systems and automated decision-making (ADM) technology and a transformation in how governments use them.
In Victoria, the deployment of AI systems has grown substantially, and the use of these technologies can be seen across many sectors. In light of this, the Office of the Victorian Information Commissioner (OVIC) has released a number of resources on privacy and AI for organisations, including public statements about the use of ChatGPT and Copilot. Most recently, OVIC has sought public consultation on AI privacy guidance for Victorian Public Sector Organisations (VPSOs), to ensure the guidance is clear, relevant, practical, accessible and useful.
At elevenM, our goal is to help build trust in an online world, because trust is crucial for all digital innovation. Regulation needs to ensure that AI development and use align with community values and expectations. That means all AI regulation and regulatory advice must include a strong focus on building and sustaining trust.
In our submission to OVIC, we argue that a whole-of-Victorian-Government strategy on AI systems is called for, including guidance and a mandatory risk assessment, so that all risks inherent in the use of AI systems, by both VPSOs and their developers, are considered, not just privacy risks.
Further, we assert that the guidance should:
- be structured around the AI lifecycle rather than detailing the key privacy risks under each Information Privacy Principle (IPP), because the same IPP can arise at different stages of the lifecycle and require slightly different considerations
- include a section that explains what AI systems are, covering the different types of AI systems and their uses, with definitions of common terminology and examples
- include guidance on when OVIC considers data to be personal and/or health information, by reference to common types of datasets across the AI lifecycle and to more novel datasets used or generated by AI systems specifically
- include guidance on the methods through which de-identification can be achieved, as well as promote higher standards (such as anonymisation) for AI systems, given the higher risk of re-identification
- provide more detail about using technology for automated decision-making. We note that AI systems are not always involved in ADM and that the risks do not always relate to privacy; however, given the impacts this technology can have on people’s lives, it is essential that it is discussed in this guidance
- offer guidance on outsourcing and third-party AI systems due to the additional risks these systems pose to VPSOs.
elevenM’s submission builds on our expanding work in AI and data governance, and in helping organisations adopt ethical and responsible AI. This includes our role leading the consortium delivering one of the Federal Government’s new AI Adopt centres, which will guide Australian small and medium-sized businesses through the design and implementation of AI solutions.
Download our submission to read our responses to the specific questions posed by the Government.