elevenM’s Georgia Potgieter discusses risks and lessons relating to the use of AI and Automated Decision Making by governments and other entities that deal with large numbers of individuals.
In a rapidly digitising world, the Australian Government, like many others, is increasingly relying on Automated Decision Making (ADM) systems. These systems, built on machine learning and algorithmic mechanisms, are designed to streamline decision-making processes, enhance efficiency and reduce human error. However, they also raise new and serious concerns surrounding privacy, data security and fairness.
For professionals deeply invested in privacy and cyber security, it is crucial to understand the complex challenges presented by the adoption of ADM. This article will delve into the risks of using ADM for governmental decision-making and explore a recent Services Australia snafu as a case study. It will also examine how New Zealand is setting a standard for transparency in the realm of automated decision-making.
What is Automated Decision Making (ADM)?
In its recent Safe and Responsible AI discussion paper, the Australian Government’s Department of Industry, Science and Resources proposed the following definition of Automated Decision Making (ADM):
…the application of automated systems in any part of the decision-making process. Automated decision making includes using automated systems to:
- make the final decision
- make interim assessments or decisions leading up to the final decision
- recommend a decision to a human decision-maker
- guide a human decision-maker through relevant facts, legislation or policy
- automate aspects of the fact-finding process which may influence an interim decision or the final decision.
This definition, while comprehensive, has not yet been legislated. Although international standards and definitions have been formalised in some jurisdictions, the local definition is likely to undergo further discussion, refinement, and eventual legal codification. (Read elevenM’s submission to the Government’s AI consultation here.)
Examples of ADM in real life
In practice, automated systems range from traditional rules-based systems to specialised technological systems that use automated tools to predict and deliberate. For example, a long-standing form of ADM is its use in HR contexts for tasks such as resume screening and employee performance evaluations.
More advanced forms of ADM are becoming popular in the financial sector, including in approval processes for loans and insurance policies. ADM can also assist with fraud detection, where it scans for irregular patterns and suspicious activities across vast datasets to flag potential fraudulent transactions for further investigation.
At the cutting-edge, ADM can also be employed in medical diagnostics. Algorithms are now able to analyse medical data to assist in identifying conditions and recommend treatment options at significant speed.
These examples highlight that it is likely we will see a significant rise in ADM as the technology progresses, spanning a broad array of sectors from social services and healthcare to criminal justice and regulatory compliance.
The risks and harms of ADM in government
Despite its promise, there’s good reason to be cautious when it comes to using ADM, particularly in contexts involving large numbers of people. This is well illustrated by the experience of the Australian Government, which does not have a good track record with ADM.
The Robodebt scandal was a notorious episode involving the Government’s use of automated data-matching to identify welfare debts. Implemented by the Department of Human Services (now Services Australia), the system cross-referenced welfare recipients’ reported income with Australian Taxation Office data and automatically issued debt notices for any discrepancies. The lack of human oversight and systemic inaccuracies led to a class-action lawsuit and allegations that the stress caused suicides, and eventually forced the Government to suspend the scheme and agree to pay $1.2 billion in compensation to approximately 400,000 wrongly assessed individuals. Culminating in a Royal Commission, the scandal serves as a stark warning about the consequences of inadequately regulated automated decision-making systems.
More recently, we learned about errors in Services Australia’s child support assessments. Using an automated decision-making system, the agency inaccurately calculated child support payments, affecting thousands of families across Australia. The error was attributed to a flaw in the underlying algorithm, which was subsequently rectified.
This incident only fully came to light after the matter was raised with the Commonwealth Ombudsman. Services Australia initially did not intend to contact former clients affected by the 15,803 potentially inaccurate child support assessments. However, the agency reversed its decision after the Ombudsman cautioned that failing to do so could financially disadvantage affected parents.
Transparency … the Kiwi way
A common and critical issue in both debacles above is the lack of transparency in how decisions were made. With ADM systems becoming increasingly complex, it can be challenging for laypeople and even experts to understand how a decision was reached.
Across the Tasman Sea, New Zealand is taking strides to ensure transparency in its automated decision-making systems. For example, the Ministry of Social Development’s website provides clear information about how ADM is used for child support assessments, including the criteria used for decision-making, thus providing insight into how the system works. This transparency helps build the public’s confidence that the system is both fair and accountable.
Lessons to be learned
Australia could take a page out of New Zealand’s playbook when it comes to transparency and openness about ADM systems. Transparency is not just good practice; it’s essential for building public trust and ensuring ethical compliance. Being open about the criteria and methods for automated decision-making also enables independent audits, which can uncover flaws or biases in the system.
While ADM systems offer numerous benefits, including efficiency and scalability, they are not without risks, especially in the realms of data security, ethics, and transparency. The recent incidents serve as cautionary tales that demand immediate attention. By looking to New Zealand’s example, Australia can find ways to improve its approach to ADM, ensuring that technology serves as an asset rather than a liability.
Get in touch
For expert guidance and assistance with your ADM processes, reach out to elevenM: email hello@elevenM.com.au or phone 1300 003 922.