elevenM’s Brett Watson discusses developments in regulation and governance (and some recurring issues) around implementing facial recognition.
In the 1993 film Groundhog Day, Bill Murray’s TV weatherman character gets stuck in a supernatural time loop, waking up every morning on the 2nd of February and experiencing the same day on repeat.
It’s a similar feeling for the privacy-minded when it comes to facial recognition technology. It seems like every other day there’s a story in your news feed about the technology being misused, and the predictable, recurring consequences that follow.
Same story, different countries
2023 saw effectively the same story play out across the world:
Australia
The Office of the Australian Information Commissioner (OAIC) is investigating the personal information handling practices of Bunnings and Kmart. The investigation followed an article by consumer advocacy organisation Choice in June 2022, which revealed that Bunnings and Kmart were using facial recognition technology in their stores with limited notice to the public. The OAIC has advised that the investigation is ‘significantly advanced’.
New Zealand
Consumer advocacy organisation Consumer NZ published an article in November 2022 identifying that major retailer Foodstuffs North Island had deployed facial recognition technology in 29 supermarkets. The Office of the Privacy Commissioner (OPC) engaged constructively with Foodstuffs, which then decided to turn off the cameras already in use and to undertake a controlled trial of the technology before any further rollout.
Canada
Prompted by media reports, the Information and Privacy Commissioner for British Columbia launched an investigation into retailer Canadian Tire, which was completed in April 2023. The investigation found that some Canadian Tire stores were using facial recognition technology without adequate notice, consent, or a reasonable purpose.
United States
In December 2023 the Federal Trade Commission announced that pharmacy chain Rite Aid was banned from using facial recognition technology for five years after deploying it without reasonable safeguards. In news that was as unfortunate as it was predictable, the technology had been falsely identifying consumers, particularly women and people of colour, as shoplifters.
What is going on here?
There are some common threads, and something of an identifiable progression, running through these examples:
- advances in facial recognition technology in recent years have meant that purchasing and deploying a facial recognition solution is no longer prohibitively expensive or complex
- the use of facial recognition technology in a retail store or a similar public setting is not usually proactively communicated to the public
- civil society advocacy groups raise the alarm, and privacy regulators are called upon to do something about it
- the organisation that has deployed the technology defends it as a safety and security measure that protects the store and its employees from aggressive customers and shoplifting.
Facial recognition technology is not inherently ‘bad’. In a controlled setting (for example, comparing a high-resolution image against a still, live face in a well-lit room), facial recognition algorithms are generally at least as accurate as professionally trained humans. Performance varies significantly, however, when algorithms attempt to identify faces in crowded settings or to recognise a face from a low-quality image such as a CCTV screenshot. As with any technology, the key issue is whether it is being used in the way it was designed to be used, and in a proportionate manner.
One can appreciate the perspective of the security and safety teams at retailers, stadiums, and other places where facial recognition is increasingly deployed. Using facial recognition to identify and exclude people who are known to have caused harm to staff or equipment is an apparently cost-effective way to increase safety and security, which is the very outcome these teams are measured against.
Nevertheless, the unavoidable truth is that the possible harms of facial recognition technology are real and well documented. The broader ‘chilling effect’ on the community that comes with the widespread use of identifying technologies is also undeniable.
Facial recognition technology can be used to make or influence decisions at a scale that is well beyond human capability alone — for better and for worse.
Breaking out of Groundhog Day
After railing against the time loop for a while, Bill Murray’s weatherman eventually realises that thinking about things from a new perspective (i.e., just not being a total tool) is the way to break free from Groundhog Day.
A different way of thinking is also what we need when it comes to privacy and facial recognition.
Facial recognition technology involves the collection and use of biometric information, which is categorised as sensitive information under the Privacy Act and requires individuals’ consent. A proposed reform to the Privacy Act ‘agreed in principle’ by the government would put existing OAIC guidance into law, namely that consent must be informed, current, specific and voluntarily provided by a person with the capacity to do so.
Implementing this consent model in a large-scale scenario like a crowd of 100,000 at the Melbourne Cricket Ground is where things can get tricky, particularly regarding voluntariness. How can each person in the crowd be informed and voluntarily provide their unambiguous consent to the use of facial recognition? Through a hyperlinked policy next to a checkbox when they bought their ticket two months ago? Are there crowd flow safety hazards if all patrons are asked to stop and read signage? What if someone doesn’t indicate their consent? Are they excluded?
The Privacy Act does not require consent where an organisation can rely on an exception to the consent requirements, such as in relation to an enforcement activity or another legislatively permitted purpose. Relying on an exception is undesirable for organisations because there can be uncertainty about whether the exception applies. It is doubly undesirable for individuals, because the absence of a consent mechanism effectively curtails their right to be properly informed about facial recognition and to make a choice about whether to engage.
The OPC in New Zealand is currently developing a privacy code for the collection and use of biometric information. Following an initial community consultation, the OPC landed on three key proposals that will feature in the draft code.
1. A proportionality assessment — whether the reasons for using biometrics outweigh the privacy intrusion or risks.
2. Transparency and notification requirements — greater obligations to be open and transparent about the collection and use of biometric information.
3. Purpose limitations — restrictions on collecting and using biometric information for certain reasons.
At the iapp ANZ Summit late last year, Deputy Commissioner Liz MacPherson from the OPC noted that a consent requirement was also closely considered as a proposal for the code but was ultimately discarded: in practice, there would likely be too many exceptions available, diluting the purpose of obtaining consent in the first place.
This approach is pragmatic, useful as guidance for all organisations, and indicative of the limitations of a consent-based approach to regulating facial recognition technology. And, if proposals 1 and 3 above seem familiar to you, it’s probably because they look just like the ‘fair and reasonable’ test that has been ‘agreed in principle’ by the government in the Privacy Act review.
As it currently stands, the OAIC has indicated that it will read a consideration of proportionality into Australian Privacy Principle 3 (collection of personal information). In its determination against convenience store group 7-Eleven, the OAIC found that 7-Eleven had collected sensitive information in breach of APP 3.3 in circumstances where the collection was not reasonably necessary for its functions and activities, and where 7-Eleven had not obtained valid consent. Most deployments of facial recognition, however, will probably not be as obviously disproportionate as the circumstances of that case.
Despite the principles-based nature of the Privacy Act, facial recognition technology is one area where the law has not kept pace with developments in technology and society. Legislating the fair and reasonable test (or even just giving us something a bit stronger than an ‘in principle’ commitment) could well be the different way of thinking about regulating facial recognition that the community needs.
My organisation is considering facial recognition — what should I do?
Conducting a privacy impact assessment (PIA) is the obvious, immediate action for any organisation considering facial recognition technology.
A PIA on this issue must be escalated to the senior levels of an organisation for discussion and decision. Done properly, a PIA on the deployment of facial recognition will surface questions about proportionality, consent, and brand reputation. Are there, for example, practical alternatives for people who decline to use facial recognition or for whom facial recognition does not work? Will the deployment of facial recognition disproportionately impact any groups in the community?
It is also worth reading the public statements and investigation findings from the regulators and consumer advocacy groups listed above, as these give a good understanding of the emerging regulatory issues and consumer sentiment on this topic.
A decision to deploy facial recognition technology should be made by an organisation’s executive, fully apprised of the risks and benefits, and not by the security or ICT team in isolation.
Contact us
If you’re interested in learning more about privacy management, contact us at hello@elevenM.com.au or on 1300 003 922.