elevenM’s Piotr Debowski and Georgia Brinkworth explain how facial recognition technology operates and highlight the privacy challenges and opportunities in its use, particularly for building access control and ban lists.
The recent OAIC determination against Bunnings has made the use of facial recognition technology (FRT) a hot topic in privacy. But this is not a new issue for organisations. We frequently talk to organisations that want to implement FRT as a physical security control but aren’t fully across the privacy challenges associated with one-to-many matching technology.
The OAIC has released some great guidance on this issue. Our goal here is to supplement that with practical guidance and examples, drawing from our experience in helping clients implement similar technology.
What is facial recognition technology and how does it work?
At the simplest level, FRT involves comparing two biometric templates (mathematical representations of a person’s face) to determine if they match.
It requires an organisation to create a gallery of biometric templates by uploading images of people’s faces. These are then compared against the biometric templates generated when a person enters the FRT’s field of vision. Importantly, FRT decides whether there is a match based on a pre-determined percentage of similarity between the biometric templates, not on whether they are definitively identical.
The following diagrams are a high-level depiction of how FRT works.
Conversion: How images are converted into biometric templates

Recognition: How facial recognition technology identifies matches

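For readers who want to see the logic in code, the following is a minimal, hypothetical sketch in Python (random vectors stand in for real biometric templates, and no particular vendor’s algorithm is implied) of the two steps above: a probe template is compared against every template in the gallery, and a ‘match’ is simply a similarity score that crosses a pre-set threshold.

```python
# A minimal, illustrative sketch only: real FRT products use proprietary
# embedding models, but the matching logic is conceptually similar.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two biometric templates (higher = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_match(probe: np.ndarray, gallery: dict[str, np.ndarray], threshold: float = 0.85):
    """One-to-many matching: compare the probe template against every enrolled
    template and return the best match only if it crosses the threshold."""
    best_id, best_score = None, -1.0
    for person_id, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    # The FRT never "knows" identity definitively; it only reports that the
    # similarity score exceeded the pre-determined threshold.
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Toy example: random vectors stand in for real biometric templates.
rng = np.random.default_rng(0)
gallery = {"employee_001": rng.normal(size=128), "employee_002": rng.normal(size=128)}
probe = gallery["employee_001"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(find_match(probe, gallery))  # matches employee_001 with a high score
```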
From a security perspective, FRT can be a useful tool to help identify persons who are:
- authorised to access a particular premises or asset (e.g., whether an employee is able to gain access to a restricted section of a building or use a vehicle); or
- known threats (e.g., persons who have an intervention order prohibiting them from being near your organisation’s people or premises).
FRT is often used alongside other technology and strategies to achieve security outcomes. For example, cameras covering a secure area might use FRT to check faces against a list of authorised persons. If no match is returned, an alert may be issued or, if the cameras are monitored, a visual indicator may be displayed on the camera feed, prompting security to approach the person and validate their identity.
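To make that workflow concrete, the sketch below (hypothetical, building on the find_match() idea above, with illustrative names only) shows the kind of decision logic involved: a matched, authorised person passes silently, while an unmatched person triggers either an alert or an on-screen indicator for security to follow up.

```python
# A hypothetical sketch of the alerting flow described above; names and
# thresholds are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccessEvent:
    camera_id: str
    matched_person: Optional[str]  # None means no authorised person was matched
    score: float

def handle_event(event: AccessEvent, feed_is_monitored: bool) -> str:
    """Decide what happens when a face enters the camera's field of vision."""
    if event.matched_person is not None:
        return f"no action: matched authorised person {event.matched_person}"
    if feed_is_monitored:
        # A human stays in the loop: security approaches and validates the person.
        return f"overlay indicator on feed {event.camera_id} for security to validate"
    return f"raise alert for camera {event.camera_id}"

print(handle_event(AccessEvent("cam-07", None, 0.41), feed_is_monitored=True))
```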
However, FRT presents several privacy challenges because of the way it operates and how an organisation may use it.
Collection of sensitive information
One of the main challenges in using FRT is whether your organisation is permitted to collect the sensitive information that is necessary for it to work. Under the Privacy Act 1988 (Cth) (Privacy Act), biometric templates, and biometric information that is used for the purpose of automated verification or identification, are considered sensitive information. APP 3.3 prohibits the collection of this sensitive information unless an exception applies (such as consent).
Am I actually collecting sensitive information?
One pitfall that many Privacy Officers fall into is thinking that they are not collecting sensitive information, an argument that the OAIC has considered in determinations against both Bunnings and 7-Eleven. Let’s unpack some of the common misunderstandings:
Pitfall #1: We’re only collecting images, not sensitive information, because FRT only captures an image and then converts it into a mathematical representation.
Answer #1: ‘Collection’ includes when an organisation creates new information (like the biometric template) by reference to, or generated from, other information the organisation already holds (like the probe image). Even though the mathematical representation may differ between FRT products, it is still a biometric template.
Pitfall #2: We don’t need to worry about collecting sensitive information of anyone else (e.g., guests or members of the public who enter the FRT’s field of vision); we’re only collecting it from people who have been enrolled into the FRT system.
Answer #2: FRT collects the sensitive information of all persons who come into its field of vision, irrespective of whether they have previously been enrolled or have opted out. As explained above, the first step is for the FRT to capture an image of any person who enters its field of vision and convert that probe image into a biometric template. At this point, the FRT cannot differentiate between groups of people (for example, those who have not been enrolled), and this step alone amounts to a collection of sensitive information. The next step is for the FRT to attempt to match the biometric template generated from the probe image against those already in its gallery. If there is no match, or the person has opted out, the FRT does not generate an output. But the absence of an output does not mean there has been no collection.
Pitfall #3: We aren’t collecting sensitive information because FRT only captures a biometric template for a short period of time (milliseconds) and then deletes it if there is no match.
Answer #3: This was an argument run by Bunnings, which contended that because biometric templates of individuals who were not matched by the FRT were stored for less than 4.17 milliseconds, there was no collection. The OAIC held that the four-step matching process performed by the FRT necessarily involved a collection; without the collection, the process would not work. The OAIC also highlighted that, for the purposes of establishing a collection, it does not matter that the information was held only momentarily or that the process was conducted automatically (see paragraphs 71–74 of the determination).
What steps can I take to ensure the collection of sensitive information for facial recognition technology is permitted?
One exception to the prohibition on collecting sensitive information is where your organisation has the person’s consent. This is not the only exception organisations can rely on, but it is the one we will unpack in most detail.
Employees and contractors
A good way of meeting this exception is to obtain the express consent of your employees and contractors if your organisation is going to use FRT to identify them. You could draft a consent notice and ask employees or contractors to sign it before they are enrolled into the FRT system.
Employees and contractors must also have an option to opt out, and if they do, you must provide alternatives that don’t significantly disadvantage them. For example, if FRT is used to validate access to a section of a building, a keycard and/or lanyard might be provided to those employees who opt out. If suitable alternatives are not provided, the consent may not be considered voluntary and may therefore be invalid.
Third parties
Obtaining the consent of third parties who enter the FRT’s field of vision (e.g., passersby of a camera with FRT) is harder. Whilst the OAIC does recognise that consent can be implied, implied consent is tricky to establish. In the 7-Eleven determination, 7-Eleven contended that people who entered its stores consented to their biometric templates being collected because of signs placed outside advising that cameras were in use and that anyone who entered the premises consented. The OAIC ruled that the signs could not constitute implied consent because there was ambiguity and reasonable doubt about a person’s intentions.
In our experience, if your organisation is trying to rely on implied consent, you will need very specifically worded and highly visible notices outside your premises, combined with options for people to interact with your organisation without entering your premises (e.g., guests being able to phone in or meet remotely instead of on site).
Data quality
The second biggest challenge in using FRT is data quality: false positives (where the FRT identifies one biometric template as matching another when it doesn’t) and false negatives (where the FRT fails to identify one biometric template as matching another when it does).
The reason this is number two on our list is that these data quality issues can result in significant harm to both individuals and your organisation, as demonstrated in the following examples:
Example #1 – False negatives resulting in business harm
Banking Co uses FRT to authenticate access to its data server room, which contains critical infrastructure used to run its electronic banking service. Since the introduction of the technology, security has been called out to investigate an excessive number of alerts from the system, all of which have been false negatives. This has led the security guards to distrust the system and to feel that responding to an alert is a waste of their time. On one busy day, security has already responded to four false negative alerts by lunchtime, with increasing frustration. When the alert sounds again in the afternoon, the security officer decides not to respond and instead turns off the alert, marking it as a false negative.
This time, the person who triggered the alert was a malicious actor, who was allowed access into Banking Co’s data server room. They were able to disrupt Banking Co’s electronic banking system, resulting in harms including financial loss (such as Banking Co’s share price plummeting due to a loss of consumer confidence) and regulatory risk (such as Banking Co failing to meet its Security of Critical Infrastructure Act obligations).
Example #2 – False negatives resulting in harm to persons
A shelter housing abuse victims uses FRT on its premises’ cameras to identify persons who pose a threat to residents. When a match is detected, security is dispatched to remove the threat from the premises. The shelter has enrolled Person C’s image into its FRT because Person C is prohibited by court order from being near one of the shelter’s residents. A false negative arises when Person C enters the shelter’s premises and the FRT does not identify them. Security is not dispatched and Person C physically harms the resident.
Example #3 – False positive resulting in harm to persons
Logging Co’s headquarters have recently been vandalised by an environmental activist group. Logging Co implements FRT to flag persons who were involved in the vandalism by putting them on a ban list. Person D is a courier who enters Logging Co’s headquarters to deliver some mail and is flagged by the FRT as being on the ban list. Logging Co’s security promptly swarms Person D in the lobby, detains them and calls the police, and Person D is escorted out of the building in handcuffs. It is only later revealed that the FRT falsely matched Person D’s biometric template to another on the ban list. As a result of Logging Co’s actions, Person D was deeply distressed and suffered damage to their reputation when other people saw them being led away in handcuffs.
Example #4 – False negatives resulting in harm to certain groups
Mining Co has recently introduced FRT as the sole way to manage employee access to its headquarters and to detect unauthorised personnel in certain areas. Person A is an employee who wears a hijab; her colleague, Person B, wears no head covering. Each morning when they enter the workplace, Person A is denied access while Person B is allowed through. Person A has to call security to be let into the building, a stressful situation to deal with each morning before starting her working day. When she speaks with her colleagues, she finds that she is not alone: other employees of colour have been having the same experience. It becomes clear that the FRT system is not performing effectively for certain groups (i.e. people wearing religious head coverings and people of colour), resulting in a discriminatory effect on employees who are burdened in ways others are not.
What factors affect data quality?
Many factors can affect the quality of the probe image, and therefore the biometric template that is generated and compared against those already in the FRT’s gallery. A working paper titled ‘Facial Recognition Technology: A Survey of Policy and Implementation Issues’, published by the Lancaster University Management School in 2010, lists some of these factors:
- The environment of the probe image – including the background, lighting conditions, distance of the person from the camera, position of the person, size and orientation of the person’s head, and object occlusion.
- Differences between the quality of probe image and those taken at enrolment – such as intensity, focal length, colour balance, size, and angle.
- Unique characteristics of the person – such as their age, gender, skin colour, skin conditions, and the amount of time that has elapsed between the enrolment image and probe image resulting in changing features or ageing.
- The gallery size – the greater the number of biometric templates in the gallery, the greater the likelihood of similar or identical mathematical representations.
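To illustrate the last factor, here is a rough back-of-the-envelope calculation (purely illustrative, assuming an independent false match rate per comparison rather than any real product’s performance) showing how the chance of at least one false match grows with gallery size.

```python
# Rough illustration only: assumes each one-to-one comparison has an
# independent false match rate (FMR), which real systems only approximate.
def false_match_probability(fmr: float, gallery_size: int) -> float:
    """P(at least one false match) = 1 - (1 - FMR)^N for a gallery of size N."""
    return 1 - (1 - fmr) ** gallery_size

for n in (100, 1_000, 10_000):
    print(n, round(false_match_probability(0.0001, n), 4))
# With a per-comparison FMR of 0.01%, roughly 1% of non-enrolled people would be
# falsely matched against a 100-person gallery, rising to about 63% at 10,000.
```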
The other factor, highlighted in guidance issued by the OAIC following the Bunnings determination, is that, like many AI tools, the data the FRT is trained on may contain inherent bias, leading to different levels of accuracy for certain demographics. For example, many studies have shown that FRT trained mostly on light-skinned, male faces is likely to perform less well on darker-skinned women.
What steps can I take to address data quality issues?
Be aware of the factors listed above and work with your implementation team to address as many of them as possible. Usually, the FRT vendor’s technical team will be able to advise on strategies for achieving the best results.
Also remember that FRT never returns a definitive match; it only returns the probability that two biometric templates are similar and, if that probability reaches a certain threshold, triggers an outcome. Speak with your implementation team about the threshold needed for the FRT to return a match, and be aware of ‘automation bias’ – the tendency for people to defer to suggestions from automated systems, even in the face of contradictory information.
From a privacy perspective, you may want to set the threshold high, which will decrease the possibility of false positives. However, this will often be in tension with the effectiveness of the FRT (as it will increase the probability of false negatives), so a balance needs to be struck.
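The sketch below illustrates that tension with toy numbers (simulated similarity scores, not real performance data): as the threshold rises, the false positive rate falls while the false negative rate climbs.

```python
# Toy numbers only (simulated similarity scores), to show the threshold trade-off.
import numpy as np

rng = np.random.default_rng(1)
genuine = rng.normal(loc=0.90, scale=0.05, size=10_000)   # scores for true matches
impostor = rng.normal(loc=0.60, scale=0.10, size=10_000)  # scores for non-matches

for threshold in (0.70, 0.80, 0.90):
    fn_rate = float(np.mean(genuine < threshold))    # true matches rejected
    fp_rate = float(np.mean(impostor >= threshold))  # non-matches accepted
    print(f"threshold={threshold:.2f}  false negatives={fn_rate:.3f}  false positives={fp_rate:.3f}")
```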
Ban lists
Number three on our list of common challenges is the practice of organisations using FRT to flag persons who are on a ban list as soon as they enter a camera’s field of vision. Often, and for obvious reasons, it is impossible to obtain enrolment images of these persons (which the FRT needs in order to generate biometric templates) directly from them or with their consent.
What are some of the issues?
Collection of sensitive information: directly from the individual
You may be able to source enrolment images of these persons from your own records if the person has entered the field of vision of one of your cameras before. The trouble is that you will still need to meet an exception to the APP 3.3 prohibition on collecting sensitive information when their image is converted into a biometric template.
It’s likely going to be difficult for you to obtain consent, or you may not want to seek it. Other exceptions include where a ‘permitted general situation’ exists (see s 16A of the Privacy Act). However, the recent Bunnings determination suggests that these are also hard to satisfy.
Bunnings contended that it reasonably believed that the collection of biometric templates was necessary to:
- lessen or prevent a serious threat to the safety of its employees arising from abuse, violence and aggression of customers; and
- take appropriate action in relation to unlawful activity, being retail theft and assault or battery of its employees by customers.
However, the OAIC ultimately rejected both of these arguments (see paragraphs 127–147 and 163–176), on the basis that it doubted both that Bunnings held a ‘reasonable belief’ and that the FRT was ‘necessary’ to achieve the desired outcomes, given certain inefficiencies.
Collection of sensitive information: indirectly from third parties
Another way you may be sourcing enrolment images or biometric templates of persons you intend to put on the ban list is from public sources (like the internet or social media platforms) or by buying or receiving a ban list from another organisation, including law enforcement, as was the case in the Bunnings determination (see paragraph 32). This practice presents several challenges, including:
- The lawfulness of collecting sensitive information – given that you’re not getting the person’s consent, are there any other exceptions to the prohibition on collecting sensitive information that apply?
- Data quality issues – the same issues we discussed above, but amplified: images from the internet are often of poor quality, and those from social media are often edited. There are also novel issues; for example, how can you be sure that the image you are taking from social media is actually of the person you want to add to the ban list?
- Retention issues – how do you manage who is on the ban list and when they get taken off?
What can I do? Can I even do anything?
In our experience, Privacy Officers are often placed in a difficult position here, because organisations don’t appreciate that the risks can outweigh the benefits and push ahead anyway, or don’t loop the Privacy Officer in on the practice at all.
Our biggest piece of advice is to tackle the prohibition on collecting sensitive information first, by highlighting the risks to the organisation if no exception applies (e.g., fines, loss of public trust and bad publicity).
If an exception is satisfied, work with the implementation team to embed privacy enhancing practices that also increase the effectiveness of the FRT – something implementation teams tend to get genuinely excited about. For example, a policy that sets criteria around where enrolment images are sourced and how often they are updated addresses both data quality and retention issues, and also reduces the number of false positives flagged to security, increasing their ability to identify people who are actual threats.
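As one illustration of what such a policy could look like when translated into the FRT’s supporting tooling (a hypothetical sketch, not a compliance template), each ban-list entry could record its image source and a review date, with stale entries excluded from matching:

```python
# A hypothetical sketch, not a compliance template: ban-list entries record
# their image source and a review period, and stale entries are excluded.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class BanListEntry:
    person_ref: str     # internal case reference, not a name scraped from the internet
    image_source: str   # restricted to approved sources, e.g. incident CCTV exports
    enrolled_on: date
    review_after_days: int = 180

    def is_current(self, today: date) -> bool:
        return today <= self.enrolled_on + timedelta(days=self.review_after_days)

ban_list = [
    BanListEntry("case-0142", "incident CCTV export", date(2024, 2, 1)),
    BanListEntry("case-0198", "incident CCTV export", date(2025, 1, 15)),
]
# Only current entries are ever loaded into the FRT gallery for matching.
active = [entry for entry in ban_list if entry.is_current(date(2025, 3, 1))]
print([entry.person_ref for entry in active])  # ['case-0198']
```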
Just as with other projects, if your organisation is considering facial recognition technology, taking a privacy by design approach will allow you to embed compliance with the APPs. This means that conducting PIAs, regularly training staff, having clear governance arrangements and proactively managing ongoing assurance activities all need to be embedded into the journey.
Contact us
If you’re interested in learning more about assessing privacy risk or implementing new technologies in your organisation, please contact us.