Facial recognition technology: what you need to know
Growing use of facial recognition tech is raising major privacy and human rights concerns.
You might not know it when you walk into Noel Leeming or The Warehouse, but you’re being recorded by cameras. If you know to look for them, you’ll see signs at the door warning customers that cameras are operating. But you need to dig into the company’s privacy policy to find out that these cameras can be used to collect and store customers’ “biometric data”.
Biometric data includes information about your physical characteristics – your facial features, such as your eyes – as well as your voice.
Exactly why the stores need this information isn’t revealed in the policy fine print. A spokesperson for the Warehouse Group, which owns both retail chains, said it was “for safety and security purposes”, such as detecting thieves.
You might be prepared to tolerate that. But what if your biometric data were being used by a retailer for other purposes – say, profiling you to charge a higher price because some algorithm calculates you’re willing to pay it, or deciding you’re a bad credit risk?
This kind of profiling is one of the big concerns with the collection of biometric data and how it could be used by facial recognition technology (FRT). The Warehouse Group said it doesn’t “currently use” this technology, though wouldn’t confirm whether it had plans to do so.
But it’s a growing part of our lives.
Foodstuffs (owner of the Pak’nSave, New World and Four Square brands) uses the tech in some North Island stores and plans to roll it out to some in the South Island as “a crime prevention measure”.
Big online players such as Facebook can use it to automatically “tag” people’s faces in photos. TikTok has also recently come under fire for changing its privacy policy to allow collection of “faceprints” – as well as “voiceprints” – from users.
Why should you care?
Your facial image is unique to you and, for the most part, you can’t change it. However, unlike other types of biometric information – such as fingerprints, iris scans or DNA – it can be easily collected at a distance and you may not know it’s happening.
While there are potential benefits – for example, identifying missing people, victims of human trafficking and other crimes – there are also big risks that the information could be used to discriminate against you.
Facial recognition technology essentially captures an image of a person’s face and uses an algorithm to match it against images already stored in a system.
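To make the matching step concrete, here’s a minimal sketch in Python. It assumes a hypothetical embed_face() model that turns a face image into a fixed-length “faceprint” vector – real systems use trained neural networks for this – and an assumed similarity threshold of 0.6, which is a made-up tuning value rather than any standard.

```python
import numpy as np

# Hypothetical stand-in for a trained face-embedding model. A real system
# would run a neural network here; hashing the image bytes into a
# pseudo-random unit vector just keeps the sketch runnable.
def embed_face(image: np.ndarray) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(image.tobytes())) % 2**32)
    vec = rng.normal(size=128)          # 128 dimensions is an assumption
    return vec / np.linalg.norm(vec)    # unit length, for cosine similarity

def is_match(face_a: np.ndarray, face_b: np.ndarray,
             threshold: float = 0.6) -> bool:
    # For unit vectors, cosine similarity is just the dot product.
    return float(embed_face(face_a) @ embed_face(face_b)) >= threshold

# Usage: identical images produce identical faceprints, so they match.
gate_photo = np.zeros((64, 64))
passport_photo = np.zeros((64, 64))
print(is_match(gate_photo, passport_photo))  # True
```

The threshold is the crux: set it too low and the system produces false matches; set it too high and it misses genuine ones.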
One of the major concerns with the tech is that it relies on artificial intelligence algorithms “trained” on large sets of images. These algorithms have shown biases and lower accuracy rates when identifying the faces of minority groups or women. If an algorithm is biased, it can result in discrimination.
How it’s being used
There are three main ways the tech is used.
For verification
This is the way it’s used at the border with eGate, where, instead of waiting to see a customs officer, you have your photo snapped at a gate. The tech compares your photo with the image stored in your ePassport to verify your identity.
The RealMe electronic identity tool is another example. Managed by the Department of Internal Affairs, RealMe’s digital photo capture services use facial recognition to verify your identity so you can apply for a passport or driver’s licence.
For identification
An example of this is using facial images from CCTV or other sources and comparing them with photos on a “watchlist”. Law enforcement agencies have been keen on using the tech for this purpose.
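In code terms, identification is a one-to-many search rather than the one-to-one check sketched above. A hedged sketch, reusing the hypothetical embed_face() from the earlier example and an assumed watchlist of stored faceprints:

```python
# Assumes numpy and the hypothetical embed_face() from the sketch above.
def search_watchlist(probe_image, watchlist, threshold=0.6):
    """1:N identification: find the best watchlist match for one face.

    `watchlist` maps a name to a stored faceprint vector. Returns
    (name, score) for the closest entry, or (None, score) if nothing
    clears the (assumed) threshold.
    """
    probe = embed_face(probe_image)
    best_name, best_score = None, -1.0
    for name, stored in watchlist.items():
        score = float(probe @ stored)   # cosine similarity of unit vectors
        if score > best_score:
            best_name, best_score = name, score
    return (best_name if best_score >= threshold else None), best_score
```

Because every captured face is compared against every watchlist entry, even a small error rate can produce false matches at scale.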
Last year, the New Zealand Police got into trouble for trialling an FRT tool from a company called Clearview AI. The tool was used without consulting the Police Commissioner or the Privacy Commissioner, and without public knowledge. The trial was discontinued after the tool performed poorly, yielding only one correct image match.
The Clearview AI system, supplied to 600 law enforcement agencies around the globe, has come in for flak for other reasons. It uses a database of about three billion images sourced online from sites such as Facebook and YouTube. Facebook, Twitter and LinkedIn have accused the company of collecting these images in violation of their privacy policies.
Regulators in Australia and the UK have also kicked off investigations into Clearview AI’s data privacy practices.
For categorisation
The third main use is for building profiles of people, using characteristics such as race, sex and ethnicity. For example, facial images could be used to identify people at certain locations, such as at a protest or entering a store, and added to other data that a company has about them.
You might be offered different services, or charged a different price, as a result.
Legal protections lacking
The growth of facial recognition technology has left regulators trying to catch up.
Victoria University of Wellington Faculty of Law associate professor Nessa Lynch co-authored a report on its rise in New Zealand.
The report points out that the only consumer safeguards against the misuse of this type of data are in the Privacy Act. Facial images are personal information, so the act covers their collection, processing and storage. This means companies can use FRT as long as they comply with the act’s privacy principles.
However, Lynch argues there needs to be specific controls on the collection and use of biometric information. Her report recommends establishing a Biometrics Commissioner, a code of practice for biometrics and a consumer right to have data erased.
She also backs a moratorium on the use of “live” FRT by police. This tech uses cameras in public areas and compares images in real time with those on a watchlist. If a match is found, it alerts officers at the scene. Last year, New Zealand Police inked a deal to purchase a new system with “live” FRT capability, though said it won’t use that capability. Police are also conducting a six-month review of FRT and intend to publish its findings.
Internationally, there’s a growing movement to stop the collection of biometric information in public spaces.
In June, the European Data Protection Supervisor and the European Data Protection Board called for a ban on the use of artificial intelligence to automatically detect “human features” in public spaces. These features include faces, gait, fingerprints, DNA and voice.
Critics here also highlight the major risks of profiling based on ethnicity.
The report cites the concerns of Karaitiana Taiuru, an expert in indigenous ethics in data collection, who said it’s only a matter of time before a Māori person is wrongfully arrested due to a false match thrown up by police use of the technology.
Lynch has similar concerns: the tech comes with significant risks to “privacy and other human rights, such as freedom of expression and the right to be free from discrimination”, she said.
What’s next?
The Office of the Privacy Commissioner (OPC) plans to release a paper this year, setting out how the Privacy Act applies to biometrics and the office’s “regulatory approach”.
The OPC said it’s aware some companies are using FRT and it will continue to monitor the technology’s use.
We’ll also be keeping watch on this issue. Let us know what you think. If you’ve got a tip-off on a company using facial recognition technology, email [email protected].