Facial recognition technology could soon be everywhere – here’s how to make it safer
The recent coronation of King Charles III was a high-profile example of facial recognition technology being used to monitor a crowd, but there are plenty of others. The technology is used by law enforcement across the UK and in other countries.
It’s now common in US airports. It’s being used to monitor refugees and identify dead bodies in Ukraine. Even Beyoncé fans have been subjected to it.
And there’s more to come. The UK government is reportedly planning to add facial recognition to the police’s body-worn devices, drones and numberplate cameras. It may soon be very difficult to leave your house without having your face scanned.
There are serious questions about whether the benefits of this technology outweigh the risks it poses to privacy and civil liberties. But steps could be taken to address the issues people are worried about.
Uses and limits
Facial recognition can be used by police to scan many faces in a crowd and compare them with a “watch list” of known criminals. This “live facial recognition” is used with the aim of reducing crime. It can also be used retroactively on recorded CCTV footage.
In the UK, the Protection of Freedoms Act 2012 provides a legal basis for the use of surveillance camera systems in a public place.
And according to the government’s surveillance camera code of practice, it’s justifiable to use facial recognition systems in decisions that could negatively affect people, such as whether to arrest them, so long as there is a human in the loop to supervise and make decisions.
So facial recognition systems, and systems that process other types of biometric information, cannot be used for autonomous decision making, such as automatically tracking a suspect across multiple camera feeds.
Problems with facial recognition
But why should this be of concern to law-abiding citizens? Civil liberties groups argue facial recognition use in public places affects our privacy and freedom, particularly in terms of its ability to track individuals at mass gatherings and to potentially engage in racial profiling.
Security cameras have long captured us as we go about our daily lives. However, authorities being able to easily put a name to a face in the video footage is something we’re not so used to.
The technology creates a situation where many more people could get caught in the sights of the authorities than before. A person’s casual indiscretions or errors of judgement can now be easily tracked and linked to a name and address.
Those with a criminal record could be targeted in public based on their past, regardless of whether they intend to carry out any illegal activity. The technology could provide new opportunities for racial profiling, where authorities track or suspect people based on their background, rather than because of specific information about them.
Facial recognition could also be used against people with no criminal past or plans to commit a crime, but whom the police simply want to stop, such as protesters. The Metropolitan Police may have announced that facial recognition would not be used to target activists at the coronation, but the force also provoked outrage by arresting anti-monarchy demonstrators who were later released without charge.
It’s also important to recognise that facial recognition technology still suffers from inaccuracies, which can result in false positive matches where an innocent person is mistaken for a known criminal.
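To see why this matters at crowd scale, consider a back-of-the-envelope calculation. The false match rate and crowd size below are illustrative assumptions, not figures from any deployed system:

```python
# Illustrative assumptions only - not measurements of any real system.
false_match_rate = 0.001   # assume 1 in 1,000 comparisons wrongly flags a match
faces_scanned = 10_000     # assume 10,000 faces pass the cameras in a day

expected_false_alerts = false_match_rate * faces_scanned
print(expected_false_alerts)  # 10.0 - around ten innocent people flagged
```

If genuine watch-list members are rare in such a crowd, most alerts under these assumptions would concern innocent people – one reason a human review of every match matters.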
Because facial recognition is perceived as posing such threats, it could have a chilling effect on free speech and demonstrations.
What can be done?
However, there are ways the technology could be used more safely. Law enforcement teams could perform a preliminary step – activity recognition or event detection – before they resort to face recognition. This approach can help minimise the potential for privacy violations and false positive matches.
Activity recognition refers to the process of identifying and categorising human actions based on CCTV or other sensor data. It aims to understand and recognise what individuals or groups are doing, which can include everyday activities such as running, sitting or eating.
Event detection, on the other hand, focuses on identifying specific events or occurrences of interest within a given context. These can range from simple occurrences, like a car passing by or a person entering a room, to more complex ones such as accidents, fights or other unusual behaviour. Event detection algorithms typically analyse CCTV and other sensor feeds to detect and locate such events.
Hence, activity recognition or event detection should be the first step before applying facial recognition to a surveillance camera feed.
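As a rough illustration of that ordering, the sketch below gates face recognition behind an event detector, so identification only runs on frames flagged as containing an event of interest. The interfaces and function names are hypothetical placeholders invented for this article, not the API of any real surveillance product.

```python
# A minimal sketch of "detect events first, identify only if needed".
# All names here are hypothetical placeholders, not a real surveillance API.
from dataclasses import dataclass
from typing import Iterable, List, Protocol, Tuple


@dataclass
class Frame:
    camera_id: str
    timestamp: float
    pixels: bytes  # raw image data in a real system


class EventDetector(Protocol):
    """Flags frames containing events of interest (a fight, an accident)
    without identifying anyone in them."""
    def contains_event_of_interest(self, frame: Frame) -> bool: ...


class FaceRecogniser(Protocol):
    """Compares faces in a frame against a watch list of known individuals."""
    def match_against_watch_list(self, frame: Frame) -> List[str]: ...


def review_feed(frames: Iterable[Frame],
                detector: EventDetector,
                recogniser: FaceRecogniser) -> List[Tuple[Frame, List[str]]]:
    """Run face recognition only on frames the event detector has flagged,
    so the rest of the feed is never subjected to identification at all."""
    flagged = []
    for frame in frames:
        if not detector.contains_event_of_interest(frame):
            continue  # ordinary activity: footage stays anonymous
        matches = recogniser.match_against_watch_list(frame)
        if matches:
            flagged.append((frame, matches))  # a human makes the final call
    return flagged
```

The design choice is simply that the most privacy-intrusive step comes last, and only for the small fraction of footage where something of genuine interest has already been detected.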
Ensuring the data from cameras remains anonymous can also enable police to study the activities of people in the crowd while preserving their privacy. Conducting regular audits and reviews can ensure that the collected data is handled responsibly and in compliance with UK data privacy regulations.
This can also help to address some of the concerns related to transparency and accuracy. By using activity recognition or event detection as a first step, it may be possible to give people more clarity – through signage, for example – about what exactly is going on during police surveillance in a public place.
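On the anonymisation point above, one possible approach – offered purely as an assumption about how it might be done, not a description of any police system – is to blur every detected face before any further analysis. The sketch below uses OpenCV’s bundled Haar cascade face detector:

```python
# A minimal sketch: blur detected faces so later analysis sees no identities.
# Assumes the opencv-python package; the thresholds are arbitrary examples.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def anonymise_frame(frame):
    """Return a copy of the frame with every detected face blurred out."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    output = frame.copy()
    for (x, y, w, h) in faces:
        region = output[y:y + h, x:x + w]
        output[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)
    return output
```

Anonymised footage of this kind could still support crowd-level activity analysis and the audits described above, without attaching names to faces by default.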
It is the responsibility of the state to ensure the privacy and security of its citizens in order to foster a healthy society. But if facial recognition is implemented in a way that a significant proportion of citizens feel infringes their rights, it could create a culture of suspicion and a society where few people feel safe expressing themselves publicly.
Author: Nadia Kanwal, Senior Lecturer, Computer Science, Keele University
This article is republished from The Conversation under a Creative Commons license.
Image Credit: Image by pikisuperstar on Freepik