
Image: Shutterstock / Kirsty Wigglesworth / AP

Watched: Facial Recognition and Law Enforcement

Mary Cruse suggests that the UK has become a test case for civil liberties and biometric technology

It was a brisk January day in Romford, East London, when police officers stopped and questioned a man for attempting to cover his face.

It was 2019, and the Metropolitan Police Service were trialling the use of a new technology for identifying suspects: live facial recognition (LFR).

The technology was unlike anything the force had used before, capable of scanning the biometric data of passers-by in real time and cross-matching their faces against those on the Met's watchlist.
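
For readers unfamiliar with how such systems work, the matching step can be sketched in a few lines of code. The sketch below is a conceptual illustration only: the embedding vectors, similarity threshold, and watchlist structure are assumptions for the example, not details of the Met's actual system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def scan_faces(captured, watchlist, threshold=0.6):
    """Compare each captured face embedding against every watchlist entry
    and flag any pair whose similarity clears the threshold. In a real
    deployment, a human operator would then verify each alert."""
    alerts = []
    for probe in captured:
        for name, reference in watchlist.items():
            if cosine_similarity(probe, reference) >= threshold:
                alerts.append(name)
    return alerts

# Toy vectors standing in for the output of a face-embedding model.
rng = np.random.default_rng(0)
watchlist = {"suspect_A": rng.normal(size=128)}
crowd = [rng.normal(size=128) for _ in range(5)] + [watchlist["suspect_A"]]
print(scan_faces(crowd, watchlist))  # ['suspect_A']
```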

But these new developments in LFR technology had not gone unnoticed; when the Romford trial rolled around, the civil liberties group Big Brother Watch and their director, Silkie Carlo, were at the scene, along with reporters from the BBC.

What happened next was partly caught on camera. Seeing the facial recognition cameras, a man pulled his jumper up to cover his face. Before he could walk away, officers pulled him to one side and asked him for ID.

“What’s your suspicion?” asked a Big Brother Watch campaigner.

“The fact that he’s walked past clearly masking his face from recognition,” an officer replied. “It gives us grounds to stop them and verify.”

“No it doesn’t!” the campaigner shot back.

The man became angry and was ultimately issued with a £90 fine for disorderly conduct. “I don’t want my face showing on anything,” he later told the BBC. “If I want to cover my face, I’ll cover my face.”

Reflecting on what she’d seen, Carlo told the BBC:

There is nothing in UK law that has the words ‘facial recognition’. There is no legal basis for the police to be using facial recognition

She continued: "We don't know who is on the watchlists, we don't know how long the images are going to be stored for, and the police are making up the rules as they go along."

Facial recognition has been used in some form by law enforcement in the UK since the 1990s, but the software now allows for the capture and processing of people’s biometric data on a scale never seen before, and lawmakers have failed to keep pace.

In the absence of formal legislative regulation, three police forces - the Metropolitan Police Service, Leicestershire Police, and the South Wales Police - have continued to trial and use live facial recognition; but not without pushback.

The South Wales Police recently lost a case in the UK’s Court of Appeal, after having been taken to court by civil liberties campaigners for their use of LFR.

The Court found that there were ‘fundamental deficiencies’ in the legal frameworks governing LFR, leaving too much discretion to individual police officers, and that the practice involved the collection of personal data on a ‘blanket and indiscriminate basis.’

The ruling puts the UK’s police service at the center of an ongoing legal quandary over law enforcement’s use of live facial recognition - the outcome of which will affect countries around the world.

Image: Shutterstock / knyazevfoto

The argument for facial recognition

On the day of the Romford trial in 2019, the Met made three arrests as a result of LFR technology, including individuals variously suspected of theft, robbery, assault, breach of a restraining order, and harassment.

While the Met did not respond to a request for comment on this piece, they have previously expressed the value that they perceive LFR to have as a crimefighting tool. In a letter to London Mayor Sadiq Khan, dated January 2020, Assistant Commissioner Nick Ephgrave stated:

“The [Met] view is that LFR is a valuable tool that supports the [Met] in keeping London safe for everyone.”

The letter goes on to note that LFR had "resulted in the arrest of wanted individuals," and concludes:

We believe that LFR will be an effective crime fighting tool, providing greater opportunities to arrest violent offenders, stop would-be terrorists and to protect the most vulnerable in society

And this position has been supported by a number of other law enforcement agencies. At present, the website of the international policing agency INTERPOL states: "More than 650 criminals, fugitives, persons of interest or missing persons have been identified since the launch of INTERPOL's facial recognition system at the end of 2016."

Ultimately, the argument for LFR - and facial recognition more broadly - comes down to its ability to deter and respond to crime, and so to its value as a tool for safeguarding the public.

But the strength of this argument depends on two factors: first, how effective is the technology at catching the right people, and second, do the benefits to the general public outweigh the costs?

Image: Shutterstock / Fractal Pictures

Flaws in the system

In response to a Freedom of Information request, the Met stated that the rate of false positives for their LFR system in trials was between 0.01% and 0.1% - meaning that fewer than one in every thousand people scanned was wrongly identified as a suspect.

However, this statistic is based on the number of false matches as a percentage of the total number of faces processed by the LFR system. So if the system scans 10,000 people and wrongly identifies 10 of them as suspects, that would be a 0.1% false positive rate.

But in July 2019, Professor Pete Fussey and Dr Daragh Murray published a study evaluating the use of LFR in six of the 10 Metropolitan Police trials. They based their findings on the number of false positives out of the total matches made - that is, of everyone who was flagged as a suspect, how many were wrongly flagged?

Using this methodology, Fussey and Murray discovered an error rate of 81%, meaning that, out of 42 matches between faces in the crowd and the Met’s watchlist, only eight were verified as correct.
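
The gap between the two figures is easier to see with the numbers side by side. In the sketch below, the 42 alerts and eight verified matches are Fussey and Murray's published figures; the total number of faces scanned is a hypothetical round number, since the trials' true total is not given here.

```python
# Comparing the two methodologies for reporting LFR error rates.
total_scans = 100_000             # hypothetical: all faces processed
alerts = 42                       # faces flagged against the watchlist
verified = 8                      # alerts confirmed as correct matches
false_alerts = alerts - verified  # 34 wrong identifications

# The Met's methodology: false matches as a share of everyone scanned.
print(f"False positives vs. all scans:  {false_alerts / total_scans:.3%}")  # 0.034%

# Fussey and Murray's methodology: false matches as a share of all alerts.
print(f"False positives vs. all alerts: {false_alerts / alerts:.0%}")       # 81%
```

The same 34 wrong identifications look vanishingly small against everyone scanned, but alarmingly large against the alerts that actually sent officers after someone.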

Not only that, but the pair also flagged the risk of bias in both the development and deployment of facial recognition - a systemic issue that has long concerned civil society organizations.

In January 2020, Robert Williams became the first known person in the US to be falsely arrested due to faulty facial recognition technology

Williams, 43, was arrested by police officers in Detroit after their software matched him to a suspected jewelry thief caught on camera. Despite having an alibi, Williams spent 30 hours in custody, based on nothing more than the facial recognition match.

Williams’ case is emblematic of a bigger problem with facial recognition technology: built-in bias. The software underpinning facial recognition has been found to be less accurate at identifying the faces of people with dark skin.
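
How is such bias measured? One common approach in algorithmic audits is to compute the error rate separately for each demographic group and compare. The sketch below uses invented tallies purely to illustrate the method; it is not data from any named study.

```python
from collections import Counter

# Each alert records the subject's group and whether the match was wrong.
# These tallies are invented for illustration only.
alerts = ([("lighter_skin", False)] * 40 + [("lighter_skin", True)] * 4
          + [("darker_skin", False)] * 10 + [("darker_skin", True)] * 6)

totals, errors = Counter(), Counter()
for group, was_false_match in alerts:
    totals[group] += 1
    errors[group] += was_false_match

# A large gap between the groups' false match rates is the bias signal.
for group in totals:
    print(f"{group}: {errors[group] / totals[group]:.0%} of alerts were false")
# lighter_skin: 9% ... darker_skin: 38%
```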

Not only this, but there can also be bias in how facial recognition is deployed. Fussey and Murray's study found disparities between neighborhoods in how officers used their discretion to stop and question people who avoided the facial recognition cameras.

This suggests that the bias of individual officers could lead to some communities and individuals becoming more aggressively targeted by LFR than others. Ultimately, the tool could add to existing prejudice against marginalized groups, like black and immigrant communities.

LFR has progressed dramatically in recent years, and it’s likely that some of these problems will be resolved over time. But would a highly-efficient LFR system really deliver better safeguarding of the public? Not everyone thinks so.

Image: Shutterstock / punghi

Sleepwalking into a surveillance state

Alongside problems with the accuracy and reliability of LFR, there are also more fundamental concerns around the use of facial recognition. The vast quantities of data collected and processed, the lack of informed consent, and the potential for misuse all feature prominently on the list of concerns raised by civil liberties groups.

With an estimated 4-6 million CCTV cameras, the UK is already one of the most surveilled nations in the world - coming second only to China in the number of cameras per citizen. But LFR differs from CCTV in that it supports both observation and identification: law enforcement has instant access to people's data, whether they're suspects or not.

This raises issues around informed consent. We have laws such as GDPR to regulate the collection of users' digital data by private companies, but no parallel exists for law enforcement and LFR.

The website of the campaign group Liberty summarizes the issue: "Everyone in range is scanned and has their biometric data (their unique facial measurements) snatched without their consent." They continue:

“South Wales Police and the Metropolitan Police have been using live facial recognition in public for years with no public or parliamentary debate. They’ve used it on tens of thousands of us, at everything from protests and football matches to music festivals.”

Liberty also argues that there is a lack of clear limits on how this data is used: “The watch lists can contain pictures of anyone, including people who are not suspected of any wrongdoing, and the images can come from anywhere – even from our social media accounts.”

The Met contends that there are strict limits around what data is stored and who is included on their watchlists. But activists argue that stronger legislation is needed around the databases underpinning LFR.

Away from the UK, the practice of data scraping was deployed most infamously by the US-based company Clearview AI. While not offering live, instantaneous recognition, Clearview's software allows users to match individuals against anyone in its database of over three billion faces - most of which have been scraped from publicly accessible social media profiles without users' consent.

In early 2020, the New York Times reported that over 600 US law enforcement agencies were using the services of Clearview AI. Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, told the New York Times:

The weaponization possibilities of this are endless. Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail

And this encapsulates the third major concern around facial recognition: the potential for misuse. US law enforcement agencies are not the only ones employing Clearview AI's technology. In March, BuzzFeed News reported that the software was being used by law enforcement in 27 different countries - including the Met and several other agencies in the UK.

This has led US senators to probe the company over its sale of software to states with documented human rights abuses, such as the UAE. In a statement, Senator Ron Wyden said it was "deeply troubling" that Clearview was selling its products to "a Saudi regime that is responsible for horrifying human rights abuses."

Internationally, we are seeing more and more governments and public agencies exploiting the potential of facial recognition to fuel human rights abuses, control citizens, and suppress opposition. Examples of this can be seen around the world, including in China, Russia, and Uganda.

All of this encapsulates wider concerns surrounding facial recognition: in whose hands are we placing this tool? Who can we trust to wield something so powerful? And what precedent are we setting for the future of facial recognition in law enforcement?

Image: Shutterstock / Donlawath S

Regulation, regulation, regulation

Ultimately, there are no clear limits in the US or the UK on who can use facial recognition, or how and when it can be used. This leaves the market open not just to official law enforcement, but also to private security firms and commercial companies.

In recent years, the owner of the London establishment Gordon's Wine Bar set up his own facial recognition system to track customers, and the property developer Argent was found to have used facial recognition to monitor people in the King's Cross area.

As its scope and accuracy expand, the use of facial recognition continues to be governed largely by the discretion of individual companies and agencies, rather than by clear legislation and regulatory oversight.

What that regulation should look like is still a matter of debate. There are those who argue that the only way to safely manage LFR is to ban it entirely. A number of US cities - San Francisco, Boston, and Oakland among them - have voted to prohibit the technology's use, while the state of Massachusetts recently voted to ban the use of facial recognition by law enforcement and public agencies.

The European Commission has also refused to rule out a blanket ban on facial recognition technology in public spaces - a move that could potentially place the UK in opposition to many of its neighbors on the continent.

While it will take time to work through the details of regulation, it's clear that some form of greater oversight is needed. Facial recognition is becoming an increasingly powerful tool with potentially far-reaching consequences - and the law needs to reflect that.

In many ways, the UK has become a testing ground - both for the future of facial recognition and for a wider conflict: to what extent are we willing to compromise personal privacy for public safety? And do we fully understand the long-term consequences of that choice?

The decisions made on British soil could help set a precedent for other nations when it comes to the balance between public safety and civil liberties.

Whatever comes next - the world will be watching.