For most of us, our face is the part of ourselves that we are most willing to show the world. It is central to how we communicate and how others remember us, which makes it a key part of our identity. But exposing our faces makes us vulnerable. Never before have we been as vulnerable as today, when every action in public space could potentially be linked to us via facial recognition technology. Like any tool, facial recognition can be used responsibly for legitimate law enforcement objectives, but it also threatens to infringe on universally accepted human rights, in particular privacy, equity or non-discrimination, and due process.
As a society, we need to determine what role we want facial recognition to play in public life. The CITRIS Policy Lab, headquartered at UC Berkeley, contributes to this discussion with research outlining which facial recognition uses by law enforcement (see figure 1) create risks to privacy, equity, and due process, and how strategic policy design can mitigate these risks. Based on this framework (see figure 2), the report highlights the areas in which the UK and the US, both relatively advanced users of facial recognition technology, most urgently need to strengthen their human rights protections through policies tailored to the specific risks of facial recognition.
Privacy - Who decides how, when, and by whom a facial image is used?
To safeguard the human right to privacy (Art. 17 ICCPR), individuals must be aware of and actively consent to the use of their image in facial recognition databases. Because a photo can be taken unnoticed at a distance, procedural safeguards are critical. Moreover, individuals must have an accessible path to object to the use of their image. Finally, because linking someone's face and identity creates a highly sensitive combination of data, strict standards for law enforcement access to the data are necessary.
Both the UK and the US currently collect images without the individual's active consent, including criminal booking photos, which are retained regardless of whether the individual was ultimately charged or convicted. Neither country has comprehensive facial recognition legislation, but in the UK, personal data in general enjoys legislative protection, including through the EU's General Data Protection Regulation (GDPR). These data protection policies allow individuals in the UK to object to the use of their data and establish standards for law enforcement access.
Equity - Does the use of facial recognition technology affect everyone the same way?
Our face holds information about many parts of our identity, including, for example, our race, ethnicity, or sex. To protect the right to non-discrimination (Art. 26 ICCPR), users of facial recognition must be conscious of and mitigate bias in whose images are included in their databases, in who is exposed to identification via facial recognition, and in the algorithms themselves.
Equity is highly problematic in both countries due to a lack of critical engagement with bias. For example, in one case in the UK, images of individuals with mental health issues were specifically added to the database, and in one case in the US, foreign individuals' images were added based on a data exchange agreement with their home country. Both countries have begun taking steps to address the intrinsic bias in facial recognition algorithms and the resulting disparate impact of the technology's use, especially within law enforcement applications. However, significantly greater research and precautionary measures are needed to ensure equitable treatment.
Due Process - What rules govern the use of facial recognition technology?
As users of facial recognition, law enforcement agencies must uphold due process rights (Art. 9 and 14 ICCPR) in a new context. In particular, law enforcement agencies should consult the public before they roll out facial recognition programs, ensure that individuals are aware of such facial recognition programs, and exercise caution in using facial recognition results as evidence in criminal proceedings.
Regarding due process rights, UK law enforcement agencies consult and communicate more effectively with stakeholders and the public prior to and during the use of facial recognition, whereas in the US, federal programs operated for years before a privacy impact assessment was published. In both countries, however, law enforcement agencies and jurisprudence emphasize the importance of human review when facial recognition results are used in criminal proceedings.
Our research demonstrates that even in countries with a strong commitment to civil liberties, the policies specific to facial recognition that are needed to enforce human rights are lacking. A challenge highlighted by the study is the knowledge gap between innovators on the one hand and the public and their elected representatives on the other, which creates an alarming information asymmetry. Legislation driven by informed public preferences and tailored to the risks posed by facial recognition is necessary to ensure respect for human rights.
To read the full report, please visit https://citrispolicylab.org/wp-content/uploads/2019/09/Facing-the-Future_Ruhrmann_CITRIS-Policy-Lab.pdf.
The CITRIS Policy Lab, headquartered at UC Berkeley, supports interdisciplinary research, education, and thought leadership to address core questions regarding the role of formal and informal regulation in promoting innovation and amplifying its positive effects on society.