Image: Shutterstock / Fractal Pictures

Built-In Bias: Digital ID and Systemic Racism

Recognizing and resisting prejudice in digital identity programs

We like to think of technology as impartial and unprejudiced - the product of science and engineering, built on data and algorithms.

But this neutrality is an illusion. The truth is, technology does not exist in a vacuum; nothing created by humans does.

Products and systems are inevitably shaped by the biases of their creators - far from being impartial and fair, they are deeply entwined with the sticky complexities of politics and prejudice.

The same is true for digital identity.

As the world wakes up to the toxic reality of systemic racism, we must also acknowledge the ways in which those same biases can be reflected, compounded and supported by digital identity systems.

Because the first step in the fight against prejudice is recognizing how it manifests.

The path towards progress begins with understanding.


Reflecting Discrimination: Digital ID and Unconscious Bias

In 2012, a team of Google software engineers developed a mobile application for users to upload videos from their cellphones to YouTube. But after release, they noticed a bug: 1 in 10 videos was being uploaded upside down.

When the team dug deeper, they discovered that roughly 1 in 10 users is left-handed - and that these users held their phones the other way up, rotated 180 degrees, when recording. The app had been designed with right-handed users in mind, leaving lefties with upside-down footage.

Black women are most at risk of misidentification from facial recognition (Image: Shutterstock / Prostock-studio)

This story demonstrates the ways in which unconscious bias can shape the performance of technologies, creating flaws that leave some groups excluded. But in many cases, the consequences of unconscious bias are much more severe.

Facial recognition is an increasingly prevalent tool for law enforcement around the world. But the algorithms that underpin the technology have profound, race-related shortcomings.

In a recent study, the US National Institute of Standards and Technology (NIST) investigated almost 200 different facial recognition programs for signs of bias. They uncovered a higher rate of false positive matches for Asian and Black faces in the majority of programs tested.

Significant disparities were also found across sex and age: middle-aged white men were identified most accurately, while Black women were most at risk of misidentification.
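To make the disparity concrete, here is a minimal illustrative sketch in Python. The records, group labels, and threshold are invented for illustration - this is not NIST's data or methodology - but it shows what a "false match rate per demographic group" actually measures: how often comparisons between two different people are wrongly declared a match.

    from collections import defaultdict

    # Hypothetical "impostor" comparisons: each record is a pair of photos of
    # two DIFFERENT people, the similarity score an algorithm assigned, and
    # the demographic group of the person being searched for.
    comparisons = [
        {"group": "white_male_middle_aged", "score": 0.31},
        {"group": "black_female", "score": 0.78},
        # ... real evaluations use millions of such records ...
    ]

    THRESHOLD = 0.70  # scores at or above this are declared a "match"

    def false_match_rates(impostor_records, threshold):
        """Fraction of different-person comparisons wrongly declared a match,
        broken down by demographic group."""
        errors, totals = defaultdict(int), defaultdict(int)
        for record in impostor_records:
            totals[record["group"]] += 1
            if record["score"] >= threshold:
                errors[record["group"]] += 1
        return {group: errors[group] / totals[group] for group in totals}

    print(false_match_rates(comparisons, THRESHOLD))

Evaluations such as NIST's run this kind of tally over millions of comparisons; when the resulting rates differ sharply between demographic groups, the algorithm is biased in precisely the sense described above.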

In practice, this means that - where law enforcement uses facial recognition - people of color are more likely than white individuals to be misidentified as suspects in a crime. The disparity prompted the late US congressman Elijah Cummings to comment:

If you’re black, you’re more likely to be subjected to this technology and the technology is more likely to be wrong. That’s a hell of a combination

The underlying cause of facial recognition bias is the same as that behind the YouTube app bug in 2012: homogeneity.

Writing in The Guardian, Ali Breland observes that facial recognition algorithms are “usually written by white engineers who dominate the technology sector.” He continues: “These engineers build on pre-existing code libraries, typically written by other white engineers... The code that results is geared to focus on white faces, and mostly tested on white subjects.”

Beyond the world of facial recognition, discriminatory algorithms can affect everything from the calculations that determine eligibility for loans and insurance, to those that inform decisions on bail and parole.

Clearly, in-built software bias is not an abstract issue; the racialized flaws in digital programs have real-world implications in the form of wrongful detention, financial exclusion, and profiling of minority groups.


Compounding Discrimination: Digital ID and Exclusion

The digitization of analogue identification programs is often touted as a route towards inclusion. From finance to healthcare, the availability of digital programs could extend services to those individuals who are otherwise excluded because of geography, time constraints, or socio-cultural issues.

However, digital identification can also embed existing disparities and compound the impact of exclusion. Kenya’s digital identification program is a case in point.

From education to healthcare, a digital ID is required to access public services in Kenya (Image: Shutterstock / Sandra van der Steen)

Kenya is an ethnically diverse nation, but discrimination against certain minorities is both widespread and systemic. Muslim Kenyans have been subject to special vetting in the country since the 1980s, purportedly for security reasons.

As such, Muslim groups in Kenya already face myriad bureaucratic barriers to obtaining national IDs. As reported in WIRED:

“If you belong to one of those tribes, you end up waiting months to appear in front of a vetting committee. You may have to produce an extra set of supporting documents - even documents from your grandparents or great-grandparents - and your process can take years.”

When Kenya adopted Huduma Namba - the new digital version of their national identity program - this institutional discrimination carried over. To obtain a digital ID, citizens have to provide proof of identity, but discriminatory vetting of Muslims means that large numbers of Kenyans have no formal identity documentation.

As a result, many Kenyans remain locked out of Huduma Namba. And, what’s more, the consequences of not having a digital ID are far worse than those associated with not having an analogue ID.

A digital ID is now required to access almost any public service in Kenya, from education to utilities to healthcare. In this case, then, digital identity has only compounded and aggravated the impact of existing bias.

Similar patterns of discriminatory digital ID can be seen in India, Uganda, and other countries around the world. Ultimately, where digitized programs are built on an existing landscape of prejudice, the consequences of inequality are amplified.


Assisting Discrimination: Digital ID and Persecution

In 2019, researchers from the University of Exeter Law School demonstrated that, because digital identity makes minority status more apparent, digitized programs could allow for more efficient persecution of minority groups.

Identity data has historically been used for identification of targeted groups (Image: Shutterstock / Lena Ha)

Highlighting the experiences of the Rohingya people in Myanmar and the Uyghurs in China, Dr Ana Beduschi, who led the research, cautions:

"Technology alone cannot protect human rights or prevent discrimination… Having a digital identity may make people without legal documentation more visible and therefore less vulnerable to abuse and exploitation.

“However, it may also present a risk for their safety. If the information falls into the wrong hands, it may facilitate persecution by authorities targeting individuals based on their ethnicity.”

Writing for The Engine Room, Zara Rahman notes the role that data collection can play in persecution:

Data gathering around sensitive topics has a long history of being used in malicious and dangerous ways

“There have even been multiple occasions when mass surveillance and data collection have played key roles in facilitating humanitarian crises, including genocides.”

Rahman goes on to cite examples of identity data being used in atrocities, including the 1994 Rwandan genocide and the Holocaust. In each case, registration systems, census data, and identity cards enabled the identification of targeted groups and facilitated the resulting crimes against humanity.

With the support of more efficient and comprehensive digital identity programs, malicious actors could be better equipped than ever to identify and target minorities.

This risk is not a mere abstraction. In May 2020, the American Civil Liberties Union (ACLU) filed a lawsuit against the facial recognition firm Clearview AI over its "unlawful, privacy-destroying surveillance activities."

The firm’s software scrapes publicly available images and information from social media to create a database of ‘faceprints’; law enforcement agencies can then automatically scan this database using facial recognition to see whether any of the images match a suspect.
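In rough terms, a system like this reduces every scraped photo to a numeric ‘faceprint’ (an embedding vector) and compares a probe image against every stored vector. The sketch below illustrates only that matching step, under loose assumptions: the profile names and vectors are invented, and a real system would derive its faceprints from a trained neural network rather than hand-written numbers.

    import math

    # Hypothetical database of "faceprints": embedding vectors derived from
    # scraped public photos, keyed by the profile they came from.
    database = {
        "profile_A": [0.12, 0.85, 0.33],
        "profile_B": [0.90, 0.10, 0.41],
    }

    def cosine_similarity(a, b):
        """Similarity between two embedding vectors (1.0 = identical direction)."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    def search(probe_faceprint, db, threshold=0.9):
        """Return every profile whose stored faceprint is close enough to the probe."""
        return [name for name, vector in db.items()
                if cosine_similarity(probe_faceprint, vector) >= threshold]

    # A probe image - say, a face captured from CCTV - reduced to its faceprint.
    print(search([0.11, 0.86, 0.30], database))

The sketch underlines how little stands between a face in a crowd and a name once such a database exists: a single similarity threshold separates "no result" from a profile and everything linked to it.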

A recent New York Times editorial observes: “The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.”

Meanwhile, a document obtained by BuzzFeed News reveals that Clearview AI is currently looking to expand into countries with recorded human rights abuses, including the United Arab Emirates and Qatar.

Speaking to BuzzFeed, Albert Fox Cahn of the Surveillance Technology Oversight Project, commented: “It’s deeply alarming that they would sell this technology in countries with such a terrible human rights track record, enabling potentially authoritarian behavior by other nations.”

As digital identification technologies continue to advance, it’s vital to consider their potential role in abetting discrimination and persecution - without this awareness and oversight, digital ID could prove a death sentence for some.


Resisting Discrimination: Towards Good ID

The links between digital identity and discrimination are clear, but it’s not enough to simply acknowledge the problem - we must also address it.

There is no easy fix here, but if we take each of these three issues in turn - digital identity reflecting, compounding, and assisting discrimination - then we can arrive at some conclusions.

While homogeneous teams can inadvertently develop flawed products that exclude minorities, the opposite is also true. As the non-profit organization Women in Identity explains: “Digital identity solutions built FOR everyone are built BY everyone.”

The need to discuss discrimination in digital ID is greater than ever (Image: Shutterstock / Jacob Lund)

Research Lead Louise Maynard-Atem elaborates: “At Women in Identity, we recently published an open letter to the identity industry that tackles this very subject. Our belief is that digital identity solutions should be as diverse as the communities they serve and the problems they solve.”

We’re at a unique moment in history, so let’s turn this global awakening into real change in our industry

Finally, we must acknowledge that, regardless of whether a system is inclusive or not, data in the wrong hands can still be dangerous.

Digital identity programs can be, and have been, used as tools for persecution. That’s why Good ID programs must guarantee protections around personal data and limit the amount of information gathered, and thus the potential for harm.

Often the most inclusive option is not to mandate ID programs at all. If a digital ID is not fair, accessible, and safe for all, then it should not be required in order to access services.

Ultimately, prejudice is a societal ill that must be addressed at the societal level. However, there is a great deal that can be done within the identity community to address some of the problems associated with discriminatory digital ID.

Recent events have ignited renewed discourse on prejudice, discrimination, and persecution. The identity community must not allow this moment to pass us by. The need to discuss discrimination in digital ID is greater than ever.

The time for change is now.