Technology is the engine at the heart of the current ID revolution. The technologies behind different identity systems — how they are designed, rolled out, and managed — can include and protect individuals, offering privacy, security, and choice. Or they can reinforce power imbalances, exclude, discriminate, and enable surveillance.
Good technology design and implementation can play a powerful role in ensuring everyone can fully engage in digital society, unlocking access to education, financial services, and voting. With careful design and safeguards, all forms of ID — issued, de facto, and self-asserted — can embody Good ID.
This interview series explores the technology choices made by businesses, academia, nonprofit organizations, and innovative technology entrepreneurs, to contribute to a thoughtful debate on digital identity policies, practices, and technologies.
First, we talk to Sebastian Manhart, Chief Operating Officer at Simprints, a non-profit technology startup hosted by the University of Cambridge that builds biometrics for identification to "ensure every person is counted in the fight against global poverty."
Biometric technologies — which capture unique physical characteristics such as fingerprints and irises, or behavioral traits — are increasingly used in digital identification systems to verify or authenticate identity and to deduplicate identity records across different databases or ID schemes.
Simprints recently won the Mission Billion Challenge for its open source toolkit that uses audio messages during registration to help people provide meaningful informed consent and to understand how their data is going to be used.
Sebastian, can you tell us about Simprints and your mission?
Simprints is a non-profit biometrics company, and our mission is to radically increase the transparency, effectiveness, and accountability of international development.
We work with clients that deliver essential services such as healthcare, education, and cash to hundreds of millions of people every year. But there’s a challenge: more often than not, the organizations that deliver these services don’t know who they’re reaching. So they’re relying a lot on guesswork. This can mean that services reach the wrong people, or don’t reach people who need them.
At Simprints we believe that functional digital IDs can be one of the most powerful levers we have in the fight against poverty. We build and responsibly deploy biometric technology that is truly built for context, that is affordable, that is secure, and that respects privacy rights.
To date, we’ve provided digital IDs to 250,000 people, and that is going to be four million people by the end of 2020.
Tell us more about what problems Simprints is trying to help solve, and how it works.
The problem we’re helping to solve is what’s often termed the "ID gap" or "ID bottleneck."
An example is our work with BRAC (the Bangladesh Rural Advancement Committee). BRAC runs a community healthcare program across the whole of Bangladesh. Thousands of local female community health workers go from house to house delivering essential services, mainly to mothers and their children.
The challenge is that they might have thousands of mothers that they have to visit, and the names overlap, and there’s a lot of migration. This leads to misidentifications.
That might mean the wrong medicine is given, or there are gaps in the continuum of care, so mothers don’t get referred to hospitals properly, which has a negative effect on maternal and infant mortality rates.
So in partnership with BRAC, we ran an 18-month pilot with 22,000 mothers in Dhaka. The vast majority of these women have no formal (government-issued) ID and no birth certificates. Names overlap and people usually don’t know their age. Coupled with a lot of migration, it is virtually impossible to link a mother to a unique record. We equipped the health workers with biometric technology that they used to enroll these mothers – offline – and then linked them when they saw them the next time to a unique record, using the fingerprint biometrics.
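The offline enroll-then-identify workflow described here can be sketched roughly as follows. This is a hypothetical illustration, not Simprints' actual implementation: real fingerprint templates come from a matcher SDK, whereas here they are simplified to plain numeric feature vectors compared by Euclidean distance, and the match threshold is an invented value.

```python
# Hypothetical sketch of offline biometric enrollment and 1:N identification.
# Templates are simplified to numeric feature vectors; a real system would
# use a fingerprint matcher SDK and an empirically tuned threshold.
import math
import uuid

MATCH_THRESHOLD = 0.5  # invented value for illustration


def distance(a, b):
    """Euclidean distance between two simplified templates."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


class OfflineEnrollmentStore:
    """Local store a health worker's device could keep while offline."""

    def __init__(self):
        self.records = {}  # record_id -> template

    def enroll(self, template):
        """Create a new unique record for a person seen for the first time."""
        record_id = str(uuid.uuid4())
        self.records[record_id] = template
        return record_id

    def identify(self, template):
        """1:N search: return the closest enrolled record within the
        threshold, or None if no enrolled template is close enough."""
        best_id, best_dist = None, MATCH_THRESHOLD
        for record_id, stored in self.records.items():
            d = distance(template, stored)
            if d < best_dist:
                best_id, best_dist = record_id, d
        return best_id


store = OfflineEnrollmentStore()
mother_id = store.enroll([0.12, 0.80, 0.33])  # first visit: enroll offline
found = store.identify([0.13, 0.79, 0.34])    # next visit: re-identify
assert found == mother_id
```

The key property is that enrollment and identification both work entirely on-device, with no connectivity required; records can later be synced to a central system once the worker is back online.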
We ran a study, backed by the U.K.’s Department for International Development and the Global Innovation Fund, which looked at the impact that the use of this biometric technology had on this program. We compared it to a control group that didn’t have the use of biometrics. And the key takeaway is that we saw a 38% increase in antenatal care visits. In other words, because there was a more reliable way to identify expectant mothers and track whether they had been seen by community health workers, more of them received care during their pregnancy. This can literally save thousands, if not hundreds of thousands, of lives if applied at scale.
That’s just one example where using these unique IDs can really solve a very important problem that is not just an inconvenience, but is costing lives.
From the perspective of Simprints, how are ethical considerations or notions of "good" guiding the design of your technology?
The term "good" is quite loaded. At Simprints, we’re not in this to maximize profits, we’re in this to maximize social good. So the notion of doing something with the best interests of the target individual in mind has been at the center of everything we do. It’s easier to give specific examples to explain this.
I’ll give you two.
The first is around designing technology for the right context. There is a well-established biometrics industry. It’s been around for 40-50 years commercially, and there are literally thousands of companies building technology that is incredible – very sophisticated.
When we started Simprints, we first took the leading existing, off-the-shelf biometric technology to Zambia, Benin, Nepal, and Bangladesh. It just didn’t work. Once we started digging into it, we saw a couple of reasons for this.
There was bias in the algorithms themselves, which had been designed mainly for white males like myself, who do little manual labor. And the casing and hardware had been designed for indoor, connected, sterile contexts.
So we realized that if you want to make a difference you have to start with the individual. Who are they? If they are rural farmers in Bangladesh, in hot environments and with worn fingerprints, you have to design technology for that specific individual. That’s exactly what we have done. The populations we work with tend to do a lot of manual labor. Arthritis is very prevalent. It’s just one example, but it really drives it home: people very often can’t stretch their fingers.
So if you want to build fingerprinting technology that these people can use, you can’t expect them to stretch their finger on a desktop-based flat scanner, which is how the majority of devices you can find are designed. We had to build something ergonomic that could be turned in a way that even people with arthritis could use it.
Then there is the unhelpful bias in the algorithms themselves. When we did our initial testing in Zambia, for example, we saw that 84% of the people we worked with had damaged fingerprints. This is why you often hear, "Biometrics doesn’t work in the developing world." Of course it doesn’t if you build it with a dataset composed entirely of non-representative individuals!
So we spent two years just collecting data. We gathered 130,000 fingerprint images of representative populations, to influence how we would design our product, and make sure that it can work.
The results were very good, but the fact that this worked shows it had not been done before; nobody was really building for that context.
The second example is around privacy. We take a "Do No Harm" approach because we’re dealing with sensitive data. If we can’t ensure that we’re not doing any harm, there’s no point starting in the first place.
Before we had developed any products, we already had laid out quite clearly and publicly our stance, which was essentially that we would uphold the same privacy rights that people enjoy, for example, in the U.K., no matter where we operate. Which is easier said than done!
When you’re thinking about privacy, if you’re really building with the user in mind, you have to implement principles such as data minimization, segregation, pseudonymization, and real encryption at rest. Those are all best practices that everyone should follow because they’re in the interest of the individual, but very often the financial incentives to pay for that best practice are not there.
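As a rough illustration of one of those principles, pseudonymization of an identifier can be done with a keyed hash, so raw identifiers never need to sit alongside sensitive data. This is a minimal sketch using Python's standard library; the secret key, identifier, and field names are invented for illustration and are not Simprints' actual scheme.

```python
# Minimal sketch of pseudonymization with a keyed hash (HMAC-SHA256).
# In a real deployment the secret key would live in a secured key store,
# never in source code.
import hmac
import hashlib

SECRET_KEY = b"example-key-held-separately"  # invented for illustration


def pseudonymize(identifier: str) -> str:
    """Replace a raw identifier with a stable pseudonym.

    The same input always maps to the same pseudonym (so records can
    still be linked), but the mapping cannot be reversed without the key.
    """
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()


record = {
    "pseudonym": pseudonymize("mother-04217"),  # stored instead of the raw ID
    "antenatal_visits": 3,                      # data minimization: only needed fields
}
assert record["pseudonym"] == pseudonymize("mother-04217")  # stable linkage
assert record["pseudonym"] != "mother-04217"                # raw ID not stored
```

Using a keyed hash rather than a plain hash matters: without the key, an attacker who obtains the records cannot brute-force the pseudonyms back to identifiers.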
What learning can you share about engaging with communities to encourage them to be involved with identity processes and biometric technology?
This has been a fascinating question that we’ve been investigating from day one. We came to this work with our own biases. For example, in Europe, until recently, biometrics was associated with law enforcement, with crime, with the state trying to spy on you.
When we started doing our work in Bangladesh we discovered that people, especially illiterate people, have been using their thumbprint, albeit with ink, to sign legal documents for a long time. We found them much more open.
Having said this, I would be lying if I said this has been a smooth path. We found other places where people thought that we were trying to take their blood with our scanners. We had places where they thought that a demon was living in the scanner. And while those might seem peculiar concerns, they are real. People feel strongly about this, and we need to engage with it.
We have learned that we need to build for context. We focus a lot on training the people in our client organizations who use the technology to create digital IDs. These users are not the same as the end beneficiaries who have a digital ID created at the end of the process; they are the front-line workers, people like the antenatal health workers in Dhaka I mentioned in the first example above. We train these users to really understand concepts such as consent: What is being done? What data is being collected? We then empower them to explain these issues to the person whose ID is being created, in the local language, and in a way that makes sense to that individual.
How do you work with and define consent?
The more you educate an individual, the better-placed they are to refuse to give consent. We are constantly treading that line: how do we empower people at all levels – the person whose ID is being created, the person who is using the technology to create the ID, the person who is storing and referring to that data? At the same time, how do we make sure that the person whose data is being collected has the right information that allows them to make the most informed decision?
From a legal, regulatory perspective we use consent as our lawful basis for processing under the GDPR, so we have to collect informed consent to be within the law as a U.K.-registered non-profit.
But to be perfectly honest that is secondary to our ethical commitment. Consent has to be freely given, explicit, and the individual needs to be able to withdraw it.
We carry out a data protection impact assessment for any data collected for every project. We ask questions. How vulnerable are the people we’re targeting? Are there power dynamics in play that would prevent them from giving free consent? Is there a genuine alternative to using biometrics so that people can easily refuse? Will individuals understand the consent process?
Then we come up with a risk assessment and specific mitigations. To be honest, sometimes we realize as a result of this methodology that we cannot realistically get consent. In those cases, despite the commercial incentives, we say no to the potential client, and we do not further engage.
What role could biometric ID technology players have in the wider effort to promote the attributes of Good ID (e.g. privacy, user-value, user-control, security, inclusion) and ensure they are adopted and shared globally?
This is where advocacy has a huge role to play. Very often the reason why a standard in terms of privacy, data security, interoperability, and so on is not adopted is not a lack of goodwill, or bad intention. It’s a lack of knowledge. It’s a lack of awareness of success stories.
We’ve learned that incentives matter a lot. So until we can shift the incentive structures that, for example, governments or for-profit biometrics companies are driven by, it’s going to be very hard to make a big, meaningful difference.
Finally, is there anything you’d like to say to others reading this and developing their understanding of Good ID?
If technology providers, policy makers, implementers, and the media work together, we can define the course of history, shaping how powerful technologies such as digital ID will be used.
But like any other technology, it can have a seriously negative impact, or it can be a force for good. So we need to make sure that we achieve the latter. We have to raise awareness through movements, such as #GoodID, and really share examples of success, and how we can merge two often conflicting outcomes – profit and social impact.
I think we have a moral responsibility to do so, and I genuinely hope that more people and companies will join this movement.