At the ID2020 Summit in New York City on September 14, 2018, the day was full of robust discussion about what constitutes a good identity and how that type of identity can help disadvantaged populations around the world. Toward the end of the day, one gentleman stood up and spoke. He said, “We have all heard of the slogan ‘Don’t be Evil.’ We all know that aspiration can be brought low by human frailties and failures. So that is not the standard we must use. Instead, the systems that we design, that we promote, and that we use, must be built on the principle of ‘Can’t be evil.’ Privacy by design, protection by design, built in a way that, even if the designers themselves adopt evil intent, they cannot use their systems for harm. That is what we must adhere to.”
That day my own organization, iRespond, was awarded one of the first two grants offered by ID2020, to pilot a program giving refugees portable access to their own medical records, both from within the camp and after they leave its relative security. That was exciting enough. But that gentleman’s wise words have stayed with me.
Biometric identification and identity systems are powerful. The stories of their actual and potential use can be scary. Biometric information should be, and is, treated as a special category: while you can get a new passport or credit card number, you can’t get a new set of irises, a new set of fingerprints (with a caveat, see below about faking or stealing fingerprints), or a new face!
The technology seems to be getting ahead of the regulatory environment in which it operates. However, there is enormous variation among those systems. Discussing them as an undifferentiated mass is inaccurate and does a disservice to the public and to policymakers grappling with the technology. (A couple of examples of the type of analysis that fails to look at the full spectrum of biometric systems are here and here.)
Neither of those analyses asks some simple questions:
- Is there a difference between biometric systems?
- Is it possible to use biometrics in a way that protects privacy?
- Can you use biometrics without actually storing the raw biometric data?
- What are the criteria by which we can judge whether a biometric identification technology or system is good or not?
The answer to the first three questions is clearly: “Yes.” I know that because the system my organization employs protects privacy, supports individual control of data, does not store raw biometric data, and is vastly different from other systems in use. The fourth question is the one that all of us should be asking, so we know how to judge the quality of a biometric service. They are different! What do we look for when we try to tell the good from the bad, the best from the worst?
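To make the third question concrete, here is a simplified sketch of how a system can match biometrics without ever storing raw biometric data (this is an illustration, not iRespond’s actual architecture): each scan is reduced to a derived numeric template, the raw capture is discarded, and later matches compare templates against a similarity threshold. The feature vectors and threshold below are hypothetical.

```python
import math

def to_template(feature_vector):
    """Normalize a feature vector extracted from a scan. Only this
    derived template is kept; the raw image is never stored."""
    norm = math.sqrt(sum(x * x for x in feature_vector))
    return [x / norm for x in feature_vector]

def similarity(t1, t2):
    """Cosine similarity between two stored templates."""
    return sum(a * b for a, b in zip(t1, t2))

def verify(enrolled_template, live_capture, threshold=0.95):
    """One-to-one check: does a fresh capture match the enrolled template?"""
    return similarity(enrolled_template, to_template(live_capture)) >= threshold

# Enrollment keeps only the template; the capture itself is discarded.
enrolled = to_template([0.9, 0.1, 0.4, 0.2])
print(verify(enrolled, [0.88, 0.12, 0.41, 0.19]))  # similar capture -> True
print(verify(enrolled, [0.1, 0.9, 0.2, 0.4]))      # different person -> False
```

Because only the irreversible template is retained, a breach of the database does not hand an attacker a reusable iris image or fingerprint. Production systems use far richer templates and protections (e.g., template encryption), but the storage principle is the same.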
There are many people smarter than I who are involved in developing the standards and criteria for what constitutes a “good” ID system. ID2020 is one. GoodID is another. Here are some of their best ideas:
The key aspects to look for include:
- Function. Note whether a biometric is to be used as a second-factor verification (one-to-one matching) or as a primary factor for identification (one-to-many matching), which requires a high degree of uniqueness. What level of confidence does that match need to have for the given application? In every application, there are risk/convenience/cost tradeoffs that make one biometric modality more acceptable than others.
- Accuracy. DNA is the most accurate, but it is invasive. Iris scans are nearly as accurate, but not invasive. Fingerprints are fine for one-to-one matching, not very good for matching against a large data set, and increasingly vulnerable to being stolen (videos showing how to copy fingerprints are easy to find on the internet). Facial recognition technologies increasingly use artificial intelligence to improve accuracy, but many systems in use lag badly; 50% or 70% accuracy is simply unacceptable.
- Privacy. Engage in privacy by design, in every step of the process, from the very beginning. Adding privacy protections as an afterthought, as a Band-Aid to an existing system, is likely to produce an inferior product, leaving those in the system vulnerable.
- Control. Put the individual in control of her identity (think of identity as a collection of attributes or credentials). Ideally, store attributes encrypted and per-relationship, and accessible by the issuer, subject/holder, and by relying parties only when authorized by the subject. In other words, allow the individual to decide which attributes or credentials to share.
- Consent. Wherever possible, ensure informed, meaningful user consent and control. For consent to be meaningful, a subject must be able to decline participation in the system without losing access to any services or benefits.
- Power. When balancing what is good for the individual against what is good for large organizations, give preference to the interests of the individual.
- Limits. Limit data collection and use to a specified purpose. Protect the individual from data correlation across domains and contexts. Be careful of blindly following a path of “inclusion” without checks and balances. Where possible, sunset the collection and use of data, and discard or destroy it when it is no longer needed.
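The one-to-one versus one-to-many distinction in the Function bullet can be sketched concretely. The gallery, subject IDs, and thresholds below are hypothetical; the point is that a one-to-many search makes many comparisons, each a fresh chance of a false match, which is why identification demands a more distinctive modality and a stricter threshold than verification.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature templates."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify_one_to_one(claimed_template, probe, threshold=0.95):
    """Second-factor check: the subject claims an identity, and we
    compare the probe against that single enrolled template."""
    return cosine(claimed_template, probe) >= threshold

def identify_one_to_many(gallery, probe, threshold=0.99):
    """Primary identification: search every enrolled template.
    The threshold is stricter, because every extra comparison
    is another opportunity for a false match."""
    best_id, best_score = None, threshold
    for subject_id, template in gallery.items():
        score = cosine(template, probe)
        if score >= best_score:
            best_id, best_score = subject_id, score
    return best_id

gallery = {
    "subject-001": [0.9, 0.1, 0.4],
    "subject-002": [0.1, 0.9, 0.3],
    "subject-003": [0.5, 0.5, 0.7],
}
probe = [0.89, 0.11, 0.41]
print(verify_one_to_one(gallery["subject-001"], probe))  # True
print(identify_one_to_many(gallery, probe))              # subject-001
```

A modality that comfortably clears the verification threshold may still be unsuitable for identification once the gallery grows to millions of entries, which is the risk/convenience/cost tradeoff the Function bullet describes.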
This is not comprehensive; it’s a start. Hopefully it illustrates several ways to examine and judge a biometric identity system.
Those who opine on the benefits or faults of biometric systems need to acknowledge that there are substantial differences between the various systems in use or under development. Critiques should be specific, and critics need to “look under the hood” of the systems they examine. Criticizing one system based on the flaws of another is intellectually lazy and not defensible.
Those of us working with or on privacy-protecting biometric systems also need to improve the quality of our communications. Citizens, policy makers, and technologists all need to understand the differences between an authoritarian government’s draconian facial recognition system used to control ethnic and religious minority populations, and a system that uses iris scans to give HIV positive men and women anonymous access to health care, in countries where their diagnosis can lead to harassment or arrest. It is our job to explain these differences, and we need to do better.
No system, biometric or otherwise, is perfect. Everything has its costs and benefits. But those prepared to follow the wise words of that gentleman in New York, and to build systems that “can’t be evil” should have their work judged on its own merits, and not on the scary, dystopian visions of autocratic governments and profit-seeking social media corporations. Fair enough?