The right to know
Leading experts on privacy, law, and digital identity agree that transparency is essential in a modern, digital society. This is especially true when it comes to our issued IDs, like government identity numbers or bank accounts, as well as the de facto digital identities created by our data trails – Google searches, Facebook profiles, even Internet of Things devices like Ring doorbells, Amazon Alexa assistants, or Google Home devices.
Experts argue that people have a right to know what data is being collected and how it is being used, whether for state-issued, de facto, or other kinds of ID.
Substance, procedure, and efficacy are the three components of the "right to know," according to Elizabeth M. Renieris, founder of the hackylawyER consultancy and Fellow at Harvard's Berkman Klein Center for Internet & Society. Or, as she puts it: "here’s what you need to know, here’s how you act upon it, and here are the support mechanisms to help you exercise those rights."
Without clarity, it can’t be ‘good’
Nathalie Maréchal, Senior Research Analyst at New America’s Ranking Digital Rights program, notes this is imperative when considering Good ID versus “bad” uses of the same systems:
“Whether you’re talking about the data collection that a company does for targeted advertising, or for creating AI models, or for any other kind of process – or data that a nation state collects – you have to be really clear that digitizing processes can be efficient, can create value, can do lots of good things, but it can also make it a lot easier to do bad things with the data.”
This translates most often to data-related laws, because as Renieris explains: “really, when we talk about identity, we’re talking about various permutations of data points that we use to identify ourselves in different contexts. So we make this leap from data points to identity data and back to data-related laws.”
For Kaliya Young – known as Identity Woman, an expert on self-sovereign identity and identity on the blockchain, and co-author of A Comprehensive Guide to Self-Sovereign Identity – such permutations lead to conflicting understandings of data protection, transparency, and even identity itself: “there are significantly different starting points... using the same language and not understanding they’re meaning vastly different things. So, if we want to get accountability and transparency, what are we talking about?”
Most technology users will be familiar with common forms of data-related laws and the right to know: privacy policies and data protection policies. Whether for smartphones, office software, or advertising data collection, policies like Apple’s privacy approach and the General Data Protection Regulation of the European Union are sparking debate globally.
As Renieris explains: “this plays out in pretty much every legal framework as a right to information and a right to access – ‘who has (or should have) access to my information, when, and how’ – that’s what those rights are speaking to.”
Global data protection laws and privacy policies are growing in prevalence, giving us greater understanding of – and agency over – our digital lives. We know more about how our digital identities are constructed: by the tools we choose to use, by companies, and by governments.
Maréchal explains this awareness applies to governments and companies alike: “if you don’t know what a government agency is doing, you can’t possibly hold them accountable. Crimes that occur in secret don’t get prosecuted.”
“That’s why it’s really important for any government entity that is putting together an ID system to be really transparent to the public, but also to legislators, to regulators, to journalists, to civil society organizations... about what you’re doing, why you’re doing it, on what timeline you’re doing it, what option do people have to opt out, what option people have to check that the data that is being held about them is accurate, and correct it if need be.”
But how many people are aware of the “right to know” mechanisms, even when they are available?
According to June 2019 Eurobarometer research, 73 percent of Europeans surveyed had heard of at least one of the six tested rights guaranteed by the GDPR, like the right to be informed, the right to restrict processing, or the right to access. 67 percent knew about the GDPR itself, and 57 percent knew about their national data protection authorities.
That does not necessarily hold true for those outside the EU and the GDPR’s jurisdiction. A 2018 survey conducted by the Pew Research Center found that Americans “struggle to understand the nature and scope of the data collected about them. Just 9 percent believe they have ‘a lot of control’ over the information that is collected… even as the vast majority (74 percent) say it is very important to them to be in control of who can get information about them.”
“Things like the Good ID campaign are really useful for raising awareness but control is illusory and maybe even the wrong framing,” Renieris says. “The amplification of the digital realm is such that we now vastly overexpose ourselves, often against our will. It’s hard to really internalize what that means. This exposure can feel 'out of sight, out of mind'. Still, the awareness is important for accountability.”
And in emerging markets, experts like Young and Maréchal argue, weak societal institutions and nascent or absent regulation can lead to unintended rights abuses when governments and other organizations creating or using digital IDs are not transparent with citizens or users.
As Young explains, in her personal experience, citizens and proponents of India’s Aadhaar system were not aware of its central features, or misunderstood its architecture – a failure in transparency in her eyes.
Moreover, Young believes a key element of transparency – open information about the identity system – is being actively suppressed in India through the UIDAI filing “a significant number of first information reports against security researchers, journalists and NGOs, and basically suppressing all speech that’s critical of their system – the questions about ‘is it good or not,’ aren't even allowed to surface.”
However, it is possible for the right to know – like that practically implemented inside the EU via the GDPR – to come about in other contexts. It is also possible for governments and businesses to demonstrate transparency beyond the right to know how identity data is used. Leaders gain trust and participation when they extend that right to include knowing how and when decisions are made, who will be held responsible, what staff, vendors, or technologies will be leveraged and their track record, how much the system costs, what alternatives exist, and more.
As Young explains, the British Columbia province in Canada has achieved its own transparent success when it comes to their provincial IDs:
“They were trying to figure out how to take commercially available off-the-shelf technology, and have them align with what our community was saying about user control and Kim Cameron’s laws around choice and directed identity. They also had compelling citizen buy-in to solve this problem. So they were transparent about the problems they were trying to solve, and how they would solve it.”
She notes to achieve transparency in the province, British Columbia hosted “a forum of international and local experts to engage around the system” as well as “a citizen’s jury or citizen panel” to get “in-depth feedback and engagement and build legitimacy.” The government then responded to what they heard and learned from this public input.
Such a high level of transparency is not always the norm. Renieris explains that while many formal laws mandate government transparency mechanisms, in the U.S. at least, tech behemoths like Google and Apple are free to make their own decisions about how transparent they are and what users can do with that information.
She expounds: “Apple’s a good example of (voluntary) transparency without (formal) accountability. They’ve got privacy as the product, and privacy is their selling point. Facebook, Alexa - all these things are really good examples of theoretical or formal transparency. They’ve got their notices about their privacy practices. But actually, because of their market power, because you [as a consumer] don’t really have any bargaining power… there’s no real accountability.”
The modern transparency toolkit
So what enables transparency to exist when it comes to our IDs, whether they’re on our smartphones, in our wallets or through our biometric identifiers like our irises or fingerprints?
For Maréchal, it’s all about having the right institutions and protections in place before the identity system is designed: “These are things like having the rule of law, having democracy, having checks and balances between the different state entities. Making sure that there’s enough capacity in the legislature, that there’s a data protection authority, that there’s a comprehensive privacy regulation and data protection regulation. Without all these things put in place, if you just parachute in a digital ID system, in the best case scenario it’ll be useful; worst case scenario, it’ll enable human rights abuses.”
Renieris agrees, but notes this isn’t always possible: “My preference would be: you have the legal frameworks in place and then you implement. I don’t think that’s the reality. The reality is these digital ID systems and various applications are rolling out, and there’s almost an inevitability to it, and so the law is never going to move as quickly, and the law’s not going to be neatly in place before the ID systems are introduced and are in use in a different jurisdiction.”
Instead, she believes a multi-faceted approach is possible, incorporating laws and strategic coalition-building between government, the private sector, academia, civil society and individuals: “You have to have that informal coalition-building at the same time as you’re formalizing the laws and regulations and as technical and commercial solutions are emerging.”
For instance, Young has been facilitating multi-stakeholder collaboration via the Internet Identity Workshop for the past 15 years. Other coalition-building among institutional actors has emerged more recently through the Better Identity Coalition, the World Bank’s ID4D program, the UN High Level Panel on Digital Cooperation, the #GoodID movement, and ID2020 (for whom Renieris is a technical advisor).
Cautious optimism ahead
For identity to be truly Good ID, experts like Renieris, Maréchal and Young agree protections like transparency absolutely need to be in place to ensure the rights of individuals are secured while designing and implementing private or public sector identification systems.
Young sees a future of secure identities like self-sovereign IDs that protect individuals and, in her words, help “to get through the wormhole around accountability and the need to support freedom,” noting: “I think it’s going to be critical to support people going online and doing whatever they want, wherever they want, as long as they're not committing fraud, doing other illegal things.”
But Maréchal is a bit more measured about future possibilities, warning that Good ID is almost inevitably countered by “bad ID,” and that experts should be prepared to tackle the latter. Renieris agrees. As Maréchal concludes:
“I think it’s so important to think ahead of time about all the different ways that a digital ID program could go wrong, or could be used for bad things, whether intended or not, and take very seriously the possibility that it may not be possible to allow the good uses while preventing the bad uses from happening.”