As the pandemic continues and many of us remain confined to our homes, we are shopping, scrolling, and socializing online more than ever before. But for many people, the internet is far from a safe space.
Internet harassment is rampant, particularly against marginalized groups. Its consequences can be profound - a 2017 Amnesty International poll found that 32% of women who had experienced online harassment said they had stopped posting their opinions on certain issues.
At the same time, in a turbulent year defined by the coronavirus pandemic, protests, and a US election, concerns about misinformation have reached new heights. From QAnon to panic about 5G, conspiracy theories have flourished in 2020, while deepfakes’ encroachment on our online spaces shows that warping the truth has never been easier.
A Balancing Act
A common factor among accounts that generate and amplify both hate speech and harassment is anonymity. Twitter users, for example, can easily create a profile with no real name or photo; when blocked, they can simply hop to another account and continue the harassment.
Away from major players like Facebook and Twitter, other apps that allow anonymity suffer from a lack of moderation. A report from advocacy group Hope Not Hate found that platforms co-opted by extremists, such as Twitch and Discord, allow for further radicalization and organizing, as they are “unable (or unwilling) to tackle extremists’ abuse of their sites.”
But while anonymity may create a space for the worst kind of discourse, it can also be a vital tool for freedom of speech, as well as offering protection from what we are increasingly being stripped of online: privacy.
The privacy afforded by anonymity can allow social media to be a safe space for people to seek out information or express elements of their personality that they wouldn’t otherwise feel comfortable disclosing. From trans people trading advice, to women seeking information about abortion, anonymity can in some cases provide a vital pathway to freedom from abuse.
Twitter explains its policy in similar terms: “Twitter provides journalists, activists, political dissidents, whistleblowers, and human rights advocates with a mechanism to speak safely and securely, which is vital to ensure they carry out their work,” a spokesperson told Good ID.
Both Twitter and Facebook are investing in AI-driven proactive moderation, but there are limits to what an algorithm can identify - and in the meantime, abuse continues to pervade the platforms.
How, then, do we balance the right to privacy with the need for accountability?
The Power of Choice
David Babbs is the project lead for UK-based campaign Clean Up the Internet. After questioning the internet’s failures to live up to its potential, the group identified anonymity as a problem, and has set out a vision for how to keep anonymity from enabling abuse.
“There is more abuse from anonymous accounts, and the abuse which comes from anonymous accounts can be even more frightening,” Babbs explains.
The campaign proposes that everyone on social media be given the choice to prove their identity. Verification would work much the same way as Twitter’s blue tick - but be available to everyone.
Users could then choose whether to see posts from unverified accounts; the idea is that users whose accounts are tied to their real identities would be far less likely to spout hate and abuse.
Crucially, this approach would still let users who prefer to stay anonymous do so. The right to anonymity would remain, and even those with verified accounts could continue to post under a pseudonym.
The technicalities of Clean Up the Internet’s proposal haven’t yet been worked out - it’s simply setting out an idea. And it’s not a completely novel one - Facebook has already started to verify the identities of accounts which go viral, in an effort to ensure fast-spreading posts are from authentic accounts.
Babbs stresses that any information used to verify our identities must be used only for that purpose, with stringent legal protections covering this data.
But Facebook’s track record with privacy would likely leave many skeptical. Do we really want to offer up more of our personal information to big tech?
Elizabeth Renieris, founder of hackylawyER, says providing our real names to social media companies could “pose serious risks for individuals and communities who may be in danger or persecuted for their beliefs, personal traits, and characteristics.
“Doing so would also make online and offline behaviors even more correlatable than they are now, which adds to the breadth, depth, and invasiveness of the profiles and dossiers built up on each individual, further threatening our autonomy and personal sovereignty,” she said.
In March, New York Times reporter Nicole Perlroth shared a callout on Twitter from a nurse describing the dire need for a digital space where healthcare professionals on the front line of the COVID pandemic could securely share knowledge about combating the virus. The nurses’ identities would remain anonymous, but crucially, their credentials would need to be proven.
In Forbes, David G.W. Birch calls the not-yet-established technology that could allow digital credentials to verify us “counterintuitive cryptography”: “technology that can deliver some amazing solutions to serious real-world problems, solutions that simply do not exist in a world of ID cards and databases.”
Could looking beyond proving our identities, to proving our credentials, be the way forward?
Digital wallets could allow us to prove things about ourselves without handing over personally identifiable information - demonstrating what we are without revealing who we are. That could limit the spread of misinformation and create safer spaces online.
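As a rough illustration of the idea - not any platform's actual system - a credential scheme separates the attribute being attested from the identity of the person holding it. In this toy Python sketch, a hypothetical issuing authority (say, a nursing board) signs the bare attribute “registered_nurse”, and a platform can check that attestation without ever learning the holder's name. (Real systems would use public-key signatures or zero-knowledge proofs rather than the shared secret used here for simplicity.)

```python
import hmac
import hashlib
import json

# Hypothetical issuing authority's signing key. In a real deployment
# this would be a private key, with verification done against a public key.
ISSUER_KEY = b"nursing-board-demo-secret"

def issue_credential(attribute: str) -> dict:
    """Issuer attests to an attribute (e.g. 'registered_nurse').
    Note the payload contains no name or other identity data."""
    payload = json.dumps({"attribute": attribute}).encode()
    proof = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "proof": proof}

def verify_credential(cred: dict) -> bool:
    """Verifier checks the attestation: it learns WHAT the holder is,
    never WHO they are. Any tampering invalidates the proof."""
    expected = hmac.new(
        ISSUER_KEY, cred["payload"].encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, cred["proof"])

cred = issue_credential("registered_nurse")
print(verify_credential(cred))  # True: attribute proven, identity withheld
```

The key design point is that the verifier's check depends only on the signed attribute, so the same mechanism could vouch for “qualified doctor” or “over 18” without building a profile of the person behind the account.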
While certainly possible, this kind of technology has yet to be implemented at scale. In the meantime, we must consider other ways to combat abuse and fake news online - ways that both include and go beyond the problem of anonymity.
“What we’ve now seen time and again with mis- and disinformation, hate speech, and harassment, among other harmful online content, is that the harm is not so much in the speech or the content itself as it is in the reach or dissemination of that content,” said Renieris.
“Therefore, effective interventions would not focus on the individual but rather on the companies and platforms themselves, including their toxic targeted advertising-based business models, opaque algorithms that determine how content is distributed and amplified, and widespread corporate governance failures.”
Ultimately, beyond any one policy change, these platforms have the power to combat the issues of harassment and misinformation. But while the current system remains so profitable, we cannot rely on them to take action voluntarily.
As policymakers, civil society organizations, and users, we can push for online spaces that protect both privacy and freedom of speech, while limiting abuse and misinformation.
Then the power of the internet will no longer just be in the hands of tech companies - but in everyone's.