
Facebook CEO Mark Zuckerberg appears before the U.S. Senate Commerce and Judiciary Committees, 2018
(Photo: Shutterstock.com / Carolyn Kaster/AP)

Big Data vs Democracy: Microtargeting and Politics

Big data is changing politics. How should we respond?

Everything leaves a trace. The conversations we have with friends; the songs we listen to again and again; the news, people, and ideas we explore.

Every choice we make, every word we type, everything we do online creates a data trail - a minute clue about who we are.

And those clues are valuable to all sorts of people and groups - they are the foundation of a $327 billion industry.

Microtargeting is the practice of using personal data to identify users who could be susceptible to a particular product, service, or idea in order to target them with tailored messages.

With the growth of the internet, we’ve seen the commercial use of microtargeting soar - advertisers around the world exploit user data for sales in a practice that Shoshana Zuboff has dubbed 'surveillance capitalism'.

Zuboff writes in her book, The Age of Surveillance Capitalism, that the practice "unilaterally claims human experience as free raw material for translation into behavioural data."

In recent years, another industry has become increasingly reliant on these same microtargeting techniques: political campaigning.

In this context, microtargeting involves using people’s online data to try to influence their political views and behavior.

In practice, this means that everything you do online - from the clothes you buy, to the videos you watch - could determine what kind of messages you see from political groups.

Political exploitation of data is problematic because it infringes on users’ privacy and blurs the lines around transparency and consent. Equally worrying is the broader impact on the democratic process.

Everything you do online could determine what kind of messages you see from political groups (Photo: Shutterstock.com / Eldar Nurkovic)

Your data could be the difference between you knowing, or not knowing, about a particular campaign promise, the date for an important vote, or the location of your nearest polling station.

Your data could determine the extent to which you are encouraged and empowered to fully engage in the democratic process.

Information is powerful. It can be used to help us, it can be used to manipulate us, and it can be used to hurt us. Here, we take a closer look at political microtargeting and explore what can be done to limit the potential for harm - both to individuals and to democracy at large.


The Rise and Rise of Political Microtargeting

Targeted campaigning existed long before the internet. Politicians would canvass in regions known to be electorally decisive; campaign managers would carefully select times and locations to showcase their candidates in an attempt to reach the right people at the right time.

But the digital revolution has transformed campaigning, allowing for the collection and exploitation of user data on an unprecedented scale.

According to Forbes, human beings now generate 2.5 quintillion bytes of data every day, along with approximately 5 billion internet searches on engines like Google.

Tom Dobber and colleagues, writing in the Internet Policy Review, note that the 2016 U.S. presidential election was a turning point for microtargeting: “as candidates, political action committees, and other interest groups were able to take advantage of significant breakthroughs in data-driven marketing techniques.”

These techniques have since been used in the U.K., France, Germany, Brazil, Australia, the Netherlands, and other countries around the world. The authors continue:

Electoral politics has now become fully integrated into a growing, global commercial digital media and marketing ecosystem that has already transformed how corporations market their products and influence

The use of political microtargeting continues to grow. According to the Wesleyan Media Project, in the 2020 U.S. presidential election, political spending on digital platforms will outpace spending on TV advertisements by 500%.

Clearly, political microtargeting isn’t going anywhere; so it’s more important than ever to consider what’s at stake.


The Politics of Privacy

Microtargeting only works because of data. Vast quantities are needed in order to target people effectively. And most users are providing far more than they realize.

ProPublica reports that, in 2016, Facebook was using at least 52,000 attribute categories to classify users. But this data is not only based on the content that users directly share on the platform - it is also the product of behavioral analysis.

A number of big tech companies use individuals’ data to make inferences about them. They create detailed psychological and social profiles of their users, including personality, intelligence, ideology, sexuality, and emotional state.

Big tech companies can use data to create detailed psychological and social profiles of their users (Photo: Shutterstock.com / Alliance Images)

This is valuable because, once users with similar psychological profiles are grouped together, they become an identifiable target market, with predictable behavior and predictable susceptibility to messaging.
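To make that grouping step concrete, here is a deliberately toy sketch in Python of how inferred trait scores might be pooled into addressable segments. Everything in it - the trait names, the scores, the segment labels, and the nearest-centroid rule - is invented for illustration; it is not a description of any platform's or campaign firm's actual system.

```python
# Illustrative sketch only: toy "inferred trait" scores (0-1) for a handful of
# hypothetical users, grouped into segments with a naive nearest-centroid rule.
# All names, traits, and values are invented for illustration.

from math import dist  # Euclidean distance (Python 3.8+)

# Hypothetical inferred traits: (openness, economic_anxiety, political_interest)
users = {
    "user_a": (0.9, 0.2, 0.8),
    "user_b": (0.8, 0.3, 0.7),
    "user_c": (0.2, 0.9, 0.4),
    "user_d": (0.1, 0.8, 0.5),
}

# Hand-picked segment "centroids" a campaign might define (again, invented).
segments = {
    "curious_and_engaged": (0.85, 0.25, 0.75),
    "economically_anxious": (0.15, 0.85, 0.45),
}

# Assign each user to the nearest segment, producing an addressable audience.
audiences = {name: [] for name in segments}
for user, traits in users.items():
    nearest = min(segments, key=lambda s: dist(traits, segments[s]))
    audiences[nearest].append(user)

# Each audience can now be served a different, tailored message.
for segment, members in audiences.items():
    print(segment, "->", members)
```

Real profiling systems operate on thousands of attributes and far more sophisticated statistical models, but the underlying logic is the same: similar profiles are pooled into audiences, and each audience can be shown a different message.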

This predictive behavioral analysis was employed, most infamously, by Cambridge Analytica during the 2016 U.S. election. The organization used data gathered from a third-party personality-test app on Facebook to identify target groups for politicians, including Donald Trump and Ted Cruz.

Behavioral inferences are problematic from a privacy perspective because users have not explicitly consented to their data being augmented in this way, and there is no mechanism to opt out.

Matthew Crain and Anthony Nadler observe that data is being generated "around characteristics and traits that have not been self-disclosed by the targets." Meanwhile, Dobber and colleagues note that "a data breach could expose information about individuals’ income, education, consumer behaviour, but also their inferred political leanings, sexual preferences, or religiosity."

In addition to the type of data being generated, there are also privacy issues around the techniques by which that data is gathered.

Cross-device recognition allows advertisers to track the same user across multiple devices. They aren't limited to what that person does on their personal computer - advertisers can also access location data through a user’s mobile phone, or trace the videos their children watch on the family tablet.

Third-party data sharing also allows different platforms to share information with one another. Without realizing it, millions of users are sharing their data - not just with the platforms they access directly, but with countless other platforms that they have never used.
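As a minimal sketch of how such linkage can work in principle, the Python below merges toy event records from different devices and sources into a single profile, keyed on a hashed email address. The field names, the events, and the use of a shared email as the join key are assumptions made for illustration; real systems may rely on logins, device identifiers, or probabilistic matching instead.

```python
# Illustrative sketch only: merging event records collected on different
# devices and platforms into one profile, keyed on a shared identifier.
# Field names, events, and sources are hypothetical.

import hashlib
from collections import defaultdict

def hashed_id(email: str) -> str:
    """Stand-in for the kind of pseudonymous key used to join records."""
    return hashlib.sha256(email.lower().encode()).hexdigest()[:12]

# Toy event streams from three hypothetical sources.
laptop_events = [
    {"email": "jane@example.com", "source": "news_site", "event": "read_article"},
]
phone_events = [
    {"email": "jane@example.com", "source": "mobile_app", "event": "location_ping"},
]
broker_data = [
    {"email": "jane@example.com", "source": "data_broker", "event": "purchase_history"},
]

# Join everything on the shared key: one profile, many sources.
profiles = defaultdict(list)
for record in laptop_events + phone_events + broker_data:
    key = hashed_id(record["email"])
    profiles[key].append((record["source"], record["event"]))

for key, events in profiles.items():
    print(key, "->", events)
```

Once records are joined in this way, the article read on a laptop, the location ping from a phone, and the purchase history bought from a broker all describe the same person.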

In the hands of political operatives, these data collection methods can have deeply problematic consequences.

Anti-abortion groups in the U.S. have been found to use mobile GPS location data to send advertisements - containing messages such as ‘you have choices’ - to women who had visited abortion clinics.

A Channel 4 investigation recently revealed that the Trump campaign used Cambridge Analytica to identify Black Americans through social media, in order to deter them from voting.

Lucy Purdon, Acting Policy Director at the charity Privacy International, explains: “Whether you like it or not, political parties and the companies they contract want to profile you, understand your hopes and fears, and persuade you to vote one way or another, or shockingly, not vote at all.”

Purdon continues:

More transparency is needed from all actors involved in digital political campaigns, not least online platforms. We all have a right to know where the data came from and how it is used


Targeting and Transparency

One of the fundamental problems with political targeting is that it is difficult for users to provide informed consent. Terms and conditions are complex, and users often agree to terms without fully understanding what they are consenting to.

Similar issues exist around advertising transparency. Often, users will be exposed to advertisements without fully understanding who is funding the message or why they have been selected to see it. Sometimes they are exposed to political influence without even realizing that what they are seeing is a form of advertisement.

The practice of microtargeting is often likened to a one-way mirror, in which advertisers know everything about a user, but the user has no understanding of who is behind the ads.

The March for Truth protest, calling for investigations into Russian interference in the U.S. presidential election, 2017 (Photo: Shutterstock.com / Rena Schild)

To demonstrate the lack of transparency around microtargeted ads, in 2018 a team of VICE journalists purchased a series of fake ads while posing as U.S. politicians Mitch McConnell and Chuck Schumer. All of the ads were approved without issue, indicating that “just about anyone can buy an ad identified as ‘Paid for by’ a major U.S. politician.”

Even more problematic is the fact that microtargeting restricts who can see a message. This means that the message is not open to wider scrutiny - because those who might dismantle the advertisement’s claims are purposefully excluded from seeing it.

In his book, New Media Campaigns and the Managed Citizen, Philip N. Howard writes:

The reason this is so attractive for political people is that they can put walls around it so that only the target audience sees the message. That is really powerful and that is really dangerous

Transparency issues go beyond official advertising. Political campaigns are not limited to explicit promotional material - individual accounts and online communities can also be used to spread political messages and influence.

For instance, in 2017, a public group named ‘Dry Alabama’ emerged on Facebook. Ostensibly, the group was enlisting the public to lobby for a statewide ban on alcohol sales. In reality, its purpose was to divide Republican voters on the issue of prohibition and ultimately siphon votes away from the Republican candidate for Senate.

LSE Policy Officer Emma Goodman explains: “We have arguably become ‘lab rats in a giant experiment’ – an experiment with significant political and policy implications.” She continues: “The lack of clarity over who is behind ads also increases the risk of foreign interference in election campaigns.”

This fact was starkly demonstrated in the 2016 U.S. presidential election, when Russian political operatives created a network of fake Facebook accounts designed to sow discord amongst American voters. Facebook estimates that approximately 80,000 political posts were made by Russian operatives, reaching 126 million users.

Ultimately, if we do not clearly understand who is funding an ad, who is seeing it and why, or even what is or is not an ad, then we are deprived of our agency in the political process - anyone can spread political messages without scrutiny, and individuals simply become pawns on a chessboard, ready to be manipulated.


In Defence of Democracy

In its current form, microtargeting is an attack on users’ privacy and their awareness of the political process. But the personal is also political - ultimately, microtargeting compromises democracy as a whole.

We’ve seen how political operatives - both official and unofficial - can exploit users’ data, stifle opportunities for debate, exacerbate polarization in society, and create what Judit Bayer has described as ‘fissures in the democratic process.’

So what can be done?

People live broadcasting from a protest rally, Romania, 2017 (Photo: Shutterstock.com / Gabriel Petrescu)

Different controls around microtargeting exist in different regions, with Europe enjoying more protections - thanks largely to the GDPR - than the U.S.

Rules also differ by platform. In recent years, both Twitter and Google have introduced additional safeguards, while Facebook still lags behind.

But more comprehensive regulation is needed across the board in order to fully meet the challenges posed by political microtargeting. And there are many different ideas for how to do this.

Crain and Nadler group the different approaches under three headings: the ‘bad actors’ approach, the ‘militarization’ approach, and the ‘infrastructure’ approach.

Under the bad actors approach, policymakers and tech companies focus on trying to limit the influence of those using microtargeting to spread problematic and manipulative content.

This entails no reform of the broader infrastructure of digital advertising, and it overlooks the wider issues around privacy and transparency - allowing the officially sanctioned practice of political microtargeting to continue.

The militarization approach is based on greater surveillance of digital media by military and intelligence agencies. The focus here is on limiting foreign interference - though, as Crain and Nadler observe: “Encouraging greater surveillance and militarization of digital communication introduces its own threats to free and open democratic communications.”

Ultimately, the authors advocate for a third option: the infrastructure approach. Here, the harms caused by microtargeting are counteracted through regulation that limits the powers of advertisers and promotes transparency and digital rights. The authors explain:

The very capacities of digital ad systems that facilitate such weaponized communication need to be recalibrated to better serve democratic ideals

This infrastructure-based approach has been met with widespread support from digital rights advocates, who contend that major changes are needed to microtargeting practices in order to protect users and safeguard democracy.

"Until relatively recently, efforts to tackle 'fake news' and 'disinformation' have sucked the air out the room in discussions about online digital campaigning,” Purdon explains.

“But we need to also look behind the curtain and challenge the prolific data collection, profiling and targeting that has resulted in you seeing this content.”

Reform is no easy feat. There are debates to be had around free speech, around what can be classed as ‘political’ advertising or influence, and around who should be responsible for monitoring and enforcing compliance with regulations.

But none of these complexities negate the very real need for action on political microtargeting.

Ultimately, political microtargeting is a disease of democracy - the practice relies on manipulation and disenfranchisement. And so the only antidote is to empower users and restore agency to voters.

Everything leaves a trace. It’s up to us to decide how microtargeting impacts democracy in the years to come.