A Right to Reasonable Inferences: Re-thinking Data Protection Law in the Age of Big Data and AI

– Oxford Internet Institute

Sandra Wachter and Brent Mittelstadt argue that Big Data allows internet platforms to make problematic inferences about users, and it’s time for data protection laws to catch up

As we navigate the digital landscape, vast quantities of data are being collected on our habits, behaviors and characteristics, and this “Big Data” is being used to generate inferences about who we are as individuals.

In this blog for the Oxford Internet Institute, Sandra Wachter and Brent Mittelstadt argue that users need to be protected from invasive or damaging assumptions being made about them based on their digital data trails. The pair call for greater recognition of and legislation around data-based inferences, including a new data protection right: the “right to reasonable inferences.”

So what do these problematic inferences look like? Based on Big Data analytics, some internet platforms – such as Facebook – are able to predict protected characteristics such as race and sexual orientation, as well as political views, financial status, and physical and mental health.

Despite the power of inferences and the clear privacy risks they entail, the article highlights that there is little appropriate legislation designed to regulate this form of surveillance. Neither the GDPR nor the European Court of Justice has established robust oversight of inferences, leaving users unable to adequately control how their data is used:

“Inferences in the form of assumptions or predictions about future behaviour are often privacy-invasive, sometimes counterintuitive and, in any case, cannot be verified at the time of decision-making. While we are often unable to predict, understand or refute these inferences, they nonetheless impact on our private lives, identity, reputation, and self-determination.”

When we think about Big Data analytics, we often focus on how input data – such as our name, date of birth and contact details – is used. But Wachter and Mittelstadt assert that the greatest risks stem from inferences, which are generated based on our online behavior. It is time, they conclude, for data protection law to better reflect the ways in which data is being processed, so as to better protect users’ privacy online.

“As it was necessary to create a ‘right to be forgotten’ in a big data world, we believe it is now necessary to create a ‘right of how to be seen’ in the age of Big Data and AI. This will help us seize the full potential of these technologies, while providing sufficient legal protection for the fundamental rights and interests of individuals.”