As we navigate the digital landscape, vast quantities of data are being collected on our habits, behaviors and characteristics, and this “Big Data” is being used to generate inferences about who we are as individuals.
In this blog for the Oxford Internet Institute, Sandra Wachter and Brent Mittelstadt argue that users need to be protected from invasive or damaging assumptions being made about them based on their digital data trails. The pair call for greater recognition of and legislation around data-based inferences, including a new data protection right: the “right to reasonable inferences.”
So what do these problematic inferences look like? Based on Big Data analytics, some internet platforms – such as Facebook – are able to predict protected characteristics such as race and sexual orientation, as well as political views, financial status, and physical and mental health.
Despite the power of inferences and the clear privacy risks they entail, the article highlights that there is little appropriate legislation designed to regulate this form of surveillance. Neither the GDPR nor the European Court of Justice has established robust oversight of inferences, leaving users unable to adequately control how their data is used.
When we think about Big Data analytics, we often focus on how input data – such as our name, date of birth, and contact details – is used. But Wachter and Mittelstadt argue that the greatest risks stem from inferences, which are generated from our online behavior. It is time, they conclude, for data protection law to better reflect the ways in which data is actually being processed, so as to better protect users' privacy online.