A Right to Reasonable Inferences

By Dr. Sandra Wachter, Lawyer and Research Fellow (Asst. Prof.), University of Oxford

A Right to Reasonable Inferences: Re-thinking Data Protection Law in the Age of Big Data and AI, Columbia Business Law Review, forthcoming (2019) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3248829

Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviours, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for discriminatory, biased, and invasive decision-making. We know that Big Data and algorithms are increasingly used to assess and make decisions about us. Algorithms can infer our sexual orientation, political stances, and health status. They also decide which products or news feeds we are shown, as well as whether we get hired or promoted, whether we get a loan or insurance, and whether we are admitted to university. These technologies draw privacy-invasive and non-verifiable inferences about us that we cannot predict, understand, or refute.

Data protection law is meant to protect people’s privacy, identity, reputation, and autonomy, but is currently failing to protect data subjects from the novel risks of inferential analytics.

Ironically, inferences are effectively ‘economy class’ personal data.

They receive the least protection of all the types of data addressed in data protection law, and yet now pose perhaps the greatest risks in terms of privacy and discrimination. We argue that several data protection laws and the courts in Europe do not guard against the novel risks of “high risk inferential analytics” (e.g. privacy-invasive or reputation-damaging inferences with low verifiability).

In standing jurisprudence, the European Court of Justice (ECJ) has consistently restricted the remit of data protection law to assessing the legitimacy of input personal data undergoing processing (e.g. name, age, e-mail address), and to rectifying, blocking, or erasing it, but not to output data (e.g. inferences, opinions, or assessments such as credit scores and risk assessments). Critically, the ECJ has likewise made clear that data protection law is not intended to ensure the accuracy of decisions and decision-making processes involving personal data, or to make these processes fully transparent.

Very often the way decisions are made remains within the private autonomy of companies, and we have little regulation (e.g. anti-discrimination laws) governing how decisions have to be made and which criteria are relevant, justified, or socially acceptable (e.g. using social network profiles to make loan or hiring decisions). Similarly, we do not yet have standards for socially acceptable inferential analytics. Is inferring political opinions, sexual orientation, or (mental) health from clicking/browsing behaviour justified or socially acceptable? Or are these assessments too privacy-invasive?

In the same way as it was necessary to create a “right to be forgotten” in a Big Data world, we think it is now necessary to create a “right on how to be seen.”

**Bio:** Dr. Sandra Wachter is a lawyer and Research Fellow (Asst. Prof.) in Data Ethics, AI, robotics and Internet Regulation/cyber-security at the Oxford Internet Institute, where she also teaches the course Internet Technologies and Regulation.
