A ban on using predictive policing to forecast human behaviour: a step in the right direction

In 2021, members of the European Parliament passed a resolution endorsing the report of the Civil Liberties Committee. The report expresses opposition to the use of predictive policing tools which rely on artificial intelligence (hereinafter AI) software to make predictions about the behaviour of individuals or groups “on the basis of historical data and past behaviour, group membership, location, or any other such characteristics.” (par. 24) This opposition is based on the fact that predictive policing tools cannot make reliable predictions about the behaviour of individuals. (par. 24) Additionally, the report notes that AI applications have the potential to reinforce bias and discrimination. (par. 8) Although the resolution is non-binding, Melissa Heikkilä believes that it signals how the European Parliament is likely to vote on the AI Act. There is a need for a legally enforceable ban on the use of AI predictive policing tools in respect of human beings. As discussed below, the use of AI can lead to inaccurate assessments due to the inherent character of the data, and basing decisions on group data is inconsistent with protecting individuals from discrimination.

The report describes AI as facilitating the collection and analysis of large amounts of data using computer algorithms and advanced data-processing techniques. (p. 17) The data can be provided by individuals or collected by the private sector and the government. (p. 17) The AI-based software then produces a prediction by detecting “certain correlations, trends and patterns” in a pool of data. (p. 17) Thus, it predicts the behaviour of an individual on the basis of data about other individuals whom it treats as having similar characteristics. (p. 146) One potential source of information about individuals and groups is the internet, including social media apps. Individuals can join group activities by using services such as Meetup, WhatsApp and Facebook. A scenario in which AI is used to analyse individuals’ social media activity in order to predict the likelihood that a particular individual will commit a crime illustrates why predictive policing can produce inaccurate outcomes.
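To make this mechanism concrete, the sketch below illustrates, in a deliberately simplified form, how this kind of pattern-based scoring works: a new individual is scored purely on their resemblance to other people in a historical pool. The records, attribute names and figures are invented for illustration and do not correspond to any real policing system.

```python
# A minimal, hypothetical sketch of pattern-based prediction from group data.
# All records, attribute names and numbers are invented for illustration only.

from statistics import mean

# Historical pool: each record describes *other* individuals
# (age group, area of the city, membership of a local sports group) and
# whether they were later recorded as offending (1) or not (0).
historical_pool = [
    {"age_group": "18-25", "area": "north", "sports_group": True,  "offended": 1},
    {"age_group": "18-25", "area": "north", "sports_group": True,  "offended": 0},
    {"age_group": "18-25", "area": "north", "sports_group": True,  "offended": 0},
    {"age_group": "26-35", "area": "south", "sports_group": False, "offended": 0},
]

def predicted_risk(person: dict) -> float:
    """Score a new individual by the average outcome of the historical
    records that share the same observable attributes."""
    matches = [
        r["offended"] for r in historical_pool
        if (r["age_group"], r["area"], r["sports_group"])
        == (person["age_group"], person["area"], person["sports_group"])
    ]
    return mean(matches) if matches else 0.0

# The new individual is scored solely on their resemblance to others,
# not on anything they themselves have done.
new_person = {"age_group": "18-25", "area": "north", "sports_group": True}
print(predicted_risk(new_person))  # ~0.33, a score inherited from the group
```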

Michal Kosinski believes that AI can be used to detect people’s emotions, IQ, and how likely they are to commit certain crimes. AI cannot accurately predict someone’s propensity to commit a crime because data has both a homogeneous and a heterogeneous character. Kate Crawford and Trevor Paglen argue that politics are involved when developers label data for the purpose of enabling the AI to assign meaning to it. Programmers make and reproduce assumptions about the world when they label data. Cultural and subjective values determine what meaning the developers ascribe to the data. However, images have “multiple potential meanings, irresolvable questions, and contradictions.” When individuals try to define the meaning of their representations, they are engaged in a struggle for justice. Frank Knight elaborates that statisticians take heterogeneous data and aim to create as much homogeneity as possible when placing individuals into groups based on similarity for the purpose of making a prediction. (p. 217) Data can be inherently ambiguous even when it provides evidence of behaviour, because it has both homogeneous and heterogeneous qualities. Imagine that individuals join a WhatsApp group in their city. The group founders organise social and sports activities which anyone can join. The data generated as individuals send messages over WhatsApp has a homogeneous character: it can be assumed that individuals who regularly gather to play basketball have a connection to one another and to basketball. On the other hand, this data provides little information because it also has a high degree of heterogeneity. Individuals are likely to have vastly different motivations for joining the games. For example, some come because they want to experience the thrill of their team winning the game. Others come because they like meeting new people. Some members of the group have set themselves fitness goals. Similarly, individuals in the same group can have varying attitudes towards complying with the law. Some individuals may join the group in order to gather information, identify a potential victim and pass this information to a criminal gang. Thus, the fact that individuals belong to the same group, spend time playing basketball and socialise says little about how likely a particular group member is to commit a crime.
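As a rough illustration of the point about labelling, the hypothetical sketch below shows how two developers, starting from the same WhatsApp-style messages, can encode different assumptions in the labels they assign. The messages and the labelling rules are invented for illustration only.

```python
# A minimal, hypothetical sketch of how labelling choices inject subjective
# assumptions into training data. Messages and rules are invented examples.

messages = [
    "Anyone up for basketball at the park tonight?",
    "Who is coming on Friday? Send me your addresses for the lift.",
    "New here - just moved to the city, keen to meet people.",
]

def label_as_social(msg: str) -> str:
    # Assumption: organising meet-ups and welcoming newcomers is ordinary socialising.
    return "social"

def label_as_suspicious(msg: str) -> str:
    # Assumption: collecting addresses or approaching newcomers signals reconnaissance.
    return "suspicious" if ("addresses" in msg or "moved" in msg) else "social"

# The same raw data receives different meanings depending on the labeller's assumptions.
for msg in messages:
    print(label_as_social(msg), "|", label_as_suspicious(msg), "|", msg)
```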

Crawford and Paglen point out that an assumption underlying the labelling of data is that different concepts have a unifying essence. The basketball example shows how problematic it is to draw a connection between a pattern of activity discernible in the WhatsApp data and engagement in criminal activity. The conduct of the group’s members sheds little light on the motivations and future behaviour of any particular individual. Momin Malik explains that machine learning can account for variability in the data. (p. 15) It needs multiple comparable entities in order to make a prediction. (p. 17) Malik clarifies that predictions are statements about how strong the correlations are. (p. 45) Correlations are “partially reliable” for deriving general conclusions about someone from a pool of data. (p. 45) The present example shows that individuals could share the same attributes, such as playing basketball, being of a similar age and living in the same area, while having vastly different likelihoods of committing a crime.
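A rough numerical sketch of this point, using invented data: if ten group members share identical observable attributes, any model that scores people on those attributes must give all ten the same score, which is at best the group’s base rate rather than a statement about any particular person.

```python
# A minimal, hypothetical sketch of why shared attributes say little about an
# individual's propensity. All data below is invented for illustration only.

# Ten group members share identical observable attributes
# (play basketball, aged 25, live in the same area) ...
features = [("basketball", 25, "same_area")] * 10

# ... but only one of them actually goes on to commit a crime.
outcomes = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]

# Any function of the observable attributes must assign every member the
# same score, so the best it can do is the group base rate.
group_rate = sum(outcomes) / len(outcomes)
scores = [group_rate for _ in features]  # 0.1 for everyone

# The prediction cannot separate the one person who will offend from the
# nine who will not: it is a statement about the group, not the person.
for person, score, outcome in zip(features, scores, outcomes):
    print(person, f"predicted risk = {score:.1f}", f"actual = {outcome}")
```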

The reliance on “certain correlations, trends and patterns” in a pool of data to predict someone’s propensity to commit a crime (p. 17) is inconsistent with the goals underlying the prohibition of discrimination. Assuming that a pattern in a group’s data corresponds to a specific phenomenon, such as a crime, relies on subjective judgment. In making these subjective judgments, developers generalise in ways that are akin to stereotyping. Stereotypes involve attributing meaning to someone’s identity based on a “generalised view or preconception” about characteristics possessed by a group. Numerous international human rights treaties recognise that stereotyping is conducive to discrimination and thus require states to take steps to prevent it. The use of predictive policing tools has parallels with stereotyping. States should ban the use of predictive policing in respect of human beings because attributing characteristics to individuals on the basis of generalisations about a group is one of the causes underlying discrimination. This resolution brings the European Union a step closer to building the necessary enforcement framework for the use of predictive policing.

Author: Tetyana (Tanya) Krupiy

Tetyana (Tanya) Krupiy is a postdoctoral fellow at Tilburg University in the Netherlands. Prior to this, she received funding from the Social Sciences and Humanities Research Council of Canada in order to undertake a postdoctoral fellowship at McGill University. Tanya has expertise in international human rights law, international humanitarian law and international criminal law. Tanya is particularly interested in examining complex legal problems which arise in the context of technological innovation.
