In 2021, the European Parliament passed a resolution endorsing the report of its Civil Liberties Committee. The report opposes the use of predictive policing tools that rely on artificial intelligence (hereinafter AI) software to make predictions about the behaviour of individuals or groups “on the basis of historical data and past behaviour, group membership, location, or any other such characteristics” (par. 24). This opposition rests on the fact that predictive policing tools cannot make reliable predictions about the behaviour of individuals (par. 24). The report also notes that AI applications risk reinforcing bias and discrimination (par. 8). Although the resolution is non-binding, Melissa Heikkilä believes it signals how the European Parliament is likely to vote on the AI Act. A legally enforceable ban on the use of AI predictive policing tools in respect of human beings is needed: as discussed below, the use of AI can lead to inaccurate assessments owing to the inherent character of the underlying data, and basing decisions on group data is inconsistent with protecting individuals from discrimination.
Continue reading “A ban on using predictive policing to forecast human behaviour: a step in the right direction”