Challenges and Opportunities for Algorithmic Crowd Surveillance

Artificial intelligence poses numerous challenges to law enforcement frameworks. As States seek to rely ever more on artificial intelligence, such use remains contentious under European law. The French government's intention to use algorithmic crowd surveillance reflects these challenges.

The use of surveillance measures by law enforcement agencies has always been controversial from a human rights point of view: on one hand, the legitimate interest of States in ensuring security; on the other, the privacy of individuals (Centrum för Rättvisa v Sweden App no 35252/08 (ECtHR, 25 May 2018)). Reliance on artificial intelligence as a means of surveillance has nonetheless grown rapidly in Europe, raising multiple challenges across different frameworks. Moreover, artificial intelligence is used not only for surveillance measures targeting a single individual but also for measures covering a vast number of them as a crowd. Law enforcement agencies see opportunities in such uses for special occasions, such as sports events, or as a general deployment for maintaining public order. The challenges of such uses lie not only in the technology itself but also in the legal framework authorizing surveillance. Such surveillance aims at identifying behaviours that would constitute a threat to public order and at transmitting the information to law enforcement agencies for them to act upon. However, while such technology can enhance police interventions, it must also be properly framed by law. A multiplicity of legal frameworks collide in this respect. These frameworks, read together, reveal major flaws, mostly due to a lack of precision in the application of such technology.

Firstly, such frameworks often fail to clearly define the reasons for their deployment. For instance, the French framework establishing algorithmic crowd surveillance for the 2024 Olympic Games aims at targeting "unusual behaviour" (Le Monde, 'JO 2024 : les députés autorisent la vidéosurveillance algorithmique avant, pendant et après les Jeux'). The law enacting the surveillance technology does not specify what constitutes such behaviour. While the Ministry of the Interior indicated that the terminology would be defined later in an executive order, one may question whether this would be satisfactory under European law. Indeed, such algorithms must be authorized by law. Leaving this important feature of the algorithm, directly linked to its purpose, in the hands of the Ministry of the Interior is therefore problematic. Both the GDPR and the AI Act require such qualifications to be defined in advance, in order to bring clarity and transparency to such a sensitive use of AI (AI Act and GDPR). Under the current framework, the Ministry of the Interior, together with the heads of law enforcement agencies, both decides what constitutes such behaviour and enforces that qualification. The lack of clarity over terminology is also incompatible with the ECHR framework, which requires surveillance tools to be grounded in a precise and clear legal framework setting out the reasons for their use (Klass v Germany App no 5029/71 (ECtHR, 6 September 1978)). Clarity is thus a central feature of AI used by law enforcement agencies, and multiple frameworks stress its importance.
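To make the requirement of advance definition concrete, the sketch below imagines, purely as an assumption of this post and not as anything contained in the French law or decree, how a closed list of detectable event categories could be fixed before deployment, so that an open-ended label such as "unusual behaviour" simply could not be flagged:

```python
from enum import Enum

# Hypothetical closed list of event categories. Under a clarity
# requirement, such a list would be fixed in the authorizing law
# itself rather than delegated to a later executive order.
class DetectableEvent(Enum):
    CROWD_CRUSH = "dangerous crowd density"
    ABANDONED_OBJECT = "unattended object"
    WRONG_WAY_MOVEMENT = "movement against crowd flow"

def is_authorized(event_label: str) -> bool:
    """Only events on the pre-defined list may be flagged;
    anything outside it is rejected rather than reported."""
    return event_label in {e.name for e in DetectableEvent}

# An open-ended label such as "unusual behaviour" would not pass:
assert not is_authorized("UNUSUAL_BEHAVIOUR")
assert is_authorized("ABANDONED_OBJECT")
```

The design choice mirrors the legal argument: the set of targetable behaviours is an exhaustive enumeration known in advance, not a discretionary category filled in after the fact.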

Secondly, such frameworks lack precision concerning the powers of law enforcement agencies when using artificial intelligence. The main issue lies in the decision-making that follows the algorithm's detection of behaviour deemed a threat to public order. Both the Artificial Intelligence Act and the GDPR require meaningful human intervention in automated decision-making. In theory, therefore, operators must be able to weigh the algorithmic output. However, such systems are designed to detect numerous instances of abnormal behaviour in order to increase the speed of intervention of law enforcement authorities. This need for rapid intervention may impair the ability of the human reviewer to genuinely assess whether a detection of abnormal behaviour is well founded. The risk is that algorithmic errors cannot be mitigated, leading to a person being detained on the mere detection of the algorithm.
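At the software level, "meaningful human intervention" could be read as a simple architectural constraint: the algorithm may propose, but only an operator may confirm. The following sketch, in which every name and the queue design are assumptions of this post rather than anything prescribed by the AI Act or the GDPR, illustrates that constraint, and also its weakness, since under time pressure the review step risks becoming a rubber stamp:

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class Alert:
    """A single algorithmic detection awaiting human review."""
    location: str
    label: str
    confidence: float

review_queue: "Queue[Alert]" = Queue()

def on_detection(location: str, label: str, confidence: float) -> None:
    # The algorithm may only propose: detections are queued,
    # never acted upon directly.
    review_queue.put(Alert(location, label, confidence))

def dispatch_intervention(alert: Alert) -> None:
    # Stand-in for transmitting the alert to officers on the ground.
    print(f"Dispatching officers to {alert.location} ({alert.label})")

def operator_review(alert: Alert, operator_confirms: bool) -> None:
    # Intervention happens only after a human weighs the output;
    # a rejection at this step is what mitigates an algorithmic error.
    if operator_confirms:
        dispatch_intervention(alert)

# Usage: a detection enters the queue; nothing happens until review.
on_detection("Gate B", "ABANDONED_OBJECT", 0.87)
operator_review(review_queue.get(), operator_confirms=True)
```

The bottleneck is the point of the sketch: if alerts arrive faster than operators can genuinely assess them, confirmation degenerates into ratification, which is precisely the risk identified above.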

Algorithmic crowd surveillance brings legal challenges to multiple frameworks. Notably, it raises issues regarding the interplay between the GDPR, which still applies today, and the forthcoming Artificial Intelligence Act. Since the GDPR focuses on the protection of personal data when they are automatically processed by algorithms, artificial intelligence software deployed in such instances is most of the time incompatible with its provisions, notably because of the amount of data such an algorithm processes. Such use could nevertheless be compatible with the Artificial Intelligence Act if it meets its requirements. If software that is incompatible with the GDPR can still satisfy the AI Act, it is only a matter of time before such systems are deployed by law enforcement agencies. Nevertheless, the purpose of deployment would have to be defined in a clear and transparent manner: vague terms carry the risk of arbitrary detections by the algorithm. Not only must the legal framework clearly pinpoint the purpose and define the behaviours the AI aims at detecting, but the system must also be developed accordingly.

Algorithmic crowd surveillance poses intrinsic challenges under the GDPR framework, the human rights framework, and the forthcoming Artificial Intelligence Act. The French project demonstrates the imperfect translation of these requirements into national law and the necessity of clarifying the purposes of algorithmic surveillance. Weighing the algorithmic output in a surveillance scheme covering a large area and a large number of people can, however, prove challenging. The legal framework should nevertheless allow human review of the decision, to avoid the human rights violations that would otherwise result. Hence, beyond the legal framework itself, law enforcement agencies must train their personnel in how to use such technologies so as to avoid any risk to the human rights of citizens.

The French example demonstrates that the use of algorithmic crowd surveillance is governed by a plurality of frameworks in the law enforcement context. States must take this plurality of frameworks into account in order to preserve the Rule of Law and human rights.


Author: Théo Antunes

Théo Antunes is a doctoral student at the University of Luxembourg and the University of Strasbourg. His thesis focuses on the independence of judges and the use of artificial intelligence before criminal courts. His research focuses on new technologies (especially artificial intelligence) applied to criminal justice, human rights, corporate criminal law, and financial law.
