Legal, ethical, and governance challenges of AI for law enforcement. Voices from CPDP.AI 2024

This is the first of a special series of articles exploring our time at CPDP.ai this year. The conference set out to put the accelerating complexity of AI at centre stage, with the underlying question: is AI governable? It is no wonder, then, that many panels at the conference focused on the controversial use of AI for law enforcement, specifically the regulation and governance of AI in this critical sector, as well as related issues of data protection and the handling of electronic evidence. This article puts into conversation the presentations, statements, and ideas of CPDP.ai 2024 panellists Erik Valgaeren, Emilio De Capitani, Michèle Dubrocard, Andrea Bertolini, Sofie De Kimpe, Elise Lassus, Maximilian Zocholl, Jan Ellermann, Francesco Paolo Levantino, Johan van Banning, Niovi Vavoula, Alexandra Karaiskou, and Naomi Theinert.

By Michele C. Tripeni

Opening the panel on responsible AI in law enforcement, Europol’s Jan Ellermann stated that “the use of AI by law enforcement is in the end absolutely necessary. And that is due to the ever-growing volumes of personal data we’re processing”. He later reiterated the point during The Security Distillery’s console talk on Avatar.fm. In a similar vein, Sofie De Kimpe (Vrije Universiteit Brussel) discussed the long-standing relationship between law enforcement and technology, highlighting a general “moral optimism” about the use of technology. Tech in law enforcement is seen as a way to increase efficiency, to help collect and process data, to solve crimes, and eventually even to bring about structural reform. Yet while higher-ups constantly push for more technology to be implemented, there has traditionally been more scepticism on the ground: street cops tend to find ways not to use technology, and not to register data, when they find it inconvenient.

One of the main challenges highlighted at the conference comes from the use of AI systems for processing evidence, and specifically from the impact this has on the judicial process. As stated by Johan van Banning (Vrije Universiteit Amsterdam), Niovi Vavoula (University of Luxembourg), and Alexandra Karaiskou (European University Institute), the main issue is the difficulty of contesting AI-processed evidence and algorithmic decisions. Vavoula and Karaiskou offered the example of European Travel Information and Authorisation System decisions, where the grounds of the decision, which are needed for contestation, are extremely vague. This stands in clear contrast with the European Court of Justice’s ruling in Ligue des droits humains v. Conseil des ministres that a defendant “must have had an opportunity to examine both all the grounds and the evidence on the basis of which the decision was taken”. The lack of an “effective opportunity” to contest evidence is also the main hurdle to van Banning’s proposal that AI-processed evidence be treated similarly to expert evidence: it is impossible to question an algorithm the way one would cross-examine an expert witness, a problem further exacerbated by the black-box nature of many systems. Courts are thus reluctant to treat AI systems as expert witnesses, and the AI Act fails to regulate this aspect.

Another set of legal challenges stems from the AI Act’s failure to provide clear regulation for the use of AI by law enforcement. Both Emilio De Capitani (Free Group) and Michèle Dubrocard (EDPS) pointed out how ambiguous the Act is about criminal liability and law enforcement uses of AI. In agreement, van Banning highlighted the questionable effectiveness of the Act’s provisions: the definition of high-risk AI tools and their regulation appears limited, and it is not always clear whether tools used by law enforcement fall into this category. Both van Banning and Erik Valgaeren (Stibbe) highlighted a major interpretive problem with the AI Act concerning data that could be used for classification purposes prohibited by the Act. As van Banning pointed out, it is unclear whether the collection of such data is in itself a violation of the Law Enforcement Directive’s provisions on profiling and of Article 5(1)(c) of the AI Act. Valgaeren, moreover, argued that such data could simply be collected on the basis of legitimate identification purposes. Andrea Bertolini (Sant'Anna School of Advanced Studies) offered an arguably more troubling insight: often even the engineers working on a specific AI system find it impossible to clearly determine whether it falls into the high-risk category.

Of course, aside from the legal concerns above, one of the main talking points at the conference was the widely discussed issue of algorithmic bias. With regard to its impact on law enforcement uses of AI, Elise Lassus (European Union Agency for Fundamental Rights) showcased the FRA’s research on feedback loops in predictive policing, one of the main issues plaguing fully automated systems in law enforcement. Feedback loops can end up reinforcing biases stemming from the model itself, from overreaction to random noise, from overreliance on historical data, or from differences in the observability of certain crimes. These issues were also echoed by Maximilian Zocholl (Europol), Vavoula, and Karaiskou. Zocholl pointed out how even systems that are not fully autonomous can be affected by bias, especially through the effects of automation bias on the human operator, a phenomenon which, according to Zocholl, is even worse when employing explainable systems. Vavoula and Karaiskou agreed, adding other human biases, such as selective adherence and anchoring bias, to the list.
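To make the observability mechanism concrete, here is a deliberately minimal sketch, purely hypothetical and not any system discussed at the conference: patrols are sent where recorded crime is highest, but crime is only recorded where patrols are present, so a small initial disparity in the records compounds over time.

```python
import random

random.seed(0)

# Two districts with the SAME underlying crime rate. District 0 starts with
# slightly more *recorded* crime, e.g. through historical over-policing.
true_rate = 0.5        # daily probability that a crime occurs in each district
records = [12, 10]     # recorded incidents so far: the "historical data"

for day in range(365):
    # Greedy "predictive" allocation: patrol only the district that the
    # historical records flag as the hot spot.
    hot = 0 if records[0] >= records[1] else 1
    for district in (0, 1):
        crime_happened = random.random() < true_rate
        # A crime only enters the records where a patrol is present to see it.
        if crime_happened and district == hot:
            records[district] += 1

print(records)  # e.g. [~195, 10]: the initial gap is locked in and amplified
```

After a simulated year, the initially over-recorded district accounts for nearly all recorded crime, even though both districts offend at exactly the same rate; this is the runaway dynamic the panellists described.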

Other ethical and governance challenges were highlighted as well. Francesco Paolo Levantino (Sant'Anna School of Advanced Studies) explained the concept of emotional dominance, derived from the notion of “identity dominance” developed during the occupation of Afghanistan, which referred to the ability to deny adversaries the possibility of masking their identity. According to Levantino, advances in emotion recognition systems might inhibit the free expression of emotions, creating behavioural and emotional standardisation. Importantly, these emotion recognition systems fall under the high-risk category in the AI Act and are subject to the related exemptions for law enforcement. Somewhat in line with this, De Kimpe also underlined a tendency towards dehumanisation in law enforcement, with less frequent and more abstract contact between police and public, as well as between officers themselves.

It was not all doom and gloom, however, as solutions were proposed for each problem. Van Banning outlined what can be thought of as minimum requirements to make AI decisions contestable: knowing for what purpose the tool was developed, what data it was trained on, how it was tested for a forensic context, how it was integrated into the forensic process, and whether it is appropriate for the specific data at hand. The need for more testing, evaluation, and justification was also echoed by Naomi Theinert and Elise Lassus. Lassus further proposed technical solutions to counteract biases and feedback loops, such as regularisation and down-sampling. Similarly, van Banning and Zocholl discussed explainability systems, echoed by Theinert, who also highlighted the need to properly select and justify explainability and fairness metrics. Finally, De Kimpe underscored the crucial role that proper training and policies play in using technology in a human and responsible way.
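As a purely illustrative aside, the kind of down-sampling Lassus mentioned can be sketched in a few lines: over-represented groups in the training data are randomly thinned until every group matches the size of the smallest one, so a model no longer learns that one district “produces” most crime simply because it produced most records. The data and attribute names below are hypothetical.

```python
import random

random.seed(0)

# Hypothetical training records keyed by a sensitive attribute ("district").
# District "A" is over-represented, e.g. because it was patrolled more heavily.
records = ([{"district": "A", "incident_id": i} for i in range(800)]
           + [{"district": "B", "incident_id": i} for i in range(200)])

def downsample(rows, key):
    """Randomly drop rows from over-represented groups until every group
    matches the size of the smallest one."""
    groups = {}
    for row in rows:
        groups.setdefault(row[key], []).append(row)
    smallest = min(len(group) for group in groups.values())
    balanced = [row for group in groups.values()
                for row in random.sample(group, smallest)]
    random.shuffle(balanced)
    return balanced

balanced = downsample(records, "district")
print(len(balanced))  # 400: 200 records from each district
```

Regularisation, the other technique Lassus named, instead constrains the model itself so that it cannot overfit to noise or to historically skewed records.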

Panels and panellists (links to available recordings):

Gathering Data for Criminal Investigations After the e-Evidence Regulation: Future Challenges and Solutions – Stanislaw Tosza, Vanessa Franssen, Erik Valgaeren, Antonios Bouchagiar, Aisling Kelly

Surveillance State or Safety Net? Navigating the Future of AI in Law Enforcement – Emilio De Capitani, Michèle Dubrocard, Marc Rotenberg, Oreste Pollicino, Andrea Bertolini, Karine Caunes

Responsible AI in Law Enforcement – Sofie de Kimpe, Elise Lassus, Maximilian Zocholl, Daniel Drewer, Jan Ellermann

CPDP Academic Session II – Francesco Paolo Levantino, Johan van Banning, Niovi Vavoula, Alexandra Karaiskou, Naomi Theinert, Ivan Szekely