Ethics, Artificial Intelligence and Predictive Policing

Abstract

AI is increasingly being used in all areas of our lives, including law enforcement. Through pattern identification, AI offers law enforcement a significant opportunity to better prevent crime. In this regard, AI is being used in predictive policing, that is, the practice of predicting crime before it happens. The practice itself already poses many ethical and legal dilemmas, and AI amplifies them. This article explains how the use of AI in predictive policing poses a threat to fundamental rights and proposes a possible alternative.

By Apolline Rolland


Artificial Intelligence (AI) and law enforcement

Whether in the detection of fraud, traffic accidents, child pornography, or anomalies in public space, AI has promising applications for law enforcement and is already being used in this field [1]. Indeed, AI makes law enforcement work less time-consuming, less prone to human error and fatigue, and more cost-effective [2] [3]. AI is based on algorithms, and its growing use mirrors an increasingly data-driven society [4]. The most promising application of AI for law enforcement is its ability to identify patterns, offering an opportunity to better predict, anticipate, and prevent crime [5]. This ability to forecast crime before it happens is also known as predictive policing [6]. The use of AI in predictive policing has been controversial, as it raises serious ethical and legal concerns [7].

In this context, two questions arise: who is targeted by predictive policing, and for what purpose? When AI is used in predictive policing to investigate digital networks, such as tracking fraud or child pornography online, it appears rather benign. But when it is used for spatial analysis to identify ‘street crime’ and at-risk areas, for example, it has the power to stigmatise and discriminate [8]. The social effects of AI applications in predictive policing therefore differ greatly depending on who is targeted and for what purpose. While its use is not a problem per se, the lack of safeguards and the way the collected data are used are. As a result, the use of AI in predictive policing can threaten individuals’ fundamental rights, as explained below.


A threat to fundamental rights

AI algorithms are used in predictive policing to sift through large amounts of historical data on criminal activity in order to determine which people or places are at risk. Such processes are also known as risk or threat assessment. While these systems are generally deployed with good intentions, the historical data that feeds them raises significant concerns.
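
To make this concrete, the sketch below gives a minimal, purely hypothetical example of place-based risk scoring: a simple classifier is trained on synthetic historical incident counts and produces a ‘risk score’ per area. It is not the algorithm of any real predictive policing product, and the features, figures, and threshold are illustrative assumptions only.

```python
# Minimal, hypothetical sketch of place-based risk scoring.
# Synthetic data only; not any vendor's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "historical" features per neighbourhood:
# recorded incidents in each of the last two years.
n_areas = 200
X = rng.poisson(lam=20.0, size=(n_areas, 2)).astype(float)

# Synthetic label: whether an area was later flagged as 'high crime'.
# The label is derived from the recorded counts themselves, so any bias
# in the records carries straight through to the predictions.
y = (X.sum(axis=1) > 45).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# The model outputs a risk score per area; areas above a threshold
# would be prioritised for patrols or checks.
risk_scores = model.predict_proba(X)[:, 1]
print("Areas rated high risk:", int((risk_scores > 0.5).sum()))
```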

First, the data can be subject to error: law enforcement officers may enter it into the system incorrectly or overlook it, and criminal data is known to be partial and unreliable by nature, which distorts the analysis [9]. The data may be incomplete and biased, with certain areas and criminal populations being over-represented [10]. It may also come from periods when the police engaged in discriminatory practices against certain communities, thereby unnecessarily or incorrectly classifying certain areas as ‘high risk’ [11]. These implicit biases in historical data sets have enormous consequences for targeted communities today [12]. As a result, the use of AI in predictive policing can exacerbate biased analyses and has been associated with racial profiling [13]. A good example is the case of Brisha Borden and Vernon Prater. Borden, an 18-year-old black girl with no prior convictions, and Prater, a 41-year-old white man previously charged with armed robbery and sentenced to 5 years in prison, were both charged in 2014 with thefts of goods worth about $80. Yet COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an algorithm used in the US criminal justice system to conduct risk assessments, rated Borden as being at high risk of reoffending, while Prater was assigned a low risk. The algorithm clearly discriminated against Ms Borden [14] [15].

Moreover, the data used often focuses on ‘street crime’, such as theft or drug trafficking, offences that are often associated with certain demographic groups and neighbourhoods. Meanwhile, white-collar crime, such as money laundering, corporate fraud or embezzlement, tends to receive less attention, and other offences, such as domestic violence, remain largely unreported [16] [17]. Such selection bias is always present in datasets, so data should never be taken as an accurate representation of the world [18]. The use of AI in predictive policing thereby facilitates the creation of what Asaro (2019) calls a problematic ‘criminal type’ [19]. As a result, some neighbourhoods or individuals can become ‘over-policed’, with, for example, increased identity checks or patrols, exacerbating stereotypes, discrimination and prejudice and criminalising certain cultures [20]. This can break down relations and erode trust between the population and the police, and can escalate violence rather than prevent it [21] [22].
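
The over-policing feedback loop described above can be illustrated with a small hypothetical simulation: two areas with an identical underlying offence rate, where one simply receives twice the patrol coverage. The recorded counts, and any risk score built on them, then track patrol presence rather than behaviour, which in turn justifies sending more patrols to the same area. The area names, rates, and patrol allocations below are invented for illustration only.

```python
# Hypothetical illustration of the over-policing feedback loop.
# Area names, rates, and patrol allocations are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
true_offence_rate = 0.05                          # identical in both areas
patrol_hours = {"Area A": 100, "Area B": 200}     # B starts with twice the patrols

for cycle in range(3):
    # Each patrol hour has the same chance of recording an offence, so the
    # recorded counts track patrol presence, not underlying behaviour.
    recorded = {area: int(rng.binomial(int(hours), true_offence_rate))
                for area, hours in patrol_hours.items()}

    # A naive 'risk score' allocates next cycle's patrols in proportion to
    # recorded counts, reinforcing the initial imbalance.
    total = sum(recorded.values()) or 1
    patrol_hours = {area: 300 * recorded[area] / total for area in recorded}

    print(f"cycle {cycle}: recorded={recorded}, next patrol hours={patrol_hours}")
```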

In addition, the lack of transparency in how AI systems operate and reach their decisions is of great concern. According to a joint report by the United Nations Interregional Crime and Justice Research Institute (UNICRI) and INTERPOL, AI decisions and actions must always be understandable to human users if AI is to respect the fundamental rights of citizens [23]. However, the AI algorithms used in predictive policing can be very complex, and predictive policing programmes have even been referred to as ‘black boxes’ because of the opacity of the systems [24]. Even experts struggle to follow everything that happens in the process and cannot always explain the decisions made by AI [25]. This lack of transparency leads to a lack of accountability: without explainability, the police cannot effectively be held accountable to the public for their actions. AI models must also be explainable and verifiable for people to trust the system, and for the judiciary to exercise its authority lawfully. Indeed, in democracies, those convicted have the right to understand the judicial decisions made against them; if decisions informed by AI cannot be explained because they result from an opaque process, they cannot be lawfully justified. Moreover, the private tech companies that design and sell this technology are not required to reveal their algorithms, which adds to the opacity of the process [26]. This also raises major concerns about the responsibilities of the private sector involved, power structures and democratic accountability [27]. Thus, the use of AI in predictive policing should be subject to proper control and oversight by governing institutions to ensure fair trials and protect people from arbitrary decisions.
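
One minimal reading of the report’s demand for understandable decisions is that a risk model should at least expose which inputs drove a given score. The sketch below, again on invented numbers, breaks a transparent linear risk score down into per-feature contributions; the complex or proprietary systems criticised as ‘black boxes’ offer no such direct reading. The feature names, weights, and values are assumptions made purely for illustration.

```python
# Hypothetical sketch: a transparent linear risk score can be broken down
# into per-feature contributions; opaque models offer no such reading.
# All feature names, weights, and values are invented for illustration.
feature_names = ["recorded_incidents", "prior_arrests_nearby", "patrol_hours"]
weights = [0.04, 0.10, 0.001]          # assumed, illustrative weights
area = [25, 3, 220]                    # one area's synthetic feature values

contributions = [w * v for w, v in zip(weights, area)]
score = sum(contributions)

# Each input's contribution to the final score is visible and contestable,
# something a court or an affected person could in principle examine.
for name, c in zip(feature_names, contributions):
    print(f"{name}: {c:.3f}")
print(f"total risk score: {score:.3f}")
```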

As it stands, the use of AI in predictive policing requires more regulation to avoid deepening social inequalities, eroding trust in law enforcement and the judiciary, weakening democracy, and threatening people’s fundamental rights and excluding them.


A possible alternative

Given that AI applications within predictive policing, as they exist today, threaten human rights, the necessary safeguards and legislation must be adopted to promote their more ethical use. While this discussion has highlighted the darker sides of using AI in predictive policing, it need not be so. Predictive policing is about better predicting crime, not about dictating how law enforcement should respond to this information [28]. Law enforcement has a duty to prevent crime where it can, but it can choose how to do so, taking into account the social effects of its actions [29]. The issue here is not so much the use of AI in predictive policing, but rather what law enforcement decides to do with the data generated to better prevent crime. Law enforcement may decide to treat predictive policing as a way of targeting people before they commit a crime, thus condemning people before they even act. But it can also decide to use the AI-generated data to build long-term solutions that prevent crime through effective social policies that help people to thrive [30]. Some programmes have already been implemented and proven successful, such as the annual One Summer programme in Chicago, which provides social resources to young people identified as being at risk and has been associated with a 51% reduction in violence-related arrests among participants [31].

This is, therefore, how AI in predictive policing should be used: not as a means to oppress or discriminate, but as a means to process large amounts of data to better target the needs of the population that law enforcement serves. This will ensure that appropriate responses are developed for the benefit of all citizens.

Sources

[1] Gonzalez Fuster, G (2020) ‘AI and law enforcement: Impact on fundamental rights,’ European Parliament Think Tank, pp.1-87, [online] available from https://www.europarl.europa.eu/RegData/etudes/STUD/2020/656295/IPOL_STU(2020)656295_EN.pdf, accessed on 19th July 2021.

[2] Jenkins, R and Purves, D (2020) ‘Artificial Intelligence and Predictive Policing: A Roadmap for Research,’ pp.1-31, [online] available from http://aipolicing.org/year-1-report.pdf, accessed on 19th July 2021.

[3] Raaijmakers, S (2019) ‘Artificial Intelligence for Law Enforcement: Challenges and Opportunities,’ IEEE Security & Privacy, Vol. 17, No. 5, pp.74-77.

[4] Rudin, C (2013) ‘Predictive Policing: Using Machine Learning to Detect Patterns of Crime,’ [online] available from https://www.wired.com/insights/2013/08/predictive-policing-using-machine-learning-to-detect-patterns-of-crime/, accessed on 19th July 2021.

[5] Ibid.

[6] Rigano, C (2019) ‘Using Artificial Intelligence to Address Criminal Justice Needs,’ National Institute of Justice Journal, No. 280, pp.1-10.

[7] Gonzalez Fuster.

[8] Jenkins and Purves.

[9] Leese, M (2020) ‘Predictive Policing: Proceed, but with Care,’ Policy Perspectives, Vol. 8, No. 14, pp.1-4.

[10] European Union Agency for Fundamental Rights (2020) ‘Getting the Future Right: Artificial Intelligence and Fundamental Rights,’ pp.1-103, [online] available from https://fra.europa.eu/en/publication/2020/artificial-intelligence-and-fundamental-rights, accessed on 19th July 2021.

[11] AccessNow (2018) ‘Human Rights in the Age of Artificial Intelligence,’ pp.1-38, [online] available from https://www.accessnow.org/cms/assets/uploads/2018/11/AI-and-Human-Rights.pdf, accessed on 19th July 2021.

[12] Asaro, P M (2019) ‘AI Ethics in Predictive Policing: From Models of Threat to an Ethics of Care’, IEEE Technology and Society Magazine, Vol. 38, No. 2, pp.40-53.

[13] United Nations Interregional Crime and Justice Research Institute (UNICRI) and International Criminal Police Organization (INTERPOL) (2019) ‘Artificial Intelligence and Robotics for Law Enforcement,’ pp.1-33, [online] available from http://www.unicri.it/artificial-intelligence-and-robotics-law-enforcement, accessed on 19th July 2021.

[14] Akshara, K (2020) ‘Artificial Intelligence Predictive Policing: Efficient, or Unfair?,’ [online] available from https://medium.com/the-black-box/artificial-intelligence-predictive-policing-efficient-or-unfair-fe731962306d, accessed on 23rd July 2021.

[15] Angwin, J, Larson, J, Mattu, S and Kirchner, L (2016) ‘Machine Bias,’ [online] available from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, accessed on 23rd July 2021.

[16] AccessNow.

[17] European Union Agency for Fundamental Rights.

[18] Leese.

[19] Asaro.

[20] AccessNow.

[21] Leese.

[22] Jenkins and Purves.

[23] UNICRI and INTERPOL.

[24] Couchman, H (2019) ‘Policing by Machine: Predictive Policing and the Threat to our Rights,’ pp.1-86, [online] available from https://www.libertyhumanrights.org.uk/issue/policing-by-machine/, accessed on 19th July 2021.

[25] Leese.

[26] Asaro.

[27] AccessNow.

[28] Jenkins and Purves.

[29] Ibid.

[30] Leese.

[31] Asaro.