Exploring the impact of AI Systems on People on the Move in Light of the EU's AI Act

In the fall of 2023, the European Union (EU) is on the brink of finalising the world's first comprehensive Artificial Intelligence (AI) law, a potential milestone that could set the global benchmark for regulating AI technologies. From AI lie detectors, automated decision-making, and risk assessment to surveillance systems at European borders, AI-powered technologies are increasingly becoming part of migration management and control in the EU [1]. While migrants face a multitude of pressing issues such as housing and access to health care, their fundamental rights to privacy and data protection often take a back seat, leaving them susceptible to experimentation with high-risk AI technologies. Consequently, it is imperative that the EU's Artificial Intelligence Act (AI Act) explicitly addresses the implications of AI systems for people on the move. Bringing together representatives from European institutions, the private sector, and civil society, the Computers, Privacy and Data Protection (CPDP) conference offered a comprehensive, multi-faceted insight into this issue. Drawing on the panel discussion and an exclusive interview with Alyna Smith of the Platform for International Cooperation on Undocumented Migrants (PICUM), the following sections first examine the impact of AI systems on migrants and then assess the role of EU regulation in protecting the digital rights of vulnerable groups.

By Eleonora De Martin

The Impact of AI Systems on People on the Move

Artificial Intelligence is developed, tested, and deployed in the context of migration in a variety of ways, affecting millions of people on the move. Understanding how these technologies work throughout a person's migration journey, and why they are deployed, sheds light on the interconnectedness of their use across the EU and globally. Technologies of migration control and management operate at a global scale, reinforcing institutions, cultures, policies, and laws, and highlighting the growing influence of the private sector [2]. Technological innovation and development often come at the cost of oversight and accountability. As a vulnerable group, people on the move have been used as a human laboratory for experimenting with technologies that carry a high risk of human rights violations [3].

Even before crossing a border, people interact with AI-powered technologies in the form of unpiloted drones, augmented reality tools, biometrics, and surveillance systems. The increasing deployment of drones for border control in Europe has created multiple levels of both vertical and horizontal surveillance, expanding borders and, with them, state authority. This not only forces people to take more dangerous routes to evade border control technologies and enforcement; it also affects their right to privacy. The same applies to all technologies employed for migration management, as their main purpose is to gather data, make decisions, and report to the authorities information on potentially unsafe or unknown migrants. As a result, individuals are transformed into security objects and sources of data to be analysed, stored, and rendered intelligible. The collection of large amounts of data is not apolitical, as it can be manipulated for political ends such as supporting anti-immigration policies [4].

At the border, technologies have been used to scan, surveil, and collect information relying on automated decision-making. Through systems such as AI lie detectors, polygraphs, and emotion-recognition tools, AI is being deployed within a broader framework of racial suspicion against migrants. Biases at the border have serious implications if they permeate emerging technology that is experimentally deployed for migration control. For instance, Hungary and Greece have begun piloting AI-powered lie detectors and risk scoring, developed under the iBorderCtrl project, at airport checkpoints [5]. However, it remains unclear how these technologies would account for cultural differences or for the trauma that affects refugees' memory and overall behaviour.

Similarly, when it comes to automated decision-making, it remains unclear how algorithms account for the complexity of migration and human experience. While such systems speed up decision-making procedures, the inherent risks of bias, discrimination, and potential "machine mistakes" are a concern for migrants and asylum seekers who are already disenfranchised [6]. Moreover, how the sensitive data collected is safeguarded and protected remains unclear, raising issues of privacy and informed consent. As Alyna Smith highlighted, 'this is very different from surveillance in a typical way but this is more of a hidden way in which technology is used, having an impact on people' [7].

The AI Act: What needs to change?

As AI technologies increasingly permeate our daily lives and their use grows exponentially worldwide, the EU's regulatory approach is trying to strike a balance between innovation and good governance. Although still in the legislative process, the AI Act proposal recognises that these technologies can pose various types of risk to individuals and communities. The new regulation establishes a tiered, risk-based framework, categorising AI-powered technologies from minimal risk up to a total ban, and demarcates responsibilities in the development and deployment of these technologies [8].

As Alyna Smith noted, 'in the first proposal there was a recognition within the risk-based framework that a variety of uses of AI in the migration context presents a high risk of violations of human rights. That was already positive, but there were a variety of gaps concerning specific instances in the migration context but also more generally in terms of what it means to have oversight and safeguards to high-risk technologies' uses' [9]. The proposal does not address certain systems that are systematically deployed in the migration context, such as predictive analytics and generalised surveillance technologies at borders. In response, civil society networks led by European Digital Rights (EDRi), Access Now, the Refugee Law Lab, and PICUM have been calling on EU policymakers to ban the use of these AI systems in the context of migration.

Civil society organisations have also demanded more accountability, transparency, and oversight of "high-risk" uses of AI systems. As new technology development and deployment alter the relationship between the public and private sectors, there are growing concerns over 'who becomes responsible for data protection risks and possible "machine mistakes" and related inaccurate or discriminatory outcomes' [10]. This requires new governance structures and legislative frameworks, such as the algorithmic transparency standard for public institutions developed by the United Kingdom in 2021, which requires full disclosure of an algorithm's architecture and use [11]. However, even where such information is publicly available, the complexity of the matter makes it hard for civil society and affected communities to understand and engage with the implications of AI technologies, limiting robust debate and stifling participation in policymaking by those most affected.

Conclusions

In reviewing the impact of AI systems on people on the move and the European regulatory landscape, it becomes evident that a deeper reflection on the scale of the issue is warranted. The deployment of these technologies is often framed and legitimised as narrowly targeted surveillance, but the number of people affected and of technologies in use calls that assumption into question. The deployment of AI-powered technologies is not confined to a small group of individuals. Rather, every person seeking authorisation to enter Europe in the future will undergo extensive data collection and storage through a common European Information Technology (IT) system. This entry-exit IT system, designed to document the movements of non-EU citizens across the EU's external borders, relies on automated fingerprint identification and automated face recognition for the purpose of verification and/or identification [12]. Given the scale of this high-risk deployment, transparency and liability concerning AI systems in the context of migration assume paramount importance as the AI Act takes shape. It is therefore necessary to promote the participation and agency of affected communities, not only to ensure oversight and accountability in this opaque space of high-stakes, high-risk decision-making, but also to prevent technologies from perpetuating power imbalances.

References

[1] EDRi, "Civil Society Calls for the EU AI Act to Better Protect People on the Move," European Digital Rights (EDRi), February 6, 2023, https://edri.org/our-work/civil-society-calls-for-the-eu-ai-act-to-better-protect-people-on-the-move/.

[2] EDRi, "The Human Rights Impacts of Migration Control Technologies," European Digital Rights (EDRi), September 29, 2020, https://edri.org/our-work/the-human-rights-impacts-of-migration-control-technologies/.

[3] Petra Molnar, "Technological Testing Grounds: Migration Management Experiments and Reflections from the Ground Up," EDRi, November 2020, p. 16, https://km4s.ca/publication/technological-testing-grounds-migration-management-experiments-and-reflections-from-the-ground-up-2020/.

[4] Molnar, "Technological Testing Grounds," p. 17.

[5] Rob Picheta, "Passengers to Face AI Lie Detector Tests at EU Airports," CNN, November 3, 2018, https://edition.cnn.com/travel/article/ai-lie-detector-eu-airports-scli-intl/index.html.

[6] Derya Ozkul, "Automating Immigration and Asylum: The Uses of New Technologies in Migration and Asylum Governance in Europe" (Oxford: Refugee Studies Centre, University of Oxford, 2023), p. 12, https://www.rsc.ox.ac.uk/files/files-1/automating-immigration-and-asylum_afar_9-1-23.pdf.

[7] Alyna Smith, interviewed by the author, Brussels, May 2023.

[8] Petra Molnar, "The EU's AI Act and Its Human Rights Impacts on People Crossing Borders," DoT.Mig In Brief, June 2022, p. 4, https://www.bosch-stiftung.de/sites/default/files/publications/pdf/2022-06/The%20EUs%20AI%20Act%20and%20Its%20Human%20Rights%20Impacts.pdf.

[9] Alyna Smith, interviewed by the author, Brussels, May 2023.

[10] Ozkul, "Automating Immigration and Asylum," p. 12.

[11] Ozkul, "Automating Immigration and Asylum," p. 23.

[12] Costica Dumbrava, "Artificial Intelligence at EU Borders: Overview of Applications and Key Issues," European Parliamentary Research Service, July 2021, https://www.europarl.europa.eu/RegData/etudes/IDAN/2021/690706/EPRS_IDA(2021)690706_EN.pdf.