Accountable, trustworthy, and ethical AI in public administration. Voices from CPDP.ai 2024

This is the second article exploring our time at CPDP.ai this year, once again keeping in mind the conference’s underlying question: is AI governable? Several panels at the conference focused on the use and abuse of AI in public administration, looking specifically at the regulation and governance of public AI systems. This article puts into broader conversation the statements and ideas of CPDP.ai 2024 panellists Mirko Tobias Schäfer, Minna Ruckenstein, Anni Ojajärvi, Diletta Huyskes, Oana Goga, Matthias Spielkamp, Saskia Lensink, Natali Helberger, Fife Ogunde, Migle Laukyte, Tijmen Wisman and Kris Shrishak.

By Michele C. Tripeni

Unsurprisingly, just as with the panels on AI for law enforcement, the legal debate around AI and liability remained central in the discussion of AI systems for public administration. Anni Ojajärvi (Kela) added a new dimension to the debate by asking who should be responsible: developers or users. From the developer’s point of view, they only built a tool for the end user, without control over how it would be used. From the end user’s point of view, they only employed a tool built by others, without control over the inner workings of the system. In the same vein, Fife Ogunde (Government of Saskatchewan) spoke about determining liability and the difficulty of assessing whether a person has been discriminated against. This was echoed by Tijmen Wisman (Vrije Universiteit Amsterdam) when talking about “secret blacklists” and the lack of avenues to challenge the administration, particularly when the use of automated systems is not disclosed. In such cases, satisfying the burden of proof and ensuring accountability can be almost impossible. Furthermore, as Migle Laukyte (Universitat Pompeu Fabra) pointed out, algorithmic decisions are presented as true and without any stated motivation. This is clearly incompatible with the right to good administration, which guards against irrational and unmotivated decisions. The risk is that governments might exploit what Wisman calls “unprecedented invisible informational power”, a sentiment that also lay behind Laukyte’s remark that automated systems can serve as an “algorithmic layer of complexity to keep citizens out”.

Indeed, central to the whole discussion about AI in the public sector is the issue of alignment with public values. As Minna Ruckenstein (University of Helsinki) stated, there is a need to better understand what public values mean specifically in the context of algorithmic systems. Both she and Ojajärvi believe the added efficiency that comes from employing automated systems is a necessary part of this discussion. However, there is good and bad efficiency: if governments are to act in the interest of their citizens, cost-cutting efficiency must be combined with other values, such as equality. Yet, as Mirko Tobias Schäfer (Utrecht University) pointed out, public values are problematic because they are flexible and often change. Indeed, Diletta Huyskes (University of Milan) noted that public values are highly contextual and dependent on culture, so we need to contextualise algorithms and the values ingrained in them. Schäfer believes public values are in fact already embedded in technologies, since these are developed differently depending on context. On the other hand, according to Laukyte, it is important to question whether algorithmic systems can take into account individual stories and backgrounds, which are fundamental for good administration.

As Ruckenstein reassuringly noted, the problem is not a lack of interest from developers and tech companies in solving important societal problems. The reality, however, is that complex societal issues cannot be solved by technology alone. Furthermore, the models developed can end up having no predictive value, due to bad or scarce underlying data or flawed sociological assumptions. Most importantly, though, they might not be needed at all. As Ojajärvi recounted, developers sometimes come up with solutions without being aware of the actual problems, and end up asking others to find a problem for their solution, rather than the other way around. Indeed, Ruckenstein believes social workers know perfectly well how to operate; what they ask for are technological tools to help with information overload. The problem is that the discourse around AI is very general, without any definition of the systems under discussion. It is only when you see the systems in action that you start to understand whether they can help your line of work. As Schäfer pointed out regarding the Dutch child benefits scandal, the failure was not strictly technological: political pressure forced civil servants to accept the data at face value. The context of a failure often matters more than the technology behind it, and it is no use talking about certification and fair algorithms if the use case is inadequate.

Another issue with AI governance in the public sector, according to Saskia Lensink (TNO), is that the communities involved do not interact as much as they should: developers only go to developer conferences and lawyers only go to law conferences, so there should be deeper crossover and collaboration. On a similar note, Huyskes highlighted the lack of public discourse on AI, and consequently on AI failures, citing Italy as an example: she argued that Italy has not yet registered an AI failure precisely because of this lack of discourse. Indeed, as Ruckenstein pointed out, a lot of AI failures go unnoticed. This is a problem for the whole AI and government community. Both Ruckenstein and Ojajärvi agree that people are learning about AI’s issues as they go. Tech has a learning-by-doing, “go fast and break stuff” mentality, but that does not include learning from others’ failures. There therefore needs to be openness and disclosure about this way of working and about the failures and lessons learned. As Ojajärvi stated, making AI systems trustworthy requires dialogue and transparency.

Finally, the role of academia in developing good governance practices and adequate regulation was highlighted throughout the panel on the role of researchers in AI governance. As Natali Helberger (University of Amsterdam) stated, the AI Act specifically envisages academics helping to audit AI systems, developing codes of conduct, providing a counterweight to commercial innovation, and developing socially relevant and environmentally beneficial systems. Indeed, Oana Goga (CNRS) believes policymakers want evidence and clear recommendations, and researchers can provide the evidence needed to support policy. In a similar vein, Matthias Spielkamp (AlgorithmWatch) called for “evidence-based advocacy”: advocates collaborating with academics to influence policy and ensure positive outcomes.


Panels and panellists (links to available recordings)

Beyond Failures: Repairing the Future of AI with Public Values – Mirko Tobias Schäfer, Minna Ruckenstein, Anni Ojajärvi, Diletta Huyskes, Iris Muis

The role of research and researchers in AI governance – Oana Goga, Matthias Spielkamp, Sven Schade, Saskia Lensink, Natali Helberger

CPDP academic session III – Fife Ogunde, Pablo Marcello Baquero, Claudia Diaz, Alexandre Debant, Jo Pierson

The use of AI in decision-making by public authorities: critical perspectives – Migle Laukyte, Tijmen Wisman, Kris Shrishak, Eline Leijten