
Promovendireeks #24: An administrative algorithmic decision is more than the sum of its parts


The first administrative experiments with algorithmic decision-making beyond simple automation are behind us, with the childcare allowance scandal being perhaps the most notorious example in the Netherlands. This ‘experiment’ has come with costly lessons, in both the literal and the figurative sense. While the financial damages owed to the victims of the scandal exceed €900 million, the emotional harm and the societal ramifications of the scandal amount to what the State Commission on the Rule of Law has termed ‘The Broken Promise of the Rule of Law’.

Beyond its immediate consequences, a scandal such as this reveals the constitutional implications of inserting algorithms into the administrative decision-making process. The injustice resulting from the childcare allowance scandal cannot be attributed solely to the discriminatory algorithmic system used within the decision-making process; it spans actions taken in the legislative, executive and judicial branches. The scandal thus highlights that injustice resulting from administrative algorithmic decision-making cannot be avoided simply by improving the accuracy, faithfulness or explainability of the chosen algorithm. Instead, addressing these injustices requires a comprehensive account of the vulnerabilities of this decision-making process. The challenge thus lies in understanding how exactly algorithms are transforming the traditional administrative decision-making process, which vulnerabilities result from this transformation, and how to address those vulnerabilities at the level of the legislative, executive and judicial branches.

The first part of this challenge can be addressed by recognising that administrative algorithmic decision-making is more than the sum of its parts. In addition to the social and technical elements leading to the decision, the sociotechnical interactions between these elements, discussed further below, carry considerable weight in determining the lawfulness of the resulting decision. This new sociotechnical reality needs to be studied systematically in order to propose meaningful adjustments to the lawmaking procedure and, perhaps more importantly, to ensure that persons subjected to algorithmic decisions retain effective access to justice while the executive branch experiments with getting algorithmic decision-making right.

This contribution first highlights the challenges related to effectively contesting administrative algorithmic decisions. It then explains how adopting the sociotechnical system lens could take us a step further in understanding the vulnerabilities embedded in the administrative algorithmic decision-making process, and enable us to assess the effectiveness of available remedies.

The challenge of effectively contesting administrative algorithmic decisions

While governments are busy devising rules and procedures that could curtail the harms associated with administrative algorithmic decision-making, individual remedies must remain effective in righting the wrongs that unfold when algorithms are deployed. Although a post hoc judicial remedy is not always the best way to address algorithmic harms, the courts’ constitutional role in keeping legislative and executive action in check should not be underestimated. At the same time, ensuring that persons adversely affected by administrative algorithmic decisions have meaningful access to a judicial remedy is no easy feat. While there appears to be broad consensus that administrative algorithmic decisions should remain contestable by their recipients, there is little clarity about what contestability precisely entails.

To be able to challenge an administrative decision, a person must have some indication that the decision is unlawful or arbitrary. The CJEU has consistently held that, in order to access an effective remedy within the meaning of Article 47 of the EU Charter of Fundamental Rights, the decision recipient must be given the reasons why a specific administrative decision has been taken. Those reasons should provide the recipient with adequate information to verify whether the decision is well founded or needs to be contested. Administrative reasoning thus serves an instrumental function in making an administrative decision contestable.

This function of ensuring contestability should also hold for administrative algorithmic decisions. Yet administrative authorities currently lack an understanding of how administrative reasoning can fulfil this function in light of the sociotechnical reality of administrative algorithmic decision-making.

The sociotechnical lens for studying administrative algorithmic decision-making

To gain a better understanding of what information administrative reasoning for algorithmic decisions should contain, we can inspect the administrative algorithmic decision-making process as a sociotechnical system spanning the stages of commissioning, model-building and decision-making. The sociotechnical system perspective recognises that the performance, effectiveness and downstream consequences of technologies derive from the interplay between their technical design and the social dynamics of their implementation. Uncovering the choices and sociotechnical interactions in the various stages that could lead to unlawful or arbitrary outcomes provides a foundation for assessing which aspects of this process decision recipients should be able to contest.

Algorithms introduce several new aspects into the traditional administrative decision-making process. In addition to the administrative official who would traditionally have been tasked with deciding a matter by investigating the relevant legal provisions and factual circumstances, the algorithmic decision-making process involves other stakeholders, such as project or service managers and developers, and technical components, such as the algorithmic model or the digital interface that complements it. By deconstructing the administrative algorithmic decision-making process into the choices made across the commissioning, model-building and decision-making stages, it becomes possible to study the interactions of these choices and the legal and technical limitations ensuing from them. Such an approach draws the focus away from the widely studied opacity-related concerns around algorithmic models. While it is undeniably important to ensure sufficient transparency and human interpretability of the algorithmic models used in administrative decision-making, studies with that focus tend to overlook other relevant interactions that contribute to the lawfulness of administrative algorithmic decision-making.
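To make this deconstruction concrete, the sketch below (in Python, with all names and example choices invented for illustration) shows how the choices made across the three stages could be recorded as a connected, inspectable trail. It is a minimal illustration of the sociotechnical lens, not a description of any system actually in use.

```python
from dataclasses import dataclass, field


@dataclass
class Choice:
    """One design or policy choice made during a stage of the process."""
    stage: str          # "commissioning", "model-building" or "decision-making"
    actor: str          # e.g. service manager, developer, administrative official
    description: str    # what was decided
    depends_on: list["Choice"] = field(default_factory=list)  # upstream choices it interacts with


# A hypothetical trail of interacting choices behind one algorithmic decision.
commissioning = Choice(
    stage="commissioning",
    actor="service manager",
    description="use an algorithmic model to pre-sort permit applications by risk",
)
model_building = Choice(
    stage="model-building",
    actor="developer",
    description="operationalise 'risk' as a fixed set of measurable indicators",
    depends_on=[commissioning],
)
decision_making = Choice(
    stage="decision-making",
    actor="administrative official",
    description="prioritise reviews according to the model's risk ranking",
    depends_on=[model_building],
)

# Contesting the final decision may require visibility into this whole chain
# of choices, not only into the model or the last human step.
for choice in (commissioning, model_building, decision_making):
    print(f"[{choice.stage}] {choice.actor}: {choice.description}")
```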

Consider, for instance, a legal norm that requires the exercise of administrative discretion: an environmental agency must, among other conditions, make sure that an activity for which a permit is sought does not ‘adversely affect the environment’. Even though examples exist of activities that have caused adverse environmental effects, the notion of an ‘adverse effect’ on the environment can neither be clearly captured in code nor exhaustively defined. Instead, the administrator has to exercise at least some discretion in assessing the facts of the case at hand to determine whether an activity could lead to ‘adverse effects’ under the specific circumstances.

This means, first, that the entire process of issuing such a permit cannot be fully automated, since the application of discretion must be justified with coherent reasons that go beyond a descriptive account of deployed algorithmic processes. The algorithmic model can thus only fulfil a specific supportive function, such as sorting information, generating a recommendation or calculating a value relevant to the decision-making. This function, and its significance in the decision-making process, affect what type of algorithm – ranging from interpretable to inscrutable – can be used for such purposes. Systems that are inscrutable to the untrained eye may need to be complemented by design features or specific explainability techniques aimed at increasing their interpretability, whereas inherently inscrutable systems, according to the CJEU, might not be an option at all.
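As a purely illustrative sketch, assuming a hypothetical emissions indicator and invented function names, the division of labour could look as follows: the model performs only the codifiable, supportive part and returns a verifiable recommendation, while the open-ended judgment about ‘adverse effects’ remains with the official.

```python
from dataclasses import dataclass

# Hypothetical threshold for illustration only; a real limit would follow
# from legislation or policy, not from the model developer.
EMISSIONS_LIMIT_KG_PER_YEAR = 10_000


@dataclass
class Recommendation:
    """Supportive output for the official: a flag plus verifiable reasons."""
    risk_flag: bool
    indicators: list[str]


def screen_application(emissions_kg_per_year: float, in_protected_area: bool) -> Recommendation:
    """Sorts/flags a permit application; it does not decide it.

    Whether an activity 'adversely affects the environment' is an open legal
    standard, so this function only surfaces codifiable indicators; the
    discretionary assessment stays with the administrative official.
    """
    indicators = []
    if emissions_kg_per_year > EMISSIONS_LIMIT_KG_PER_YEAR:
        indicators.append(
            f"emissions of {emissions_kg_per_year} kg/year exceed the "
            f"{EMISSIONS_LIMIT_KG_PER_YEAR} kg/year indicator"
        )
    if in_protected_area:
        indicators.append("the activity is located in a protected area")
    return Recommendation(risk_flag=bool(indicators), indicators=indicators)


# The official receives a recommendation with reasons they can verify and
# weigh, never a ready-made permit decision.
print(screen_application(emissions_kg_per_year=12_500, in_protected_area=False))
```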

The function chosen for the algorithm in the decision-making process also determines what role(s) need to be fulfilled by administrative official(s) later in the decision-making stage. These roles may range from simply monitoring the algorithmic output for correctness and relevance and ensuring ‘effective human oversight’ to conducting the discretionary part of the decision-making on the basis of the algorithmic output.
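A hypothetical sketch of this dependency, with invented function categories rather than legal doctrine, could map each supportive function to the minimum role it implies for the official:

```python
# Illustrative mapping (invented categories, not legal doctrine) from the
# supportive function a model fulfils to the role this implies for the
# administrative official in the decision-making stage.
FUNCTION_TO_OFFICIAL_ROLE = {
    "sorting": "monitor the ordering for correctness and relevance",
    "calculation": "verify the computed value before relying on it",
    "recommendation": "exercise discretion; the output informs, it does not decide",
}


def required_official_role(model_function: str) -> str:
    """Returns the minimum human role implied by the model's function."""
    if model_function not in FUNCTION_TO_OFFICIAL_ROLE:
        raise ValueError(f"no oversight role defined for function: {model_function!r}")
    return FUNCTION_TO_OFFICIAL_ROLE[model_function]


print(required_official_role("recommendation"))
```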

Although this blog post cannot delve deeper into the various sociotechnical interactions, this example already makes evident that numerous sociotechnical vulnerabilities could affect the lawfulness of the resulting administrative decision. Think, for example, of the downstream impact of misinterpreting or misapplying the available discretion in the commissioning and model-building stages, or of the impact of human biases on the resulting decision when officials interact with the output of algorithmic models in the decision-making stage.

A step closer to algorithmic justice

Systematically deconstructing the administrative algorithmic decision-making process in the described manner helps to create a more coherent picture of the various elements at play. An overview of the sociotechnical interactions that may lead to arbitrary or unlawful administrative algorithmic decisions provides an account of the information that decision recipients may need to contest, and it shows that an administrative decision is more than the sum of its parts. This account also takes us a step closer to understanding what administrative reasoning for algorithmic decisions should contain so that it can effectively reinforce the principle of access to justice.


This material is produced as part of AlgoSoc, a collaborative 10-year research program on public values in the algorithmic society, which is funded by the Dutch Ministry of Education, Culture and Science (OCW) as part of its Gravitation programme (project number 024.005.017). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of OCW or those of the AlgoSoc consortium as a whole.

About the author

Kätliin Kelder

Kätliin Kelder is a doctoral candidate at the Institute of Constitutional, Administrative Law and Legal Theory at Utrecht University.

