We are happy to announce that three articles on algorithmic systems linked to the project “Deciding about, by, and together with algorithmic decision-making systems” have just been published.
The first article has appeared in European Political Science and discusses whether and where algorithms could improve decision-making in democratic politics. The paper concludes that in decision areas that are merely about finding the appropriate means for given ends, they may indeed have a place. Policy instrument choice is a likely candidate in this regard. It is, however, dangerous to think one could bring algorithmic systems to the heart of political decision-making to achieve better problem-solving. One of the main objections to this possibility is that politics simply is not about problem-solving.
The second paper, published in Current Issues in Criminal Justice, discusses in what sense algorithmic risk assessments provide evidence to be used in decision-making, using the example of pretrial decision-making in the US. The paper shows that there is considerable variation in how existing pretrial tools are designed and points to further discretion in creating and evaluating these applications. Moreover, even where pretrial risk assessment tools adhere to methodological standards – and some are exemplary in this respect – these standards may not be helpful in judging what exactly constitutes “good” decision-making. In a nutshell, what counts as good performance of a risk assessment tool depends on scholarly conventions. These conventions can serve to compare statistical models, but they cannot directly be transferred to contexts with real-world stakes of decision-making.
Finally, the third paper, published in the British Journal of Criminology, looks at three US states to study why they have rolled back the implementation of pretrial risk assessment tools. The development and implementation of these tools are commonly dealt with in policy subsystems with little public attention. However, once these tools become politicized, the paper argues, their implementation is likely to be thwarted. This is not so much a question of the technical properties or performance that may or may not characterize these risk assessment tools. Rather, they are likely to reach high-level politics through political actors publicly linking them to ideas and concerns about opacity, fairness, and public safety. Strikingly, it seems to be possible to discredit these tools along the lines of a “penal populism”, i.e. by presenting them as an approach that is weak on crime and a threat to public safety. As a result, sticking with the status quo becomes the politically safer option.
How does artificial intelligence (AI) affect democracy? A recent article by Pascal König and Georg Wenzelburger published in Government Information Quarterly tackles this question. The paper highlights how the adoption of AI, with its capability of solving specialized cognitive tasks, intervenes heavily in the informational foundations of societies. In doing so, it affects information requirements that are at the basis of the democratic political process and that condition the realization of responsiveness and accountability. Drawing on systems theory, the article shows that AI can reduce or increase information deficits of both citizens and decision-makers on the input, throughput, and output level of the political system. This is illustrated by means of two contrasting scenarios that describe how AI can change the workings of democratic government.
The paper also argues that the challenges to liberal democracy that arise with the adoption of AI in politics, despite their novel technological dimension, show considerable continuity with long-standing transparency and accountability problems. Democracy is not made obsolete in the face of new possibilities of steering through AI. To the contrary, the political ideas that are embodied in liberal democracy and that safeguard responsiveness and accountability already offer important answers to how the adoption of AI can strengthen democratic politics.
Realizing this outcome and avoiding a negative, possibly disruptive, impact on democracy will require institutionalizing suitable governance mechanisms. This is a challenging task, especially on the input level of politics, where applications of AI already intervene markedly in processes of public opinion formation, but where the governing of such applications can also easily have adverse effects.
How do digital technologies, and particularly data-driven algorithmic decision-making and artificial intelligence, affect democratic governance? In a recent German-language comment entitled “The Digital Temptation”, published in the journal Politische Vierteljahresschrift, I address this question. The article adopts a broad perspective and discusses how the emergence of an “algorithmic society”, in which people increasingly delegate decisions to machines, can undermine the bases of a free society. The main argument developed in the comment is that relying more and more on such machines simply because they deliver satisfactory performance prepares the ground for paternalistic forms of power and dependence in liberal democracies – an arrangement not markedly different from the Chinese Social Credit System, which uses a scoring of citizens to direct their lives toward predefined goals and values.
Algorithmic decision-making systems are increasingly used in many areas throughout society, where they steer the behaviors of a potentially vast number of people. People come into contact with these systems mostly through widely used platforms or services in the private sector. However, the state too shows a growing inclination to employ algorithmic decision-making systems in its operations, e.g. in the management of resources and processes in so-called smart cities. An important question is therefore what kind of governance algorithmic systems establish and how this relates to democratic standards of legitimacy.
I deal with this question in a paper that has been published in Philosophy & Technology. It draws on governance theory and political theory to shed light on the nature of algorithmic governance. It argues that this kind of governance indeed forms a novel way of coordinating behaviors and managing societal complexity – one that is potentially also highly responsive to individual preferences. Yet despite its responsiveness and ability to accommodate pluralistic preferences, algorithmic governance is nonetheless qualitatively different from the political that characterizes democratic politics. The article develops this argument by comparing algorithmic governance with Thomas Hobbes’ figure of the Leviathan.