The Politics of Algorithmic Decision Making

Algorithms are increasingly part of our everyday lives. They calculate the risk of credit default; they inform our car insurer about the risk of accidents; and they decide which news we see on Facebook. Sometimes, algorithms are also part of a decision-making system that strongly affects the lives of individuals. In the US, for instance, such systems are used to calculate the risk of recidivism and inform criminal justice decisions about bail or parole (Brennan et al., 2009). Most recently, the Austrian government announced that it will use algorithms to assess the motivation of job seekers who apply for unemployment benefits (see here).

Despite this spread of algorithms, political scientists, and public policy scholars in particular, have only slowly begun to think about the consequences of the increased use of such systems. This is problematic, because algorithms can produce bias. In the US, for instance, one algorithm has been shown to systematically overestimate the risk of recidivism for African Americans and to underestimate it for white Americans (Angwin et al., 2016). Other evidence points to a much better performance, at least in comparison with decision-making by humans (see here).
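To illustrate what such a bias looks like in practice, the following minimal sketch (in Python, with hypothetical records rather than the actual COMPAS data) computes the false positive rate per group, i.e., the share of people who were labeled high risk but did not reoffend. A gap in these rates between groups is exactly the kind of disparity Angwin et al. (2016) reported.

```python
# Minimal sketch: measuring group-wise error rates of a risk score.
# The records below are invented; the real analysis by Angwin et al.
# (2016) used thousands of COMPAS cases from Broward County, Florida.

# Each record: (group, predicted_high_risk, reoffended)
records = [
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("A", True,  False), ("B", False, False), ("B", True,  True),
    ("B", False, True),  ("B", False, False),
]

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` who were labeled high risk."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    false_positives = [r for r in negatives if r[1]]
    return len(false_positives) / len(negatives)

for group in ("A", "B"):
    print(group, round(false_positive_rate(records, group), 2))
# If group A's rate is much higher than group B's, the system
# over-predicts recidivism for group A -- the pattern ProPublica found.
```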

In any case, it is indispensable to think carefully about how the political system should regulate and decide on the use of algorithms. Such regulation could increase transparency and help prevent biased decisions. Starting in 2019, two interdisciplinary projects at the professorship will address these questions.

The first project, “Fair and Good ADM”, financed by the Federal Ministry of Education and Research (BMBF), is a joint project with the “Algorithm Accountability Lab” (Prof. Zweig) and the Chair of Philosophy (Prof. Joisten) at the TUK. It will develop measures of the quality and fairness of ADM systems and discuss the ethical questions linked to the decision to use (or not to use) such systems. The political science part of the project will closely inspect three ADM systems that have already been implemented, analyzing both the political decision-making process that led to the decision to use (and buy) each system and the surrounding regulatory framework. Over a period of two years, a Ph.D. student will carry out a qualitative analysis based on interviews with relevant decision-makers and a close study of the three cases. The ultimate goal of the project is to provide some ideas about what a decision-making system that embeds ethical principles in the policy-making and regulatory framework could look like.

The second project, “Deciding about, by and together with algorithmic decision-making systems”, funded by the Volkswagen Foundation, zooms in on criminal justice decision-making. It is an interdisciplinary and international project bringing together Prof. Anja Achtziger, Chair of Social & Economic Psychology (Friedrichshafen), the legal scholar and media researcher Prof. Wolfgang Schulz (Hamburg), Prof. Karen Yeung, who works at the interface of law, ethics and informatics (Birmingham), the computer scientist Prof. Katharina A. Zweig (TUK), and myself. The political science part of the project pursues two major goals. First, in collaboration with our colleagues, we will build an inventory of all ADM systems in use in the US states and describe their characteristics. Second, we will analyze quantitatively how the variance between the US states in the extent of use and the choice of ADM systems can be explained, as sketched below. The project will run for 48 months and involves a post-doc.
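As a rough illustration of what the second step could involve, the sketch below (again in Python, with invented data; the project's actual model specification is still open) regresses a hypothetical index of ADM use per state on two illustrative state-level covariates.

```python
# Minimal sketch of a cross-state analysis with invented data.
# 'adm_use' is a hypothetical index of how extensively a state uses
# ADM systems; both covariates are illustrative placeholders, not
# the project's actual explanatory variables.
import pandas as pd
import statsmodels.api as sm

states = pd.DataFrame({
    "adm_use":         [0.8, 0.3, 0.6, 0.1, 0.9, 0.4],  # invented index
    "incarceration":   [700, 350, 560, 280, 810, 400],  # per 100,000 (invented)
    "gop_legislature": [1, 0, 1, 0, 1, 0],              # invented indicator
})

# Ordinary least squares: explain variance in ADM use across states.
X = sm.add_constant(states[["incarceration", "gop_legislature"]])
model = sm.OLS(states["adm_use"], X).fit()
print(model.summary())
```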

Both projects build on current work on algorithmic decision making (Zweig et al. 2018) and on criminal justice policy (Wenzelburger 2016, Wenzelburger/Staff 2017), but they also break new ground. We are very much looking forward to these inspiring new horizons for public policy research at the professorship.

Read more about the project here.

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016): Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

Brennan, T., Dieterich, W., & Ehret, B. (2009): Evaluating the predictive validity of the COMPAS risk and needs assessment system. Criminal Justice and Behavior, 36:1, 21–40.

Wenzelburger, G. (2016): A global trend toward law and order harshness? European Political Science Review, 8, 589–613.

Wenzelburger, G., & Staff, H. (2017): The ‘third way’ and the politics of law and order: Explaining differences in law and order policies between Blair’s New Labour and Schröder’s SPD. European Journal of Political Research, 56:3, 554–557.

Zweig, K. A., Wenzelburger, G., & Krafft, T. D. (2018): On chances and risks of security related algorithmic decision making systems. European Journal for Security Research, firstview: https://doi.org/10.1007/s41125-018-0031-2.