Week 6: Amplifying Inequality
Instructions
This week’s focus is on algorithmic decision-making in bureaucracies. We examine how algorithmic decision-making can amplify existing inequalities and thereby harm vulnerable minorities. To prepare for this meeting, skim the essay by Maciejewski and then read the case studies in Virginia Eubanks’ book. Why are these algorithms being deployed by bureaucrats? Could AlgorithmWatch’s Impact Assessment Tool prevent such harms?
Required readings
- Eubanks, Virginia (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin's Press. [PDF on Moodle]
- AlgorithmWatch (2021). Automated Decision-Making Systems in the Public Sector. An Impact Assessment Tool for Public Authorities.
Further reading (General)
- Maciejewski, Mariusz (2017). To Do More, Better, Faster and More Cheaply: Using Big Data in Public Administration. International Review of Administrative Sciences, 83(1S), pp. 120–135.
- Pencheva, Irina and Esteve, Marc and Mikhaylov, Slava Jankin (2020). Big Data and AI: A Transformational Shift for Government: So, What Next for Research?. Public Policy and Administration, 35(1), pp. 24–44.
- Bovens, Mark and Zouridis, Stavros (2002). From Street-Level to System-Level Bureaucracies: How Information and Communication Technology is Transforming Administrative Discretion and Constitutional Control. Public Administration Review, 62(2), pp. 174–184.
- Bell, Bernard W (2021). Replacing Bureaucrats with Automated Sorcerers?. Daedalus, 150(3), pp. 89–103.
Additional Case Studies
- ProPublica (2016). Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And it’s Biased Against Blacks.
- Wired (2020). Everything that went wrong with the botched A-Levels algorithm.
- The Guardian (2020). Ofqual's A-level algorithm: why did it fail to make the grade?.
- Dressel, Julia and Farid, Hany (2018). The Accuracy, Fairness, and Limits of Predicting Recidivism. Science Advances, 4(1), eaao5580.
- Obermeyer, Ziad and Powers, Brian and Vogeli, Christine and Mullainathan, Sendhil (2019). Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations. Science, 366(6464), pp. 447–453.
- Byrne, Thomas and Metraux, Stephen and Moreno, Manuel and Culhane, Dennis P and Toros, Halil and Stevens, Max (2012). Los Angeles County's Enterprise Linkages Project: An Example of the Use of Integrated Data Systems in Making Data-Driven Policy and Program Decisions. California Journal of Politics and Policy, 4(2).
- Lopez, Paola (2019). Reinforcing Intersectional Inequality via the AMS Algorithm in Austria. In Getzinger, Günter (Ed.), Conference Proceedings of the STS Conference (Critical Issues in Science, Technology and Society Studies) (pp. 289–309). Verlag der Technischen Universität Graz.
- Rinta-Kahila, Tapani and Someh, Ida and Gillespie, Nicole and Indulska, Marta and Gregor, Shirley (2022). Algorithmic Decision-Making and System Destructiveness: A Case of Automatic Debt Recovery. European Journal of Information Systems, 31(3), pp. 313–338.
Suggested Media
- Coded Bias (documentary film), available on UCL MediaCentral.