The interdisciplinary basic research project examines the opportunities and limitations of algorithmic decision-making systems, using their deployment in criminal justice systems as an example.
Making decisions about, by and together with algorithmic decision-making systems is also becoming increasingly important in the field of public communication. The Hans-Bredow-Institut therefore participates in an interdisciplinary project on this topic, funded for four years by the Volkswagen Foundation within the funding line “Artificial Intelligence and the Society of the Future”.
To this end, machine learning algorithms are used to deduce decision rules from the input data, with the rules represented in decision trees or neural networks (algorithmic decision making; “ADM”). Over time, the AI tool improves itself by learning from its past decisions, correct or incorrect.
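The idea of deducing a decision rule from past data can be sketched in miniature. The snippet below learns a single decision-tree split (a "stump") from hypothetical case records by minimising Gini impurity; the feature name, the data, and the risk framing are illustrative assumptions, not part of any real CJS tool.

```python
# Minimal sketch of how an ADM system might derive a decision rule
# from past case data: one decision-tree split chosen by Gini impurity.
# The feature ("priors") and the records are hypothetical.

def gini(labels):
    """Gini impurity of a list of 0/1 outcome labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(records, feature):
    """Find the threshold on `feature` whose split has the lowest
    weighted Gini impurity over the observed outcomes."""
    best = (None, float("inf"))
    for t in sorted({r[feature] for r in records}):
        left = [r["reoffended"] for r in records if r[feature] <= t]
        right = [r["reoffended"] for r in records if r[feature] > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(records)
        if score < best[1]:
            best = (t, score)
    return best

# Hypothetical past cases: number of prior offenses and observed outcome.
cases = [
    {"priors": 0, "reoffended": 0},
    {"priors": 1, "reoffended": 0},
    {"priors": 4, "reoffended": 1},
    {"priors": 6, "reoffended": 1},
]

threshold, impurity = best_split(cases, "priors")
print(f"learned rule: flag as high risk if priors > {threshold}")
```

Real systems chain many such splits into full trees or replace them with neural networks, but the principle is the same: the rule is induced from historical outcomes, which is also why historical bias in the data carries over into the rule.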
The overarching aim of this project is to examine whether there are limits to this kind of ADM. ADM systems are becoming increasingly popular, especially within notoriously cash-strapped criminal justice systems (“CJS”). Within Western CJS, especially those of the USA and the UK, these tools are used at various stages of the criminal justice process to assess the risk a particular individual poses to the public (e.g. the risk of reoffending). In the USA, major civil liberties organisations such as the ACLU have even advocated their use at all stages of the criminal process to avoid possible human biases.
Against this backdrop, the project examines the following questions:
- How do humans make decisions about other humans, compared with how ADM systems make the same decisions?
- How do humans, in conjunction with ADM systems, make decisions about other humans?
- Where are the limits beyond which machines should not make decisions about people?
- And how can states decide whether ADM systems should be used within criminal justice systems at all?