Human In the Loop? Autonomy and Automation in Socio-Technical Systems

Automated decisions are not always free of errors: they are based on training data that may contain unintentional biases, and they lack human contextual understanding. Individual machine decisions therefore often fail to do justice to people's individual situations. A well-known example is lending, where banks use technological systems to automatically assess applicants' creditworthiness. For this reason, there have long been calls to integrate humans into such processes so that they can monitor decision-making and thereby help improve technological systems.

The project poses the following questions: How should meaningful interaction between humans and machines be designed? What role do human decisions play in the quality assurance of automated decisions? How can we ensure that this interaction is not only legally compliant but also transparent and comprehensible? And what requirements apply to human-machine interaction when the technical system, the human decision-makers, their context, and their environment are all taken into account?

Project Focus and Transfer
Four Case Studies

Analysis of human participation in automated decision-making processes through field analyses, workshops, and dialogue formats in four selected scenarios.

Taxonomy of Influencing Factors

Investigation of the factors that influence human decisions, and identification of the errors, vulnerabilities, and strengths of all technical systems and people involved in decision-making processes.

Recommendations for Action

Development of practical solutions to optimize the collaboration between humans and machines and to improve the implementation and interpretation of existing legal and regulatory frameworks (GDPR, AI Act, and DSA).
Image: Luís Eusébio / Unsplash

(Hamburg, 12 March 2024)




Project Information


Duration: 2023-2027

Research programme:
RP1 - Transformation of Public Communication

Third-party funding

Stiftung Mercator

Contact person

Prof. Dr. Wolfgang Schulz
Director (Chairperson)

Leibniz-Institut für Medienforschung │ Hans-Bredow-Institut (HBI)
Rothenbaumchaussee 36
20148 Hamburg

Tel. +49 (0)40 45 02 17 0 (office)
