
When Scholars Sprint, Bad Algorithms Are on the Run

In the first research sprint of the project "Ethics of Digitalisation", funded by the Mercator Foundation, international researchers looked at the use of AI in the moderation of online content. PD Dr. Matthias Kettemann and Alexander Pirang provide an overview of the key findings in this blog article.
 
Read the full article here

Abstract
In response to increasing public pressure to tackle hate speech and other challenging content, platform companies have turned to algorithmic content moderation systems. These automated tools promise to identify potentially illegal or unwanted material more effectively and efficiently. But algorithmic content moderation also raises many questions, none of which have simple answers. Where is the line between hate speech and freedom of expression, and how can it be drawn automatically on a global scale? Should platforms scale up the use of AI tools only for illegal online speech, such as the promotion of terrorism, or also for regular content governance? Are platforms’ algorithms over-enforcing against legitimate speech, or are they rather failing to limit hateful content on their sites? And how can policymakers ensure an adequate level of transparency and accountability in platforms’ algorithmic content moderation processes?


Kettemann, M. C.; Pirang, A. (2020): When Scholars Sprint, Bad Algorithms Are on the Run. In: HIIG Digital Society Blog, 3 December 2020, online: https://www.hiig.de/en/when-scholars-sprint-bad-algorithms-are-on-the-run

About this publication

Year of publication: 2020
