"Platform Providers Should Not Become Judges"

Letting social media platforms decide on the deletion of content in the sensitive area of opinion formation is not a good idea, says Dr. Jan-Hinrik Schmidt in an interview with medienpolitik.net. "Within the limits of freedom of opinion, there is a wide spectrum of 'fake news': the conscious or unconscious intention to mislead, successful or unsuccessful satire, unintentional errors in reporting, or views that are not supported by scientific evidence. It may not always be possible to clarify clearly and without controversy whether or not these are indeed 'fake news'."

However, Schmidt considers it justified to calibrate recommendation algorithms in such a way that dubious information is not spread on a mass scale. "The right to express opinions would thus be preserved, but the automated and unchecked dissemination of corresponding messages would be made more difficult."

Read the full interview here.
