How come the Taliban have access to social media while former President Trump does not? The short answer is that Trump repeatedly and deliberately violated the internal speech rules of social media companies before he was banned, while certain Taliban and Taliban-affiliated accounts have not done so recently or consistently. Many such accounts are, however, regularly banned or suspended. But things are complicated.
by Martin Fertmann, Matthias C. Kettemann, David Morar and Wolfgang Schulz

Social media companies have emerged as important normative actors. Their decisions have global implications. They are, often unwillingly, in the middle of some of the world's key political conflicts. Especially in situations of political instability, the power to define communication rules matters. Whether in Syria, Kenya, Uganda, India or, most recently, Myanmar: the decisions that social media companies take reverberate around the globe.
Platforms therefore have to carefully assess how to engage with violent and powerful actors, especially if those actors, like the Taliban now, exercise state powers. This is one of the reasons why social media platforms had a hard time dealing with Trump and kept the former US President online long after he had thoroughly violated platform rules.
As we have shown in a recent study, the exercise of private power over public speech is often contested, but the conflicts are magnified when this power is asserted over parties, political candidates and office-holders that function as focal points for public debates. While most platforms' terms of use and enforcement systems are global, opinions on any preferential treatment of speech by well-known political figures and office-holders vary across national, political and legal contexts.
Banning the Taliban?
Can Twitter ban the Taliban? Yes, it can. While Twitter is certainly an important global communication platform, it is under no specific duty to provide media access to any government, much less to an internationally unrecognized military dictatorship. But when it bans the Taliban (or anyone else, for that matter), Twitter has to apply its rules fairly to its users. Applying rules equally, without unfair distinction, is a staple of most national legal orders. While private companies enjoy substantial freedom in deciding which contracts to enter into (under what conditions and with whom), this freedom is more limited when these platforms provide a service that approaches non-substitutability, that is, when they exercise substantial power over the communication spaces they have constructed. As we have argued elsewhere, the private orders of companies are part of, and intrinsically linked to, the communicative space they provide. In designing this space and setting its rules, their freedom to choose whatever rules and enforcement tools they like becomes increasingly limited. This is at the heart of demands for more transparency and accountability. At least in Germany, the Federal Court of Justice (BGH) has recently ruled that before an account is banned, the affected user must have an opportunity to defend themselves. (For deleted postings, the BGH demanded a limited ex post review.)

To the case at hand: Was Twitter wrong in banning Trump but not the Taliban? Let's check the facts:
- Trump broke Twitter's rules many times, but got a pass almost every time, first in his role as a candidate and then as President of the United States.
- The Taliban as a whole are an amorphous group of people, a loose political grouping, which could mean a general ban would overly restrict harmless pieces of content. Facebook and other platforms have had problems in the past addressing amorphous groupings, like QAnon. The Taliban's specific members are subject to individual account rules and, in the eyes of Twitter, not all of them have run afoul of them on the platform. (A WhatsApp group created by the Taliban, however, seems to have been banned.) A number of media-oriented websites also went offline, though it was not immediately clear which service provider - and at what level of the "stack" of delivering internet content - was responsible.
- The Taliban and its officers are not officially designated by the United States, where Twitter operates from, as a foreign terrorist organization (as the Pakistan Taliban and al-Qa'ida are), but are sanctioned as a Specially Designated Terrorist Group. This means that their US assets are frozen and Americans are prohibited from working with them, but only under 31 CFR § 595.204. To those outside foreign policy circles the difference may seem like semantics, but it makes a significant difference for platforms. The penalties here are much softer (see 31 CFR § 595.701 et seq.) than for providing services to "foreign terrorist organizations", which is a criminal offence for which responsible persons can be punished with up to 20 years in prison (or life, if the death of a person results) (18 U.S. Code § 2339B). Although it is dubious whether the provision of a free and freely available online service would actually suffice as "material support" under this provision, the fear of it has motivated companies such as Zoom in the past to try to prevent FTOs from using their services. Meanwhile, other platforms, like Facebook, have taken a more hardline approach and rely not just on the US "terrorist" organization lists, but also on their own "Dangerous Individuals and Organizations" designations. (A brief sketch after this list illustrates how these different lists can lead to different enforcement outcomes.)
- Twitter is also playing an interesting game: it does not want to act as a de facto validator of the Taliban as the rulers of Afghanistan, so it is simply continuing its previous treatment of the organization until the United States and the international community decide how to treat them.
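To make the practical consequence of these different list-based approaches concrete, here is a minimal, hypothetical sketch of how a platform's enforcement logic might combine government designation lists with its own internal policy list. All names (assess_account, FTO_LIST and so on) and the sample data are illustrative assumptions on our part, not any platform's actual system.

```python
# Hypothetical sketch: combining legal designation lists with a
# platform's own policy list. All names and data are illustrative.

# Organizations designated as "foreign terrorist organizations"
# (criminal "material support" exposure under 18 U.S.C. § 2339B)
FTO_LIST = {"al-Qa'ida", "Pakistan Taliban"}

# Sanctions-style designations (asset freezes, softer civil penalties)
SANCTIONS_LIST = {"Taliban"}

# A platform's own internal policy list, which may be stricter than
# any legal obligation (compare Facebook's "Dangerous Individuals
# and Organizations" policy)
PLATFORM_POLICY_LIST = {"Taliban", "QAnon"}

def assess_account(affiliation: str, strict_policy: bool) -> str:
    """Return a coarse enforcement decision for an account affiliated
    with a given organization, based on which lists it appears on."""
    if affiliation in FTO_LIST:
        # Criminal exposure: platforms tend to ban outright.
        return "ban"
    if strict_policy and affiliation in PLATFORM_POLICY_LIST:
        # Hardline approach: the platform's own designation suffices.
        return "ban"
    if affiliation in SANCTIONS_LIST:
        # Softer civil-penalty regime: enforcement turns on the
        # account's own conduct under the platform's ordinary rules.
        return "review individual conduct"
    return "allow"

# The same organization yields different outcomes depending on
# whether the platform enforces its own list or only the legal lists.
print(assess_account("Taliban", strict_policy=True))   # ban
print(assess_account("Taliban", strict_policy=False))  # review individual conduct
```

The point of the sketch is simply that the same organization can trigger very different outcomes depending on which lists a platform chooses to enforce against - which is exactly the divergence between Facebook's and Twitter's treatment of the Taliban.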
But how can companies deal with conflicts between national power-holders and international law? This question is largely unresolved. For example, Twitter and Google both state that they prevent any content found to be illegal under "local law" (whose law, one may ask, in the case of Afghanistan) from being accessed from the respective jurisdiction, while at the same time claiming their policies are rooted in international human rights law (here and here). Their transparency reports show that they do not act on all "government" claims, but they do not publicly cite international law as a reason, instead invoking a different interpretation of the respective national law or practical obstacles.
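What this "local law" enforcement typically means in practice is geo-based withholding: the content is blocked for viewers in the demanding jurisdiction but remains visible everywhere else. The following minimal sketch illustrates that logic; the class and function names are our illustrative assumptions, not any platform's actual implementation.

```python
# Hypothetical sketch of per-jurisdiction ("country-withheld")
# content enforcement. Names and structure are illustrative only.

from dataclasses import dataclass, field

@dataclass
class LegalOrder:
    """A government demand to withhold a piece of content."""
    jurisdiction: str               # country code of the demanding state
    content_id: str
    accepted_under_local_law: bool  # the platform's own legal assessment

@dataclass
class ContentItem:
    content_id: str
    withheld_in: set = field(default_factory=set)

def apply_order(item: ContentItem, order: LegalOrder) -> None:
    """Withhold the content only in the demanding jurisdiction, and
    only if the platform accepts the order's reading of local law."""
    if order.accepted_under_local_law and order.content_id == item.content_id:
        item.withheld_in.add(order.jurisdiction)

def is_visible(item: ContentItem, viewer_country: str) -> bool:
    """The content stays globally visible except where withheld."""
    return viewer_country not in item.withheld_in

post = ContentItem("post-123")
apply_order(post, LegalOrder("AF", "post-123", accepted_under_local_law=True))
print(is_visible(post, "AF"))  # False: withheld in the demanding state
print(is_visible(post, "DE"))  # True: still visible everywhere else
```

Note that the sketch also encodes what the transparency reports describe: an order the platform does not accept as valid under its own reading of local law is simply not enforced.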
Get rid of the bad, don’t touch the good
This wiggle-out strategy will likely not succeed vis-à-vis blatant human rights violations by the Taliban. It seems more likely that the difficult spot the companies find themselves in will be resolved by explicit references to international law and future United Nations resolutions on the (non)recognition of the self-proclaimed Taliban government. Similarly, companies have in the past found it easier to refer to international human rights to justify their reluctance to adhere to national decisions.

The world of online content moderation might seem easy: get rid of the bad stuff and don't touch the good stuff. But reality is never that simple. It is very difficult to decide who actually gets to determine what stays online or goes offline. And once that decision has been made, enforcing it - often via algorithmic tools - is difficult in itself.
While content moderation at scale is hard, a global conversation on the guidelines that would help platforms make better rules is essential. Where should these meta-rules come from? The Leibniz Institute for Media Research's Private Ordering Observatory is currently conducting global conversations to find out more about these questions. Some have suggested the creation of more social media councils to help platforms set better rules. These represent a good opportunity to increase the legitimacy of the normative orders of platforms, strengthen the protection of individual rights, and promote social cohesion. Within the EU, proposals to build a platform monitoring entity have also been published. But perhaps what is needed, more than individual and regional councils or oversight boards that mainly provide ex post rulings, is a global node for exchange on the most challenging developments in platform policies. The Private Ordering Observatory will function as such in an inaugural phase.
Call for new ways
What makes the development of good rules for a better online discourse especially difficult is that private and public norm-making are deeply connected: a new hybrid speech governance regime has emerged. While distinctions between private and public law have traditionally been important, online speech governance forces us to reconsider them. The status and perspectives of speech governance by internet platforms can be better understood by acknowledging that there is - and increasingly will be - a normative field in which we find a hybrid mixture of private and public norms (and values). This calls for new types of norms, new ways of "doing law" and new institutions. The development challenges the "state action doctrine" in US constitutional law, concepts of the horizontal application of fundamental rights and other normative concepts that rest on the distinction between public and private regulation. Based on that shift of perspective, new analytical tools and elements of governance architecture can be designed and evaluated. And that is needed to protect freedom of speech in digital societies.

Looking into the future, we wonder whether an entity like the European Commission for Democracy through Law ("Venice Commission") might be an interesting model. Wouldn't the platform ecosystem profit from an independent consultative body that provides expertise and conducts studies on platform law and its impact on democratic institutions? One key task of any global discussion on meta-rules for platforms will be to strengthen our understanding of the interaction of international, regional and national law with non-binding standards and transnational arrangements ("soft law") applied on and to the Internet; of the relationship between politics and power; and of the role of automated versus human enforcement.
Finally, one would hope that former President Trump recognizes that the argument that he should get his social media profiles back because he is less of a threat to human rights and global democratic discourse than the Taliban is not quite as strong as he might think.
Photo: Jon Tyson / Unsplash