FAILURE OF FACEBOOK TO MODERATE TERRORISM CONTENT IN EAST AFRICA

By Amnesty International Kenya and HAKI Africa

Nairobi, 16 June 2022

Amnesty International Kenya and HAKI Africa express deep concern over the recent Institute for Strategic Dialogue findings that Meta failed to stop Al-Shabaab and the Islamic State from using its Facebook platform to spread hateful terrorist content in East Africa, and in Kenya in particular.

The Institute for Strategic Dialogue (ISD) research released yesterday demonstrates that Facebook content moderators missed extremist content on the platform between 2020 and 2022. This exposé comes in a region that remains under threat of terrorist attack, and as Kenya heads into a tightly contested 2022 general election. The election is already generating malicious and hateful content designed to disinform, divide and demonise political opponents and their supporters.

This ISD report comes five months after a February 2022 whistle-blower report alleging that Meta’s third-party content moderation contractor, Sama, fuelled mental trauma and intimidation, suppressed staff unionisation, and may have undermined the quality of content moderation. As far back as 2021, the Wall Street Journal and others revealed in the Facebook Files that Facebook significantly under-invests in content moderation in Africa, Asia and Latin America, exposing millions of users to disinformation, hate speech and violent content.

The use of automated systems and machine learning to detect violent and extremist content on the platform is not enough. Meta must publicly commit to increasing investment in human content moderation. In addition, moderators must be trained to identify and prevent violent extremism and hate messaging.

While community guidelines are now available in over 60 languages, they do not include Somali, a critical language for users in this region. Without prioritising Somali, users will remain unaware of the community standards and unable to flag harmful content.

It is time for Facebook to become more transparent and accountable to the public. We call on Facebook to regularly record and publicly share disaggregated data on the trends, levels and types of abuse reported, and on its response. In addition, Facebook must publicly state how many moderators it will deploy to tackle terrorist content, and which languages and regions they will monitor.

We call on the ICT Ministry and the Communications Authority of Kenya to actively encourage companies to develop and publicly sign a self-regulatory Code of Practice on Disinformation. The Code should contain explicit public commitments to take down illegal, malicious and hateful content and to actively mitigate the risks of disinformation. Most importantly, it should make data available to independent researchers so they can verify that companies are enforcing the Code.

We remind Facebook and all social media platforms that they are central to the promotion of digital democracies. Failing to moderate content posted by extremist groups directly threatens human rights and democracy.

Notes: Institute for Strategic Dialogue, “Under-Moderated, Unhinged and Ubiquitous: Al-Shabaab and the Islamic State Networks on Facebook”, 15 June 2022. https://www.isdglobal.org/isd-publications/under-moderated-unhinged-and-ubiquitous-al-shabaab-and-the-islamic-state-networks-on-facebook/