August 1, 2022, Islamabad – Media Matters for Democracy (MMfD) expresses grave concern over Facebook’s continuing failure to detect hate speech in advertisements submitted to the platform for publication. This failure not only speaks volumes about the negligence of Facebook’s parent company, Meta, towards non-English-speaking markets, but also exacerbates the situation in countries grappling with political volatility and ethnic tensions, laying bare the fact that Big Tech continues to ignore the implications of its mounting influence in these regions.
We find Meta’s ignorance, inadequate moderation mechanisms, and lack of resources to regulate content in a timely and effective manner extremely detrimental to vulnerable groups around the world that are already at risk of violence. The recent investigation by an international rights group into the tech giant’s ability to filter out advertisements containing harmful content is a glaring example of how Meta continues to ignore political and ethnic sensitivities in countries like Kenya, where it has repeatedly failed to curb violence promoted through its social-networking platforms.
Meta’s approval of ads laced with violent hate speech and open calls for ethnic cleansing only underscores its constant disregard and inaction towards content that leads to real-world harm. Beyond inflammatory material in regional languages, Facebook’s failure to catch hate speech even in English-language ads raises questions about what Meta calls its “super efficient AI models to detect hate speech”.
The fact that this is the third time this year that Facebook has failed such a test of content regulation on the platform is deeply alarming and must not be ignored. In March, Facebook failed a similar test run with hate speech against the Rohingya people in Myanmar, where the platform has been weaponised to target minorities. The advertisements, containing hateful and divisive content, went undetected by Facebook’s systems and were eventually approved for publication.
Later, in June, Facebook’s inability to classify and reject life-threatening, dehumanising and hateful content surfaced again when it approved similar advertisements targeting vulnerable ethnic groups in Ethiopia. Despite having been notified of its repeated failure to detect content that violates its own policies, Facebook went on to approve more hateful advertisements, only proving that its moderation systems are not equipped to handle non-English languages. Violence resulting from hate speech perpetuated through Facebook has also been witnessed in South and Southeast Asian countries, including India, Sri Lanka, Bangladesh, and the Philippines.
Meta’s blatant disregard for developing countries was publicly exposed in 2021 by one of its former employees, who accused the company of putting profits before the public good. The cache of internal documents made public to support these claims also revealed that Meta was aware of the harm caused by its social-networking products, including Instagram, and that the company deliberately chose not to act in order to gain wider exposure and greater profits.
We demand that Meta acknowledge its damaging role in the politics of non-English-speaking countries prone to instability, and that it take special measures to tackle hate speech on its platforms, given its rapid expansion into foreign markets and its lofty annual profits. For a social media giant with billions of users and every possible resource at its disposal, it is only reasonable to expect Meta to invest in mechanisms that detect and remove hate speech from its platforms before it results in real-world harm.