Social networks’ anti-racism policies belied by users’ experience
The world’s biggest social networks say racism isn’t welcome on their platforms, but a combination of poor enforcement and weak rules has allowed hate to flourish.
In the hours after England’s loss to Italy in the European Football Championship, both Twitter and Facebook, which owns Instagram, issued statements condemning the swelling racist abuse.
“The abhorrent racist abuse directed at England players last night has absolutely no place on Twitter,” the social network said on Monday morning. A Facebook spokesperson said similarly: “No one should have to experience racist abuse anywhere, and we don’t want it on Instagram.”
But the statements bore little relation to the experience of the companies’ users. On Instagram, where thousands left comments on the pages of Marcus Rashford, Bukayo Saka and Jadon Sancho, supportive users who tried to flag abuse to the platform were surprised by the response.
“Due to the high volume of reports that we receive, our review team hasn’t been able to review your report,” a number of users were told over the course of the day. “However, our technology has found that this post probably doesn’t go against our community guidelines.” Instead, they were advised to personally block the users who posted the abuse, or mute phrases so they didn’t see them.
The posts were undeniably racist, attributing the players’ failure to score in a penalty shootout to their race, or posting monkey or banana emojis, yet the automated moderator decided otherwise – with no obvious way to appeal and bring the matter to human eyes.
That moderation was mistaken, Facebook now says. In fact, monkey and banana emoji are specifically listed as examples of “dehumanising hate speech” that is banned on the platform, in training documents given to Instagram moderators seen by the Guardian.
“It is absolutely not okay to send racist emojis, or any kind of hate speech, on [the platform]”, Instagram boss Adam Mosseri said. “To imply otherwise is to be deliberately misleading and sensational.”
Facebook’s definition of hate speech is broad, and covers “violent or dehumanising speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation.”
Twitter, however, takes a narrower view, and bans only hate speech that could “promote violence against or directly attack or threaten other people on the basis of race” or other protected characteristics. Users can be penalised for “targeting individuals with repeated slurs, tropes or other content that intends to dehumanise, degrade or reinforce negative or harmful stereotypes about a protected category”, the company’s public rules state.
Sunder Katwala, the director of the British Future thinktank, points out that a number of messages that are plainly racist do not fall under that guideline. “No blacks in the England team – keep our team white,” for instance, and “Marcus Rashford isn’t English – blacks can’t be English,” would be allowed on Twitter’s platform, Katwala was told.
“What you can’t do on Twitter is ‘dehumanise’ a group – eg by faith or ethnicity,” Katwala added. “‘The blacks are vermin – deport them all’ has been against the rules since Dec 2020, after 18 months of pressure that this should apply to ethnicity as well as faith!”
Some newer social networks are seeking to avoid the pitfalls of their older competitors. TikTok, for instance, has a “hateful behaviour” policy that straightforwardly bans “all slurs, unless the terms are reappropriated … or do not disparage”, as well as banning “hateful ideology”. The video-sharing platform has long taken the stance that its American counterparts place too much weight on abstract ideals of free speech at the expense of building a community that is pleasant to be a part of.
Twitter and Instagram did not reply to multiple requests for comment from the Guardian.