You Got Zucked: The Regulation of Hate Speech on Facebook

icanhearyoublogging
4 min read · Jan 14, 2021
Credit: Urban Dictionary

In ‘Speech Police: The Global Struggle to Govern the Internet’, Kaye (2019) sets out three main areas of public concern regarding the regulation of online content — the first being hate speech.

What is hate speech?

You and I might understand hate speech as the incitement of hatred towards a specific group of people, usually over something they can't control, such as race, religion, gender, or sexuality (Del Vigna et al., 2017; Vidgen, Margetts, & Harris, 2019). However, from a content moderation standpoint, hate speech is actually quite difficult to define (Kaye, 2019). Moreover, little is known about how we might combat the dissemination of hate speech online (Siegel & Badaan, 2020).

If both the definition of online hate speech and the solution to it are grey areas, how can platforms fight it?

To contextualise this question, I'd like to draw on a recent example of the dissemination of hate speech on social media: the propagation of racist hate speech on Facebook during the recent #BlackLivesMatter protests. As Griffin (2020) notes, Facebook's lack of response to far-right content surrounding the BLM movement "brought renewed attention to the problem of hate speech on social media".

Credit: CBS News

In May 2020, then-President Donald Trump posted a series of tweets warning BLM protesters that he would send the military to intervene if there was "any difficulty", amongst other escalating claims (Hern, 2020). Twitter responded within two hours, not by removing the tweets but by placing a content warning over them, hiding them unless a user specifically chose to view them and limiting their dissemination by Twitter's algorithms (Hern, 2020). Facebook, by contrast, kept the posts up; in fact, Zuckerberg himself defended the choice (Griffin, 2020), stating in a Facebook post: "Unlike Twitter, we do not have a policy of putting a warning in front of posts… I disagree strongly with how the President spoke about this, but I believe people should be able to see this for themselves".

However, it seems that Facebook isn't always so lenient: Black activists claim that the company is stifling anti-racism activist groups (Guynn, n.d.; Levy, 2020). In February 2019, Hollywood actor Liam Neeson made racist remarks during an interview. Black activist Carolyn Wysinger shared a Facebook post about Neeson, commenting "White men are so fragile". Within just fifteen minutes, Facebook had removed her post for violating its community hate speech standards (Guynn, n.d.).

These examples point directly to the answer to our earlier question: platform owners decide what is, or isn't, hate speech, and they moderate it as they see fit. As Kaye (2019) puts it, "platforms have the freedom to make their own content rules" (p. 19).

What are the consequences?

The issue, Kaye (2019) argues, is that platform-owning companies have too much power, power they were never meant to have. These companies are not public-interest bodies; when they launched their platforms, they never imagined they would be where they are today. Yet the longer they are expected to police their platforms, the more power over content moderation they will accumulate (Kaye, 2019). The result is a worrying landscape for the future of speech regulation online. As we have seen above, platforms don't always get it right. In fact, when told that Facebook has too much power over speech, Zuckerberg simply replied, "frankly, I agree" (Guynn, n.d.).

References

Del Vigna, F., Cimino, A., Dell'Orletta, F., Petrocchi, M., & Tesconi, M. (2017). Hate me, hate me not: Hate speech detection on Facebook. In Proceedings of the First Italian Conference on Cybersecurity (ITASEC17) (pp. 86–95).

Griffin, R. (2020). How public pressure forced Facebook to change its policies on hate speech. Sciences Po. Retrieved from https://www.sciencespo.fr/public/chaire-numerique/en/2020/07/09/how-public-pressure-forced-facebook-to-change-its-policies-on-hate-speech/

Guynn, J. (n.d.). Facebook while black: Users call it getting ‘Zucked,’ say talking about racism is censored as hate speech. USA Today. Retrieved from https://eu.usatoday.com/story/news/2019/04/24/facebook-while-black-zucked-users-say-they-get-blocked-racism-discussion/2859593002/

Hern, A. (2020, May 29). Twitter hides Donald Trump tweet for ‘glorifying violence’. The Guardian. Retrieved from https://www.theguardian.com/technology/2020/may/29/twitter-hides-donald-trump-tweet-glorifying-violence

Kaye, D. (2019). Speech police: The global struggle to govern the internet. Columbia Global Reports.

Levy, P. (2020, July 9). Black Activists Warn That Facebook Hasn’t Done Enough to Stop Racist Harassment. Mother Jones. Retrieved from https://www.motherjones.com/politics/2020/07/facebook-black-lives-matter/

Siegel, A. A., & Badaan, V. (2020). #No2Sectarianism: Experimental approaches to reducing sectarian hate speech online. American Political Science Review, 114(3), 837–855.

Vidgen, B., Margetts, H., & Harris, A. (2019). How Much Online Abuse Is There? A Systematic Review of Evidence for the UK. The Alan Turing Institute. Retrieved from https://www.turing.ac.uk/research/research-programmes/public-policy/programmearticles/how-much-online-abuse-there

icanhearyoublogging

Final year Media and Communications undergraduate student at Loughborough University.