In particular, the chapter examines racial bias in these systems, explaining why models intended to detect hate speech can discriminate against the very groups they are designed to protect, and discussing efforts to mitigate these problems. It argues that hate speech detection and other forms of content moderation should be an important topic of sociological inquiry as platforms increasingly use these tools to govern speech on a global scale.

https://academic.oup.com/edited-volume/55209/chapter-abstract/430648288?redirectedFrom=fulltext