In particular, the chapter examines racial bias in these systems, explaining why models intended to detect hate speech can discriminate against the groups they are designed to protect and discussing efforts to mitigate these problems. It argues that hate speech detection and other forms of content moderation should be an important topic of sociological inquiry as platforms increasingly use these tools to govern speech on a global scale.
