The proliferation of harmful content on online platforms is a major societal problem. It comes in many forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, and self-harm, among others. … Researchers have developed different methods for automatically detecting harmful content, often focusing on specific sub-problems or on narrow communities, as what is considered harmful often depends on the platform and on the context. We argue that there is currently a dichotomy between the types of harmful content that online platforms seek to curb and the research efforts to automatically detect such content. We thus survey existing methods as well as content moderation policies by online platforms in this light, and we suggest directions for future work.

https://dl.acm.org/doi/10.1145/3603399