The multimodal character of digital information makes moderating hate speech (HS) in the dynamic online environment even more difficult. This survey reviews current developments in HS moderation, with an emphasis on the growing role of large language models (LLMs) and large multimodal models (LMMs) in detecting, explaining, debiasing, and countering HS. The authors begin with a thorough examination of recent research, showing how text, visuals, and audio work together to disseminate HS; the combination of multiple modalities makes its propagation more intricate and nuanced. They also identify research gaps and the need for solutions in low-resource settings, especially for underrepresented languages and cultures. The survey concludes with future research directions, including novel AI approaches, ethical AI governance, and the development of context-aware systems.

https://aclanthology.org/2024.findings-emnlp.254