The multimodal nature of digital content makes hate speech (HS) moderation in dynamic online environments especially challenging. This survey reviews recent developments in HS moderation, with an emphasis on the growing role of large language models (LLMs) and large multimodal models (LMMs) in detecting, explaining, debiasing, and countering HS. The authors begin with a comprehensive review of recent research, showing how text, visuals, and audio interact to disseminate HS, and how the combination of modalities makes its propagation more intricate and nuanced. They also identify research gaps, particularly the need for solutions in low-resource settings and for underrepresented languages and cultures. The survey concludes with future research directions, including novel AI approaches, ethical AI governance, and the development of context-aware systems.

Paper: https://aclanthology.org/2024.findings-emnlp.254
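The interplay of modalities can be made concrete with a minimal late-fusion sketch. This is not a method from the survey: the modality names, weights, and threshold below are illustrative assumptions, and real systems would derive the per-modality scores from trained text, image, and audio classifiers.

```python
# Illustrative late-fusion sketch for multimodal hate-speech scoring.
# Modality names, weights, and the 0.5 threshold are assumptions for
# demonstration only; they are not taken from the survey.

def fuse_scores(scores, weights=None, threshold=0.5):
    """Combine per-modality hate probabilities into one decision.

    Returns (fused_score, is_hateful). A max over modalities is taken
    alongside the weighted mean, since hate may be carried by a single
    modality (e.g. a benign caption paired with a hateful image).
    """
    if weights is None:
        weights = {m: 1.0 for m in scores}
    total = sum(weights[m] for m in scores)
    mean = sum(scores[m] * weights[m] for m in scores) / total
    fused = max(mean, max(scores.values()))
    return fused, fused >= threshold

# Example: the text alone looks benign, but the image score flags the post.
fused, flag = fuse_scores({"text": 0.1, "image": 0.8})
```

The max term reflects one reason multimodal HS is hard: averaging modalities can wash out a strong signal that appears in only one of them.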