The multimodal nature of digital content makes hate speech (HS) moderation in the dynamic online environment even harder. This survey examines recent developments in HS moderation, with an emphasis on the growing role of large language models (LLMs) and large multimodal models (LMMs) in detecting, explaining, debiasing, and countering HS. The authors begin with a comprehensive review of recent research, showing how text, images, and audio interact to disseminate HS; the combination of modalities makes its propagation more intricate and nuanced. They also identify research gaps and the need for solutions in low-resource settings, especially for underrepresented languages and cultures. The survey concludes with future research directions, including novel AI approaches, ethical AI governance, and the development of context-aware systems.

https://aclanthology.org/2024.findings-emnlp.254