The multimodal nature of digital content makes hate speech (HS) moderation in the dynamic online environment even more challenging. This survey reviews recent developments in HS moderation, with an emphasis on the growing role of large language models (LLMs) and large multimodal models (LMMs) in detecting, explaining, debiasing, and countering HS. The authors begin with a comprehensive analysis of recent studies, showing how text, images, and audio interact to spread HS; combining several modalities makes HS propagation more subtle and harder to moderate. They also identify research gaps, particularly the need for solutions in low-resource settings and for underrepresented languages and cultures. The survey concludes with future research directions, including novel AI methods, ethical AI governance, and the development of context-aware systems.

https://aclanthology.org/2024.findings-emnlp.254
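
As a minimal illustration of the LLM-based detection setting the survey discusses, the sketch below runs zero-shot hate-speech classification on text posts with the Hugging Face `transformers` pipeline. The model name, candidate labels, and example posts are illustrative assumptions, not the survey's specific setup, and a real moderation pipeline would also need to handle images and audio.

```python
# Minimal sketch: zero-shot hate-speech detection with an off-the-shelf model
# via the Hugging Face `transformers` pipeline. Model, labels, and posts are
# illustrative choices, not the survey's setup.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # assumption: any NLI-capable model works here
)

posts = [
    "I can't stand people from that country, they should all leave.",
    "Congratulations to the whole team on the launch!",
]

labels = ["hate speech", "offensive but not hateful", "neutral"]

for post in posts:
    result = classifier(post, candidate_labels=labels)
    # result["labels"] is sorted by descending score; take the top label.
    print(f"{post!r} -> {result['labels'][0]} ({result['scores'][0]:.2f})")
```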
