The proposed method transfers representations from well-resourced languages to detect hate speech in low-resource settings, addressing the scarcity of labeled data across languages. The system first applies supervised contrastive learning, transferring knowledge from source languages to make the most of sparse labeled data and enabling accurate identification of hate speech in underrepresented languages. The researchers then refine hate speech representations in low-resource languages with contrastive adversarial training, which sharpens the model's understanding of hate speech across linguistic boundaries and improves both its accuracy and its adaptability. Finally, the authors performed zero-shot and few-shot cross-lingual evaluations in three languages to validate their methodology; the findings demonstrate the effectiveness of the proposed contrastive learning-based models.

https://www.sciencedirect.com/science/article/abs/pii/S0952197625002969
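The paper's exact objective is not reproduced here, but supervised contrastive learning is commonly implemented with the SupCon loss (Khosla et al., 2020): in a labeled batch, each example is pulled toward all examples sharing its label (e.g., hate speech in any language) and pushed away from the rest, which is what lets labels from a high-resource source language shape the embedding space for a low-resource target. The NumPy sketch below is an illustrative, generic implementation under that assumption, not the authors' code; the `temperature` value and batch layout are placeholders.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Generic SupCon loss: each anchor's embedding is attracted to all
    same-label embeddings in the batch and repelled from different-label
    ones. `embeddings` is (batch, dim); `labels` is (batch,)."""
    # Normalize so dot products are cosine similarities.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature               # pairwise similarity logits
    n = len(labels)
    # Exclude self-similarity from both positives and the denominator.
    logits_mask = np.ones((n, n)) - np.eye(n)
    pos_mask = (labels[:, None] == labels[None, :]).astype(float) * logits_mask
    # Numerically stable log-softmax over the other samples in the batch.
    sim = sim - sim.max(axis=1, keepdims=True)
    exp_sim = np.exp(sim) * logits_mask
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    # Average log-probability of positives, for anchors that have positives.
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0
    mean_log_prob_pos = (pos_mask * log_prob).sum(axis=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()
```

In the cross-lingual setting described above, the batch would mix source- and target-language examples, so minimizing this loss clusters hate-speech embeddings together regardless of language; the adversarial step then perturbs inputs to make those clusters robust.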