The proposed method transfers knowledge from well-resourced languages to detect hate speech in low-resource settings, addressing the scarcity of labeled data across languages. The system first applies supervised contrastive learning, which transfers information from source languages to make the most of sparse labeled data and enables accurate identification of hate speech in underrepresented languages. Next, the researchers refine hate speech representations in low-resource languages through contrastive adversarial training, which substantially improves the model's accuracy and adaptability by fostering a nuanced understanding of hate speech across linguistic boundaries. Finally, the authors validated the methodology with zero-shot and few-shot cross-lingual evaluations on three languages; the results demonstrate the effectiveness of the proposed contrastive learning-based models.
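To make the core idea concrete, here is a minimal sketch of the supervised contrastive loss (in the standard Khosla et al., 2020 formulation) that such a system could use to pull same-label examples together across languages. This is an illustrative NumPy implementation under stated assumptions, not the paper's actual code; the function name, temperature value, and toy data are hypothetical.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Illustrative supervised contrastive loss (Khosla et al., 2020 form).

    embeddings: (N, D) array of sentence representations.
    labels: (N,) integer class labels (e.g. 1 = hate, 0 = non-hate).
    """
    # L2-normalize so dot products are cosine similarities.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature                      # pairwise similarity logits
    n = len(labels)
    logits_mask = ~np.eye(n, dtype=bool)             # exclude self-comparisons
    # Numerically stable log-softmax over all other samples.
    sim_max = np.max(np.where(logits_mask, sim, -np.inf), axis=1, keepdims=True)
    exp_sim = np.exp(sim - sim_max) * logits_mask
    log_prob = sim - sim_max - np.log(exp_sim.sum(axis=1, keepdims=True))
    # Positives: other samples that share the anchor's label.
    pos_mask = (labels[:, None] == labels[None, :]) & logits_mask
    pos_counts = pos_mask.sum(axis=1)
    # Average log-probability over each anchor's positives.
    mean_log_prob_pos = (pos_mask * log_prob).sum(axis=1) / np.maximum(pos_counts, 1)
    # Minimizing this pulls same-class pairs together, pushes others apart.
    return -mean_log_prob_pos[pos_counts > 0].mean()
```

In a cross-lingual setting, the labels would come from the high-resource source language and the embeddings from a multilingual encoder, so that hateful examples cluster together regardless of language.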

https://www.sciencedirect.com/science/article/abs/pii/S0952197625002969
