This research presents a novel method: a multilingual semisupervised model that combines Generative Adversarial Networks (GANs) with pretrained language models (PLMs), specifically XLM-RoBERTa and mBERT. Using only 20% of the annotated data from the HASOC2019 dataset, the method detects hate speech and offensive language in three Indo-European languages (English, German, and Hindi), achieving notably strong performance in multilingual, zero-shot cross-lingual, and monolingual training settings. The mBERT-based semisupervised GAN model (SS-GAN-mBERT) outperformed its XLM-RoBERTa-based counterpart (SS-GAN-XLM) and achieved an average F1-score boost of 9.23% and an accuracy gain of 5.75% over the baseline semisupervised mBERT model.

https://www.researchgate.net/publication/379946490_Multilingual_Hate_Speech_Detection_A_Semi-Supervised_Generative_Adversarial_Approach
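To give a feel for how a semi-supervised GAN over PLM features typically works, here is a minimal NumPy sketch of a GAN-BERT-style discriminator objective: the discriminator classifies encoder features into K real classes plus one "fake" class, mixing a supervised loss on the small labeled portion with unsupervised real-vs-fake terms. This is an illustrative sketch under stated assumptions, not the paper's implementation; all names, shapes, and the random stand-in for PLM embeddings are assumptions.

```python
# Illustrative GAN-BERT-style semi-supervised discriminator head (NumPy sketch).
# A random matrix stands in for [CLS] embeddings from a PLM (e.g. mBERT);
# HIDDEN, K, and BATCH are assumed values, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

HIDDEN = 16   # stand-in for the PLM hidden size (768 for mBERT)
K = 2         # real classes, e.g. hate/offensive vs. neither
BATCH = 4

# Discriminator: one linear layer into K real classes + 1 "fake" class.
W = rng.normal(scale=0.1, size=(HIDDEN, K + 1))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def discriminate(features):
    """Probabilities over K real classes plus the fake class (index K)."""
    return softmax(features @ W)

# "Real" encoder features for labeled data, and generator forgeries.
real_feats = rng.normal(size=(BATCH, HIDDEN))
fake_feats = rng.normal(size=(BATCH, HIDDEN))
labels = rng.integers(0, K, size=BATCH)  # the small labeled portion (~20%)

p_real = discriminate(real_feats)
p_fake = discriminate(fake_feats)

# Supervised term: cross-entropy on labeled real examples.
sup_loss = -np.log(p_real[np.arange(BATCH), labels] + 1e-12).mean()
# Unsupervised terms: real examples should avoid the fake class,
# generated examples should be pushed into it.
unsup_real = -np.log(1.0 - p_real[:, K] + 1e-12).mean()
unsup_fake = -np.log(p_fake[:, K] + 1e-12).mean()

d_loss = sup_loss + unsup_real + unsup_fake
print(round(float(d_loss), 4))
```

In training, the generator would be updated in alternation to make its fake features indistinguishable from real PLM features, which is how the large unlabeled portion of the corpus contributes to the classifier.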