Glimpse: This research presents a multilingual semisupervised model that combines Generative Adversarial Networks (GANs) with Pretrained Language Models (PLMs), specifically XLM-RoBERTa and mBERT. Using only 20% of the annotated data from the HASOC2019 dataset, the method detects hate speech and offensive language in Indo-European languages (English, German, and Hindi), achieving notably strong performance in monolingual, multilingual, and zero-shot crosslingual training settings. The study's mBERT-based semisupervised GAN model (SS-GAN-mBERT) achieved an accuracy gain of 5.75% and an average F1 score boost of 9.23% over the baseline semisupervised mBERT model, outperforming the XLM-RoBERTa-based variant (SS-GAN-XLM). https://lnkd.in/eccmwQX9
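To illustrate the idea behind a semisupervised GAN over PLM embeddings, here is a minimal NumPy sketch in the style of GAN-BERT: a generator produces fake "sentence embeddings", and a discriminator head classifies an embedding into K real classes plus one extra "fake" class, which is what lets unlabeled and generated examples contribute to training. All names, sizes, and the random stand-in for the mBERT/XLM-R encoder output are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 768   # size of an mBERT/XLM-R sentence embedding (assumption)
NOISE = 100    # generator noise dimension (assumption)
K = 2          # real classes, e.g. hateful/offensive vs. not

# Generator: maps random noise to a fake "sentence embedding".
W_g = rng.normal(0, 0.02, (NOISE, HIDDEN))

def generator(z):
    return np.tanh(z @ W_g)

# Discriminator head: scores an embedding over K real classes + 1 "fake" class.
W_d = rng.normal(0, 0.02, (HIDDEN, K + 1))

def discriminator(h):
    logits = h @ W_d
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # softmax over K+1 classes

# Real embeddings would come from the frozen or fine-tuned PLM encoder;
# a random stand-in is used here so the sketch runs without model downloads.
real_h = rng.normal(size=(4, HIDDEN))
fake_h = generator(rng.normal(size=(4, NOISE)))

p_real = discriminator(real_h)  # supervised loss reads columns 0..K-1
p_fake = discriminator(fake_h)  # adversarial loss reads column K ("fake")
```

In training, labeled examples incur a cross-entropy loss on the K real classes, unlabeled examples are pushed away from the "fake" class, and the generator is updated adversarially to fool the discriminator, which is how only 20% labeled data can suffice.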