Glimpse: This research presents a multilingual semisupervised model that combines Generative Adversarial Networks (GANs) with Pretrained Language Models (PLMs), specifically XLM-RoBERTa and mBERT. Using only 20% of the annotated data from the HASOC2019 dataset, the method detects hate speech and offensive language in three Indo-European languages (English, German, and Hindi), achieving notably strong performance in monolingual, multilingual, and zero-shot crosslingual training settings. The study's strongest model, the mBERT-based semisupervised GAN (SS-GAN-mBERT), achieved a 5.75% accuracy gain and a 9.23% average F1-score boost over the baseline semisupervised mBERT model, outperforming the XLM-RoBERTa-based variant (SS-GAN-XLM). https://lnkd.in/eccmwQX9