(Gemini Audio Overview) (preventhate.org) – Multiple research efforts are focused on improving the detection and regulation of hate speech across diverse linguistic and legal contexts, according to new content added to preventhate.org. One approach, called SMARTER, uses a two-stage, data-efficient framework in which Large Language Models (LLMs) generate synthetic explanations, achieving up to a 13.5% macro-F1 improvement in toxicity detection over few-shot baselines. Another study addresses bias in automatic detection with a weakly supervised framework that combines prompt-based learning and contrastive strategies with a limited number of expert annotations, improving sensitivity to sociolinguistic subtleties. For French, a new dataset was assembled to evaluate models, with DistilCamemBERT achieving the highest F1-score, 80%, for binary hate speech classification. A new system for Roman Urdu expanded an existing dataset and applied models such as mBERT, which reached 92% accuracy in identifying abusive and racist language patterns. Finally, comparative legal research examined hate speech regulations in five Global South countries (South Africa, Argentina, Colombia, India, and Mexico) to propose more comprehensive and effective legal measures for unequal societies.
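Macro-F1, the metric behind SMARTER's reported 13.5% improvement, averages per-class F1 scores with equal weight, so rare toxicity categories count as much as the majority class. A minimal sketch of the computation, using illustrative toy labels rather than data from any of the studies above:

```python
def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1: per-class F1 scores averaged with equal weight,
    so minority classes (e.g. rare toxicity types) matter as much as the
    majority class. This is why it is preferred on imbalanced datasets."""
    scores = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return sum(scores) / len(scores)

# Toy example with an imbalanced three-class toxicity label set
# (hypothetical labels, not taken from the SMARTER paper).
y_true = ["none", "none", "none", "hate", "offensive", "hate"]
y_pred = ["none", "none", "hate", "hate", "offensive", "none"]
print(round(macro_f1(y_true, y_pred, ["none", "hate", "offensive"]), 3))  # → 0.722
```

A "13.5% macro-F1 improvement" therefore means this averaged score rose by that margin over the few-shot baseline, not that accuracy on the majority class did.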