This study presents a federated learning system with differential privacy for hate speech detection, adapted to low-resource languages. Among the pre-trained language models fine-tuned for the task, ALBERT proved the best choice for balancing privacy and performance. Experiments showed that federated learning with differential privacy works well in low-resource settings, although clients with fewer than 20 phrases suffered from excessive noise. Improving model utility also required balanced datasets, with hateful examples matched against non-hateful instances. By protecting user privacy while mitigating online harm, these findings offer a scalable, privacy-conscious framework for integrating hate speech detection into social media platforms and browsers.

https://aclanthology.org/2025.naacl-srw.13.pdf
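The core mechanism behind this kind of system, federated averaging with differentially private noise added to clipped client updates, can be sketched roughly as follows. This is a minimal illustration, not the paper's actual training setup; the function name and parameters (`clip_norm`, `noise_multiplier`) are assumptions for the sketch:

```python
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Aggregate per-client model updates with differential privacy.

    Each client's update vector is clipped to `clip_norm`, bounding any
    single client's influence on the aggregate, and Gaussian noise scaled
    to the clip norm is added to the average.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        # Scale down (never up) so each update has norm at most clip_norm.
        clipped.append(update * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise standard deviation follows the clipped-sum sensitivity,
    # divided by the number of participating clients.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```

With `noise_multiplier=0` this reduces to plain clipped federated averaging. The excessive-noise problem the study reports for clients with very few phrases corresponds to a poor signal-to-noise ratio here: tiny local datasets yield small, noisy updates that the added Gaussian noise easily swamps.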