The study presents a federated learning system with differential privacy for hate speech detection, adapted for low-resource languages. After fine-tuning several pre-trained language models, ALBERT emerged as the best choice for balancing privacy and performance. Experiments showed that federated learning with differential privacy works well in low-resource environments, although datasets with fewer than 20 phrases per client suffered from excessive noise. Improving model utility required balanced datasets, with hateful instances weighed against non-hateful ones. These findings provide a scalable, privacy-conscious framework for incorporating hate speech detection into social media platforms and browsers, protecting user privacy while mitigating online harm.

https://aclanthology.org/2025.naacl-srw.13.pdf
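The core mechanism combining federated averaging with differential privacy can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, clipping norm, and noise multiplier are assumptions, and real systems would track a formal privacy budget and operate on full model weight tensors.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Clip a client's model update to a maximum L2 norm.

    Bounding each client's contribution is what makes the
    subsequently added Gaussian noise yield differential privacy.
    """
    norm = np.linalg.norm(update)
    scale = min(1.0, clip_norm / (norm + 1e-12))
    return update * scale

def dp_fedavg(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """DP-FedAvg-style aggregation (illustrative sketch).

    Each client update is clipped, the clipped updates are averaged,
    and calibrated Gaussian noise is added to the average.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    avg = np.mean(clipped, axis=0)
    # Noise scale shrinks as the number of clients grows, which is
    # why very small per-client datasets (few clients/examples)
    # suffer disproportionately from the added noise.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```

The sketch also illustrates the paper's observation about small datasets: with few clients (or tiny per-client datasets), the noise term dominates the signal in the averaged update, degrading utility.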