This study presents a federated learning system with differential privacy for hate speech detection, adapted to low-resource languages. Among the fine-tuned pre-trained language models, ALBERT proved the best choice for balancing privacy and performance. Experiments showed that federated learning with differential privacy works well in low-resource settings, although clients with fewer than 20 phrases suffered from excessive noise. Improving model utility required balanced datasets, with hateful examples weighed against non-hateful ones. These findings provide a scalable, privacy-conscious framework for integrating hate speech detection into social media platforms and browsers while protecting user privacy and mitigating online harm.
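To make the privacy mechanism concrete, here is a minimal sketch of the differentially private aggregation step such a system typically relies on: each client's model update is clipped in L2 norm, the updates are averaged, and calibrated Gaussian noise is added. This is an illustrative numpy-only sketch, not the paper's actual implementation; the function name and parameters (`clip_norm`, `noise_multiplier`) are hypothetical.

```python
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0,
                         noise_multiplier=1.0, rng=None):
    """Aggregate client updates with (Gaussian-mechanism-style) DP.

    Each update is clipped to `clip_norm` in L2 norm so no single
    client can dominate, then the clipped updates are averaged and
    Gaussian noise proportional to `noise_multiplier * clip_norm`
    is added to the mean.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        # Scale down any update whose norm exceeds the clip bound.
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(u * scale)
    mean = np.mean(clipped, axis=0)
    # Noise standard deviation shrinks as more clients participate.
    noise_std = noise_multiplier * clip_norm / len(client_updates)
    return mean + rng.normal(0.0, noise_std, size=mean.shape)
```

The sketch also illustrates the abstract's observation about small clients: when a client contributes very little data, its (clipped) signal is small relative to the fixed noise scale, so the added noise dominates and utility drops.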

https://aclanthology.org/2025.naacl-srw.13.pdf

