In light of social media contributing to violence in volatile regions, the Allard K. Lowenstein International Human Rights Clinic at Yale Law School has released a report proposing how social media giant Meta can take a human rights approach to moderating a particular kind of hate speech in conflict or crisis situations. … The report proposes that Meta adopt a signals framework for content moderation, both to determine whether specific content constitutes indirect hate speech and to help moderators decide which content to prioritize within large-scale enforcement. By examining case studies across several countries, most of which are experiencing emerging or active conflict, the report illustrates how this framework could be useful in moderating hate speech.

https://law.yale.edu/yls-today/news/lowenstein-clinic-proposes-framework-moderate-indirect-hate-speech-online