In light of social media contributing to violence in volatile regions, the Allard K. Lowenstein International Human Rights Clinic at Yale Law School has released a report proposing how social media giant Meta can take a human rights approach to moderating a particular kind of hate speech in conflict or crisis situations. …

The report proposes that Meta adopt a signals framework for content moderation, both to determine whether specific content constitutes indirect hate speech and to help moderators decide which content to prioritize within large-scale enforcement. By examining case studies across several countries, most of which are experiencing emerging or active conflict, the report illustrates how this framework could be useful in moderating hate speech.

https://law.yale.edu/yls-today/news/lowenstein-clinic-proposes-framework-moderate-indirect-hate-speech-online