In light of social media contributing to violence in volatile regions, the Allard K. Lowenstein International Human Rights Clinic at Yale Law School has released a report proposing how social media giant Meta can take a human rights approach to moderating a particular kind of hate speech in conflict or crisis situations. …

The report proposes that Meta adopt a signals framework for content moderation, both to determine whether specific content constitutes indirect hate speech and to help moderators decide which content to prioritize within large-scale enforcement. By examining case studies across several countries, most of which are experiencing emerging or active conflict, the report illustrates how this framework could be useful in moderating hate speech.

https://law.yale.edu/yls-today/news/lowenstein-clinic-proposes-framework-moderate-indirect-hate-speech-online