“The model is used for classifying a text as Abusive (Hatespeech and Offensive) or Normal. The model is trained using data from Gab and Twitter and Human Rationales were included as part of the training data to boost the performance. The model also has a rationale predictor head that can predict the rationales given an abusive sentence.”

https://huggingface.co/Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two?text=I+like+you.+I+love+you
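As a rough illustration of how the quoted model could be queried, here is a minimal sketch using the Hugging Face transformers library. It assumes the checkpoint can be loaded with the standard sequence-classification head and a two-label output ("Normal" vs. "Abusive"); the model card itself points to a custom model class for the additional rationale predictor head, so the exact loading code and label order should be verified against the repository.

```python
# Minimal sketch (assumptions noted in comments): querying the HateXplain
# two-class model via the standard transformers sequence-classification head.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Assumption: the standard head maps onto the saved classifier weights.
# The model card describes a custom class with an extra rationale head,
# which this sketch does not reproduce.
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Classify the example text shown in the model's inference widget.
inputs = tokenizer("I like you. I love you.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1).squeeze()

# Assumed label order; check the model's id2label config before relying on it.
for label, p in zip(["Normal", "Abusive"], probs):
    print(f"{label}: {p:.3f}")
```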