This paper tackles bias in automatic hate speech detection on social media, where machine learning models trained on datasets labeled by general annotators frequently overlook linguistic heterogeneity within speaker groups. As a remedy, the authors propose a weakly supervised framework that combines contrastive and prompt-based learning strategies built on large language models with a small number of expert annotations. The proposed architecture incorporates a group estimator, a pair generator, and a knowledge-injection component to improve the model's sensitivity to sociolinguistic subtleties.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5487166
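As a rough illustration of the contrastive component only (not the paper's actual implementation, whose pair generator and group estimator are learned models), here is a minimal InfoNCE-style sketch: an anchor text embedding is pulled toward a hypothetical same-group positive pair and pushed away from out-of-group negatives. All embeddings here are random toy vectors.

```python
import numpy as np

def normalize(v):
    """Project a vector onto the unit sphere."""
    return v / np.linalg.norm(v)

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: low when the anchor is more similar to its
    positive pair than to any of the negatives. Inputs are unit vectors."""
    pos_sim = anchor @ positive / temperature
    neg_sims = negatives @ anchor / temperature
    logits = np.concatenate([[pos_sim], neg_sims])
    # Softmax cross-entropy with the positive pair at index 0.
    return -pos_sim + np.log(np.exp(logits).sum())

rng = np.random.default_rng(0)
# Hypothetical embeddings: a same-group paraphrase sits near the anchor,
# while unrelated texts are drawn at random.
anchor = normalize(rng.normal(size=16))
positive = normalize(anchor + 0.1 * rng.normal(size=16))
negatives = np.stack([normalize(rng.normal(size=16)) for _ in range(8)])

loss = contrastive_loss(anchor, positive, negatives)
print(f"contrastive loss: {loss:.4f}")
```

Minimizing this loss over many generated pairs encourages the encoder to place texts from the same speaker group close together, which is the intuition behind using a pair generator for weak supervision.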