One form of damaging internet content is hate speech, which targets or incites hatred toward a group or its members on the basis of their real or perceived identity, such as their sexual orientation, religion, or ethnicity. Given the surge in hate speech online, automatic hate speech detection has become an area of growing interest in natural language processing. Only recently, however, has it become clear that current models perform poorly when applied to new data. This survey reviews the generalisability of existing hate speech detection models and the reasons for their limited capacity to generalise, summarises previous attempts to tackle the main obstacles, and suggests future research avenues aimed at improving generalisation in hate speech detection.

https://www.researchgate.net/publication/352490211_Towards_generalisable_hate_speech_detection_a_review_on_obstacles_and_solutions
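To make the generalisation problem concrete, below is a minimal sketch of the kind of cross-dataset evaluation the review is concerned with: a classifier trained on one hate speech corpus is tested both on held-out data from the same corpus and on a second, unseen corpus, and the gap between the two scores indicates how poorly the model transfers. The file names, column names, and the TF-IDF plus logistic regression model are placeholder assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch: cross-dataset evaluation of a hate speech classifier.
# Corpus files and columns ('text', 'label' with 1 = hate, 0 = not hate) are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

source = pd.read_csv("source_corpus.csv")   # e.g. a Twitter-based dataset (placeholder)
target = pd.read_csv("target_corpus.csv")   # e.g. a forum-based dataset (placeholder)

# Hold out part of the source corpus for the in-domain comparison.
train, in_domain_test = train_test_split(source, test_size=0.2, random_state=0)

vectoriser = TfidfVectorizer(min_df=2, ngram_range=(1, 2))
X_train = vectoriser.fit_transform(train["text"])
clf = LogisticRegression(max_iter=1000).fit(X_train, train["label"])

# In-domain F1: test data drawn from the same distribution as the training data.
in_f1 = f1_score(in_domain_test["label"],
                 clf.predict(vectoriser.transform(in_domain_test["text"])))

# Cross-dataset F1: the drop from in_f1 to cross_f1 is the generalisation gap.
cross_f1 = f1_score(target["label"],
                    clf.predict(vectoriser.transform(target["text"])))

print(f"In-domain F1: {in_f1:.3f}  Cross-dataset F1: {cross_f1:.3f}")
```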