The goal of this project is to use machine learning (ML) and natural language processing (NLP) techniques to build a system that automatically identifies offensive and hostile statements in text. The model will be trained on a dataset of labeled text, such as social media comments and posts, to determine whether a given message is harmful. By examining the context and meaning of words, the system will detect objectionable language and flag it for moderation, helping to reduce online toxicity. The ultimate objective is an automated tool that reliably detects and manages harmful speech, contributing to safer online environments. https://ijrpr.com/uploads/V6ISSUE6/IJRPR47723.pdf
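
The pipeline described above (train a classifier on labeled comments, then flag new text for moderation) can be sketched with a small multinomial Naive Bayes model over bag-of-words features. This is only a minimal illustration of the general approach, not the system from the linked paper; the training comments and the `toxic`/`ok` labels below are invented for the example.

```python
import math
import re
from collections import Counter


def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())


class NaiveBayesToxicityClassifier:
    """Multinomial Naive Bayes over bag-of-words counts, with
    Laplace smoothing so unseen words never zero out a class."""

    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        n = len(labels)
        self.priors = {c: labels.count(c) / n for c in self.classes}
        self.word_counts = {c: Counter() for c in self.classes}
        self.totals = {c: 0 for c in self.classes}
        for text, label in zip(texts, labels):
            tokens = tokenize(text)
            self.word_counts[label].update(tokens)
            self.totals[label] += len(tokens)
        # Shared vocabulary across classes, used for smoothing.
        self.vocab = set()
        for c in self.classes:
            self.vocab.update(self.word_counts[c])
        return self

    def predict(self, text):
        """Return the class with the highest log-posterior score."""
        tokens = tokenize(text)
        v = len(self.vocab)
        best_class, best_score = None, -math.inf
        for c in self.classes:
            score = math.log(self.priors[c])
            for t in tokens:
                # Add-one (Laplace) smoothed word likelihood.
                score += math.log(
                    (self.word_counts[c][t] + 1) / (self.totals[c] + v)
                )
            if score > best_score:
                best_class, best_score = c, score
        return best_class


# Invented toy training data standing in for a labeled corpus
# of social media comments.
train_texts = [
    "you are an idiot and everyone hates you",
    "get lost you worthless troll",
    "thanks for sharing this great article",
    "what a lovely photo have a nice day",
]
train_labels = ["toxic", "toxic", "ok", "ok"]

clf = NaiveBayesToxicityClassifier().fit(train_texts, train_labels)
print(clf.predict("you worthless idiot"))   # → toxic
print(clf.predict("lovely article thanks"))  # → ok
```

A production system would replace the bag-of-words counts with contextual features (e.g. transformer embeddings), since, as the summary notes, the context and meaning of words matter for distinguishing genuinely hostile speech from benign uses of the same vocabulary.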