The study presents a comprehensive framework for applying artificial neural networks (ANN), random forests (RF), and support vector machines (SVM) to offensive-language detection. Evaluated on the Offensive Language Identification Dataset (OLID), the proposed approach yields notable improvements across three tasks: offensive language recognition, automatic offensiveness classification, and offense target identification. According to the reported results, SVM performs best, with strong precision (76%, 87%, and 67%), F1 scores (57%, 88%, and 68%), recall (45%, 88%, and 68%), and accuracy (77%, 88%, and 68%) across the three tasks. These results point to the practical value of SVM for recognizing and filtering objectionable content on social media. With careful hyperparameter tuning and thorough preprocessing, the model outperforms some previous studies on offensive-language identification and classification tasks.

https://www.mdpi.com/2227-7390/12/13/2123
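The paper itself does not ship code, but a minimal sketch of the kind of pipeline it describes (text preprocessing, TF-IDF style features feeding a linear SVM, with grid-searched hyperparameters) might look like the following. The toy texts, labels, and parameter grid here are illustrative placeholders, not the authors' actual data or settings.

```python
# Illustrative sketch (not the authors' code): a TF-IDF + linear SVM pipeline
# with grid-searched hyperparameters for binary offensive-language detection
# in the style of OLID subtask A. Texts and labels below are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report

texts = [
    "you are a wonderful person",
    "have a great day everyone",
    "thanks for the helpful answer",
    "what a beautiful photo",
    "you are a complete idiot",
    "shut up, nobody cares",
    "this is utter garbage, moron",
    "get lost, you loser",
]
labels = ["NOT", "NOT", "NOT", "NOT", "OFF", "OFF", "OFF", "OFF"]

# Pipeline: character-insensitive TF-IDF features followed by a linear SVM.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, ngram_range=(1, 2))),
    ("svm", LinearSVC()),
])

# Hypothetical search space; the paper's actual hyperparameter grid is not
# reproduced here.
param_grid = {
    "tfidf__max_features": [None, 5000],
    "svm__C": [0.1, 1.0, 10.0],
}

search = GridSearchCV(pipeline, param_grid, cv=2, scoring="f1_macro")
search.fit(texts, labels)

print("Best parameters:", search.best_params_)
print(classification_report(labels, search.best_estimator_.predict(texts)))
```

In practice the same pipeline would be fit on the OLID training split and evaluated on its held-out test set, with precision, recall, F1, and accuracy reported per subtask as in the paper.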
