“To address this research gap, we collect a total of 197,566 comments from four platforms: YouTube, Reddit, Wikipedia, and Twitter, with 80% of the comments labeled as non-hateful and the remaining 20% labeled as hateful. We then experiment with several classification algorithms (Logistic Regression, Naïve Bayes, Support Vector Machines, XGBoost, and Neural Networks) and feature representations (Bag-of-Words, TF-IDF, Word2Vec, BERT, and their combination). While all the models significantly outperform the keyword-based baseline classifier, XGBoost using all features performs the best (F1=0.92). Feature importance analysis indicates that BERT features are the most impactful for the predictions.”

https://d-nb.info/1208085050/34
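To make the setup in the abstract concrete, here is a minimal sketch of how one might combine sparse TF-IDF features with dense BERT-style sentence embeddings, train an XGBoost classifier, and aggregate feature importances per feature block. The toy comments, the `all-MiniLM-L6-v2` encoder, and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: data, encoder, and parameters are assumptions,
# not the authors' actual pipeline.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sentence_transformers import SentenceTransformer
from xgboost import XGBClassifier

# Toy corpus: 0 = non-hateful, 1 = hateful
texts = [
    "thanks for sharing this, really helpful",
    "great video, learned a lot",
    "what a thoughtful comment",
    "I appreciate the detailed answer",
    "get lost, you worthless idiot",
    "people like you should disappear",
    "you are all disgusting animals",
    "nobody wants your kind here",
]
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Sparse lexical features
tfidf = TfidfVectorizer(max_features=5000)
X_tfidf = tfidf.fit_transform(texts).toarray()

# Dense contextual sentence embeddings (any BERT-like encoder could be used)
encoder = SentenceTransformer("all-MiniLM-L6-v2")
X_bert = encoder.encode(texts)

# "Combination" of representations: concatenate both feature blocks
X = np.hstack([X_tfidf, X_bert])

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, stratify=labels, random_state=0
)

clf = XGBClassifier(n_estimators=200, max_depth=4)
clf.fit(X_train, y_train)
print("F1 on held-out toy data:", f1_score(y_test, clf.predict(X_test)))

# Aggregate importances per block to see which representation the model relies on
n_tfidf = X_tfidf.shape[1]
imp = clf.feature_importances_
print("TF-IDF block importance:", imp[:n_tfidf].sum())
print("BERT block importance:  ", imp[n_tfidf:].sum())
```

Summing `feature_importances_` over each block is one simple way to compare the contribution of the lexical versus contextual features, in the spirit of the feature importance analysis the abstract describes.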