Research on hate speech detection has mostly examined monolingual settings; multilingual and code-switched text, which poses distinct linguistic challenges, has received far less attention. The present work compares transformer-based models (XLM-RoBERTa, DistilBERT, Multilingual BERT, and mT5) with conventional machine learning techniques (logistic regression, support vector machines, and multinomial naïve Bayes using TF-IDF features) and investigates … Continue reading Assessing Transformers and Traditional Models for Spanish-English Code-Switched Hate Detection (TechRxiv)
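As a rough illustration of the conventional side of such a comparison, the sketch below builds a TF-IDF plus logistic regression classifier of the kind the post describes. The toy code-switched texts and labels are hypothetical, invented purely for illustration; the actual dataset, preprocessing, and hyperparameters used in the study are not shown in this excerpt.

```python
# Minimal sketch of a TF-IDF + logistic regression baseline for
# code-switched hate detection. All example texts and labels below
# are hypothetical placeholders, not data from the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "te odio mucho, you are trash",      # hypothetical hateful example
    "me encanta this song, so good",     # hypothetical benign example
    "eres horrible and everyone knows",  # hypothetical hateful example
    "que bonito day with my friends",    # hypothetical benign example
]
labels = [1, 0, 1, 0]  # 1 = hate, 0 = not hate (toy labels)

# Word uni/bigram TF-IDF features feed a linear classifier; character
# n-grams are another common choice for noisy code-switched text.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)
preds = model.predict(texts)
print(list(preds))
```

A real evaluation would of course use a held-out split and report metrics such as macro-F1 rather than predicting on the training texts.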