Content moderation can succeed only when the criteria applied match users’ views of hate speech. We employed a multi-method approach to investigate what media consumers consider to be hate speech and which factors shape this impression. First, a representative sample of the Swiss population (N = 2000) completed our survey. Second, participants in a two-week mobile longitudinal linkage study who reported regular exposure to hate speech answered questionnaires and uploaded screenshots of hate speech. We examined N = 564 screenshots to assess whether they fit standard scholarly definitions of hate speech. Our results demonstrate that insults and rudeness are more likely to be classified as hate speech when they negatively affect a person’s social identity, and that self-reports reveal more exposure to hate speech than screenshots do.

https://www.tandfonline.com/doi/full/10.1080/1369118X.2025.2461646