Conspiracy theories, hate speech directed at marginalized groups, and disinformation endanger democracy, justice, and peace. Because hate is often expressed through context rather than explicit words, automatic detection remains difficult; both machine-learning and traditional approaches are improving, but neither is yet reliable. As a complementary remedy, we propose that students build prosocial mobile applications that encourage compassion by cultivating empathy. Research from the University of Zurich and ETH shows that encouraging messages are particularly effective in supporting victims of hate speech. Drawing on our experience in ethics education and in supervising CS capstone projects, we believe students can develop ethically sound, compassionate applications. Contests such as UAB’s “Ethics in Action” design contest can help achieve this goal.

https://dl.acm.org/doi/10.1145/3696673.3723088