Content moderators are essential to maintaining a positive dialogue on social media, but the sheer volume of content they must evaluate hampers the moderation pipeline, and no research has examined how models might help them make decisions more quickly. Although there is now a large body of research on hate speech detection, some of it explicitly motivated by the goal of improving content moderation, published studies involving actual content moderators remain scarce. The present study examines how explanations affect the speed of moderators in a real-world setting. Experiments show that structured explanations reduce moderators' decision-making time by 7.4%, whereas generic explanations have no effect on their speed and are frequently disregarded.

https://aclanthology.org/2024.acl-short.38