One technology used to identify harmful content, such as hate speech, is the automated classifier. Deploying these safety measures can substantially reduce people's exposure to harm online. Researchers also use these classifiers to measure how platform changes (such as altering rules or banning certain users or content) affect the prevalence of hate speech. However, a recent Ofcom investigation stresses that researchers should disclose which classifiers they employed and the performance metrics those classifiers achieved. This is because classifier performance can vary widely: a popular classifier, for instance, may perform poorly on a particular dataset.

https://www.ofcom.org.uk/news-centre/2024/how-accurate-are-online-hate-speech-detection-tools
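To make such disclosure concrete, the standard metrics to report per dataset are precision, recall, and F1. Below is a minimal sketch of how these might be computed with scikit-learn; the labels and predictions are hypothetical placeholders, not data from the Ofcom investigation.

```python
# Minimal sketch: reporting per-dataset performance for a hate speech
# classifier. The gold labels and predictions below are hypothetical
# placeholders; in practice they would come from running a real
# classifier on a labelled evaluation set.
from sklearn.metrics import classification_report

# Hypothetical gold labels and classifier outputs for one dataset
# (1 = hate speech, 0 = not hate speech).
y_true = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]

# Precision, recall, and F1 per class -- the figures researchers
# should disclose alongside the name of the classifier they used.
print(classification_report(y_true, y_pred,
                            target_names=["not hate", "hate"]))
```

Repeating this report for each dataset a classifier is evaluated on makes the variation in performance visible, rather than leaving readers to assume a single headline accuracy figure holds everywhere.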