This paper presents a method for automatically identifying misogynous memes, with an emphasis on reducing bias in model predictions. The authors introduce a bias estimation technique to identify textual and visual elements that unintentionally influence classification outcomes. To improve fairness and robustness, they propose and evaluate two families of debiasing techniques, applied at the training and inference stages respectively. Experimental results show that the proposed methods substantially improve generalization and predictive accuracy, yielding more reliable and equitable multimodal hate speech detection systems.

https://journals.openedition.org/ijcol/1644
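As a loose illustration of this kind of pipeline (a minimal sketch, not the authors' implementation), the Python snippet below scores tokens by pointwise mutual information with the misogynous label to surface candidate textual biases, then applies a simple inference-stage correction that subtracts a scaled bias-only model's logits from the full model's logits. All names, the PMI scoring, and the subtraction scheme are assumptions for illustration, not details taken from the paper.

```python
import math
from collections import Counter

def token_bias_scores(texts, labels, min_count=5):
    """Hypothetical bias estimation: score how strongly each token
    co-occurs with the positive (misogynous) label, via pointwise
    mutual information. High-PMI tokens are candidate spurious cues."""
    token_counts = Counter()
    token_pos_counts = Counter()
    n_pos = sum(labels)
    n = len(labels)
    for text, label in zip(texts, labels):
        for tok in set(text.lower().split()):
            token_counts[tok] += 1
            if label == 1:
                token_pos_counts[tok] += 1
    scores = {}
    for tok, count in token_counts.items():
        if count < min_count:
            continue
        # PMI(token, pos) = log( P(pos | token) / P(pos) )
        p_pos_given_tok = token_pos_counts[tok] / count
        p_pos = n_pos / n
        if p_pos_given_tok > 0:
            scores[tok] = math.log(p_pos_given_tok / p_pos)
    return scores

def debiased_logits(full_logits, bias_only_logits, alpha=1.0):
    """Hypothetical inference-stage correction: subtract a scaled
    bias-only model's logits so predictions lean less on shortcuts
    (one common ensemble-style debiasing scheme)."""
    return [f - alpha * b for f, b in zip(full_logits, bias_only_logits)]

if __name__ == "__main__":
    # Toy data: "kitchen" spuriously co-occurs with the positive label.
    texts = ["kitchen joke caption", "cat picture",
             "kitchen recipe blog", "dog picture"]
    labels = [1, 0, 1, 0]
    print(token_bias_scores(texts, labels, min_count=1))
    print(debiased_logits([2.0, -1.0], [1.5, 0.2], alpha=0.8))
```

In this toy run, "kitchen" receives a high PMI score because it appears only in positively labeled captions, which is exactly the kind of unintended shortcut a bias estimation step would flag before training- or inference-stage mitigation.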