This paper presents a method for automatically identifying misogynous memes, with an emphasis on reducing bias in model predictions. The authors introduce a bias estimation technique that identifies textual and visual elements which unintentionally influence classification outcomes. To improve fairness and robustness, they propose and evaluate two families of debiasing techniques, applied at the training and inference stages respectively. Experimental results show that the proposed methods substantially improve predictive accuracy and generalization, yielding more reliable and equitable multimodal hate speech detection systems.

https://journals.openedition.org/ijcol/1644
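The paper's exact bias estimation procedure is not detailed in this summary. As a rough illustration of the general idea, one common approach (an assumption here, not necessarily the authors' method) scores each textual element by its pointwise mutual information (PMI) with the positive label, flagging tokens that co-occur with the "misogynous" class more often than chance:

```python
from collections import Counter
import math

def token_label_bias(samples):
    """Score tokens by PMI with the positive label as a simple bias proxy.
    `samples` is a list of (tokens, label) pairs with label in {0, 1}.
    Tokens with high positive PMI are spuriously predictive of the label."""
    token_counts = Counter()   # documents containing each token
    token_pos = Counter()      # positive documents containing each token
    n_pos = 0
    for tokens, label in samples:
        for t in set(tokens):  # count presence once per document
            token_counts[t] += 1
            if label == 1:
                token_pos[t] += 1
        n_pos += label
    n = len(samples)
    p_pos = n_pos / n
    scores = {}
    for t, c in token_counts.items():
        p_t = c / n
        p_t_pos = token_pos[t] / n
        if p_t_pos > 0:  # PMI undefined for tokens never seen with label 1
            scores[t] = math.log(p_t_pos / (p_t * p_pos))
    return scores

# Toy data (hypothetical): "woman" appears only in positive examples,
# so it receives the highest bias score.
data = [
    (["woman", "kitchen"], 1),
    (["woman", "kitchen"], 1),
    (["cat", "kitchen"], 0),
    (["dog", "park"], 0),
]
bias = token_label_bias(data)
```

A debiasing step could then, for instance, down-weight training examples whose text is dominated by high-bias tokens, or mask those tokens at inference time; the visual side would need an analogous score over detected image elements.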