This paper presents a methodological approach for the automatic identification of misogynous memes, with an emphasis on reducing bias in model predictions. The authors introduce a bias estimation technique that identifies textual and visual elements which unintentionally influence classification outcomes. To improve fairness and robustness, they propose and evaluate two families of debiasing strategies, applied at the training and inference stages respectively. Experimental results show that the proposed methods substantially improve both predictive accuracy and generalization, yielding more reliable and equitable multimodal hate speech detection systems.
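To make the two ideas concrete, below is a minimal Python sketch, not the authors' implementation, of (1) estimating how strongly an individual element (here, a text token) shifts a classifier's score via occlusion, and (2) an inference-stage correction that subtracts the estimated bias from the raw score. The function names, the bag-of-words scorer, and the toy data are all illustrative assumptions.

```python
import numpy as np

def element_bias(score_fn, samples, element):
    """Average change in the score when `element` is present vs. masked
    out -- a simple occlusion-style estimate of that element's bias."""
    deltas = []
    for sample in samples:
        if element not in sample:
            continue
        masked = [tok for tok in sample if tok != element]
        deltas.append(score_fn(sample) - score_fn(masked))
    return float(np.mean(deltas)) if deltas else 0.0

def debiased_score(score_fn, sample, bias_table):
    """Inference-stage correction: subtract the accumulated bias of the
    elements appearing in the sample from the raw classifier score."""
    correction = sum(bias_table.get(tok, 0.0) for tok in sample)
    return score_fn(sample) - correction

# Toy usage with a hypothetical bag-of-words scorer whose weights encode
# spurious correlations (e.g. neutral tokens pushing the score up).
WEIGHTS = {"woman": 0.8, "kitchen": 0.7, "hello": 0.0}
score_fn = lambda toks: sum(WEIGHTS.get(t, 0.0) for t in toks)

train = [["woman", "hello"], ["woman", "kitchen"], ["hello"]]
bias_table = {tok: element_bias(score_fn, train, tok)
              for tok in {"woman", "kitchen", "hello"}}
print(debiased_score(score_fn, ["woman", "hello"], bias_table))  # -> 0.0
```

The same occlusion logic extends to visual elements by masking image regions instead of tokens; training-stage debiasing would instead reweight or augment the data so that such spurious elements stop correlating with the label.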

https://journals.openedition.org/ijcol/1644
