MultiHateClip labels videos as hateful, offensive, or normal, using hate lexicons and human annotations centered on gender-based hatred. Hateful content is defined as discrimination against a particular group of people on the basis of a particular characteristic, such as sexual orientation, whereas offensive content causes discomfort without intending to discriminate against anyone. After screening more than 10,000 videos, the researchers selected 1,000 annotated short clips from YouTube and 1,000 from Bilibili to represent English and Chinese, respectively. A recurring theme across these clips was hate speech directed at women on the basis of gender, and most of them combined text, visual, and audio elements to convey hatred, which underscores the need for a multimodal approach to understanding hate speech. The researchers also anticipated that hateful and merely offensive videos would be difficult to tell apart, since both share characteristics such as inflammatory language and controversial subjects. The differences in tone, context, and intent are subtle enough that distinguishing offensive from hateful content challenges both machine learning models and human annotators.

Source: https://techxplore.com/news/2024-10-multilingual-dataset-video-youtube-bilibili.html
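The article does not include code, but as a rough illustration of the three-way labeling scheme and the multimodal framing, a minimal sketch might look like the following. The field names, per-modality scores, thresholds, and fusion rule are assumptions made for illustration, not the dataset's actual schema or the authors' method.

```python
from dataclasses import dataclass
from enum import Enum


class Label(Enum):
    """Three-way annotation scheme described for MultiHateClip."""
    HATEFUL = "hateful"      # targets a group based on a characteristic (e.g. gender)
    OFFENSIVE = "offensive"  # causes discomfort without intent to discriminate
    NORMAL = "normal"


@dataclass
class ClipAnnotation:
    """One annotated clip; the feature fields here are hypothetical placeholders."""
    video_id: str
    platform: str        # "youtube" (English) or "bilibili" (Chinese)
    transcript: str      # text modality, e.g. subtitles or an ASR transcript
    text_score: float    # hypothetical per-modality hatefulness scores in [0, 1]
    visual_score: float
    audio_score: float
    label: Label         # gold label from human annotation


def late_fusion_prediction(clip: ClipAnnotation,
                           hateful_threshold: float = 0.7,
                           offensive_threshold: float = 0.4) -> Label:
    """Toy late-fusion rule: average the per-modality scores and threshold.

    A real system would learn the fusion; the thresholds here are arbitrary.
    """
    score = (clip.text_score + clip.visual_score + clip.audio_score) / 3
    if score >= hateful_threshold:
        return Label.HATEFUL
    if score >= offensive_threshold:
        return Label.OFFENSIVE
    return Label.NORMAL
```

The point of the sketch is simply that no single modality decides the label: the same averaged score can sit near the offensive/hateful boundary, which is where tone, context, and intent, and hence multimodal evidence, matter most.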