“In a report that based its findings on a 2021 study, the Vienna-based EU Agency for Fundamental Rights (FRA) said algorithms based on poor data quality could harm people’s lives. The study comes against the backdrop of the proposed AI Act by the European Commission, which drew criticism from lawmakers and consumer groups from EU countries for not fully addressing risks from AI systems that could violate fundamental rights.”

https://www.reuters.com/legal/litigation/eu-rights-watchdog-warns-bias-ai-based-detection-crime-hate-speech-2022-12-08/