“In a report that based its findings on a 2021 study, the Vienna-based EU Agency for Fundamental Rights (FRA) said algorithms based on poor data quality could harm people’s lives. The study comes against the backdrop of the proposed AI Act by the European Commission, which drew criticism from lawmakers and consumer groups from EU countries for not fully addressing risks from AI systems that could violate fundamental rights.”

https://www.reuters.com/legal/litigation/eu-rights-watchdog-warns-bias-ai-based-detection-crime-hate-speech-2022-12-08/