Bias in Artificial Intelligence (AI) is a critical and timely issue due to its sociological, economic, and legal impact, as decisions made by biased algorithms could lead to unfair treatment of specific individuals or groups. Multiple surveys have emerged to provide a multidisciplinary view of bias or to review bias in specific areas such as social sciences, business research, criminal justice, or data mining. Given the ability of Semantic Web (SW) technologies to support multiple AI systems, we review the extent to which semantics can be a "tool" to address bias in different algorithmic scenarios. We provide an in-depth categorisation and analysis of bias assessment, representation, and mitigation approaches that use SW technologies. We discuss their potential in dealing with issues such as representing disparities of specific demographics or reducing data drifts, sparsity, and missing values. We find that research on AI bias applies semantics mainly in information retrieval, recommendation, and natural language processing applications, and we argue through multiple use cases that semantics can help deal with technical, sociological, and psychological challenges.
Due to the rise in toxic speech on social media and other online platforms, there is a growing need for systems that can automatically flag or filter such content. Various supervised machine learning approaches have been proposed, trained on manually annotated toxic speech corpora. However, annotators sometimes struggle to judge, or to agree on, which text is toxic and which group is being targeted in a given text. This could be due to bias, subjectivity, or unfamiliarity with the terminology used (e.g. domain language, slang). In this paper, we propose the use of a knowledge graph to help better understand such toxic speech annotation issues. Our empirical results show that 3% of texts in a sample of 19k mention terms associated with frequently attacked gender and sexual orientation groups that were not correctly identified by the annotators.
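The idea of cross-checking annotator labels against a knowledge graph can be sketched as follows. This is a minimal toy illustration, not the paper's actual pipeline: the term-to-group mapping here is invented for demonstration, whereas the paper would draw such associations from a real knowledge graph.

```python
# Toy knowledge graph: maps terms to the target group the KG associates
# with them. Illustrative placeholders, not real slur/terminology data.
KG_TERMS = {
    "term_a": "gender",
    "term_b": "sexual_orientation",
}

def kg_groups(text):
    """Return the set of target groups the KG links to words in the text."""
    tokens = text.lower().split()
    return {KG_TERMS[t] for t in tokens if t in KG_TERMS}

def missed_by_annotators(texts, annotations):
    """Return (text, missed_groups) pairs where the knowledge graph finds
    a targeted group that the human annotators did not label."""
    results = []
    for text, labels in zip(texts, annotations):
        missed = kg_groups(text) - set(labels)
        if missed:
            results.append((text, missed))
    return results
```

Running `missed_by_annotators` over an annotated corpus would surface the kind of disagreement the abstract reports: texts whose terms the knowledge graph associates with an attacked group, but which annotators left unlabelled.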
Cybersickness involves all the adverse effects that can occur during a Virtual Reality (VR) immersion, which can compromise the quality of the user experience and limit the usability, functionality, and duration of use of VR systems. Standardised protocols help detect stimuli that may cause cybersickness in multiple users but do not fully discriminate which specific users experience cybersickness. Of the biometric measures used to monitor cybersickness in an individual, Heart Rate Variability (HRV) is one of the most widely used in previous work. However, previous studies considered only its temporal components and did not allow for rest periods between sessions, even though these can affect users' immersion. Our analysis addresses these limitations by showing that changes in HRV can measure specific levels of discomfort or "alertness" associated with the initial cybersickness stimulus induced by the 360° videos. Primarily, our empirical results show significant differences in the frequency components of HRV in response to cybersickness stimuli. These initial measurements can compete with standard subjective assessment protocols, especially for detecting whether a subject responds to a VR immersion with cybersickness symptoms.
CCS Concepts: • Human-centered computing → Laboratory experiments; Virtual reality.
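Extracting the frequency components of HRV typically means estimating the power in the standard low-frequency (LF, 0.04–0.15 Hz) and high-frequency (HF, 0.15–0.40 Hz) bands from a series of beat-to-beat (RR) intervals. The sketch below is an illustrative, simplified version of such an analysis, not the study's actual processing pipeline; the sampling rate and the FFT-based periodogram are assumptions.

```python
import numpy as np

def hrv_frequency_power(rr_ms, fs=4.0):
    """Return (lf_power, hf_power) from RR intervals in milliseconds.

    The unevenly spaced RR series is interpolated onto an even grid at
    fs Hz, then band power is summed over the standard LF (0.04-0.15 Hz)
    and HF (0.15-0.40 Hz) bands of an FFT periodogram.
    """
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                 # beat times in seconds
    t_even = np.arange(t[0], t[-1], 1.0 / fs)  # even sampling grid
    rr_even = np.interp(t_even, t, rr)         # resample RR series
    rr_even -= rr_even.mean()                  # remove the DC component
    spectrum = np.abs(np.fft.rfft(rr_even)) ** 2 / len(rr_even)
    freqs = np.fft.rfftfreq(len(rr_even), d=1.0 / fs)
    lf = spectrum[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = spectrum[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return lf, hf
```

Comparing LF and HF power before and during a cybersickness stimulus is one way such frequency-domain differences could be quantified per user.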