En route to unravelling today’s multiplicity of societal challenges, making sense of social data has become a crucial endeavour in Information Systems (IS) research. In this context, Social Media Analytics (SMA) has evolved into a promising field of data-driven approaches, guiding researchers through the process of collecting, analysing, and visualising social media data. However, handling such sensitive data requires careful ethical consideration to protect data subjects, online communities, and researchers. Hitherto, the field has lacked consensus on how to safeguard ethical conduct throughout the research process. To address this shortcoming, this study proposes an extended version of an SMA framework that incorporates ethical reflection phases alongside its methodical steps. Following a design science approach, existing ethics guidelines and expert interviews with SMA researchers and ethicists serve as the basis for redesigning the framework. The framework was then assessed through multiple rounds of evaluation in the form of focus group discussions and questionnaires with ethics board members and SMA experts. The extended framework, encompassing a total of five iterative ethical reflection phases, provides simplified ethical guidance for SMA researchers and facilitates the ethical self-examination of research projects involving social media data.
Assuming that potential biases of Artificial Intelligence (AI)-based systems can be identified and controlled for (e.g., by providing high-quality training data), employing such systems to augment human resource (HR) decision-makers in candidate selection provides an opportunity to make selection processes more objective. However, as the final hiring decision is likely to remain with humans, prevalent human biases could still cause discrimination. This work investigates the impact of an AI-based system’s candidate recommendations on humans’ hiring decisions and how this relationship could be moderated by an Explainable AI (XAI) approach. We used a self-developed platform and conducted an online experiment with 194 participants. Our quantitative and qualitative findings suggest that the recommendations of an AI-based system can reduce discrimination against older and female candidates but appear to cause fewer selections of foreign-race candidates. Contrary to our expectations, the same XAI approach moderated these effects differently depending on the context.