According to Collingridge's "control dilemma," influencing technological developments is easy when their implications are not yet manifest, yet once these implications become known, they are difficult to change. This article revisits the Collingridge dilemma in the context of contemporary ethics of technology, where technologies affect both society and the value frameworks we use to evaluate them. Early in its development, we do not know how a technology will affect the value frameworks from which it will be evaluated, while later, when the implications for society and morality are clearer, it is more difficult to guide the development in a desirable direction. Present-day approaches to this dilemma focus either on methods to anticipate the ethical impacts of a technology ("technomoral scenarios"), which tend to be too speculative to be reliable, or on ethically regulating technological developments ("sociotechnical experiments"), which forgo anticipation of future implications. We present the approach of technological mediation as an alternative way to address this dilemma.
This article explores the understanding of values in Responsible Research and Innovation (RRI). First, it analyses how two mainstream RRI approaches, the largely substantive one by Von Schomberg and the procedural one by Stilgoe and colleagues, identify and conceptualize values. We argue that by treating values as relatively stable entities, directly available for reflection, both fall into an 'entity trap'. As a result, the hermeneutic work required to identify values is overlooked. We therefore seek to bolster a practice-based take on values, which approaches them as the evolving results of valuing processes. We highlight how this approach views values as lived realities, interactive and dynamic, discuss its methodological implications for RRI, and explore potential limitations. Overall, the strength of this approach is that it enables RRI scholars and practitioners to better acknowledge the complexities involved in valuing.
Automatic speech recognition (ASR) systems promise to deliver objective interpretation of human speech. Practice and recent evidence suggest that state-of-the-art (SotA) ASR systems struggle with the large variation in speech due to, e.g., gender, age, speech impairment, race, and accent. Many factors can cause bias in an ASR system. Our overarching goal is to uncover bias in ASR systems in order to work towards proactive bias mitigation in ASR. This paper is a first step towards this goal and systematically quantifies the bias of a Dutch SotA ASR system against gender, age, regional accents, and non-native accents. Word error rates are compared, and an in-depth phoneme-level error analysis is conducted to understand where bias occurs. We primarily focus on bias due to articulation differences in the dataset. Based on our findings, we suggest bias mitigation strategies for ASR development.
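The abstract above quantifies bias by comparing word error rates (WER) across speaker groups. As a minimal illustration of that kind of group-wise comparison (this is not the paper's code; the utterances, group labels, and resulting numbers are hypothetical), the following Python sketch computes a per-group WER from reference transcripts and ASR hypotheses:

```python
# Minimal sketch (hypothetical data): quantify group-wise ASR bias by
# comparing word error rates (WER) across speaker groups.
from collections import defaultdict

def word_errors(reference: str, hypothesis: str) -> tuple[int, int]:
    """Return (word-level edit distance, number of reference words)."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over word tokens.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)], len(ref)

# Hypothetical utterances: (speaker group, reference transcript, ASR output).
utterances = [
    ("female",     "de kat zit op de mat", "de kat zit op de mat"),
    ("male",       "de kat zit op de mat", "de kat zat op de mat"),
    ("non-native", "de kat zit op de mat", "de kat zit of de mat"),
]

errors, words = defaultdict(int), defaultdict(int)
for group, ref, hyp in utterances:
    e, n = word_errors(ref, hyp)
    errors[group] += e
    words[group] += n

for group in errors:
    print(f"{group}: WER = {errors[group] / words[group]:.2%}")

# A persistent WER gap between groups (e.g., non-native vs. native accents)
# is the kind of signal that would prompt the follow-up phoneme-level
# error analysis described in the abstract.
```

In practice such a comparison would be run over a full evaluation set with speaker metadata, after which the groups with elevated WER are inspected at the phoneme level to locate where the errors concentrate.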
This paper explores the role of moral uncertainty in explaining the morally disruptive character of new technologies. We argue that existing accounts of technomoral change do not fully explain its disruptiveness. This explanatory gap can be bridged by examining the epistemic dimensions of technomoral change, focusing on moral uncertainty and inquiry. To develop this account, we examine three historical cases: the introduction of the early pregnancy test, the contraceptive pill, and brain death. The resulting account highlights what we call "differential disruption" and provides a resource for fields such as technology assessment, ethics of technology, and responsible innovation.
In this paper, I argue that AI-powered voice assistants, like all technologies, actively mediate our interpretative structures, including values. I show this by explaining the productive role of technologies in the way people make sense of themselves and those around them. More specifically, I rely on the hermeneutics of Gadamer and the material hermeneutics of Ihde to develop a hermeneutic lemniscate as a principle of technologically mediated sense-making. The lemniscate principle links people, technologies, and the sociocultural world in the joint production of meaning and explicates the feedback channels between the three counterparts. When people make sense of technologies, they necessarily engage their moral histories to comprehend new technologies and fit them into daily practices. As such, the lemniscate principle offers a chance to explore the moral dynamics taking place during technological appropriation. Using digital voice assistants as an example, I show how these AI-guided devices mediate our moral inclinations, decisions, and even our values, while also suggesting how to use and design them in an informed and critical way.