The notion of a “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy and the ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected problems (gaps in culpability, moral accountability, public accountability, and active responsibility) caused by different sources, some technical, others organisational, legal, ethical, and societal. Responsibility gaps may also occur with non-learning systems. The paper clarifies which aspects of AI may cause which gap in which form of responsibility, and why each of these gaps matters. It proposes a critical review of partial and unsatisfactory attempts to address the responsibility gap: those which present it as a new and intractable problem (“fatalism”), those which dismiss it as a false problem (“deflationism”), and those which reduce it to only one of its dimensions or sources and/or present it as a problem that can be solved by simply introducing new technical and/or legal tools (“solutionism”). The paper also outlines a more comprehensive approach to addressing responsibility gaps with AI in their entirety, based on the idea of designing socio-technical systems for “meaningful human control”, that is, systems aligned with the relevant human reasons and capacities.
In this paper, in line with the general framework of value-sensitive design, we aim to operationalize the general concept of “Meaningful Human Control” (MHC) in order to pave the way for its translation into more specific design requirements. In particular, we focus on the operationalization of the first of the two conditions identified by Santoni de Sio and Van den Hoven (2018): the so-called ‘tracking’ condition. Our investigation is conducted in relation to one specific subclass of automated systems: dual-mode driving systems (e.g. Tesla ‘Autopilot’). First, we connect and compare meaningful human control with a concept of control that is very popular in engineering and traffic psychology (Michon 1985), and we explain to what extent tracking resembles and differs from it. This helps clarify the extent to which the idea of meaningful human control is connected to, but also goes beyond, current notions of control in engineering and psychology. Second, we take the systematic analysis of practical reasoning as traditionally presented in the philosophy of human action (Anscombe, Bratman, Mele) and adapt it to offer a general framework in which different types of reasons and agents are identified according to their relation to an automated system’s behaviour. This framework is meant to help explain what reasons and what agents (should) play a role in controlling a given system, thereby enabling policy makers to produce usable guidelines and engineers to design systems that properly respond to selected human reasons. In the final part, we discuss a practical example of how our framework could be employed in designing automated driving systems.
The future adoption of automated vehicles poses many challenges, one of the more important being the preservation of control over vehicles that are no longer (fully) operated by drivers. There is consensus that vehicles should not perform actions that are unacceptable to humans. In this paper, we introduce the concept of Meaningful Human Control (MHC) as a function of a framework of the Automated Driving System (ADS). This framework is constructed from the core components that make up the ADS, primarily considered within the categories of the vehicle and the driver. Identification of these components and the chain of control allows traceability of MHC to be performed, and aids vehicle manufacturers, software developers, other vehicle component designers, and vehicle- and driver-licensing authorities in addressing many challenges related to the design and preservation of human control in automated vehicles. Operationalisation of MHC is discussed in the paper, including a suggested approach that should aid understanding and application of the concept. Four application examples are given, and recommendations are made with regard to vehicle design, human-machine interaction, transition of control, driver training, vehicle approval, and other topics. The framework and the presented concept also allow researchers to identify areas for more explicit and relevant research, and to develop models that can be applied to project future impacts. Relevance to human factors/ergonomics theory: The preservation of control over vehicles that are no longer (fully) operated by drivers is of major importance and a highly relevant topic in human factors and ergonomics research. This paper introduces the concept of Meaningful Human Control (MHC) as a function of a framework of the Automated Driving System (ADS) to address some of these challenges. The framework is essential for the construction of the theory of MHC and its operationalisation in various fields connected to human factors, HMI, and automated vehicle software design. This aids vehicle manufacturers, software developers, other vehicle component designers, and vehicle- and driver-licensing authorities in addressing these challenges.
Contemporary brain reading technologies promise to make it possible to decode and interpret mental states and processes. Brain reading could have numerous societally relevant implications. In particular, the private character of the mind might be affected, generating ethical and legal concerns. This paper aims at equipping ethicists and policy makers with conceptual tools to support an evaluation of the potential applicability and implications of current and near-future brain reading technology. We start by clarifying the concepts of mind reading and brain reading, and the different kinds of mental states that could in principle be read. Subsequently, we devise an evaluative framework composed of five criteria (accuracy, reliability, informativity, concealability, and enforceability) aimed at enabling a clearer estimation of the degree to which brain reading might realistically be deployed in contexts where mental privacy could be at stake. While accuracy and reliability capture how well a certain method can access mental content, informativity indicates the relevance the obtainable data have for practical purposes. Concealability and enforceability are particularly important for evaluating concerns about potential violations of mental privacy and civil rights. The former concerns the degree to which a brain reading method can be concealed from an individual's perception or awareness. The latter regards the extent to which a method can be used against somebody's will. With the help of these criteria, stakeholders can orient themselves in the rapidly developing field of brain reading.
Automated driving systems (ADS) with partial automation are currently available to consumers. They are potentially beneficial to traffic flow, fuel consumption, and safety, but human behaviour whilst driving with ADS is poorly understood. Human behaviour is currently expected to lead to dangerous circumstances, as ADS could place human drivers 'out of the loop' or cause other types of adverse behavioural adaptation. This article introduces the concept of 'meaningful human control' to better address the challenges raised by ADS, and presents a new framework of human control over ADS by means of literature-based categorisation. Using standards set by European authorities for driver skills and road rules, this framework offers a unique, quantified perspective on the effects of ADS on human behaviour. One main result is a rapid and inconsistent decrease in required skill- and rule-based behaviour, mismatching the increasing amount of required knowledge-based behaviour. Furthermore, the development of higher levels of automation currently demands human behaviour that is not feasible, as a mismatch arises between the behaviour drivers can supply and the behaviour the systems demand. The implications, discrepancies, and emerging mismatches this framework elicits are discussed, and recommendations are made towards future design strategies and research opportunities to provide a meaningful transition of human control over ADS. Relevance to human factors/ergonomics theory: Human factors in automated driving systems (ADS) are currently poorly understood. This paper adds to that understanding by introducing the concept of "meaningful human control" and applying it to the domain of human factors in ADS, and by presenting a new framework of human control over ADS. With it, this paper elicits several mismatches between what is currently demanded from the driver of an ADS and what such a driver is actually capable of doing. Furthermore, the discussion of these implications points to future design strategies and research opportunities in the field of human factors.