There is widespread agreement that there should be a principle requiring that artificial intelligence (AI) be 'explicable'. Microsoft, Google, the World Economic Forum, and the draft AI ethics guidelines for the EU Commission all include a principle for AI that falls under the umbrella of 'explicability'. Roughly, the principle states that "for AI to promote and not constrain human autonomy, our 'decision about who should decide' must be informed by knowledge of how AI would act instead of us" (Floridi et al. in Minds Mach 28(4):689-707, 2018). There is a strong intuition that if an algorithm decides, for example, whether to give someone a loan, then that algorithm should be explicable. I argue here, however, that such a principle is misdirected. The property of requiring explicability should attach to a particular action or decision rather than to the entity making that decision. It is the context and the potential harm resulting from decisions that drive the moral need for explicability, not the process by which decisions are reached. Related to this is the fact that AI is used for many low-risk purposes for which it would be unnecessary to require that it be explicable. A principle requiring explicability would prevent us from reaping the benefits of AI used in these situations. Finally, the explanations given by explicable AI are only fruitful if we already know which considerations are acceptable for the decision at hand. If we already have these considerations, then there is no need to use contemporary AI algorithms, because standard automation would be available. In other words, a principle of explicability for AI makes the use of AI redundant.
Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents (AMAs). Reasons often given for developing AMAs include: the prevention of harm, the necessity of public trust, the prevention of immoral use, the claim that such machines are better moral reasoners than humans, and the claim that building these machines would lead to a better understanding of human morality. Although some scholars have challenged the very initiative to develop AMAs, what is currently missing from the debate is a closer examination of the reasons offered by machine ethicists to justify the development of AMAs. This closer examination is especially needed because of the amount of funding currently being allocated to the development of AMAs (from funders like Elon Musk), coupled with the amount of attention researchers and industry leaders receive in the media for their efforts in this direction. The stakes in this debate are high because moral robots would make demands on society and would force answers to a host of pending questions about what counts as an AMA and whether AMAs are morally responsible for their behavior. This paper shifts the burden of proof back to the machine ethicists, demanding that they give good reasons to build AMAs. The paper argues that until this is done, the development of commercially available AMAs should not proceed further.
With Artificial Intelligence (AI) entering our lives in novel ways, both known and unknown to us, there is both the enhancement of existing ethical issues associated with AI and the rise of new ethical issues. Much focus is placed on opening up the 'black box' of modern machine-learning algorithms to understand the reasoning behind their decisions, especially morally salient decisions. However, some applications of AI which are no doubt beneficial to society rely upon these black boxes. Rather than requiring algorithms to be transparent, we should focus on constraining AI, and the machines powered by AI, within microenvironments, both physical and virtual, which allow these machines to realize their function whilst preventing harm to humans. In the field of robotics this is called 'envelopment'. However, to put an 'envelope' around AI-powered machines we need to know some basic things about them, which we are often in the dark about. The properties we need to know are the training data, inputs, functions, outputs, and boundaries. This knowledge is a necessary first step towards the envelopment of AI-powered machines. It is only with this knowledge that we can responsibly regulate, use, and live in a world populated by these machines.
Contemporary literature investigating the significant impact of technology on our lives leads many to conclude that ethics must be a part of the discussion at an earlier stage in the design process, i.e., before a commercial product is developed and introduced. The problem, however, is how ethics can be incorporated into an earlier stage of technological development, and it is this question that we argue has not yet been answered adequately. There is no consensus amongst scholars as to the kind of ethics that should be practiced, nor as to who should perform this ethical analysis. One school of thought holds that ethics should have pragmatic value in research and design and that it should be implemented by the (computer) engineers and/or (computer) scientists themselves, while another school of thought holds that ethics need not be so pragmatic. For the latter, the ethical reflection can aim at a variety of goals and be carried out by an ethicist. None of the approaches resulting from these lines of thinking have been adopted on a wide-scale basis. To that end, the approach presented here is intended to bridge the gap between these schools of thought. It is our contention that ethics ought to be pragmatic and to provide utility for the design process, and we maintain that adequate ethical reflection, and all that it entails, ought to be conducted by an ethicist. Thus, we propose a novel role for the ethicist, the ethicist as designer, who subscribes to a pragmatic view of ethics in order to bring ethics into the research and design of artifacts, no matter the stage of development. In this paper we outline the series of steps that a pragmatic value analysis entails: uncovering relevant values, scrutinizing these values, and working towards the translation of values into technical content. In conclusion, we present a list of tasks for the ethicist in his or her role as designer on the interdisciplinary team.
Transparency is important for liberal democracies; however, the value of transparency is difficult to articulate. In this article we articulate transparency as an instrumental value for providing what we call ensurance and assurance to liberal democratic citizens. Ensurance refers to the property of liberal democracies which prevents them from sliding into authoritarianism, and assurance is the property whereby citizens are assured that ensurance exists. Looking at the rise of bulk data collection and use afforded by information communication technologies (ICTs), this paper focuses on the way that technologies disrupt relations between the state and its citizens, and suggests Value Sensitive Design as a methodology to protect key aspects of liberal democracies. Bulk data collection makes the achievement of ensurance and assurance more difficult due to two types of opacity which arise as a result of the practice: technical opacity, the difficulty for citizens of understanding the technology behind bulk data collection; and algorithmic opacity, which results from properties inherent to the algorithms that guide the collection and processing of bulk data. Design requirements are suggested to respond to the disruptions ICTs cause between liberal democracies and their citizens, disruptions which threaten representativeness, a value necessary for liberal democracies.