The fast pace of technological change necessitates new evaluative and deliberative tools. This article develops a general, functional approach to evaluating technological change, inspired by Nissenbaum’s theory of contextual integrity. Nissenbaum (2009) introduced the concept of contextual integrity to help analyze how technological changes can produce privacy problems. Reinterpreted, the concept of contextual integrity can aid our thinking about how technological changes affect the full range of human concerns and values—not only privacy. I propose a generalized concept of contextual integrity that is applicable to a broader variety of circumstances, and I outline a new, general procedure for technological evaluation. Among the attractive features of the proposed approach to evaluating technological change are its context-sensitivity, adaptability, and principled presumptive conservatism, enabled by the mechanism the approach supplies for reevaluating existing practices, norms, and values.
In the last few decades, several philosophers have written on the topic of moral revolutions, distinguishing them from other kinds of society-level moral change. This article surveys recent accounts of moral revolutions in moral philosophy. Different authors use quite different criteria to pick out moral revolutions. Features treated as relevant include radicality, depth or fundamentality, pervasiveness, novelty and particular causes. We also characterize the factors that have been proposed to cause moral revolutions, including anomalies in existing moral codes, changing honour codes, art, economic conditions and individuals or groups. Finally, we discuss what accounts of moral revolutions have in common, how they differ and how moral revolutions are distinguished from other kinds of moral change, such as drift and reform.
This chapter examines the possibility of using artificial intelligence (AI) technologies to improve human moral reasoning and decision-making. The authors characterize such technologies as artificial ethics assistants (AEAs). They focus on just one part of the AI-aided moral improvement question: the case of an individual who wants to improve their own morality, where what constitutes an improvement is evaluated by the individual’s own values. They distinguish three broad areas in which an individual might think their own moral reasoning and decision-making could be improved: one’s actions, character, or other attributes fall short of one’s values and moral beliefs; one sometimes misjudges, or is uncertain about, what the right thing to do is, given one’s values; or one is uncertain about some fundamental moral questions, or recognizes a possibility that some of one’s core moral beliefs and values are mistaken. The authors sketch why AI tools might be thought capable of supporting moral improvement in these areas and distinguish two types of assistance: preparatory assistance, including advice and training supplied in advance of moral deliberation, and on-the-spot assistance, including on-the-spot advice and facilitation of moral functioning over the course of moral deliberation. They then turn to ethical issues that AEAs might raise, looking in particular at three under-appreciated problems posed by the use of AI for moral self-improvement: reliance on sensitive moral data, the inescapability of outside influences on AEAs, and AEA usage prompting the user to adopt beliefs and make decisions without adequate reasons.
Several philosophers have recently advanced wager-based arguments for the existence of irreducibly normative truths or against normative nihilism. Here I consider whether these wager-based arguments would cause a normative Pyrrhonian skeptic to lose her skepticism. I conclude that they would not do so directly. However, if prompted to consider a different decision problem, which I call the normativity wager for skeptics, the normative Pyrrhonian skeptic would be motivated to attempt to act in accordance with any normative reasons to which she might be subject. Consideration of the normativity wager will not inevitably cause her to lose her skepticism, but there are at least three routes by which it might. First, in considering the wager, the agent may spontaneously (non-rationally) acquire a normative belief; second, considering the wager may motivate the agent to cause herself to (non-rationally) acquire a normative belief. Via either of these indirect, non-rational routes, she would cease to be a normative Pyrrhonian skeptic. Thus, consideration of the normativity wager may have value even if it does not supply a rational argument that will dissuade skeptics. In addition, I consider the possibility of a third (rational) route by which the agent might lose her skepticism.