Forms of Artificial Intelligence (AI), like deep learning algorithms and neural networks, are being intensely explored for novel healthcare applications in areas such as imaging and diagnosis, risk analysis, lifestyle management and monitoring, health information management, and virtual health assistance. Expected benefits in these areas are wide-ranging and include increased speed in imaging, greater insight into predictive screening, and decreased healthcare costs and inefficiencies. However, AI-based clinical tools also create a host of situations in which commonly held values and ethical principles may be challenged. In this short column, we highlight three potentially problematic aspects of AI use in healthcare: (1) dynamic information and consent, (2) transparency and ownership, and (3) privacy and discrimination. We discuss their impact on patient/client, clinician, and health institution values and suggest ways to tackle this impact. We propose that AI-related ethical challenges may represent an opportunity for growth in organizations.
Stimulant drugs, transcranial magnetic stimulation, brain-computer interfaces, and even genetic modifications are all discussed as forms of potential cognitive enhancement. Cognitive enhancement can be conceived as a benefit-seeking strategy used by healthy individuals to enhance cognitive abilities such as learning, memory, attention, or vigilance. This phenomenon is hotly debated in the public, professional, and scientific literature. Many of the statements favoring cognitive enhancement (e.g., related to greater productivity and autonomy) or opposing it (e.g., related to health risks and social expectations) rely on claims about human welfare and human flourishing. But real-world evidence from the social and psychological sciences to support (or invalidate) these claims is often missing, and the debate about cognitive enhancement has stalled. In this paper, we describe a set of crucial debated questions about psychological and social aspects of cognitive enhancement (e.g., intrinsic motivation, well-being) and explain why they are of fundamental importance to address in the cognitive enhancement debate and in future research. We propose studies targeting social and psychological outcomes associated with cognitive enhancers (e.g., stigmatization, burnout, mental well-being, work motivation). We also voice a call for scientific evidence, inclusive of but not limited to biological health outcomes, to thoroughly assess the impact of enhancement. This evidence is needed to engage in empirically informed policymaking, as well as to promote the mental and physical health of users and non-users of enhancement.
Over the last two decades, researchers have promised “neuroprosthetics” for use in physical rehabilitation and to treat patients with paralysis. Fulfilling this promise is not merely a technical challenge but is accompanied by consequential practical, ethical, and social implications that warrant sociological investigation and careful deliberation. In response, this paper explores how rehabilitation professionals evaluate the development and application of brain-computer interfaces (BCIs). It thereby also asks how BCIs come to be seen as desirable or not, and implicitly, what types of persons, rights, and responsibilities are assumed in this discourse. To this end, we conducted a web-based survey (N=135) and follow-up interviews (N=15) with Canadian professionals in physical therapy, occupational therapy, and speech-language pathology. We find that rehabilitation professionals, like other publics, express hope and enthusiasm regarding the use of BCIs for assistive purposes. They envision BCI devices as powerful means to reintegrate patients and disabled people into social life but also express practical and ethical reservations about the technology, positioning themselves as uniquely qualified to inform responsible BCI design and implementation. These results further illustrate the nascent “co-production” of neural technologies and social order. More immediately, they also pose a serious challenge for implementing frameworks of responsible innovation; merely prescribing more inclusive technology development may not counteract technocratic processes and widely held ableist views about the need to augment certain bodies using technology.
As brain-computer interfaces are promoted as assistive devices, some researchers worry that this promise to “restore” individuals worsens stigma toward disabled people and fosters unrealistic expectations. In three web-based survey experiments with vignettes, we tested how refusing a brain-computer interface in the context of disability affects cognitive (blame), emotional (anger), and behavioral (coercion) stigmatizing attitudes (Experiment 1, N = 222) and whether the effect of a refusal is affected by the level of brain-computer interface functioning (Experiment 2, N = 620) or the risk of malfunctioning (Experiment 3, N = 620). We found that refusing a brain-computer interface increased blame and anger, while the level of brain-computer interface functioning did not change the effect of a refusal. Higher risks of device malfunctioning partially reduced stigmatizing attitudes and moderated the effect of refusal. This suggests that information about disabled people who refuse a technology can increase stigma toward them. This finding has serious implications for brain-computer interface regulation, media coverage, and the prevention of ableism.