Background
Brain–computer interfaces (BCIs) are a set of technologies of increasing interest to researchers. BCI has been proposed as an assistive technology for individuals who are non-communicative or paralyzed, such as those with amyotrophic lateral sclerosis or spinal cord injury. The technology has also been suggested for enhancement and entertainment uses, and companies are currently marketing BCI devices for those purposes (e.g., gaming) as well as for health-related purposes (e.g., communication). The unprecedented direct connection that BCI creates between human brains and computer hardware raises ethical, social, and legal challenges that merit further examination and discussion.

Methods
To identify and characterize the key issues associated with BCI use, we performed a scoping review of the biomedical ethics literature, analyzing the ethics concerns cited across multiple disciplines, including philosophy and medicine.

Results
Based on this investigation, we report that BCI research and its potential translation to therapeutic intervention generate significant ethical, legal, and social concerns, notably with regard to personhood, stigma, autonomy, privacy, research ethics, safety, responsibility, and justice. Our review of the literature further determined that while these issues have been enumerated extensively, few concrete recommendations have been expressed.

Conclusions
We conclude that future research should focus on remedying the lack of practical solutions to the ethical challenges of BCI, alongside collecting empirical data on the perspectives of the public, BCI users, and BCI researchers.

Electronic supplementary material
The online version of this article (10.1186/s12910-017-0220-y) contains supplementary material, which is available to authorized users.
Since the 1960s, scientists, engineers, and healthcare professionals have developed brain–computer interface (BCI) technologies, connecting the user’s brain activity to communication or motor devices. This new technology has also captured the imagination of publics, industry, and ethicists. Academic ethics has highlighted the ethical challenges of BCIs, although these conclusions often rely on speculative or conceptual methods rather than empirical evidence or public engagement. From a social science or empirical ethics perspective, this tendency could be considered problematic and even technocratic because of its disconnect from publics. In response, our trinational survey (Germany, Canada, and Spain) reports public attitudes toward BCIs (N = 1,403) on ethical issues that were carefully derived from the academic ethics literature. The results show moderately high levels of concern toward agent-related issues (e.g., changing the user’s self) and consequence-related issues (e.g., new forms of hacking). Both facets of concern were higher among respondents who reported as female or as religious, while education, age, own and peer disability, and country of residence were associated with either agent-related or consequence-related concerns. These findings provide a first look at BCI attitudes across three national contexts, suggesting that the language and content of academic BCI ethics may resonate with some publics and their values.
Forms of artificial intelligence (AI), such as deep learning algorithms and neural networks, are being intensely explored for novel healthcare applications in areas such as imaging and diagnosis, risk analysis, lifestyle management and monitoring, health information management, and virtual health assistance. Expected benefits in these areas are wide-ranging and include increased speed in imaging, greater insight into predictive screening, and decreased healthcare costs and inefficiency. However, AI-based clinical tools also create a host of situations wherein commonly held values and ethical principles may be challenged. In this short column, we highlight three potentially problematic aspects of AI use in healthcare: (1) dynamic information and consent, (2) transparency and ownership, and (3) privacy and discrimination. We discuss their impact on the values of patients/clients, clinicians, and health institutions, and suggest ways to address this impact. We propose that AI-related ethical challenges may represent an opportunity for growth in organizations.