This paper examines how governance in AI policy documents has been framed as a way to resolve public controversies surrounding AI. It draws on studies of the governance of emerging technologies, the concept of policy framing, and an analysis of 49 recent policy documents dedicated to AI, which were prepared in the context of a technological hype expecting fast advances of AI that will fundamentally change the economy and society. The hype about AI is accompanied by major public controversy about the positive and negative effects of AI. Against the backdrop of this policy controversy, governance emerges as one of the frames that diagnoses problems and offers prescriptions. Accordingly, the current governance arrangement, characterized by an oligopoly of a small number of large companies, is identified as one of the reasons for problems such as insufficient consideration of societal needs and concerns. To address these problems, the governance frame in AI policy documents assigns more active and collaborative roles to the state and society. Amid public controversies, the state is assigned the roles of promoting and facilitating AI development while at the same time acting as a guarantor of risk mitigation and an enabler of societal engagement. High expectations are placed on public engagement with multiple publics as a way to increase diversity, representation and equality in AI development and use. While this governance frame may have normative appeal, it is not specific about addressing some well-known challenges of the proposed governance mode, such as the risk of capture by vested interests or the difficulty of achieving consensus.
Recent advances in Artificial Intelligence (AI) have led to intense debates about the benefits and concerns associated with this powerful technology. These concerns and debates have similarities with developments in other emerging technologies characterized by prominent impacts and uncertainties. Against this background, this paper asks: what can AI governance, policy and ethics learn from other emerging technologies to address concerns and ensure that AI develops in a socially beneficial way? From recent literature on the governance, policy and ethics of emerging technologies, six lessons are derived, focusing on inclusive governance with balanced and transparent involvement of government, civil society and the private sector; the diverse roles of the state, including mitigating risks, enabling public participation and mediating diverse interests; objectives of technology development that prioritize societal benefits; international collaboration supported by science diplomacy; and learning from computing ethics and Responsible Innovation.
Current discussions of the ethical aspects of big data are shaped by concerns regarding the social consequences of both the widespread adoption of machine learning and the ways in which biases in data can be replicated and perpetuated. We instead focus here on the ethical issues arising from the use of big data in international neuroscience collaborations. Neuroscience innovation relies upon neuroinformatics, large-scale data collection and analysis enabled by novel and emergent technologies. Each step of this work involves aspects of ethics, ranging from concerns for adherence to informed consent or animal protection principles and issues of data re-use at the stage of data collection, to data protection and privacy during data processing and analysis, and issues of attribution and intellectual property at the data-sharing and publication stages. Significant dilemmas and challenges with far-reaching implications are also inherent, including reconciling the ethical imperative for openness and validation with data protection compliance and considering future innovation trajectories or the potential for misuse of research results. Furthermore, these issues are subject to local interpretations within different ethical cultures applying diverse legal systems emphasising different aspects. Neuroscience big data require a concerted approach to research across boundaries, wherein ethical aspects are integrated within a transparent, dialogical data governance process. We address this by developing the concept of “responsible data governance,” applying the principles of Responsible Research and Innovation (RRI) to the challenges presented by the governance of neuroscience big data in the Human Brain Project (HBP).
The increasing use of information and communication technologies (ICTs) to help facilitate neuroscience adds a new level of complexity to the question of how ethical issues of such research can be identified and addressed. Current research ethics practice, based on ethics reviews by institutional review boards (IRB) and underpinned by ethical principlism, has been widely criticized. In this article, we develop an alternative way of approaching ethics in neuro-ICT research, based on discourse ethics, which implements Responsible Research and Innovation (RRI) through dialogues. We draw on our work in Ethics Support, using the Human Brain Project (HBP) as empirical evidence of the viability of this approach.