The article clarifies at which stages of the life cycle of artificial intelligence systems (AIS) ethical issues arise and reviews global and domestic trends in this area. International and national experience with ethical issues in the use of AIS in healthcare is described. International and national strategies for the development of AI in healthcare are analyzed, and their main differences are identified. Special attention is paid to the national strategy for developing AI in domestic healthcare. The main conclusions are summarized, and the importance of a strong, successful AI-based healthcare system that builds trust and complies with ethical standards is emphasized.
The use of artificial intelligence (AI) technologies in Russian healthcare is one of the priority areas for implementing the country's national strategy for the development of AI. The introduction of AI-based digital solutions in healthcare facilities should improve the standard of living of the population and the quality of medical care, including in the areas of preventive examinations, diagnostics based on image analysis, prediction of disease development, selection of optimal drug dosages, reduction of the threat of pandemics, and automation and increased accuracy of surgical interventions. Policy management and technical regulation in the field of AI in healthcare are under development. A domestic market for relevant solutions has emerged, and some products have received registration certificates as medical devices from Roszdravnadzor (Federal Service for Surveillance in Healthcare). Various teams of scientists carry out research work. At the same time, Russia still lags behind the leading countries in the field of AI, such as the United States and China. Investments in AI healthcare products dropped significantly in 2021. The main reasons for the lag, at least in terms of market indicators, are low demand and the inability of state medical organizations to fund AI projects. Other issues lie in the area of trust in the safety and effectiveness of such solutions.
Artificial intelligence technologies in medical practice are a promising direction worldwide. AI-based medical decision support systems and diagnostic and screening programs can help medical personnel with routine and complex tasks and improve the level of medical care provided to patients. At the same time, the development, production, and distribution of artificial intelligence systems must be regulated without fail. Registration and subsequent control (post-registration monitoring) of artificial intelligence systems in medicine require the creation and adjustment of the legal framework and technical regulation. The Russian Federation has developed a promising development strategy in this area. Seven national standards have been developed by experts in the field of artificial intelligence in healthcare. These standards establish the procedures for conducting clinical and technical trials, performance requirements, the concept of the life cycle, a quality management system, and risk management. A separate standard is devoted to the creation of datasets for training and testing the developed algorithms, the requirements for them, and a metadata format. There are plans to bring the developed national standards to the international level, which will allow Russian manufacturers of artificial intelligence systems that implement these national standards to align with foreign counterparts and become more competitive internationally. The international community has already supported the development of an ISO standard based on the national standard for clinical trials. The development will be carried out under technical committee ISO/TC 215 (Health informatics) in conjunction with ISO/IEC JTC 1/SC 42 (Artificial intelligence), which will bring the national requirements for artificial intelligence to the international level.
The cycle of these standards will summarize recognized methodologies, helping manufacturers, medical organizations, doctors, and patients alike to produce and use a quality, safe, and effective product.
Environmental problems have a tremendous impact on the entire world population, particularly on human health, which plays a leading role in individual well-being. Environmental pollution, according to some estimates, kills approximately 9 million people every year. The introduction of artificial intelligence (AI) systems in many areas has enormous potential to reduce human impact on the environment; however, such systems also have negative effects. The potential of AI systems to improve healthcare is inextricably linked to the ethical challenges posed by the complexity of these systems and their impact on the lives and health of communities, patients, and staff. In addition to aspects that relate directly to the algorithms, data, and clinical application of AI systems, there are long-term risks that are not obvious at first glance. One of these risks is the negative impact of AI systems on the environment, which may harm human health indirectly. AI systems are more than software: they have physical components that are necessary for their functioning, such as processors, memory, and sensors. The manufacture and energy consumption of these components have a profound effect on the environment. One study showed that training a single AI algorithm may produce carbon emissions comparable to the total lifetime emissions of five cars. This study analyzes existing literature linking the development of AI systems, especially in healthcare, to their effects on the environment. The study is intended to complement the emerging AI Ethics Code for healthcare, specifically the principles of sustainability that will be included in this code. The study concludes that the environmental impact of AI systems should be considered when formulating ethical standards for AI in healthcare. These standards must be applied during the development, testing, and application phases of AI systems.
All the people involved in the creation and use of AI systems (developers, physicians, and regulators) must monitor the environmental impact and minimize the environmental consequences of such systems at all stages of their existence. This principle calls for minimizing negative impacts, improving energy efficiency, and disposing of physical components in strict compliance with current legislation. Moreover, the rapid development of AI systems and the ethical dilemmas they raise require that solutions be proposed jointly and that ethical standards be developed in a manner that is consistent and sensitive to emerging technologies.
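The scale of the emissions figure cited above can be illustrated with a simple back-of-envelope calculation: training energy (GPU power draw × training time × data-center overhead) multiplied by the carbon intensity of the electricity grid. The sketch below is purely illustrative; all numeric defaults (power draw, PUE, grid intensity) are assumptions, not measured values from the cited study.

```python
# Illustrative estimate of CO2 emissions from training one AI model.
# All default figures are assumptions for demonstration, not measurements.

def training_co2_kg(gpu_count: int,
                    hours: float,
                    gpu_power_kw: float = 0.3,      # assumed average draw per GPU
                    pue: float = 1.5,               # assumed data-center overhead (PUE)
                    grid_kg_per_kwh: float = 0.4):  # assumed grid carbon intensity
    """Return an estimated mass of CO2 (kg) emitted by one training run."""
    energy_kwh = gpu_count * hours * gpu_power_kw * pue
    return energy_kwh * grid_kg_per_kwh

# Example: a hypothetical run on 8 GPUs for 1000 hours.
print(round(training_co2_kg(8, 1000), 1))  # ≈ 1440.0 kg of CO2
```

Even this crude model makes the abstract's point concrete: emissions scale linearly with hardware count, training time, and grid carbon intensity, so each of these is a lever that developers and regulators can monitor.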
Artificial intelligence is increasingly being used in medicine, including clinical physiology. This is facilitated by the increase in computing power, the development of cloud services and datasets, and numerous scientific articles demonstrating the effectiveness and viability of such intelligent solutions. Although the approach to medical dataset development is generally similar across domains, datasets for clinical physiology have a number of key features and significant differences. Artificial intelligence systems in clinical physiology may be effectively trained and applied in practice by following the recommendations proposed in this article.