Artificial intelligence (AI) is quickly making inroads into medical practice, especially in forms that rely on machine learning, amid a mix of hope and hype.1 Multiple AI-based products have now been approved or cleared by the US Food and Drug Administration (FDA), and health systems and hospitals are increasingly deploying AI-based systems.2 For example, medical AI can support clinical decisions, such as recommending drugs or dosages or interpreting radiological images.2 One key difference from most traditional clinical decision support software is that some medical AI may communicate results or recommendations to the care team without being able to communicate the underlying reasons for those results.3 Medical AI may be trained in inappropriate environments, using imperfect techniques, or on incomplete data. Even when algorithms are trained as well as possible, they may, for example, miss a tumor in a radiological image or suggest an incorrect dose or an inappropriate drug. Sometimes, patients will be injured as a result. In this Viewpoint, we discuss when a physician is likely to be held liable under current law when using medical AI.
Artificial intelligence (AI) and machine learning (ML) systems in medicine are poised to improve health care significantly, for example, by offering earlier diagnoses of diseases or recommending optimally individualized treatment plans. However, the emergence of AI/ML in medicine also creates challenges that regulators must address. Which medical AI/ML-based products should be reviewed by regulators? What evidence should be required to permit marketing of AI/ML-based software as a medical device (SaMD)? How can we ensure the safety and effectiveness of AI/ML-based SaMD that may change over time as it is applied to new data? The US Food and Drug Administration (FDA), for example, has recently issued a discussion paper to address some of these issues. But it misses an important point: we argue that regulators like the FDA need to widen their scope from evaluating medical AI/ML-based products to assessing systems. This shift in perspective, from a product view to a system view, is central to maximizing the safety and efficacy of AI/ML in health care, but it also poses significant challenges for agencies like the FDA, which are used to regulating products, not systems. We offer several suggestions for regulators to make this challenging but important transition.
Reimbursement is a key challenge for many new digital health solutions, whose importance and value have been highlighted and expanded by the current COVID-19 pandemic. Germany's new Digital Healthcare Act (Digitale-Versorgung-Gesetz, or DVG) entitles all individuals covered by statutory health insurance to reimbursement for certain digital health applications (i.e., insurers will pay for their use). Since Germany, like the United States (US), has a multi-payer health care system, the new Act provides a particularly interesting case study for US policymakers. We first provide an overview of the new German DVG and outline the landscape for reimbursement of digital health solutions in the US, including recent changes to policies governing telehealth during the COVID-19 pandemic. We then discuss challenges and unanswered questions raised by the DVG, ranging from the limited scope of the Act to privacy issues. Lastly, we highlight early lessons and opportunities for other countries.
Companies and health care providers are developing and implementing new applications of medical artificial intelligence, including the AI subtype of medical machine learning (MML). MML is based on the application of machine learning (ML) algorithms that automatically identify patterns in, and act on, medical data to guide clinical decisions. MML poses challenges and raises important questions, including (1) How will regulators evaluate MML-based medical devices to ensure their safety and effectiveness? and (2) What additional MML considerations should be taken into account in the international context? To address these questions, we analyze the current regulatory approaches to MML in the US and Europe. We then examine international perspectives and broader implications, discussing considerations such as data privacy, exportation, explanation, training set bias, contextual bias, and trade secrecy.