Although decision-making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access "the knowledge within the machine." 1 Without an explanation in terms of reasons or a rationale for particular decisions in individual cases, some commentators regard ceding medical decision-making to black box systems as contravening the profound moral responsibilities of clinicians. As William Swartout puts it, when a physician consults an expert, "[t]he physician may question whether some factor was considered or what effect a particular finding had on the final outcome and the expert is expected to be able to justify his answer and show that sound medical principles and knowledge were used to obtain it. . . . In addition to providing diagnoses or prescriptions, a consultant program must be able to explain what it is doing and justify why it is doing it." 2 To the extent that deep learning systems cannot explain their findings, some have questioned whether medical systems should avoid such approaches and "sacrifice predictive power in favor of simplicity of a model." 3 As far back as the ancient Greeks, trust has been connected to the ability to explain expert recommendations. We expect that experts can marshal well-developed causal knowledge to explain their actions or recommendations, a feat that is a reality in some modern scientific domains. Against that background expectation, the most powerful machine learning techniques seem woefully incomplete because they are atheoretical, associationist, and opaque.
A major problem with this view about the importance of explanation, I argue below, is that empirical findings in medicine often have better epistemic footing than the theories that might explain them and that atheoretical, associationist, and opaque decisions are more common in medicine than critics realize. Moreover, as Aristotle noted over two millennia ago, when our knowledge of causal systems is incomplete and precarious, as it often is in medicine, the ability to explain how results are produced can be less important than the ability to produce such results and empirically verify their accuracy. I conclude with some reasons that a blanket requirement that machine learning systems in medicine be explainable or interpretable is unfounded and potentially harmful.

Justification, Explanation, and Causation

Trust in experts is often grounded in their ability to produce certain results and to justify their actions. As a result, it is sometimes claimed that trust in computational decision-makers must be grounded in more than predictive or diagnostic accuracy. It also requires the ability to justify their recommendations. As Swartout notes, "By justifica...
Crises are no excuse for lowering scientific standards
Algorithms play a key role in the functioning of autonomous systems, and so concerns have periodically been raised about the possibility of algorithmic bias. However, debates in this area have been hampered by different meanings and uses of the term "bias." It is sometimes used as a purely descriptive term and sometimes as a pejorative term, and such variation can promote confusion and hamper discussions about when and how to respond to algorithmic bias. In this paper, we first provide a taxonomy of different types and sources of algorithmic bias, with a focus on their different impacts on the proper functioning of autonomous systems. We then use this taxonomy to distinguish between algorithmic biases that are neutral or unobjectionable and those that are problematic in some way and require a response. In some cases, there are technological or algorithmic adjustments that developers can use to compensate for problematic bias. In other cases, however, responses require adjustments by the agent, whether a human or an autonomous system, who uses the results of the algorithm. There is no "one size fits all" solution to algorithmic bias.
The debate over when medical research may be performed in developing countries has steered clear of the broad issues of social justice in favor of seemingly more tractable, practical issues. A better approach would reframe the question of justice in international research in a way that makes explicit the links between medical research, the social determinants of health, and global justice.