Background. Computer aids can affect decisions in complex ways, potentially even making them worse, and common assessment methods may miss these effects. We developed a method for estimating the quality of decisions, as well as how computer aids affect it, and applied it to computer-aided detection (CAD) of cancer, reanalyzing data from a published study in which 50 professionals (“readers”) interpreted 180 mammograms, both with and without computer support.

Method. We used stepwise regression to estimate how CAD affected the probability of a reader making a correct screening decision on a patient with cancer (sensitivity), taking into account the effects of the difficulty of the cancer (the proportion of readers who missed it) and the reader’s discriminating ability (Youden’s index). Using the regression estimates, we obtained thresholds for classifying the cases a posteriori (by difficulty) and the readers (by discriminating ability).

Results. Use of CAD was associated with a 0.016 increase in sensitivity (95% confidence interval [CI], 0.003–0.028) for the 44 least discriminating radiologists on 45 relatively easy, mostly CAD-detected cancers. However, for the 6 most discriminating radiologists, sensitivity with CAD decreased by 0.145 (95% CI, 0.034–0.257) on the 15 relatively difficult cancers.

Conclusions. Our exploratory analysis method reveals unexpected effects. It indicates that, although the original study detected no significant average effect, CAD helped the less discriminating readers but hindered the more discriminating ones. Such differential effects, although subtle, may be clinically significant and important for improving both computer algorithms and the protocols for their use. They should be assessed when evaluating CAD and similar warning systems.
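To make the method concrete, here is a minimal sketch in Python of the kind of per-decision regression described above. It is an illustration under stated assumptions, not the authors’ exact model: all column names (reader_id, case_id, has_cancer, cad, recalled) are hypothetical, stepwise variable selection is omitted, and a logistic link is used even though the abstract reports effects on the probability scale.

```python
# A minimal sketch of the kind of regression described above; not the
# authors' exact model. All column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def youden_index(reads: pd.DataFrame) -> float:
    """Discriminating ability of one reader: sensitivity + specificity - 1."""
    sens = reads.loc[reads.has_cancer == 1, "correct"].mean()
    spec = reads.loc[reads.has_cancer == 0, "correct"].mean()
    return sens + spec - 1

def fit_cad_effect(df: pd.DataFrame):
    """df: one row per (reader, case, condition), with columns reader_id,
    case_id, has_cancer (0/1), cad (0/1), recalled (0/1)."""
    df = df.copy()
    df["correct"] = (df.recalled == df.has_cancer).astype(int)

    # Case difficulty: proportion of readers who missed the cancer unaided.
    unaided = df[df.cad == 0]
    difficulty = 1 - unaided[unaided.has_cancer == 1].groupby("case_id")["correct"].mean()

    # Reader ability: Youden's index computed from the unaided reads.
    ability = unaided.groupby("reader_id").apply(youden_index)

    # Restrict to cancers: the outcome of interest is sensitivity.
    cancers = df[df.has_cancer == 1].copy()
    cancers["difficulty"] = cancers.case_id.map(difficulty)
    cancers["ability"] = cancers.reader_id.map(ability)

    # Does CAD's effect on sensitivity depend on case difficulty and reader
    # ability? (The paper used stepwise selection; omitted here for brevity.)
    return smf.logit("correct ~ cad * difficulty * ability", data=cancers).fit()
```

Fitted interaction terms of this form are what allow the a posteriori classification described above: the regression estimates supply the difficulty and ability thresholds at which the sign of the CAD effect flips.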
SHORTENED VERSION OF THE TITLE: Multidisciplinary study of CAD use in mammography

FUNDING: The work described in this paper has been partly funded by the UK Engineering and Physical Sciences Research Council (EPSRC) through DIRC, the Dependability Interdisciplinary Research Collaboration, a project investigating the dependability of computer-based systems.

KEYWORDS: Breast cancer screening, digital imaging, mammography

ABSTRACT: We summarise a set of analyses and studies we have conducted to assess the effects of the use of a Computer Aided Detection (CAD) tool in breast screening. We have used an interdisciplinary approach which combines: a) statistical analyses inspired by reliability modelling in engineering; b) experimental studies of mammography experts’ decisions using the tool, interpreted in the light of human factors psychology; and c) ethnographic observations of the use of the tool both in trial conditions and in everyday screening practice. Our investigations have shown patterns of human behaviour and effects of computer-based advice that would not have been revealed by a standard clinical trial approach. For example, we found that the negligible measured effect of CAD could be explained by a range of effects on experts’ decisions, beneficial in some cases and detrimental in others. There is some evidence that the latter effects are due to the experts using the computer tool differently from the developers’ intentions. We integrate insights from the different pieces of evidence and highlight their implications for the design, evaluation and deployment of this sort of computer tool.
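As a back-of-envelope illustration of how opposing effects can hide in an average, the subgroup effect sizes from the reanalysis above (+0.016 for 44 readers, −0.145 for 6 readers) roughly cancel when pooled. The sketch below is a deliberate simplification: it ignores that each effect applied only to a subset of the cases.

```python
# Back-of-envelope illustration (a simplification, ignoring that each effect
# applied only to a subset of cases): opposing subgroup effects of CAD can
# pool to a near-zero average, which a standard trial would read as "no effect".
n_low, n_high = 44, 6            # less / more discriminating readers
d_low, d_high = +0.016, -0.145   # CAD effect on sensitivity in each subgroup

avg = (n_low * d_low + n_high * d_high) / (n_low + n_high)
print(f"pooled average effect: {avg:+.4f}")  # -0.0033, indistinguishable from zero
```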
Computer-based advisory systems and their users form composite human-machine systems. Redundancy and diversity between the human and the machine are often important for the dependability of such systems. We describe a case study on assessing failure probabilities for the analysis of X-ray films for detecting cancer, performed by a person assisted by a computer-based tool. Unlike most approaches to human reliability assessment, we focus on the effects of failure diversity (or correlation) between humans and machines. We illustrate some of the modelling and prediction problems, especially those caused by the presence of the human component. We present two alternative models, with their pros and cons, and illustrate, via numerical examples and analytically, some interesting and non-intuitive answers to questions about reliability assessment and design choices for human-computer systems.
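The following simulation sketch (ours, not one of the paper’s two models) shows why assuming independence can be badly optimistic for a one-out-of-two human-plus-CAD system, in which a cancer is missed only if both the reader and the tool fail on it: if difficult cases tend to defeat both components, their failures are positively correlated, and the probability of a joint miss exceeds the product of the individual miss probabilities. All parameters below are hypothetical.

```python
# Sketch (ours, not the paper's model) of correlated human/machine failures in
# a 1-out-of-2 detection system: a cancer is missed only if BOTH fail on it.
import numpy as np

rng = np.random.default_rng(0)
n_cases = 100_000

# Hypothetical per-case difficulty: hard cases are hard for both components,
# which induces positive correlation between their failures.
difficulty = rng.beta(2, 5, n_cases)      # skewed toward easier cases
p_human = 0.05 + 0.6 * difficulty         # miss probabilities rise with difficulty
p_cad = 0.02 + 0.7 * difficulty

human_miss = rng.random(n_cases) < p_human
cad_miss = rng.random(n_cases) < p_cad

p_h, p_m = human_miss.mean(), cad_miss.mean()
p_both = (human_miss & cad_miss).mean()

print(f"P(human miss)           = {p_h:.4f}")
print(f"P(CAD miss)             = {p_m:.4f}")
print(f"independence prediction = {p_h * p_m:.4f}")
print(f"actual P(both miss)     = {p_both:.4f}  # larger: failures are correlated")
```

Conditional on difficulty the two draws are independent, yet the shared dependence on difficulty makes the unconditional joint miss probability exceed the independence prediction: exactly the kind of non-intuitive effect the abstract refers to.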