Computer-aided detection and diagnosis (CAD) systems are increasingly being used as an aid by clinicians for detection and interpretation of diseases. Computer-aided detection systems mark regions of an image that may reveal specific abnormalities and are used to alert clinicians to these regions during image interpretation. Computer-aided diagnosis systems provide an assessment of a disease using image-based information alone or in combination with other relevant diagnostic data and are used by clinicians as a decision support in developing their diagnoses. While CAD systems are commercially available, standardized approaches for evaluating and reporting their performance have not yet been fully formalized in the literature or in a standardization effort. This deficiency has led to difficulty in the comparison of CAD devices and in understanding how the reported performance might translate into clinical practice. To address these important issues, the American Association of Physicists in Medicine (AAPM) formed the Computer Aided Detection in Diagnostic Imaging Subcommittee (CADSC), in part, to develop recommendations on approaches for assessing CAD system performance. The purpose of this paper is to convey the opinions of the AAPM CADSC members and to stimulate the development of consensus approaches and "best practices" for evaluating CAD systems. Both the assessment of a standalone CAD system and the evaluation of the impact of CAD on end-users are discussed. It is hoped that awareness of these important evaluation elements and the CADSC recommendations will lead to further development of structured guidelines for CAD performance assessment. 
Proper assessment of CAD system performance is expected to increase understanding of a CAD system's effectiveness and limitations, which in turn should stimulate further research and development on CAD technologies, reduce problems due to improper use, and eventually improve the utility and efficacy of CAD in clinical practice.
Purpose: To describe and test a new fully automatic lesion detection system for breast DCE-MRI.

Materials and Methods: Studies were collected from two institutions using different DCE-MRI sequences, one with and the other without fat saturation. The detection pipeline consists of (i) breast segmentation, to identify breast size and location; (ii) registration, to correct for patient movements; (iii) lesion detection, to extract contrast-enhanced regions using a new normalization technique based on the contrast uptake of mammary vessels; and (iv) false positive (FP) reduction, to exclude contrast-enhanced regions other than lesions. Detection rate (number of system-detected malignant and benign lesions over the total number of lesions) and sensitivity (system-detected malignant lesions over the total number of malignant lesions) were assessed, as was the number of FPs.

Results: Forty-eight studies with 12 benign and 53 malignant lesions were evaluated. Median lesion diameter was 6 mm (range, 5-15 mm) for benign and 26 mm (range, 5-75 mm) for malignant lesions. Detection rate was 58/65 (89%; 95% confidence interval [CI] 79%-95%) and sensitivity was 52/53 (98%; 95% CI 90%-99%). The median number of FPs per breast was 4 (1st-3rd quartiles, 3-7.25).

Conclusion: The system showed promising results on MR datasets obtained from different scanners producing fat-saturated or non-fat-saturated images with variable temporal and spatial resolution, and could potentially be used for early diagnosis and staging of breast cancer, to reduce reading time, and to improve lesion detection. Further evaluation is needed before it can be used in clinical practice.
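The abstract does not state which method was used to compute the 95% confidence intervals. As an illustration (not code from the paper), a minimal sketch using the Wilson score interval, a common choice for binomial proportions, reproduces intervals close to the reported ones from the stated counts:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion (z=1.96 for 95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Counts reported in the abstract: 58/65 lesions detected, 52/53 malignant detected.
print("Detection rate 58/65, 95% CI:", wilson_ci(58, 65))  # ~ (0.79, 0.95)
print("Sensitivity    52/53, 95% CI:", wilson_ci(52, 53))  # ~ (0.90, 1.00)
```

The exact bounds differ slightly from the published ones depending on the interval method (e.g., Wilson vs. Clopper-Pearson) and rounding.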
Automatic segmentation of the breast and axillary region is an important preprocessing step for automatic lesion detection in breast MR and dynamic contrast-enhanced MR studies. In this paper, we present a fully automatic procedure based on the detection of the upper border of the pectoral muscle. Compared with previous methods based on thresholding, this method is more robust to noise and field inhomogeneities. The method was quantitatively evaluated on 31 cases acquired at two centers by comparing the results with a manual segmentation. Results indicate good overall agreement with the reference segmentation (overlap = 0.79 ± 0.09, recall = 0.95 ± 0.02, precision = 0.82 ± 0.1).
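For readers unfamiliar with these segmentation metrics, the following sketch computes them from two pixel sets; it assumes "overlap" denotes the Jaccard index (intersection over union), which the abstract does not specify, and uses toy 1-D masks rather than real image data:

```python
def mask_metrics(auto: set, manual: set) -> dict:
    """Overlap (Jaccard index), recall, and precision of an automatic
    segmentation `auto` against a manual reference `manual`,
    both given as sets of pixel/voxel coordinates."""
    tp = len(auto & manual)                      # pixels labeled by both
    union = auto | manual
    return {
        "overlap": tp / len(union) if union else 1.0,
        "recall": tp / len(manual) if manual else 1.0,     # reference coverage
        "precision": tp / len(auto) if auto else 1.0,      # mark correctness
    }

# Toy 1-D "masks" as sets of voxel indices.
manual = set(range(0, 100))
auto = set(range(10, 105))
print(mask_metrics(auto, manual))  # overlap ~0.86, recall 0.90, precision ~0.95
```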
Rapid advances in artificial intelligence (AI) and machine learning, and specifically in deep learning (DL) techniques, have enabled broad application of these methods in health care. The promise of the DL approach has spurred further interest in computer‐aided diagnosis (CAD) development and applications using both “traditional” machine learning methods and newer DL‐based methods. We use the term CAD‐AI to refer to this expanded clinical decision support environment that uses traditional and DL‐based AI methods. Numerous studies have been published to date on the development of machine learning tools for computer‐aided, or AI‐assisted, clinical tasks. However, most of these machine learning models are not ready for clinical deployment. It is of paramount importance to ensure that a clinical decision support tool undergoes proper training and rigorous validation of its generalizability and robustness before adoption for patient care in the clinic. To address these important issues, the American Association of Physicists in Medicine (AAPM) Computer‐Aided Image Analysis Subcommittee (CADSC) is charged, in part, to develop recommendations on practices and standards for the development and performance assessment of computer‐aided decision support systems. The committee has previously published two opinion papers on the evaluation of CAD systems and issues associated with user training and quality assurance of these systems in the clinic. With machine learning techniques continuing to evolve and CAD applications expanding to new stages of the patient care process, the current task group report considers the broader issues common to the development of most, if not all, CAD‐AI applications and their translation from the bench to the clinic. The goal is to bring attention to the proper training and validation of machine learning algorithms that may improve their generalizability and reliability and accelerate the adoption of CAD‐AI systems for clinical decision support.
A digital breast tomosynthesis CAD system can allow detection of a large percentage (89%, 99 of 111) of breast cancers manifesting as masses and microcalcification clusters, with an acceptable false-positive rate (2.7 per breast view). Further studies with larger datasets acquired with equipment from multiple vendors are needed to replicate the findings and to study the interaction of radiologists and CAD systems.
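Sensitivity and false-positive marks per view are the two axes of a free-response ROC (FROC) analysis commonly used to evaluate detection CAD. A minimal sketch of computing FROC operating points from pooled CAD marks follows; it uses toy data, and for simplicity assumes each true-positive mark hits a distinct cancer (this is illustrative, not the study's evaluation code):

```python
def froc_points(marks, n_cancers, n_views, thresholds):
    """marks: list of (score, is_true_positive) CAD marks pooled over all views.
    Returns one (mean FPs per view, sensitivity) point per score threshold."""
    points = []
    for t in thresholds:
        kept = [(s, hit) for s, hit in marks if s >= t]   # marks surviving threshold
        tp = sum(1 for _, hit in kept if hit)
        fp = sum(1 for _, hit in kept if not hit)
        points.append((fp / n_views, tp / n_cancers))
    return points

# Toy example: 4 marks over 2 views containing 2 cancers.
marks = [(0.9, True), (0.8, False), (0.7, True), (0.4, False)]
print(froc_points(marks, n_cancers=2, n_views=2, thresholds=[0.5]))  # [(0.5, 1.0)]
```

Sweeping the threshold traces the full FROC curve; the abstract's single operating point (89% sensitivity at 2.7 FPs per breast view) corresponds to one such threshold.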
Considering preparation quality alone, GFPH was the best regimen, but SD provided the best balance between bowel preparation quality and patient acceptability.
The investigation of factors that contribute to making humans trust Autonomous Vehicles (AVs) will play a fundamental role in the adoption of such technology. The user's ability to form a mental model of the AV, which is crucial to establishing trust, depends on effective user-vehicle communication; thus, the importance of Human-Machine Interaction (HMI) is poised to increase. In this work, we propose a methodology to validate the user experience in AVs based on continuous, objective information gathered from physiological signals while the user is immersed in a Virtual Reality-based driving simulation. We applied this methodology to the design of a head-up display interface delivering visual cues about the vehicle's sensory and planning systems. Through this approach, we obtained qualitative and quantitative evidence that a complete picture of the vehicle's surroundings, despite the higher cognitive load, is conducive to a less stressful experience. Moreover, after having been exposed to a more informative interface, users involved in the study were also more willing to test a real AV. The proposed methodology could be extended by adjusting the simulation environment, the HMI, and/or the vehicle's Artificial Intelligence modules to investigate other aspects of the user experience.