Background: The purpose of this study was to conduct a systematic review of the availability and limitations of artificial intelligence (AI) approaches that can automatically identify and quantify computed tomography (CT) findings in traumatic brain injury (TBI). Methods: A systematic review was performed in accordance with the PRISMA 2020 and SPIRIT-AI extension guidelines, with a search of four databases (Medline, Embase, IEEE Xplore, and Web of Science) to find AI studies that automate the identification and quantification of CT findings of TBI-related abnormalities. Results: A total of 531 unique publications were reviewed, of which 66 articles met our inclusion criteria. The following TBI identification and quantification tasks were automated by existing AI studies: identification of TBI-related abnormalities; classification of intracranial hemorrhage types; slice-, pixel-, and voxel-level localization of hemorrhage; measurement of midline shift; and measurement of hematoma volume. Automated identification of obliterated basal cisterns was not investigated in the existing AI studies. Most of the AI algorithms were based on deep neural networks trained on 2- or 3-dimensional CT imaging datasets. Conclusion: We identified several important TBI-related CT findings that can be automatically identified and quantified with AI. A combination of these techniques may enhance the reproducibility of TBI identification and quantification by supporting radiologists and clinicians in their TBI assessments and reducing subjective human factors.
Background: Current artificial intelligence studies for supporting CT screening tasks rely on either supervised learning or anomaly detection. The former involves a heavy annotation workload because it requires many slice-wise annotations (ground-truth labels); the latter reduces the annotation workload but often suffers from lower performance. This study presents a novel weakly supervised anomaly detection (WSAD) algorithm, trained on scan-wise normal and anomalous annotations, that provides better performance than conventional methods while reducing the annotation workload. Methods: Following methodology from surveillance-video anomaly detection, an AR-Net-based convolutional network was trained on feature vectors representing each CT slice, using a dynamic multiple-instance learning loss and a center loss function. Two publicly available CT datasets were retrospectively analyzed: the RSNA brain hemorrhage dataset (normal scans: 12,862; scans with intracranial hematoma: 8882) and the COVID-CT dataset (normal scans: 282; scans with COVID-19: 95). Results: Anomaly scores for each slice were successfully predicted even though no slice-wise annotations were available during training. On the brain CT dataset, the slice-level area under the curve (AUC), sensitivity, specificity, and accuracy were 0.89, 0.85, 0.78, and 0.79, respectively. The proposed method reduced the number of annotations in the brain dataset by 97.1% compared to an ordinary slice-level supervised learning method. Conclusion: This study demonstrated a significant annotation reduction in identifying anomalous CT slices compared to a supervised learning approach. The effectiveness of the proposed WSAD algorithm was verified by a higher AUC than existing anomaly detection techniques. Supplementary Information: The online version contains supplementary material available at 10.1007/s11548-023-02965-4.
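The scan-level (bag-level) supervision described in this abstract can be illustrated with a minimal sketch. The feature extraction, scoring network, and the exact AR-Net loss terms are omitted; the hinge-style multiple-instance ranking loss and the score-based center loss below are simplified stand-ins for the paper's dynamic MIL loss and center loss, not its actual implementation.

```python
import numpy as np

def mil_ranking_loss(anomalous_scores, normal_scores, margin=1.0):
    """Hinge ranking loss between bags: the highest slice score in an
    anomalous scan should exceed the highest slice score in a normal
    scan by at least `margin`. Only scan-wise labels are needed."""
    gap = margin - np.max(anomalous_scores) + np.max(normal_scores)
    return float(max(0.0, gap))

def center_loss(normal_scores, center=0.0):
    """Pull anomaly scores of slices from normal scans toward a shared
    center, encouraging compact, low scores on normal slices."""
    s = np.asarray(normal_scores, dtype=float)
    return float(np.mean((s - center) ** 2))
```

In a training loop, the two terms would typically be combined as a weighted sum over paired normal/anomalous bags; here they are kept separate for clarity.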
Background: Training machine learning (ML) models in medical imaging requires large amounts of labeled data. To minimize the labeling workload, it is common to divide training data among multiple readers for separate annotation without consensus and then combine the labeled data for training an ML model. This can lead to a biased training dataset and poor prediction performance. The purpose of this study is to determine whether ML algorithms can overcome biases caused by multiple readers' labeling without consensus. Methods: This study used a publicly available chest X-ray dataset of pediatric pneumonia. As an analogy to a practical dataset without labeling consensus among multiple readers, random and systematic errors were artificially added to the dataset to generate biased data for a binary classification task. A Resnet18-based convolutional neural network (CNN) was used as the baseline model, and a Resnet18 model with a regularization term added to the loss function was examined for improvement over the baseline. Results: False positive labels, false negative labels, and random errors (5–25%) resulted in a loss of AUC (0–14%) when training a binary CNN classifier. The model with the regularized loss function improved the AUC (75–84%) over that of the baseline model (65–79%). Conclusion: This study indicated that ML algorithms can overcome individual readers' biases when consensus is not available. Regularized loss functions are recommended when allocating annotation tasks to multiple readers, as they are easy to implement and effective in mitigating biased labels.
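The abstract does not specify which regularization term was used. As one illustrative possibility, label smoothing applied to the binary cross-entropy loss softens hard labels and is a common, easy-to-implement way to reduce the penalty for fitting potentially mislabeled examples. The function below is a hypothetical sketch of that idea, not the study's implementation.

```python
import numpy as np

def smoothed_bce(p, y, eps=0.1):
    """Binary cross-entropy with label smoothing.

    The hard label y in {0, 1} is replaced by y*(1-eps) + eps/2,
    so a confidently wrong prediction on a possibly mislabeled
    example is penalized less than with hard labels (eps=0).
    """
    p = float(np.clip(p, 1e-7, 1.0 - 1e-7))  # avoid log(0)
    y_s = y * (1.0 - eps) + eps / 2.0         # smoothed target
    return float(-(y_s * np.log(p) + (1.0 - y_s) * np.log(1.0 - p)))
```

With eps=0 this reduces to the standard binary cross-entropy; increasing eps caps the loss incurred by examples whose label disagrees with a confident prediction, which is the mechanism by which smoothing mitigates noisy labels.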