• Medical students are aware of the potential applications and implications of AI in radiology and medicine in general.
• Medical students do not worry that the human radiologist or physician will be replaced.
• Artificial intelligence should be included in medical training.
Background and aims: Deciding when to repeat and when to stop transarterial chemoembolization (TACE) in patients with hepatocellular carcinoma (HCC) can be difficult even for experienced investigators. Our aim was to develop a survival prediction model for such patients undergoing TACE using novel machine learning algorithms and to compare it to the conventional prediction scores ART, ABCR and SNACOR.
Methods: For this retrospective analysis, 282 patients who underwent TACE for HCC at our tertiary referral centre between January 2005 and December 2017 were included in the final analysis. We built an artificial neural network (ANN) including all parameters used by the aforementioned risk scores and other clinically meaningful parameters. Following an 80:20 split, the first 225 patients were used for training; the more recently treated 20% were used for validation.
Results: The ANN had a promising performance at predicting 1-year survival, with an area under the ROC curve (AUC) of 0.77 ± 0.13. Internal validation yielded an AUC of 0.83 ± 0.06, a positive predictive value of 87.5% and a negative predictive value of 68.0%. The sensitivity was 77.8% and the specificity 81.0%. In a head-to-head comparison, the ANN outperformed the aforementioned scoring systems, which yielded lower AUCs (SNACOR 0.73 ± 0.07, ABCR 0.70 ± 0.07 and ART 0.54 ± 0.08). This difference reached significance for ART (P < .001); for ABCR and SNACOR, significance was not reached (P = .143 and P = .201).
Conclusions: Artificial neural networks could be better at predicting patient survival after TACE for HCC than traditional scoring systems. Once established, such prediction models could easily be deployed in clinical routine and help determine optimal patient care.
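The evaluation described here boils down to training a small network on a chronologically ordered 80:20 split and scoring 1-year survival with an AUC. The following is a minimal sketch of that pattern in Python; the file name, feature columns, and the scikit-learn MLP stand-in are illustrative assumptions, not the authors' actual model or data.

```python
# Minimal sketch: chronological 80:20 split and AUC for 1-year survival prediction.
# File name, feature columns and the small MLP are assumptions for illustration.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical table: one row per patient, ordered by treatment date, with the
# clinical parameters used by the risk scores plus a 1-year survival label.
df = pd.read_csv("tace_cohort.csv")                          # assumed file
features = ["afp", "bilirubin", "tumor_size", "tumor_number",
            "child_pugh", "ast_increase", "response_to_tace"]  # assumed columns
X, y = df[features].to_numpy(), df["alive_at_1_year"].to_numpy()

# Chronological split: earlier-treated 80% for training, most recent 20% for validation.
cut = int(0.8 * len(df))
X_train, X_val, y_train, y_val = X[:cut], X[cut:], y[:cut], y[cut:]

scaler = StandardScaler().fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

val_prob = clf.predict_proba(scaler.transform(X_val))[:, 1]
print("validation AUC:", roc_auc_score(y_val, val_prob))
```

Splitting by treatment date rather than at random mimics prospective use of the model: the network is validated on patients treated after those it learned from.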
Background: Training deep learning networks usually requires large amounts of accurately labelled data. These labels are usually extracted from reports using natural language processing or by time-consuming manual review. The aim of this study was therefore to develop and evaluate a workflow for using data from structured reports as labels in a deep learning application.
Materials and methods: We included all plain anteroposterior radiographs of the ankle for which structured reports were available. A workflow was designed and implemented in which a script automatically retrieved, converted, and anonymized the respective radiographs of cases where fractures were either present or absent from the institution's picture archiving and communication system (PACS). These images were then used to retrain a pretrained deep convolutional neural network. Finally, performance was evaluated on a set of previously unseen radiographs.
Results: Once implemented and configured, completion of the whole workflow took under 1 h. A total of 157 structured reports were retrieved from the reporting platform. For all structured reports, corresponding radiographs were successfully retrieved from the PACS and fed into the training process. On an unseen validation subset, the model showed satisfactory performance with an area under the curve of 0.850 (95% CI 0.634–1.000) for the detection of fractures.
Conclusion: We demonstrate that data obtained from structured reports written in clinical routine can be used to successfully train deep learning algorithms. This highlights the potential role of structured reporting for the future of radiology, especially in the context of deep learning.
Electronic supplementary material: The online version of this article (10.1186/s13244-019-0777-8) contains supplementary material, which is available to authorized users.
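As a rough illustration of such a report-to-label workflow, the sketch below reads label assignments exported from structured reports, loads the corresponding anonymized DICOM radiographs with pydicom, and retrains a pretrained convolutional network. The file paths, CSV layout, and the ResNet50 backbone are assumptions; the abstract does not specify the network or tooling actually used.

```python
# Sketch of the report-to-label training workflow; paths, CSV columns and the
# ResNet50 backbone are illustrative assumptions, not the study's actual setup.
import numpy as np
import pandas as pd
import pydicom
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

IMG_SIZE = 224

def load_dicom(path):
    """Read an anonymized DICOM file and return a normalized 3-channel image."""
    img = pydicom.dcmread(path).pixel_array.astype("float32")
    img = (img - img.min()) / (img.max() - img.min() + 1e-6)
    img = tf.image.resize(img[..., None], (IMG_SIZE, IMG_SIZE)).numpy()
    return np.repeat(img, 3, axis=-1)            # grayscale -> 3 channels

# Hypothetical export from the structured-reporting platform:
# one row per case with the image path and a fracture present/absent label.
labels = pd.read_csv("structured_report_labels.csv")   # assumed columns: path, fracture
X = np.stack([load_dicom(p) for p in labels["path"]])
y = labels["fracture"].to_numpy()

# Retrain a pretrained CNN: frozen ImageNet backbone plus a small classification head.
backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg",
                    input_shape=(IMG_SIZE, IMG_SIZE, 3))
backbone.trainable = False
model = models.Sequential([backbone, layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.fit(X, y, validation_split=0.2, epochs=5, batch_size=16)
```

The point of the workflow is that the labels come for free from routine structured reporting, so the only manual step is writing the query that turns report fields into the label column.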
Objectives: The goal of the present study was to classify the most common types of plain radiographs using a neural network and to validate the network's performance on internal and external data. Such a network could help improve various radiological workflows.
Methods: All radiographs from the year 2017 (n = 71,274) acquired at our institution were retrieved from the PACS. The 30 largest categories (n = 58,219; 81.7% of all radiographs performed in 2017) were used to develop and validate a neural network (MobileNet v1.0) using transfer learning. Image categories were extracted from DICOM metadata (study and image description) and mapped to the WHO manual of diagnostic imaging. As an independent, external validation set, we used images from other institutions that had been stored in our PACS (n = 5324).
Results: In the internal validation, the overall accuracy of the model was 90.3% (95% CI: 89.2–91.3%), whereas for the external validation set the overall accuracy was 94.0% (95% CI: 93.3–94.6%).
Conclusions: Using data from a single institution, we were able to classify the most common categories of radiographs with a neural network. The network showed good generalizability on the external validation set and could be used to automatically organize a PACS, preselect radiographs so that they can be routed to more specialized networks for abnormality detection, or help with other parts of the radiological workflow (e.g., automated hanging protocols; checking whether the ordered and performed examinations match). The final AI algorithm is publicly available for evaluation and extension.
Key Points
• Data from a single institution can be used to train a neural network for the correct detection of the 30 most common categories of plain radiographs.
• The trained model achieved a high accuracy for the majority of categories and showed good generalizability to images from other institutions.
• The neural network is made publicly available and can be used to automatically organize a PACS or to preselect radiographs so that they can be routed to more specialized neural networks for abnormality detection.
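The two technical ingredients described here are (1) deriving category labels from DICOM metadata and (2) transfer learning with an ImageNet-pretrained MobileNet. The sketch below illustrates both in Python; the description-to-category mapping, file layout, and training details are assumptions for illustration (the study maps descriptions to the WHO manual of diagnostic imaging, which is not reproduced here).

```python
# Sketch: derive category labels from DICOM metadata and set up MobileNet
# transfer learning. The mapping table and data layout are illustrative
# assumptions, not the study's actual label-extraction rules.
import glob
import pydicom
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNet

# Hypothetical mapping from DICOM study/series descriptions to target categories.
DESCRIPTION_TO_CATEGORY = {
    "CHEST PA": "chest",
    "ANKLE AP": "ankle",
    "HAND DORSOPALMAR": "hand",
}

def label_from_metadata(path):
    """Read DICOM metadata only and map its description to a radiograph category."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    desc = (getattr(ds, "StudyDescription", "") or
            getattr(ds, "SeriesDescription", "")).strip().upper()
    return DESCRIPTION_TO_CATEGORY.get(desc)     # None -> not one of the mapped classes

labels = {p: label_from_metadata(p) for p in glob.glob("pacs_export/*.dcm")}
num_classes = 30

# Transfer learning: frozen ImageNet-pretrained MobileNet backbone, new softmax head.
backbone = MobileNet(weights="imagenet", include_top=False, pooling="avg",
                     input_shape=(224, 224, 3))
backbone.trainable = False
model = models.Sequential([backbone, layers.Dense(num_classes, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Reading only the metadata (stop_before_pixels=True) keeps the labeling pass fast even over tens of thousands of studies; the pixel data are loaded later, during training.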