Judgement, as one of the core tenets of medicine, relies upon the integration of multilayered data with nuanced decision making. Cancer offers a unique context for medical decisions given not only its variegated forms with evolution of disease but also the need to take into account the individual condition of patients, their ability to receive treatment, and their responses to treatment. Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, the interpretations of which may be augmented by advanced computational analyses. In particular, artificial intelligence (AI) promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet-to-be-envisioned paradigm. Here, the authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types (lung, brain, breast, and prostate) to illustrate how common clinical problems are being addressed. Although most studies evaluating AI applications in oncology to date have not been rigorously validated for reproducibility and generalizability, the results do highlight increasingly concerted efforts in pushing AI technology to clinical use and to impact future directions in cancer care.
Abstract. Magnetic Resonance Imaging (MRI) is widely used in routine clinical diagnosis and treatment. However, variations in MRI acquisition protocols result in different appearances of normal and diseased tissue in the images. Convolutional neural networks (CNNs), which have been shown to be successful in many medical image analysis tasks, are typically sensitive to variations in imaging protocols. Therefore, in many cases, networks trained on data acquired with one MRI protocol do not perform satisfactorily on data acquired with different protocols. This limits the use of models trained with large annotated legacy datasets on a new dataset from a different domain, which is a recurring situation in clinical settings. In this study, we aim to answer the following central questions regarding domain adaptation in medical image analysis: Given a fitted legacy model, 1) How much data from the new domain is required for a decent adaptation of the original network?; and 2) What portion of the pre-trained model parameters should be retrained, given a certain number of new-domain training samples? To address these questions, we conducted extensive experiments on a white matter hyperintensity segmentation task. We trained a CNN on legacy MR images of the brain and evaluated the performance of the domain-adapted network on the same task with images from a different domain. We then compared the performance of the model to the surrogate scenarios where either the same trained network is used or a new network is trained from scratch on the new dataset. The domain-adapted network, tuned with only two training examples, achieved a Dice score of 0.63, substantially outperforming a similar network trained on the same set of examples from scratch.
⋆ Mohsen Ghafoorian and Alireza Mehrtash contributed equally to this work.
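The adaptation strategy in the abstract can be illustrated with a minimal numpy sketch: pre-train a model on a "legacy" domain, then retrain only part of its parameters on a handful of new-domain samples. A toy two-layer network stands in for the paper's CNN here, and all data, dimensions, and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, w2):
    h = np.tanh(X @ W1)           # shared feature layer
    return sigmoid(h @ w2), h     # task head (probability of class 1)

def train(X, y, W1, w2, lr=0.5, steps=500, freeze_features=False):
    """Gradient descent on binary cross-entropy.
    freeze_features=True retrains only the output head w2."""
    W1, w2 = W1.copy(), w2.copy()
    for _ in range(steps):
        p, h = forward(X, W1, w2)
        err = p - y                              # dBCE/dlogit
        w2 -= lr * h.T @ err / len(y)
        if not freeze_features:
            gh = np.outer(err, w2) * (1 - h**2)  # backprop through tanh
            W1 -= lr * X.T @ gh / len(y)
    return W1, w2

# toy legacy data: label depends on the first feature
X_leg = rng.normal(size=(200, 2))
y_leg = (X_leg[:, 0] > 0).astype(float)
W1 = rng.normal(scale=0.5, size=(2, 8))
w2 = rng.normal(scale=0.5, size=8)
W1, w2 = train(X_leg, y_leg, W1, w2)             # pre-train on legacy domain

# "new domain": same task, shifted intensities; adapt only the head
X_new = 3.0 * rng.normal(size=(4, 2)) + 0.5
y_new = (X_new[:, 0] > 0).astype(float)
W1_adapt, w2_adapt = train(X_new, y_new, W1, w2, freeze_features=True)
```

The `freeze_features` flag mirrors the paper's second question: it controls what portion of the pre-trained parameters is retrained on the small new-domain sample.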
Quantification of cerebral white matter hyperintensities (WMH) of presumed vascular origin is of key importance in many neurological research studies. Currently, measurements are often still obtained from manual segmentations on brain MR images, which is a laborious procedure. Automatic WMH segmentation methods exist, but a standardized comparison of the performance of such methods is lacking. We organized a scientific challenge, in which developers could evaluate their method on a standardized multi-center/-scanner image dataset, giving an objective comparison: the WMH Segmentation Challenge (https://wmh.isi.uu.nl/). Sixty T1+FLAIR images from three MR scanners were released with manual WMH segmentations for training. A test set of 110 images from five MR scanners was used for evaluation. Segmentation methods had to be containerized and submitted to the challenge organizers. Five evaluation metrics were used to rank the methods: (1) Dice similarity coefficient, (2) modified Hausdorff distance (95th percentile), (3) absolute log-transformed volume difference, (4) sensitivity for detecting individual lesions, and (5) F1-score for individual lesions. Additionally, methods were ranked on their inter-scanner robustness. Twenty participants submitted their method for evaluation. This paper provides a detailed analysis of the results. In brief, there is a cluster of four methods that rank significantly better than the other methods, with one clear winner. The inter-scanner robustness ranking shows that not all methods generalize to unseen scanners. The challenge remains open for future submissions and provides a public platform for method evaluation.
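Two of the five ranking metrics are simple enough to sketch directly from their definitions. Below is a minimal numpy version, assuming binary masks; the challenge's official evaluation code may differ in detail (e.g. voxel-size handling and edge cases).

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # convention: two empty masks agree perfectly
    return 2.0 * inter / denom if denom else 1.0

def abs_log_volume_diff(pred, gt, voxel_volume=1.0):
    """Absolute difference of log-transformed lesion volumes
    (one plausible reading of metric 3; both masks must be non-empty)."""
    vp = np.asarray(pred, bool).sum() * voxel_volume
    vg = np.asarray(gt, bool).sum() * voxel_volume
    return abs(np.log(vp) - np.log(vg))
```

The lesion-wise metrics (4) and (5) additionally require connected-component analysis to match individual lesions, which is omitted here.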
BACKGROUND Although survival statistics in patients with glioblastoma multiforme (GBM) are well-defined at the group level, predicting individual patient survival remains challenging because of significant variation within strata. OBJECTIVE To compare statistical and machine learning algorithms in their ability to predict survival in GBM patients and deploy the best performing model as an online survival calculator. METHODS Patients undergoing an operation for a histopathologically confirmed GBM were extracted from the Surveillance Epidemiology and End Results (SEER) database (2005-2015) and split into a training and hold-out test set in an 80/20 ratio. Fifteen statistical and machine learning algorithms were trained based on 13 demographic, socioeconomic, clinical, and radiographic features to predict overall survival, 1-yr survival status, and compute personalized survival curves. RESULTS In total, 20 821 patients met our inclusion criteria. The accelerated failure time model demonstrated superior performance in terms of discrimination (concordance index = 0.70), calibration, interpretability, predictive applicability, and computational efficiency compared to Cox proportional hazards regression and other machine learning algorithms. This model was deployed through a free, publicly available software interface (https://cnoc-bwh.shinyapps.io/gbmsurvivalpredictor/). CONCLUSION The development and deployment of survival prediction tools require a multimodal assessment rather than a single metric comparison. This study provides a framework for the development of prediction tools in cancer patients, as well as an online survival calculator for patients with GBM. Future efforts should improve the interpretability, predictive applicability, and computational efficiency of existing machine learning algorithms, increase the granularity of population-based registries, and externally validate the proposed prediction tool.
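The discrimination metric used to compare the models, the concordance index, can be sketched in a few lines. Below is a minimal version of Harrell's c-index for right-censored data (a simplification for illustration; production implementations, e.g. in survival-analysis libraries, handle tied event times more carefully).

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's c-index: the fraction of comparable patient pairs in
    which the higher-risk patient fails earlier. event=1 means the
    death was observed (uncensored)."""
    n = len(time)
    concordant, comparable = 0.0, 0
    for i in range(n):
        for j in range(n):
            # a pair is comparable if i has an observed event before j's time
            if event[i] and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5   # half credit for tied risk scores
    return concordant / comparable
```

A c-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the reported 0.70 indicates moderate discrimination.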
Background Diffusion imaging tractography is increasingly used to trace critical fiber tracts in brain tumor patients to reduce the risk of post-operative neurological deficit. However, the effects of peritumoral edema pose a challenge to conventional tractography using the standard diffusion tensor model. The aim of this study was to present a novel technique using a two-tensor unscented Kalman filter (UKF) algorithm to track the arcuate fasciculus (AF) in brain tumor patients with peritumoral edema. Methods Ten right-handed patients with left-sided brain tumors in the vicinity of language-related cortex and evidence of significant peritumoral edema were retrospectively selected for the study. All patients underwent 3-Tesla magnetic resonance imaging (MRI) including a diffusion-weighted dataset with 31 directions. Fiber tractography was performed using both single-tensor streamline and two-tensor UKF tractography. A two-regions-of-interest approach was applied to perform the delineation of the AF. Results from the two different tractography algorithms were compared visually and quantitatively. Results Using single-tensor streamline tractography, the AF appeared disrupted in four patients and contained few fibers in the remaining six patients. Two-tensor UKF tractography delineated an AF that traversed edematous brain areas in all patients. The volume of the AF was significantly larger on two-tensor UKF than on single-tensor streamline tractography (p < 0.01). Conclusions Two-tensor UKF tractography provides the ability to trace a larger volume AF than single-tensor streamline tractography in the setting of peritumoral edema in brain tumor patients.
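The single-tensor streamline baseline that the UKF method improves upon can be sketched as follows: at each point, fit (or look up) a diffusion tensor, step along its principal eigenvector, and stop where anisotropy drops, as it does in edema. This is a toy numpy version with invented helper names; the study's two-tensor UKF additionally propagates two tensor components through an unscented Kalman filter, which is beyond this sketch.

```python
import numpy as np

def principal_direction(D):
    """Principal eigenvector of a 3x3 diffusion tensor."""
    vals, vecs = np.linalg.eigh(D)          # eigenvalues in ascending order
    return vecs[:, np.argmax(vals)]

def streamline(tensor_at, seed, step=0.5, n_steps=200,
               fa_at=None, fa_stop=0.15):
    """Deterministic single-tensor streamline tracking from a seed point.
    tensor_at(p) returns the 3x3 tensor at position p; fa_at(p), if given,
    returns fractional anisotropy used as a stopping criterion."""
    points = [np.asarray(seed, float)]
    prev_dir = None
    for _ in range(n_steps):
        p = points[-1]
        if fa_at is not None and fa_at(p) < fa_stop:
            break                            # stop in low-anisotropy tissue
        d = principal_direction(tensor_at(p))
        if prev_dir is not None and np.dot(d, prev_dir) < 0:
            d = -d                           # keep a consistent orientation
        points.append(p + step * d)
        prev_dir = d
    return np.array(points)
```

The FA stopping rule is exactly where this model fails in peritumoral edema: free water lowers anisotropy, so tracking halts even though the underlying tract is intact, which is the motivation for the two-tensor approach.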
Prostate cancer (PCa) remains a leading cause of cancer mortality among American men. Multi-parametric magnetic resonance imaging (mpMRI) is widely used to assist with detection of PCa and characterization of its aggressiveness. Computer-aided diagnosis (CADx) of PCa in MRI can be used as a clinical decision support system to aid radiologists in interpretation and reporting of mpMRI. We report on the development of a convolutional neural network (CNN) model to support CADx in PCa based on the appearance of prostate tissue in mpMRI, conducted as part of the SPIE-AAPM-NCI PROSTATEx challenge. The performance of different combinations of mpMRI inputs to the CNN was assessed, and the best result was achieved using DWI and DCE-MRI modalities together with the zonal information of the finding. On the test set, the model achieved an area under the receiver operating characteristic curve of 0.80.
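The reported 0.80 is the area under the ROC curve (AUC). A minimal numpy sketch of this metric via the pairwise-ranking (Mann-Whitney) identity, not the challenge's official evaluation code:

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve via the rank-sum identity: the
    probability that a random positive (clinically significant finding)
    scores higher than a random negative, with half credit for ties."""
    labels = np.asarray(labels, bool)
    scores = np.asarray(scores, float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 0.5 is chance-level ranking and 1.0 is perfect separation of significant from insignificant findings.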
Purpose The aim of this study was to present a tractography algorithm using a two-tensor unscented Kalman filter (UKF) to improve the modeling of the corticospinal tract (CST) by tracking through regions of peritumoral edema and crossing fibers. Methods Ten patients with brain tumors in the vicinity of motor cortex and evidence of significant peritumoral edema were retrospectively selected for the study. All patients underwent 3-Tesla magnetic resonance imaging (MRI) including functional MRI (fMRI) and a diffusion-weighted dataset with 31 directions. Fiber tracking was performed using both single-tensor streamline and two-tensor UKF tractography methods. A two-regions-of-interest approach was used to delineate the CST. Results from the two tractography methods were compared visually and quantitatively. fMRI was applied to identify the functional fiber tracts. Results Single-tensor streamline tractography underestimated the extent of tracts running through the edematous areas and could only track the medial projections of the CST. In contrast, two-tensor UKF tractography tracked fanning projections of the CST despite peritumoral edema and crossing fibers. The two-tensor UKF tractography delineated tracts that were closer to motor fMRI activations, and it was more sensitive than single-tensor streamline tractography in defining the tracts directed to the motor sites. The volume of the CST was significantly larger on two-tensor UKF than on single-tensor streamline tractography (p < 0.001). Conclusions Two-tensor UKF tractography tracks the CST better than single-tensor streamline tractography in the setting of peritumoral edema and crossing fibers in brain tumor patients.