Background - Decision scores and ethically mindful algorithms are being established to adjudicate mechanical ventilation in the context of potential resource shortages due to the current onslaught of COVID-19 cases. There is a need for a reproducible and objective method to provide quantitative information for those scores. Purpose - Towards this goal, we present a retrospective study testing the ability of a deep learning algorithm to extract features from chest X-rays (CXR) to track and predict radiological evolution. Materials and Methods - We trained a repurposed deep learning algorithm on the CheXnet open dataset (224,316 chest X-ray images of 65,240 unique patients) to extract features that mapped to radiological labels. We collected CXRs of COVID-19-positive patients from two open-source datasets (last accessed on April 9, 2020) (Italian Society for Medical and Interventional Radiology and MILA). The collected data formed 60 pairs of sequential CXRs from 40 COVID-19 patients (mean age +/- standard deviation: 56 +/- 13 years; 23 men, 10 women, seven not reported) and were categorized into three categories: Worse, Stable, or Improved, on the basis of radiological evolution ascertained from images and reports. Receiver operating characteristic (ROC) analyses and Mann-Whitney tests were performed. Results - On patients from the CheXnet dataset, the area under the ROC curve ranged from 0.71 to 0.93 for seven imaging features and one diagnosis. Deep learning features between the Worse and Improved outcome categories were significantly different for three radiological signs and one diagnosis (Consolidation, Lung Lesion, Pleural Effusion, and Pneumonia; all P < 0.05). Features from the first CXR of each pair could correctly predict the outcome category between Worse and Improved cases with 82.7% accuracy. Conclusion - CXR deep learning features show promise for classifying the disease trajectory. 
Once validated in studies incorporating clinical data and with larger sample sizes, this information may be considered to inform triage decisions.
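The statistical evaluation described above can be sketched in a few lines: feature activations from the two outcome categories are compared with a Mann-Whitney test, and a ROC analysis quantifies how well the feature separates them. The feature values below are illustrative placeholders, not data from the study.

```python
# Minimal sketch of the Mann-Whitney + ROC evaluation of one deep learning
# feature (e.g. a 'Consolidation' activation) between outcome categories.
# All numbers are synthetic and for illustration only.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score

# Hypothetical per-image feature activations for each outcome category
worse = np.array([0.82, 0.91, 0.77, 0.88, 0.95, 0.80])
improved = np.array([0.35, 0.42, 0.28, 0.51, 0.33, 0.40])

# Two-sided Mann-Whitney U test between the two categories
stat, p_value = mannwhitneyu(worse, improved, alternative="two-sided")

# ROC analysis: treat 'Worse' as the positive class
scores = np.concatenate([worse, improved])
labels = np.concatenate([np.ones_like(worse), np.zeros_like(improved)])
auc = roc_auc_score(labels, scores)
```

The same pattern is applied per feature; a significant Mann-Whitney P value and a high AUC together indicate that the feature discriminates the two trajectories.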
Radiological findings on chest X-ray (CXR) have been shown to be essential for the proper management of COVID-19 patients, as the maximum severity over the course of the disease is closely linked to the outcome. As such, evaluation of future severity from the current CXR would be highly desirable. We trained a repurposed deep learning algorithm on the CheXnet open dataset (224,316 chest X-ray images of 65,240 unique patients) to extract features that mapped to radiological labels. We collected CXRs of COVID-19-positive patients from an open-source dataset (COVID-19 image data collection) and from a multi-institutional local ICU dataset. The data were grouped into pairs of sequential CXRs and categorized into three categories: ‘Worse’, ‘Stable’, or ‘Improved’, on the basis of radiological evolution ascertained from images and reports. Classical machine-learning algorithms were trained on the deep-learning-extracted features to perform immediate severity evaluation and prediction of future radiological trajectory. Receiver operating characteristic analyses and Mann-Whitney tests were performed. Deep learning predictions between the ‘Worse’ and ‘Improved’ outcome categories and for severity stratification were significantly different for three radiological signs and one diagnosis (‘Consolidation’, ‘Lung Lesion’, ‘Pleural Effusion’ and ‘Pneumonia’; all P < 0.05). Features from the first CXR of each pair could correctly predict the outcome category between ‘Worse’ and ‘Improved’ cases with a 0.81 AUC (0.74–0.83 95% CI) in the open-access dataset and a 0.66 AUC (0.64–0.67 95% CI) in the ICU dataset. Features extracted from the CXR could predict disease severity with 52.3% accuracy in a 4-way classification. 
Severity evaluation trained on the COVID-19 image data collection had good out-of-distribution generalization when testing on the local dataset, with 81.6% of intubated ICU patients being classified as critically ill, and the predicted severity was correlated with the clinical outcome with a 0.639 AUC. CXR deep learning features show promise for classifying disease severity and trajectory. Once validated in studies incorporating clinical data and with larger sample sizes, this information may be considered to inform triage decisions.
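The 4-way severity stratification mentioned above follows a common pattern: a classical classifier fit on deep-learning-extracted CXR features, scored by plain accuracy. A minimal sketch, with entirely synthetic features and severity grades standing in for the real extracted data:

```python
# Sketch of a 4-way severity classifier on extracted CXR features.
# The feature distributions and grade labels below are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_per_class, n_features = 50, 8
# Four severity grades with mildly shifted feature distributions
features = np.vstack([rng.standard_normal((n_per_class, n_features)) + grade
                      for grade in range(4)])
severity = np.repeat(np.arange(4), n_per_class)

x_tr, x_te, y_tr, y_te = train_test_split(
    features, severity, test_size=0.25, random_state=0, stratify=severity)
clf = RandomForestClassifier(random_state=0).fit(x_tr, y_tr)
accuracy = accuracy_score(y_te, clf.predict(x_te))
```

Chance level for a balanced 4-way problem is 25%, which is the baseline against which the reported 52.3% accuracy should be read.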
Purpose: To propose good practices for using the structural similarity metric (SSIM) and reporting its value. SSIM is one of the most popular image quality metrics in use in the medical image synthesis community because of its alleged superiority over voxel-by-voxel measurements such as the average error or the peak signal-to-noise ratio (PSNR). It has seen massive adoption since its introduction, but its limitations are often overlooked. Notably, SSIM is designed to work on a strictly positive intensity scale, which is generally not the case in medical imaging. Common intensity scales such as Hounsfield units (HU) contain negative numbers, and negative values can also be introduced by image normalization techniques such as z-normalization. Methods: We created a series of experiments to quantify the impact of negative values in the SSIM computation. Specifically, we trained a three-dimensional (3D) U-Net to synthesize T2-weighted MRI from T1-weighted MRI using the BRATS 2018 dataset. SSIM was computed on the synthetic images with a shifted dynamic range. Next, to evaluate the suitability of SSIM as a loss function on images with negative values, it was used as a loss function to synthesize z-normalized images. Finally, the difference between two-dimensional (2D) SSIM and 3D SSIM was investigated using multiple 2D U-Nets trained on different planes of the images. Results: The impact of the misuse of the SSIM was quantified; it was established that it introduces a large downward bias in the computed SSIM. It also introduces a small random error that can change the relative ranking of models. The exact values for this bias and error depend on the quality and the intensity histogram of the synthetic images. Although small, the reported error is significant considering the small SSIM difference between state-of-the-art models. 
It was therefore shown that SSIM cannot be used as a loss function when images contain negative values, because of major errors in the gradient calculation, resulting in under-performing models. 2D SSIM was also found to be overestimated in 2D image synthesis models when computed along the plane of synthesis, due to the discontinuities between slices that are typical of 2D synthesis methods. Conclusion: Various types of misuse of the SSIM were identified, and their impact was quantified. Based on the findings, this paper proposes good practices when using SSIM, such as reporting the average over the volume of the image containing tissue and appropriately defining the dynamic range.
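The dynamic-range pitfall discussed above can be demonstrated directly with scikit-image's SSIM implementation: shifting an image pair so that its intensities straddle zero (as z-normalization does), while keeping the nominal data range fixed, lowers the reported SSIM even though the images are structurally unchanged. The images below are random toy data, not MRI.

```python
# Toy demonstration of the SSIM bias introduced by negative intensities.
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
reference = rng.random((64, 64))                       # values in [0, 1)
synthetic = reference + 0.05 * rng.standard_normal((64, 64))

# SSIM on the strictly positive intensity scale the metric was designed for
ssim_positive = ssim(reference, synthetic, data_range=1.0)

# Identical images shifted toward zero mean (negative values appear);
# the luminance term of SSIM is not shift-invariant, so the score drops
ssim_shifted = ssim(reference - 0.5, synthetic - 0.5, data_range=1.0)
```

The structural and contrast terms of SSIM are unaffected by a constant shift, but the luminance term is computed from local means and degrades as those means approach zero, producing the downward bias quantified in the paper.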
Multiplayer online battle arena games (MOBAs) are one of the most popular types of online games. Annual tournaments draw large online viewership and reward the winning teams with large monetary prizes. Character selection prior to the start of the game (draft) plays a major role in the way the game is played and can give a large advantage to either team. Hence, professional teams try to maximize their winning chances by selecting the optimal team composition to counter their opponents. However, drafting is a complex process that requires deep game knowledge and preparation, which makes it stressful and error-prone. In this paper, we present an automatic drafter system based on the suggestions of a discriminative neural network and evaluate how it performs on the MOBAs Heroes of the Storm and DOTA 2. We propose a method to appropriately exploit very heterogeneous datasets by aggregating data from various versions of the games. Drafter testing on professional games shows that the actual selected hero was present in the top 3 determined by our drafting tool 30.4% of the time for HotS and 17.6% for DOTA 2. The performance obtained by this method exceeds all previously reported results.
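The top-3 evaluation above can be sketched simply: the drafter outputs a score per candidate hero, and a professional pick counts as a hit if that hero is among the three highest-scored suggestions. The hero names and scores below are illustrative, not output of the real model.

```python
# Sketch of top-k hit evaluation for a drafting recommender.
def top_k_hit(scores: dict, actual_pick: str, k: int = 3) -> bool:
    """Return True if actual_pick is among the k highest-scored heroes."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return actual_pick in ranked[:k]

# Hypothetical drafter output for one pick in a professional game
draft_scores = {"Valla": 0.31, "Muradin": 0.22, "Uther": 0.18,
                "Raynor": 0.15, "Li-Ming": 0.14}

hit = top_k_hit(draft_scores, "Uther")     # third-highest score: a hit
miss = top_k_hit(draft_scores, "Li-Ming")  # fifth-highest score: a miss
```

Averaging this indicator over all picks in the professional test games yields the reported top-3 rates (30.4% for HotS, 17.6% for DOTA 2).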
The COVID-19 pandemic repeatedly overwhelmed healthcare systems' capacity and forced the development and implementation of ICU triage guidelines for scarce resources (e.g., mechanical ventilation). These guidelines were often based on known risk factors for COVID-19. It is proposed that image data, specifically bedside chest X-rays (CXR), provide additional predictive information on mortality following mechanical ventilation that can be incorporated into the guidelines. Deep transfer learning was used to extract convolutional features from a systematically collected, multi-institutional dataset of COVID-19 ICU patients. A model predicting the outcome of mechanical ventilation (remission or mortality) was trained on the extracted features and compared to a model based on known, aggregated risk factors. The model reached a 0.702 area under the curve (95% CI 0.694-0.707) at predicting mechanical ventilation outcome from pre-intubation CXRs, higher than the risk factor model. Combining imaging data and risk factors increased model performance to a 0.743 AUC (95% CI 0.732-0.746). Additionally, a post-hoc analysis showed increased performance on high-quality compared with low-quality CXRs, suggesting that using only high-quality images would result in an even stronger model.
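The fusion step described above (combining imaging data and risk factors) is commonly implemented by concatenating the two feature vectors and fitting a single classifier. A minimal sketch on synthetic data, where the feature dimensions and outcome mechanism are assumptions for illustration:

```python
# Sketch of fusing deep CXR features with tabular risk factors.
# All data is synthetic; dimensions and names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 200
cxr_features = rng.standard_normal((n, 16))   # deep transfer-learning features
risk_factors = rng.standard_normal((n, 4))    # e.g. age, comorbidity scores

# Synthetic outcome loosely driven by both modalities
logits = cxr_features[:, 0] + 0.5 * risk_factors[:, 0]
outcome = (logits + 0.5 * rng.standard_normal(n) > 0).astype(int)

# Concatenate modalities and fit one classifier on the joint representation
combined = np.hstack([cxr_features, risk_factors])
model = LogisticRegression(max_iter=1000).fit(combined, outcome)
auc = roc_auc_score(outcome, model.predict_proba(combined)[:, 1])
```

In practice the AUC would of course be estimated on held-out patients, as in the study, rather than on the training set as in this toy sketch.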
Purpose Medical linear accelerators (linacs) are delivering increasingly complex treatments using modern techniques in radiation therapy. Complete and precise mechanical QA of the linac is therefore necessary to ensure that there is no unexpected deviation from the gantry's planned course. However, state‐of‐the‐art EPID‐based mechanical QA procedures often neglect some degrees of freedom (DOF), such as the in‐plane rotations of the gantry and imager or the source movements inside the gantry head. Therefore, the purpose of this work is to characterize a 14 DOF method for the mechanical QA of linacs. This method seeks to measure every mechanical deformation in a linac, including source movements, in addition to relevant clinical parameters like mechanical and radiation isocenters. Methods A widely available commercial phantom and a custom‐made accessory inserted in the linac's interface mount are imaged using the electronic portal imaging device (EPID) at multiple gantry angles. Then, simulated images are generated using the nominal geometry of the linac and digitized models of the phantoms. The nominal geometry used to generate these images can be modified using 14 DOF (3 rigid rotations and 3 translations each for the imager and the gantry, and 2 in‐plane translations of the source), and any change will modify the simulated image. The set of mechanical deformations that minimizes the differences between the simulated and measured image is found using a genetic algorithm coupled with a gradient‐descent optimizer. Phantom mispositioning and gantry angular offset were subsequently calculated and extracted from the results. Simulations of the performance of the method for different levels of noise in the phantom models were performed to calculate the absolute uncertainty of the measured mechanical deformations. The measured source positions and the center of collimation were used to define the beam central axis and calculate the radiation isocenter position and radius. 
Results After the simultaneous optimization of the 14 DOF, the average distance between the center of the measured and simulated ball bearings on the imager was 0.086 mm. Over the course of a full counter‐clockwise gantry rotation, all mechanical deformations were measured, showing sub‐millimeter translations and rotations smaller than 1° along every axis. The average absolute uncertainty of the 14 DOF (1 SD) was 0.15 mm or degree. Phantom positioning errors were determined with better than 0.1 mm precision. Errors introduced in the experimental setup, such as phantom positioning errors, source movements, or gantry angular offsets, were all successfully detected by our QA method. The measured mechanical deformations are shown to be reproducible over the course of a few weeks and are not sensitive to the experimental setup. Conclusion This work presents a new method for accurate mechanical QA of linacs. It features a 14 DOF model of the mechanical deformations that is both more complete and precise than other available methods. It has demonstrated sub‐millimeter ...
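The two-stage optimization described above (a global evolutionary search refined by a gradient-based optimizer) can be sketched on a toy problem: fitting a 2D rotation and translation (3 DOF instead of 14) so that simulated marker positions match "measured" ones. The geometry, parameter names, and bounds below are assumptions for illustration, not the paper's actual model.

```python
# Toy sketch: global (differential evolution) + local (L-BFGS-B) fit of a
# rigid 2D transform between simulated and measured marker positions.
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Nominal marker positions (mm) and a hypothetical true deformation
markers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_params = np.array([0.02, 0.4, -0.7])  # rotation (rad), tx, ty (mm)

def transform(params, pts):
    theta, tx, ty = params
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta), np.cos(theta)]])
    return pts @ rot.T + np.array([tx, ty])

measured = transform(true_params, markers)  # stands in for EPID measurements

def cost(params):
    # Mean squared distance between simulated and measured marker centers
    return np.mean(np.sum((transform(params, markers) - measured) ** 2, axis=1))

bounds = [(-0.1, 0.1), (-2.0, 2.0), (-2.0, 2.0)]
coarse = differential_evolution(cost, bounds, seed=0, tol=1e-8)  # global stage
refined = minimize(cost, coarse.x, method="L-BFGS-B")            # local stage
```

The global stage avoids local minima across the parameter bounds; the gradient-based stage then refines the estimate to sub-resolution precision, which is the role the paper assigns to each optimizer in the 14 DOF problem.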