Background Chest x-ray is a relatively accessible, inexpensive, and fast imaging modality that might be valuable in the prognostication of patients with COVID-19. We aimed to develop and evaluate an artificial intelligence system using chest x-rays and clinical data to predict disease severity and progression in patients with COVID-19.

Methods We did a retrospective study in multiple hospitals in the University of Pennsylvania Health System in Philadelphia, PA, USA, and Brown University affiliated hospitals in Providence, RI, USA. Patients who presented to a hospital in the University of Pennsylvania Health System via the emergency department, with a diagnosis of COVID-19 confirmed by RT-PCR and with an available chest x-ray from their initial presentation or admission, were retrospectively identified and randomly divided into training, validation, and test sets (7:1:2). Using the chest x-rays as input to an EfficientNet deep neural network together with clinical data, models were trained to predict the binary outcome of disease severity (ie, critical or non-critical). The deep-learning features extracted from the model and the clinical data were then used to build time-to-event models to predict the risk of disease progression. The models were externally tested on patients who presented to an independent multicentre institution, Brown University affiliated hospitals, and compared with severity scores provided by radiologists.

Findings 1834 patients who presented via the University of Pennsylvania Health System between March 9 and July 20, 2020, were identified and assigned to the model training (n=1285), validation (n=183), or testing (n=366) sets. 475 patients who presented via the Brown University affiliated hospitals between March 1 and July 18, 2020, were identified for external testing of the models. When chest x-rays were added to clinical data for severity prediction, the area under the receiver operating characteristic curve (ROC-AUC) increased from 0·821 (95% CI 0·796–0·828) to 0·846 (0·815–0·852; p<0·0001) on internal testing and from 0·731 (0·712–0·738) to 0·792 (0·780–0·803; p<0·0001) on external testing. When deep-learning features were added to clinical data for progression prediction, the concordance index (C-index) increased from 0·769 (0·755–0·786) to 0·805 (0·800–0·820; p<0·0001) on internal testing and from 0·707 (0·695–0·729) to 0·752 (0·739–0·764; p<0·0001) on external testing. The combined image and clinical data model had significantly better prognostic performance than combined severity scores and clinical data on internal testing (C-index 0·805 vs 0·781; p=0·0002) and external testing (C-index 0·752 vs 0·715; p<0·0001).

Interpretation In patients with COVID-19, artificial intelligence based on chest x-rays had better prognostic performance than clinical data or radiologist-derived severity scores. Using artificial intelligence, chest x-rays can augment clinical data i...
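As a rough illustration of the fusion described above (a minimal sketch, not the authors' code), the snippet below concatenates EfficientNet image features with tabular clinical variables for the binary severity head. The clinical feature count, head sizes, and input resolution are placeholder assumptions; the 1280-dimensional feature size matches torchvision's efficientnet_b0.

```python
# Hedged sketch: EfficientNet backbone features + clinical variables
# feeding a binary (critical vs non-critical) classification head.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class SeverityNet(nn.Module):
    def __init__(self, n_clinical: int = 10):  # n_clinical is a placeholder
        super().__init__()
        self.backbone = efficientnet_b0(weights=None)  # load pretrained weights in practice
        self.backbone.classifier = nn.Identity()       # expose the 1280-d feature vector
        self.head = nn.Sequential(
            nn.Linear(1280 + n_clinical, 64),
            nn.ReLU(),
            nn.Linear(64, 1),                          # one logit for severity
        )

    def forward(self, xray: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(xray)                    # (B, 1280) deep-learning features
        return self.head(torch.cat([feats, clinical], dim=1))

model = SeverityNet(n_clinical=10)
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 10))
loss = nn.BCEWithLogitsLoss()(logits.squeeze(1), torch.tensor([1.0, 0.0]))
```

The same backbone features (`feats`) would then be reused, alongside the clinical variables, as covariates in the time-to-event progression models.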
While convolutional neural networks (CNNs) have demonstrated a powerful ability to learn hierarchical spatial features from medical images, it is still difficult to apply them directly to resting-state functional MRI (rs-fMRI) and the derived brain functional networks (BFNs). We propose a novel CNN framework to simultaneously learn embedded features from BFNs for brain disease diagnosis. Since BFNs can be built by considering both static and dynamic functional connectivity (FC), we first decompose rs-fMRI into multiple static BFNs with modified independent component analysis. Then, voxel-wise variability in dynamic FC is used to quantify BFN dynamics. A set of paired 3D images representing static and dynamic BFNs can be fed into 3D CNNs, from which we can hierarchically and simultaneously learn static and dynamic BFN features. As a result, dynamic BFN features can complement static BFN features and, at the same time, different BFNs can help each other towards a joint and better classification. We validate our method on a publicly accessible, large-cohort rs-fMRI dataset for early-stage mild cognitive impairment (eMCI) diagnosis, one of the most challenging problems for clinicians. Compared with a conventional method, our method improves diagnostic performance by almost 10%. This result demonstrates the effectiveness of deep learning in preclinical Alzheimer's disease diagnosis, based on the complex and high-dimensional voxel-wise spatiotemporal patterns of resting-state brain functional connectomics. The framework provides a new but intuitive way to fully exploit deeply embedded diagnostic features from rs-fMRI for better individualized diagnosis of various neurological diseases.
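To make the paired-input idea concrete, here is a minimal sketch (not the paper's architecture) that stacks a static BFN map and its dynamic-variability map as two channels of a single 3D CNN; the layer widths, input grid (64^3), and two-class output are assumptions for illustration.

```python
# Hedged sketch: one 3D CNN consuming paired static/dynamic BFN volumes
# stacked along the channel dimension.
import torch
import torch.nn as nn

class BFNNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),        # global pooling to a 32-d descriptor
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 2, D, H, W) — channel 0 = static BFN, channel 1 = dynamic variability
        return self.classifier(self.features(x).flatten(1))

net = BFNNet()
paired = torch.randn(4, 2, 64, 64, 64)      # synthetic paired BFN volumes
print(net(paired).shape)                    # torch.Size([4, 2])
```

In the paper's framework, one such branch per BFN would be learned jointly so that different BFNs can complement each other in the final classification.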
Cephalometric tracing is commonly used in orthodontic diagnosis and treatment planning. In this paper, we propose a deep learning based framework to automatically detect anatomical landmarks in cephalometric X-ray images. We train a deep encoder-decoder for landmark detection, combining the global landmark configuration with local high-resolution feature responses. The proposed framework is based on a 2-stage U-Net, regressing multi-channel heatmaps for landmark detection. In this framework, we embed an attention mechanism that uses the global-stage heatmaps to guide the local-stage inference, regressing local heatmap patches at high resolution. Besides, an Expansive Exploration strategy improves robustness during inference by expanding the search scope without increasing model complexity. We have evaluated our framework on the most widely used public dataset for landmark detection in cephalometric X-ray images. With less computation and manual tuning, our framework achieves state-of-the-art results.

Keywords: Landmark detection · Deep learning · Heatmap regression · Attention mechanism · 2D X-ray cephalometric analysis

Currently, orthodontic computer-aided software can only calculate distances and angles automatically from manually annotated landmarks. An automatic method would therefore release orthodontists from this time-consuming work and, in particular, avoid observation errors. Our study concentrates on detecting the 19 landmarks in the 2D radiograph automatically.

Related work: Automatic landmark detection was the subject of a Grand Challenge at ISBI 2015. The organizers provided the dataset [1] and published a benchmark of dental radiography analysis algorithms [2]. Ibragimov et al. [3] computerized cephalometry with a game-theoretic strategy and a shape-based model; Lindner et al. [4] won first place with a random forest regression-voting method. After that, Lindner et al. [5] expanded their experiments and presented results with a comprehensive experimental analysis. Deep learning methods have achieved great success in many computer vision applications, especially in facial point detection, where cascades and hierarchies are the basic ideas for improving performance from coarse to fine. Lee et al. [6] applied deep learning to cephalometric landmark detection for the first time, training 38 independent CNN structures to regress the 19 landmarks' x- and y-coordinate variables separately. Although high accuracy can be achieved, most existing landmark detection methods need to train a number of models to refine each point on a small scale one by one, which demands massive but inefficient computation. Different from traditional coordinate regression methods, deep encoder-decoder methods such as U-Net [7] and fully convolutional networks (FCNs) [8] achieve the goal by transforming the target representation. In medical landmark detection, by regressing heatmaps for all landmarks simultaneously instead of absolute landmark coordinates, Payer et al. [9] transformed the coordinate regression problems to the pixel class...
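The core of heatmap regression is the target transform itself: each landmark becomes a Gaussian blob in its own channel, and coordinates are recovered by a per-channel argmax. The sketch below illustrates that mechanism only (not the paper's 2-stage network); the grid size and sigma are illustrative assumptions.

```python
# Hedged sketch: Gaussian target heatmaps and argmax decoding for
# heatmap-based landmark regression.
import torch

def gaussian_heatmap(h: int, w: int, center: tuple, sigma: float = 5.0) -> torch.Tensor:
    """Build an (h, w) heatmap with a Gaussian peak at center = (row, col)."""
    ys = torch.arange(h).view(-1, 1).float()
    xs = torch.arange(w).view(1, -1).float()
    cy, cx = center
    return torch.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def decode(heatmaps: torch.Tensor) -> torch.Tensor:
    """heatmaps: (C, H, W) -> (C, 2) landmark (row, col) via per-channel argmax."""
    c, h, w = heatmaps.shape
    flat = heatmaps.view(c, -1).argmax(dim=1)
    return torch.stack([torch.div(flat, w, rounding_mode="floor"), flat % w], dim=1)

# One landmark at (row=120, col=80) on a 256x256 grid; 19 channels in the paper.
target = gaussian_heatmap(256, 256, (120, 80)).unsqueeze(0)
print(decode(target))   # tensor([[120,  80]])
```

A network trained with a pixel-wise loss (e.g., MSE) against such targets predicts all 19 landmark channels in one forward pass, which is what removes the need for one model per landmark.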
Objectives Early recognition of coronavirus disease 2019 (COVID-19) severity can guide patient management. However, it is challenging to predict when COVID-19 patients will progress to critical illness. This study aimed to develop an artificial intelligence system to predict future deterioration to critical illness in COVID-19 patients.

Methods An artificial intelligence (AI) system in a time-to-event analysis framework was developed to integrate chest CT and clinical data for risk prediction of future deterioration to critical illness in patients with COVID-19.

Results A multi-institutional international cohort of 1,051 patients with RT-PCR-confirmed COVID-19 and chest CT was included in this study. Of them, 282 patients developed critical illness, which was defined as requiring ICU admission and/or mechanical ventilation and/or reaching death during their hospital stay. The AI system achieved a C-index of 0.80 for predicting individual COVID-19 patients' time to critical illness. The AI system successfully stratified the patients into high-risk and low-risk groups with distinct progression risks (p < 0.0001).

Conclusions Using CT imaging and clinical data, the AI system successfully predicted time to critical illness for individual patients and identified patients at high risk. AI has the potential to accurately triage patients and facilitate personalized treatment.

Key Point • The AI system can predict time to critical illness for patients with COVID-19 by using CT imaging and clinical data.

Supplementary Information The online version contains supplementary material available at 10.1007/s00330-021-08049-8.
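For readers unfamiliar with the evaluation metric, Harrell's C-index measures how well predicted risk scores rank patients by their observed (possibly censored) times to critical illness; 0.5 is chance and 1.0 is perfect ranking. Below is a small illustration using the lifelines library with made-up numbers, not data from the study.

```python
# Hedged illustration: computing the concordance index (C-index) for
# time-to-event risk predictions. All values are synthetic.
from lifelines.utils import concordance_index

times  = [5, 12, 9, 30, 21]          # days to critical illness or censoring
events = [1, 1, 0, 1, 0]             # 1 = reached critical illness, 0 = censored
risk   = [0.9, 0.4, 0.6, 0.1, 0.3]   # model risk scores (higher = worse prognosis)

# concordance_index expects scores where higher means longer survival,
# so risk scores are negated before scoring.
print(concordance_index(times, [-r for r in risk], events))
```

A C-index of 0.80, as reported here, means that for about 80% of comparable patient pairs the model assigned the higher risk to the patient who deteriorated sooner.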