Coronavirus disease 2019 (COVID-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), is spreading rapidly around the world, resulting in a massive death toll. Lung infection, or pneumonia, is a common complication of COVID-19, and imaging techniques, especially computed tomography (CT), have played an important role in diagnosis and treatment assessment of the disease. Herein, we review the imaging characteristics and computing models that have been applied for the management of COVID-19. CT, positron emission tomography-CT (PET/CT), lung ultrasound, and magnetic resonance imaging (MRI) have been used for detection, treatment, and follow-up. The quantitative analysis of imaging data using artificial intelligence (AI) is also explored. Our findings indicate that typical imaging characteristics and their changes can play an important role in the detection and management of COVID-19. In addition, AI or other quantitative image analysis methods are urgently needed to maximize the value of imaging in the management of COVID-19.
Coronavirus disease 2019 (COVID-19) has spread globally, and medical resources have become insufficient in many regions. Fast diagnosis of COVID-19 and identification of high-risk patients with worse prognoses are important for early prevention and optimisation of medical resources. Here, we propose a fully automatic deep learning system for COVID-19 diagnostic and prognostic analysis based on routinely used computed tomography. We retrospectively collected computed tomography images from 5372 patients in 7 cities or provinces. Firstly, images from 4106 patients were used to pre-train the deep learning system so that it learned lung features. Afterwards, 1266 patients (924 with COVID-19, of whom 471 had follow-up of 5+ days; 342 with other pneumonia) from 6 cities or provinces were enrolled to train and externally validate the deep learning system. In the 4 external validation sets, the deep learning system achieved good performance in distinguishing COVID-19 from other pneumonia (AUC = 0.87 and 0.88) and from viral pneumonia (AUC = 0.86). Moreover, the deep learning system succeeded in stratifying patients into high-risk and low-risk groups whose hospital-stay times differed significantly (p = 0.013 and 0.014). Without human assistance, the deep learning system automatically focused on abnormal areas whose characteristics were consistent with reported radiological findings. Deep learning provides a convenient tool for fast COVID-19 screening and for finding potential high-risk patients, which may help optimise medical resources and enable early prevention before patients show severe symptoms.
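The abstract reports that the hospital-stay times of the deep-learning-stratified groups differ significantly (p = 0.013 and 0.014). As a minimal illustration of how such a group comparison can be tested, here is a permutation test on the difference in mean stay; the patient data below is invented for demonstration and is not from the study:

```python
import random

def permutation_test(group_a, group_b, n_perm=10000, seed=0):
    """Two-sided permutation test on the difference in mean values
    between two groups (e.g. hospital-stay times of low-risk vs.
    high-risk patients as stratified by a model)."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0

# Hypothetical hospital-stay times (days), for illustration only.
low_risk = [7, 8, 9, 10, 8, 7, 9, 8]
high_risk = [14, 16, 13, 18, 15, 17, 14, 16]
p = permutation_test(low_risk, high_risk)
```

The study itself does not state which test it used; a permutation test is just one distribution-free choice when group sizes are small.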
Rationale: Given the rapid spread of COVID-19, an updated risk-stratification prognostic tool could help clinicians identify high-risk patients with worse prognoses. We aimed to develop a non-invasive and easy-to-use prognostic signature based on chest CT to individually predict poor outcome (death, need for mechanical ventilation, or intensive care unit admission) in patients with COVID-19. Methods: From November 29, 2019 to February 19, 2020, a total of 492 patients with COVID-19 from four centers were retrospectively collected. Since different durations from symptom onset to the first CT scan might affect the prognostic model, we divided the 492 patients into two groups: 1) the early-phase group, in which CT scans were performed within one week after symptom onset (0-6 days, n = 317); and 2) the late-phase group, in which CT scans were performed one week or more after symptom onset (≥7 days, n = 175). In each group, we divided patients into the primary cohort (n = 212 in the early-phase group, n = 139 in the late-phase group) and the external independent validation cohort (n = 105 in the early-phase group, n = 36 in the late-phase group) according to the centers. We built two separate radiomics models for the two patient groups. Firstly, we proposed an automatic segmentation method to extract lung volume for radiomics feature extraction. Secondly, we applied several image preprocessing procedures to increase the reproducibility of the radiomics features: 1) applied a low-pass Gaussian filter before voxel resampling to prevent aliasing; 2) conducted ComBat to harmonize radiomics features per scanner; 3) tested the stability of the features in the radiomics signature under several image transformations, such as rotating, translating, and growing/shrinking. Thirdly, we used the least absolute shrinkage and selection operator (LASSO) to build the radiomics signature (RadScore).
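The RadScore above is built with LASSO, which selects a sparse subset of radiomics features by shrinking uninformative coefficients exactly to zero. A minimal coordinate-descent sketch of LASSO for least squares follows; the toy data and the penalty value are invented for illustration (the study would have tuned the penalty by cross-validation on standardized features):

```python
def soft_threshold(rho, lam):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    if rho < -lam:
        return rho + lam
    if rho > lam:
        return rho - lam
    return 0.0

def lasso_coordinate_descent(X, y, lam, n_iter=200):
    """LASSO for least squares: min_w (1/2n)||y - Xw||^2 + lam*||w||_1,
    solved by cyclic coordinate descent. X is a list of rows."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual correlation, excluding feature j's contribution
            rho = 0.0
            for i in range(n):
                r_ij = y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                rho += X[i][j] * r_ij
            rho /= n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            w[j] = soft_threshold(rho, lam) / z
    return w

# Toy data: the outcome depends only on the first feature;
# the second is noise and gets shrunk exactly to zero.
X = [[1, 0.1], [2, -0.2], [3, 0.1], [-1, 0.2], [-2, -0.1], [-3, 0.0]]
y = [2.0, 4.0, 6.0, -2.0, -4.0, -6.0]
w = lasso_coordinate_descent(X, y, lam=0.5)
```

In the radiomics setting, each column of X would be one standardized radiomics feature and the resulting nonzero weights define the RadScore as a linear combination of the selected features.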
Afterward, we conducted a Fine-Gray competing risk regression to build the clinical model and the clinicoradiomics signature (CrrScore). Finally, the performances of the three prognostic signatures (clinical model, RadScore, and CrrScore) were estimated from two aspects: 1) cumulative poor outcome probability prediction; 2) 28-day poor outcome prediction. We also performed stratified analyses to explore the potential association between the CrrScore and poor outcomes across age, type, and comorbidity subgroups. Results: In the early-phase group, the CrrScore showed the best performance in estimating poor outcome (C-index = 0.850) and in predicting the probability of 28-day poor outcome (AUC = 0.862). In the late-phase group, the RadScore alone achieved performance similar to the CrrScore in predicting poor outcome (C-index = 0.885) and 28-day poor outcome probability (AUC = 0.976). Moreover, the RadScore in both groups successfully stratified patients with COVID-19 into low- or high-RadScore groups with significantly different survival times in the training and validation cohorts (all P < 0.05). The CrrScore in both groups can ...
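The C-index values above (0.850, 0.885) measure how well a prognostic score orders patients by time to poor outcome. A minimal sketch of Harrell's concordance index follows; the patient times, event indicators, and scores are invented for illustration:

```python
def concordance_index(times, events, scores):
    """Harrell's C-index. Among comparable pairs (the patient with the
    shorter observed time actually had the event), count the fraction
    in which the model assigns the higher risk score to the patient
    who failed earlier; tied scores count as 0.5."""
    concordant, permissible = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                permissible += 1
                if scores[i] > scores[j]:
                    concordant += 1.0
                elif scores[i] == scores[j]:
                    concordant += 0.5
    return concordant / permissible

# Hypothetical follow-up times (days), event flags (1 = poor outcome,
# 0 = censored), and risk scores, for illustration only.
c = concordance_index([2, 4, 6, 8], [1, 1, 0, 1], [0.9, 0.7, 0.4, 0.2])
```

A C-index of 0.5 corresponds to random ordering and 1.0 to perfect ordering; note this sketch is the standard Harrell estimator, not the competing-risk-specific variant a Fine-Gray analysis might report.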
In comparison with person re-identification (ReID), which has been widely studied in the research community, vehicle ReID has received less attention. Vehicle ReID is challenging due to 1) high intra-class variability (caused by the dependency of shape and appearance on viewpoint), and 2) small inter-class variability (caused by the similarity in shape and appearance between vehicles produced by different manufacturers). To address these challenges, we propose a Pose-Aware Multi-Task Re-Identification (PAMTRI) framework. This approach includes two innovations compared with previous methods. First, it overcomes viewpoint dependency by explicitly reasoning about vehicle pose and shape via keypoints, heatmaps, and segments from pose estimation. Second, it jointly classifies semantic vehicle attributes (colors and types) while performing ReID, through multi-task learning with the embedded pose representations. Since manually labeling images with detailed pose and attribute information is prohibitive, we create a large-scale, highly randomized synthetic dataset with automatically annotated vehicle attributes for training. Extensive experiments validate the effectiveness of each proposed component, showing that PAMTRI achieves significant improvement over the state-of-the-art on two mainstream vehicle ReID benchmarks: VeRi and CityFlow-ReID. Code and models are available at https://github.com/NVlabs/PAMTRI. * Work done as an intern at NVIDIA. Zheng is now with Amazon.
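PAMTRI's multi-task learning trains one network to predict vehicle identity, color, and type jointly. The usual way to train such a network is to minimize a weighted sum of per-task cross-entropy losses; the sketch below shows that combination on raw logits. The task weights here are illustrative assumptions, not the paper's values, and the paper additionally uses metric-learning terms not shown here:

```python
import math

def softmax_cross_entropy(logits, target):
    """Numerically stable cross-entropy of a softmax over raw logits."""
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum - logits[target]

def multitask_loss(id_logits, color_logits, type_logits,
                   id_label, color_label, type_label,
                   w_id=1.0, w_color=0.5, w_type=0.5):
    """Weighted sum of per-task cross-entropy losses, in the spirit of
    PAMTRI's joint ReID and attribute classification. Weights are
    hypothetical hyperparameters."""
    return (w_id * softmax_cross_entropy(id_logits, id_label)
            + w_color * softmax_cross_entropy(color_logits, color_label)
            + w_type * softmax_cross_entropy(type_logits, type_label))

# Confident, correct predictions on all three heads give a small loss.
loss = multitask_loss([8.0, 0.0, 0.0], [5.0, 0.0], [4.0, 0.0, 0.0, 0.0],
                      0, 0, 0)
```

Sharing one backbone across the three heads is what lets the attribute supervision regularize the identity embedding.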
Background: Identification of pregnancies with postpartum haemorrhage (PPH) antenatally rather than intrapartum would aid delivery planning, facilitate transfusion preparation and decrease maternal complications. MRI has been increasingly used for placenta evaluation. Here, we aim to build a nomogram incorporating both clinical and radiomic features of the placenta to predict the risk of PPH during caesarean delivery (CD). Methods: A total of 298 pregnant women were retrospectively enrolled from Henan Provincial People's Hospital (training cohort: n = 207) and from The Third Affiliated Hospital of Zhengzhou University (external validation cohort: n = 91). These women were suspected of having placenta accreta spectrum (PAS) disorders and underwent MRI for placenta evaluation. All of them underwent CD and had singleton pregnancies. PPH was defined as more than 1000 mL estimated blood loss (EBL) during CD. Radiomic features were selected based on their correlations with EBL. Radiomic, clinical, radiological, clinicoradiological and clinicoradiomic models were built to predict the risk of PPH for each patient. The model with the best prediction performance was validated for its discrimination ability, calibration curve and clinical application. Findings: Thirty-five radiomic features showed strong correlation with EBL. The clinicoradiomic model had the best discrimination ability for risk prediction of PPH, with AUC of 0.888 (95% CI, 0.844-0.933) and 0.832 (95% CI, 0.746-0.913), and sensitivity of 91.2% (95% CI, 85.8%-96.7%) and 97.6% (95% CI, 92.7%-100%), in the training and validation cohorts, respectively. For patients with severe PPH (EBL more than 2000 mL), 53 out of 55 pregnancies (96.4%) in the training cohort and 18 out of 18 (100%) pregnancies in the validation cohort were identified by the clinicoradiomic model.
The model performed better in patients without placenta previa (PP) than in patients with PP: AUC of 0.983 compared with 0.867 and sensitivity of 100% compared with 90.8% in the training cohort, and AUC of 0.832 compared with 0.815 and sensitivity of 97.6% compared with 97.2% in the validation cohort. Interpretation: The clinicoradiomic model incorporating both prenatal clinical factors and the radiomic signature of the placenta on T2WI showed good performance for risk prediction of PPH. The predictive model can identify severe PPH with high sensitivity and can be applied in patients with and without PP.
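The AUC values reported above summarize how well the clinicoradiomic score ranks PPH cases above non-cases. As a minimal illustration (with invented labels and scores, not study data), the ROC AUC can be computed directly from its Mann-Whitney interpretation:

```python
def auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case (label 1) receives a higher score than
    a randomly chosen negative case (label 0); ties count as 0.5."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    total = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                total += 1.0
            elif p == q:
                total += 0.5
    return total / (len(pos) * len(neg))

# Hypothetical PPH labels (1 = PPH) and model risk scores.
a = auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1])
```

Sensitivity, by contrast, depends on a chosen score threshold, which is why the abstract reports it alongside the threshold-free AUC.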