The COVID-19 pandemic is changing the world in a dramatic way, and computed tomography (CT) scans can play a pivotal role in the prognosis of COVID-19 patients. Since the start of the pandemic, great attention has been devoted to the relationship between the interstitial pneumonia caused by the infection and the onset of thromboembolic phenomena. In this preliminary study, we collected n = 20 CT scans from the Polyclinic of Bari, all from patients positive for COVID-19, nine of whom developed pulmonary thromboembolism (PTE). For eight CT scans, we obtained masks of the lesions caused by the infection, annotated by expert radiologists; for the other four CT scans, we obtained masks of the lungs (including both healthy parenchyma and lesions). We developed a deep learning-based segmentation model that utilizes convolutional neural networks (CNNs) to accurately segment the lungs and lesions. Using images from publicly available datasets, we also assembled a training set of 32 CT scans and a validation set of 10 CT scans. The results of the segmentation task are promising, reaching a Dice coefficient higher than 97% and laying the groundwork for subsequent analyses of PTE onset. We then characterized the segmented regions to identify radiomic features that may be useful for the prognosis of PTE. Out of 919 extracted radiomic features, 109 presented different distributions according to the Mann–Whitney U test, with corrected p-values less than 0.01. Lastly, nine uncorrelated features were retained that can be exploited to build a prognostic signature.
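The feature-screening step described above (per-feature Mann–Whitney U tests with corrected p-values, followed by retention of uncorrelated features) can be sketched as follows. This is a minimal illustration, not the authors' code: the Bonferroni correction and the 0.85 correlation threshold are assumptions, since the abstract does not state which correction or threshold was used.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def screen_features(X_pos, X_neg, alpha=0.01, corr_thr=0.85):
    """X_pos, X_neg: (samples, features) arrays for the two patient groups.
    Returns indices of significant, mutually uncorrelated features."""
    n_feat = X_pos.shape[1]
    # Raw Mann-Whitney p-value for each feature
    pvals = np.array([mannwhitneyu(X_pos[:, j], X_neg[:, j]).pvalue
                      for j in range(n_feat)])
    # Bonferroni correction (an assumption; the paper does not name the method)
    p_corr = np.minimum(pvals * n_feat, 1.0)
    significant = np.where(p_corr < alpha)[0]
    # Greedy correlation filter: keep a feature only if it is not strongly
    # correlated with an already-retained one
    X_all = np.vstack([X_pos, X_neg])
    kept = []
    for j in significant:
        if all(abs(np.corrcoef(X_all[:, j], X_all[:, k])[0, 1]) < corr_thr
               for k in kept):
            kept.append(j)
    return kept
```

On synthetic data, a strongly discriminative feature survives the screen, while a near-duplicate of it is dropped by the correlation filter.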
The accurate segmentation and identification of vertebrae provides the foundation for spine analyses, including the assessment of fractures, deformities and other visual insights. The large-scale vertebrae segmentation challenge (VerSe), organized as a competition at the Medical Image Computing and Computer Assisted Intervention (MICCAI) conference, is aimed at vertebrae segmentation and labeling. In this paper, we propose a framework that addresses the tasks of vertebrae segmentation and identification by exploiting both deep learning and classical machine learning methodologies. The proposed solution comprises two phases: a fully automated binary segmentation of the whole spine, which exploits a 3D convolutional neural network, and a semi-automated procedure that locates vertebrae centroids using traditional machine learning algorithms. Unlike other approaches, the proposed method has the added advantage of requiring no vertebra-level annotations for training. A dataset of 214 CT scans was extracted from the VerSe’20 challenge data for training, validating and testing the proposed approach. In addition, to evaluate the robustness of the segmentation and labeling algorithms, 12 CT scans from subjects affected by severe, moderate and mild scoliosis were collected from a local medical clinic. On the designated test set from the VerSe’20 data, the binary spine segmentation stage obtained a binary Dice coefficient of 89.17%, whilst the vertebrae identification stage reached an average multi-class Dice coefficient of 90.09%. To ensure the reproducibility of the algorithms developed here, the code has been made publicly available.
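The Dice coefficient used to score both stages above is a standard overlap measure between a predicted mask and a ground-truth mask. A minimal sketch of the binary version (the multi-class variant averages this quantity over classes):

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Binary Dice coefficient: 2|A∩B| / (|A| + |B|).
    pred, target: arrays interpretable as boolean masks."""
    a = np.asarray(pred).astype(bool)
    b = np.asarray(target).astype(bool)
    inter = np.logical_and(a, b).sum()
    # eps guards against division by zero when both masks are empty
    return 2.0 * inter / (a.sum() + b.sum() + eps)
```

For example, a prediction covering one of a target's two foreground voxels scores 2·1/(1+2) ≈ 0.667.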
Computer-aided diagnosis (CAD) systems can help radiologists in numerous medical tasks, including the classification and staging of various diseases. The 3D tomosynthesis imaging technique adds value to CAD systems for the diagnosis and classification of breast lesions. Several convolutional neural network (CNN) architectures have been proposed to classify lesion shapes into their respective classes using similar imaging methods. However, in the healthcare domain, both the black-box nature of these CNN models and morphology-based cancer classification raise concerns among clinicians. As a result, this study proposes a mathematically and visually explainable deep-learning-driven multiclass shape-based classification framework for tomosynthesis breast lesion images. In this study, the authors exploit eight pretrained CNN architectures for the classification task on previously extracted region-of-interest images containing the lesions. Additionally, the study opens up the black-box nature of the deep learning models using two well-known perceptive explainable artificial intelligence (XAI) algorithms, Grad-CAM and LIME. Moreover, two mathematical-structure-based interpretability techniques, i.e., t-SNE and UMAP, are employed to investigate the pretrained models’ behavior towards multiclass feature clustering. The experimental results of the classification task validate the applicability of the proposed framework by yielding a mean area under the curve of 98.2%. The explainability study validates the applicability of all the employed methods, mainly emphasizing the pros and cons of the Grad-CAM and LIME methods, which can provide useful insights towards explainable CAD systems.
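The t-SNE-based clustering analysis mentioned above can be illustrated in a few lines: deep features extracted from a pretrained CNN are projected to 2D so that class-wise cluster separation can be inspected visually. The feature vectors below are synthetic stand-ins for real CNN features, and the dimensions and class count are illustrative, not taken from the study.

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical stand-in for CNN features: 30 lesion images,
# 64-dimensional feature vectors, three shape classes
rng = np.random.default_rng(42)
features = np.vstack([rng.normal(c, 1.0, (10, 64)) for c in (0, 3, 6)])

# Project to 2D; with so few samples, perplexity must stay below n_samples
embedding = TSNE(n_components=2, perplexity=5,
                 random_state=0).fit_transform(features)
print(embedding.shape)  # (30, 2)
```

In practice the features would come from the penultimate layer of one of the eight pretrained CNNs, and the resulting scatter plot would be colored by lesion class.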
The coronavirus disease 2019 (COVID-19) pandemic has affected hundreds of millions of individuals and caused millions of deaths worldwide. Predicting the clinical course of the disease is of pivotal importance for managing patients. Several studies have found hematochemical alterations in COVID-19 patients, such as altered inflammatory markers. We retrospectively analyzed the anamnestic data and laboratory parameters of 303 patients diagnosed with COVID-19 who were admitted to the Polyclinic Hospital of Bari during the first phase of the COVID-19 global pandemic. After the pre-processing phase, we performed a survival analysis with Kaplan–Meier curves and Cox regression, with the aim of discovering the most unfavorable predictors. The target outcomes were mortality and admission to the intensive care unit (ICU). Different machine learning models were also compared to build a robust classifier relying on a small number of strongly significant factors to estimate the risk of death or admission to the ICU. From the survival analysis, it emerged that the most significant laboratory parameter for both outcomes was the minimum C-reactive protein value: HR = 17.963 (95% CI 6.548–49.277, p < 0.001) for death and HR = 1.789 (95% CI 1.000–3.200, p = 0.050) for admission to the ICU. The second most important parameter was the maximum erythrocyte count: HR = 1.765 (95% CI 1.141–2.729, p < 0.05) for death and HR = 1.481 (95% CI 0.895–2.452, p = 0.127) for admission to the ICU. The best model for predicting the risk of death was the decision tree, with a ROC-AUC of 89.66%, whereas the best model for predicting admission to the ICU was the support vector machine, with a ROC-AUC of 95.07%. The hematochemical predictors identified in this study can be used as a strong prognostic signature to characterize the severity of the disease in COVID-19 patients.
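The Kaplan–Meier curves mentioned above estimate the survival function from censored follow-up data. A minimal product-limit estimator, written from the textbook definition rather than from the study's code (dedicated libraries such as lifelines are normally used in practice):

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns (distinct event times, survival probability after each)."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    event_times = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in event_times:
        at_risk = np.sum(times >= t)            # still under observation at t
        d = np.sum((times == t) & (events == 1))  # events occurring at t
        s *= 1.0 - d / at_risk                  # multiply conditional survival
        surv.append(s)
    return event_times, np.array(surv)
```

For instance, with follow-up times [1, 2, 3, 4] and events [1, 1, 0, 1] (the third subject censored), survival drops to 0.75, then 0.5, then 0 at the final event.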
Lung cancer is one of the deadliest diseases worldwide. Computed tomography (CT) images are a powerful tool for investigating the structure and texture of lung nodules. For a long time, trained radiologists have performed the grading and staging of cancer severity by relying on radiographic images. Recently, radiomics has been changing the traditional workflow for lung cancer staging by providing the technical and methodological means to analytically quantify lesions, so that more accurate predictions can be performed while reducing the time each specialist needs to devote to such tasks. In this work, we implemented a pipeline for identifying a radiomic signature composed of a reduced number of features to discriminate adenocarcinomas from other cancer types. In addition, we investigated the reproducibility of this radiomic study by analysing the performance of the classification models on external validation data. In detail, we first considered two publicly available datasets, namely D1 and D2, composed of n = 262 and n = 89 samples, respectively. Ten significant features, according to the univariate AUC evaluated on D1, were retained. Mann–Whitney U tests showed that three of these features had statistically different distributions, with a p-value < 0.05. Then, we collected n = 51 CT images from patients with lung nodules at the Azienda Ospedaliero-Universitaria “Policlinico Riuniti” in Foggia. Resident radiologists manually annotated the lung lesions in the images to allow the subsequent analysis of the malignant regions. We designed a pipeline for extracting features from the Volumes of Interest in order to generate a third dataset, i.e., D3.
Several experiments were performed, showing that the selected radiomic signature not only allowed the discrimination of lung adenocarcinoma from other cancer types independently of the dataset used for training the models, but also achieved good classification performance on external validation data; in fact, the radiomic signature computed on D1 and evaluated on the local cohort reached an AUC of 0.70 (p < 0.001) for the task of predicting the histological subtype.
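The univariate-AUC feature selection used to build the signature (ranking each radiomic feature by how well it alone separates the two classes) can be sketched as below. The helper name and the flip of AUC values below 0.5 are illustrative choices, not the authors' implementation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def rank_by_auc(X, y, k=10):
    """Rank features by univariate AUC against binary labels y.
    AUC below 0.5 is flipped, since inverse ordering is equally informative.
    Returns (indices of top-k features, per-feature AUCs)."""
    aucs = []
    for j in range(X.shape[1]):
        a = roc_auc_score(y, X[:, j])
        aucs.append(max(a, 1.0 - a))
    aucs = np.array(aucs)
    top = np.argsort(aucs)[::-1][:k]
    return top, aucs
```

A feature whose values perfectly separate the two classes receives an AUC of 1.0 and ranks first.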
Liver segmentation is a crucial step in surgical planning from computed tomography scans. Obtaining a precise delineation of the liver boundaries with automatic techniques can help radiologists by reducing the annotation time and providing more objective and repeatable results. Subsequent phases typically involve liver vessel segmentation and liver segment classification. Recognizing the different segments is especially important because each has its own vascularization, so hepatic segmentectomies can be performed during surgery, avoiding the unnecessary removal of healthy liver parenchyma. In this work, we focused on the liver segment classification task. We exploited a 2.5D Convolutional Neural Network (CNN), namely V-Net, trained with the multi-class focal Dice loss. The focal loss was originally conceived as a modification of the cross-entropy loss, aimed at focusing training on “hard” samples and preventing the gradient from being overwhelmed by a large number of false negatives. In this paper, we introduce two novel focal Dice formulations, one based on the concept of the individual voxel’s probability and another related to the Dice formulation for sets. By applying the multi-class focal Dice loss to the aforementioned task, we obtained respectable results, with an average Dice coefficient among classes of 82.91%. Moreover, knowledge of the anatomic configuration of the segments allowed us to apply a set of rules during the post-processing phase, slightly improving the final segmentation results and yielding an average Dice coefficient of 83.38%. The average accuracy was close to 99%. The best model turned out to be the one with the focal Dice formulation based on sets. We conducted the Wilcoxon signed-rank test to check whether these results were statistically significant, confirming their relevance.
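To make the idea of a focal Dice loss concrete, here is one common-style sketch: the per-class soft Dice is computed, and a focal exponent down-weights classes that are already well segmented so the gradient concentrates on hard classes. This is a generic illustration assuming a focal exponent gamma = 2; the two formulations actually introduced in the paper may differ in detail.

```python
import numpy as np

def focal_dice_loss(probs, target_onehot, gamma=2.0, eps=1e-6):
    """probs, target_onehot: (classes, voxels) arrays.
    Soft Dice per class, raised to a focal exponent and averaged.
    gamma and the (1 - Dice)**gamma form are illustrative assumptions."""
    inter = (probs * target_onehot).sum(axis=1)
    denom = probs.sum(axis=1) + target_onehot.sum(axis=1)
    dice = (2.0 * inter + eps) / (denom + eps)   # soft Dice per class
    # Focal term: classes with Dice near 1 contribute almost nothing
    return np.mean((1.0 - dice) ** gamma)
```

A perfect prediction gives a loss of 0, while a maximally uncertain prediction (uniform 0.5 probabilities over two classes) gives per-class Dice ≈ 0.5 and hence a loss ≈ 0.25.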