“…Moreover, the lungs were automatically extracted via Convolutional Neural Network (CNN) algorithms to create a binary mask (27). Then, a logical "and" between these masks and the segmentations obtained by the radiology residents was performed (using "3dcalc") to exclude automatically segmented pixels outside the lungs, thus obtaining the final ROIs (28).…”
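The voxel-wise logical AND described above can be sketched in NumPy; this is a minimal, hypothetical illustration of the masking step (the study used AFNI's "3dcalc"), not the authors' actual pipeline:

```python
import numpy as np

def intersect_masks(lung_mask: np.ndarray, ggo_mask: np.ndarray) -> np.ndarray:
    """Final ROI: keep only voxels present in BOTH binary masks,
    discarding segmented voxels that fall outside the lungs."""
    return np.logical_and(lung_mask.astype(bool),
                          ggo_mask.astype(bool)).astype(np.uint8)

# Toy 2D example (real masks are 3D CT volumes)
lung = np.array([[1, 1, 0],
                 [1, 1, 0]])   # CNN-derived lung mask
ggo  = np.array([[0, 1, 1],
                 [1, 0, 1]])   # reader's GGO segmentation
roi = intersect_masks(lung, ggo)
# roi keeps only the voxels inside both masks
```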
Ground-Glass Opacities (GGOs) are a non-specific CT finding observed in the early phase of COVID-19 pneumonia. However, GGOs are also seen in other acute interstitial and alveolar lung diseases, making the differential diagnosis challenging. In this proof-of-concept study, we aimed to differentiate COVID-19 pneumonia presenting with GGOs from acute non-COVID-19 lung disease using a novel radiomics-based model in patients who underwent a high-resolution CT (HRCT) scan at hospital admission during the first pandemic peak in Italy. HRCT scans of 28 patients with RT-PCR-diagnosed COVID-19 pneumonia (COVID) and 30 patients with acute non-COVID-19 lung disease (nCOVID) were retrospectively included. All patients showed GGOs as the predominant CT pattern. Two readers, blinded to the final diagnosis, independently segmented GGOs on CT scans using a semi-automated approach, and radiomic features were extracted from the segmented images. Partial least squares (PLS) regression was used as the multivariate machine-learning algorithm. A leave-one-out nested cross-validation was implemented to optimize the hyperparameter of PLS and to assess model generalization. The diagnostic performance of the radiomic model in differentiating between COVID and nCOVID lung disease was assessed through receiver operating characteristic (ROC) analysis. The radiomics-based machine learning model differentiated COVID and nCOVID with an AUC = 0.868 (p = 4.2·10⁻⁷). After careful prospective evaluation in larger multicentric studies, it may help radiologists rule out COVID-19 pneumonia, thus improving COVID-19 triage in epidemic areas.
“…U-Net has shown good performance in the field of medical image segmentation. It has become a popular neural network architecture for biomedical image segmentation tasks (LaLonde and Bagci, 2018; Fan et al., 2019; Song et al., 2019). Li et al. (2019) proposed a new dual-U-Net architecture to solve the problem of nuclei segmentation.…”
To address the limitations of convolution kernels with a fixed receptive field and the unknown optimal network width in U-Net, we propose a multi-scale U-Net (MSU-Net) for medical image segmentation. First, multiple convolution sequences are used to extract more semantic features from the images. Second, convolution kernels with different receptive fields are used to make the features more diverse. The problem of unknown network width is alleviated by the efficient integration of convolution kernels with different receptive fields. In addition, the multi-scale block is extended to other variants of the original U-Net to verify its universality. Five different medical image segmentation datasets are used to evaluate MSU-Net, covering a variety of imaging modalities such as electron microscopy, dermoscopy, and ultrasound. The Intersection over Union (IoU) values of MSU-Net on the five datasets are 0.771, 0.867, 0.708, 0.900, and 0.702, respectively. Experimental results show that MSU-Net achieves the best performance on the different datasets. Our implementation is available at https://github.com/CN-zdy/MSU_Net.
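The IoU metric used to report these results is simple to compute; a minimal NumPy sketch (not taken from the MSU-Net repository):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(inter / union) if union else 1.0  # both empty: perfect match

pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
score = iou(pred, target)  # 2 overlapping pixels / 4 pixels in union = 0.5
```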
“…Some recent work applies capsule networks to segmentation tasks by transforming segmentation into a classification problem. A capsule network model named SegCaps [22] was proposed by LaLonde for binary segmentation. Kromm and Rohr proposed an inception-based capsule network for the segmentation of vessel images [23].…”
Deep neural networks (DNNs) have been extensively studied in medical image segmentation. However, existing DNNs often need to train shape models for each object to be segmented, which may yield results that violate cardiac anatomical structure when segmenting cardiac magnetic resonance imaging (MRI). In this paper, we propose a capsule-based neural network, named Seg-CapNet, to model multiple regions simultaneously within a single training process. The Seg-CapNet model consists of an encoder and a decoder. The encoder transforms the input image into feature vectors that represent the objects to be segmented, using convolutional layers, capsule layers, and fully connected layers. The decoder then transforms the feature vectors into segmentation masks by up-sampling. Feature maps of each down-sampling layer in the encoder are connected to the corresponding up-sampling layers, which facilitates the backpropagation of the model. The output vectors of Seg-CapNet contain low-level image features such as grayscale and texture, as well as semantic features including the position and size of the objects, which is beneficial for improving segmentation accuracy. The proposed model is validated on the open dataset of the Automated Cardiac Diagnosis Challenge 2017 (ACDC 2017) and the Sunnybrook Cardiac MRI segmentation challenge. Experimental results show that the mean Dice coefficient of Seg-CapNet is increased by 4.7% and the average Hausdorff distance is reduced by 22%. The proposed model also reduces the number of model parameters and improves the training speed while obtaining accurate segmentation of multiple regions.
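The Dice coefficient reported above is the standard overlap metric for segmentation; a minimal NumPy sketch of how it is computed (illustrative only, not the Seg-CapNet evaluation code):

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient: 2*|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return float(2.0 * np.logical_and(pred, target).sum() / denom)

pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
score = dice(pred, target)  # 2*2 / (3 + 3) = 2/3
```

Unlike IoU, Dice weights the intersection twice, so it is always greater than or equal to IoU for the same pair of masks.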
Supplementary Information
The online version contains supplementary material available at 10.1007/s11390-021-0782-5.