The coronavirus disease (COVID-19) is rapidly spreading all over the world, and had infected more than 1,436,000 people in more than 200 countries and territories as of April 9, 2020. Detecting COVID-19 at an early stage is essential to deliver proper healthcare to patients and to protect the uninfected population. To this end, we develop a dual-sampling attention network to automatically distinguish COVID-19 from community-acquired pneumonia (CAP) in chest computed tomography (CT). In particular, we propose a novel online attention module with a 3D convolutional neural network (CNN) to focus on the infection regions in the lungs when making diagnostic decisions. Note that the sizes of the infection regions are imbalanced between COVID-19 and CAP, partially due to the fast progression of COVID-19 after symptom onset; we therefore develop a dual-sampling strategy to mitigate this imbalanced learning. Our method is evaluated on, to the best of our knowledge, the largest multi-center CT dataset for COVID-19, collected from 8 hospitals. In the training-validation stage, we collect 2186 CT scans from 1588 patients for a 5-fold cross-validation. In the testing stage, we employ another independent large-scale testing dataset of 2796 CT scans from 2057 patients. Results show that our algorithm identifies COVID-19 images with an area under the receiver operating characteristic curve (AUC) of 0.944.
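The online attention idea above, weighting regions of a feature map before pooling so that infection regions dominate the pooled descriptor, can be illustrated with a minimal sketch. Everything here (the `attention_pool` helper, and treating the 3D feature volume as a flat list of per-region feature vectors) is a hypothetical simplification, not the paper's actual module:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(features, scores):
    """Pool per-region feature vectors into one vector, weighting each
    region by its softmax attention weight; regions with high scores
    (e.g. likely infection) dominate the pooled representation."""
    weights = softmax(scores)
    dim = len(features[0])
    pooled = [0.0] * dim
    for w, feat in zip(weights, features):
        for d in range(dim):
            pooled[d] += w * feat[d]
    return pooled, weights
```

A real implementation would learn the scores with a small convolutional branch and apply the weights over a 3D feature volume; this sketch only shows the weighting arithmetic.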
Recently, the outbreak of Coronavirus Disease 2019 (COVID-19) has spread rapidly across the world. Due to the large number of infected patients and the heavy workload on doctors, computer-aided diagnosis with machine learning algorithms is urgently needed and could largely reduce the effort of clinicians and accelerate the diagnostic process. Chest computed tomography (CT) has been recognized as an informative tool for diagnosing the disease. In this study, we propose to diagnose COVID-19 with a series of features extracted from CT images. To fully exploit multiple features describing CT images from different views, we learn a unified latent representation that completely encodes information from different aspects of the features and is endowed with a promising class structure for separability. Specifically, completeness is guaranteed with a group of backward neural networks (one for each type of features), while by using class labels the representation is enforced to be compact within each class (COVID-19 or community-acquired pneumonia (CAP)) and a large margin is guaranteed between the two types of pneumonia. In this way, our model avoids the overfitting that arises when directly projecting high-dimensional features onto classes. Extensive experimental results show that the proposed method outperforms all comparison methods, and stable performance is observed when varying the amount of training data.
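The class-structure objective (compact within each class, large margin between classes) can be sketched as a toy loss on latent points. The name `separability_loss`, the squared-distance helper, and the hinge form of the margin term are illustrative assumptions, not the paper's exact formulation:

```python
def sq_dist(a, b):
    """Squared Euclidean distance between two latent vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def separability_loss(points, labels, centers, margin=1.0):
    """Toy class-structure objective: within-class compactness (each
    latent point's distance to its class center) plus a hinge penalty
    whenever two class centers sit closer than `margin`."""
    compact = sum(sq_dist(p, y_center)
                  for p, y in zip(points, labels)
                  for y_center in [centers[y]])
    names = sorted(centers)
    hinge = 0.0
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            gap = sq_dist(centers[names[i]], centers[names[j]])
            hinge += max(0.0, margin - gap)
    return compact + hinge
```

Well-separated, tight classes score a lower loss than overlapping ones, which is the separability the abstract describes.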
The worldwide spread of coronavirus disease (COVID-19) has become a threat to global public health. It is of great importance to rapidly and accurately screen and distinguish patients with COVID-19 from those with community-acquired pneumonia (CAP). In this study, a total of 1,658 patients with COVID-19 and 1,027 patients with CAP who underwent thin-section CT were enrolled. All images were preprocessed to obtain segmentations of the infections and lung fields. A set of handcrafted location-specific features was proposed to best capture the COVID-19 distribution pattern, in comparison to the conventional CT severity score (CT-SS) and radiomics features. An infection size-aware random forest method (iSARF) was proposed for discriminating COVID-19 from CAP. Experimental results show that the proposed method yielded its best performance when using the handcrafted features, outperforming state-of-the-art classifiers with a sensitivity of 90.7%, a specificity of 87.2%, and an accuracy of 89.4%. Additional tests on 734 subjects with thick-slice images demonstrate good generalizability. It is anticipated that the proposed framework could assist clinical decision making.
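The infection size-aware idea, routing each case to a classifier specialized for its infection size range, can be sketched as follows. The bin boundaries and the stand-in classifiers are hypothetical; the actual iSARF trains a random forest per size group:

```python
def make_size_aware_classifier(size_bins, classifiers):
    """Build a size-aware classifier: pick the sub-classifier whose
    infection-size bin (lo <= size < hi) contains the case, then let
    that specialist classify the feature vector."""
    def predict(infection_size, features):
        for (lo, hi), clf in zip(size_bins, classifiers):
            if lo <= infection_size < hi:
                return clf(features)
        raise ValueError("infection size outside all bins")
    return predict
```

Stratifying by lesion size lets each sub-model fit a narrower, more homogeneous distribution than a single classifier over all cases.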
Chest computed tomography (CT) has become an effective tool to assist the diagnosis of coronavirus disease 2019 (COVID-19). Due to the worldwide outbreak of COVID-19, using computer-aided diagnosis techniques for COVID-19 classification based on CT images could largely alleviate the burden on clinicians. In this paper, we propose an Adaptive Feature Selection guided Deep Forest (AFS-DF) for COVID-19 classification based on chest CT images. Specifically, we first extract location-specific features from CT images. Then, to capture the high-level representation of these features with relatively small-scale data, we leverage a deep forest model. Moreover, we propose a feature selection method based on the trained deep forest model to reduce the redundancy of features, where the feature selection can be adaptively incorporated into the COVID-19 classification model. We evaluated the proposed AFS-DF on a COVID-19 dataset with 1495 COVID-19 patients and 1027 patients with community-acquired pneumonia (CAP). The accuracy (ACC), sensitivity (SEN), specificity (SPE), AUC, precision, and F1-score achieved by our method are 91.79%, 93.05%, 89.95%, 96.35%, 93.10%, and 93.07%, respectively. Experimental results on this dataset suggest that the proposed AFS-DF achieves strong classification performance.
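The feature selection step, dropping redundant features based on what the trained forest found useful, can be sketched with a simple importance-ranking pass. The `keep_ratio` parameter and the one-shot pruning are simplifying assumptions; the paper couples selection adaptively with retraining the deep forest:

```python
def select_features(importances, keep_ratio=0.5):
    """Keep the indices of the top fraction of features ranked by
    importance (e.g. how strongly the trained forest relied on them),
    discarding the redundant remainder."""
    ranked = sorted(range(len(importances)),
                    key=lambda i: importances[i], reverse=True)
    k = max(1, int(len(ranked) * keep_ratio))
    return sorted(ranked[:k])
```

The returned index list would then be used to slice the feature vectors before the next training round.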
Objectives To develop radiomics-based nomograms for preoperative prediction of microvascular invasion (MVI) and recurrence-free survival (RFS) in patients with solitary hepatocellular carcinoma (HCC) ≤ 5 cm. Methods Between March 2012 and September 2019, 356 patients with pathologically confirmed solitary HCC ≤ 5 cm who underwent preoperative gadoxetate disodium–enhanced MRI were retrospectively enrolled. MVI was graded as M0, M1, or M2 according to the number and distribution of invaded vessels. Radiomics features were extracted from DWI, arterial, portal venous, and hepatobiliary phase images in regions of the entire tumor, the peritumoral area ≤ 10 mm, and randomly selected liver tissue. Multivariate analysis identified the independent predictors of MVI and RFS, with nomograms visualizing the final predictive models. Results Elevated alpha-fetoprotein, total bilirubin and radiomics values, peritumoral enhancement, and incomplete or absent capsule enhancement were independent risk factors for MVI. The AUCs of the MVI nomogram reached 0.920 (95% CI: 0.861–0.979) using random forest and 0.879 (95% CI: 0.820–0.938) using logistic regression analysis in the validation cohort (n = 106). With a 5-year RFS rate of 68.4%, the median RFS of MVI-positive (M2 and M1) and MVI-negative (M0) patients was 30.5 (11.9 and 40.9, respectively) and > 96.9 months (p < 0.001). Age, histologic MVI, alkaline phosphatase, and alanine aminotransferase independently predicted recurrence, yielding an AUC of 0.654 (95% CI: 0.538–0.769, n = 99) in the RFS validation cohort. Instead of histologic MVI, the MVI preoperatively predicted by the MVI nomogram using random forest achieved comparable accuracy in MVI stratification and RFS prediction. Conclusions A preoperative radiomics-based nomogram using random forest is a potential biomarker for MVI and RFS prediction in solitary HCC ≤ 5 cm.
Key Points • The radiomics score was the predominant independent predictor of MVI, which was the primary independent risk factor for postoperative recurrence. • The radiomics-based nomogram using either random forest or logistic regression analysis obtained the best preoperative prediction of MVI in HCC patients to date. • As a substitute for invasive histologic MVI, the MVI preoperatively predicted by the MVI nomogram using random forest (MVI-RF) achieved comparable accuracy in MVI stratification and outcome prediction, reinforcing the radiologic understanding of HCC angioinvasion and progression.
The automatic detection of lung nodules attached to other pulmonary structures is a useful yet challenging task in lung CAD systems. In this paper, we propose a stratified statistical learning approach to recognize whether a candidate nodule detected in CT images connects to any of three major lung anatomies, namely vessel, fissure, and lung wall, or is solitary within the background parenchyma. First, we develop a fully automated voxel-by-voxel labeling/segmentation method for nodule, vessel, fissure, lung wall, and parenchyma in a 3D lung image, via a unified feature set and classifier under a conditional random field. Second, the Class Probability Response Maps (PRMs) generated by the voxel-level classifiers are used to form pairwise Probability Co-occurrence Maps (PCMs), which encode the spatial contextual correlations of the candidate nodule in relation to other anatomical landmarks. Based on the PCMs, higher-level classifiers are trained to recognize whether the nodule touches other pulmonary structures, as a multi-label problem. We also present a new iterative fissure structure enhancement filter with superior performance. For experimental validation, we create an annotated database of 784 subvolumes, with nodules of various sizes, shapes, densities, and contextual anatomies, from 239 patients. High multi-class voxel labeling accuracy of 89.3%–91.2% is achieved. The area under the ROC curve (AUC) for vessel, fissure, and lung wall connectivity classification reaches 0.8676, 0.8692, and 0.9275, respectively.
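One plausible way a pairwise co-occurrence structure could aggregate two per-voxel class probability maps is as a joint histogram; the binning scheme and the `cooccurrence_map` helper below are illustrative assumptions, not the paper's exact PCM construction:

```python
def cooccurrence_map(probs_a, probs_b, bins=4):
    """Joint histogram of two per-voxel class probability maps
    (e.g. nodule vs. vessel): cell (i, j) counts voxels whose class-A
    probability falls in bin i and class-B probability in bin j."""
    pcm = [[0] * bins for _ in range(bins)]
    for pa, pb in zip(probs_a, probs_b):
        i = min(int(pa * bins), bins - 1)
        j = min(int(pb * bins), bins - 1)
        pcm[i][j] += 1
    return pcm
```

Voxels that score high for both classes land in the high-high cells, so a connectivity classifier reading such a map sees evidence that the two structures co-occur spatially.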
Patient-specific orthopedic knee surgery planning requires precisely segmenting multiple knee bones, namely the femur, tibia, fibula, and patella, from 3D CT images of knee joints with severe pathologies. In this work, we propose a fully automated, highly precise, and computationally efficient segmentation approach for multiple bones. First, each bone is initially segmented using a model-based marginal space learning framework for pose estimation, followed by non-rigid boundary deformation. To recover shape details, we then refine the bone segmentation using graph cut, incorporating shape priors derived from the initial segmentation. Finally, we remove overlap between neighboring bones using multi-layer graph partition. In experiments, we achieve simultaneous segmentation of the femur, tibia, patella, and fibula with an overall accuracy of less than 1 mm surface-to-surface error, in less than 90 s, on hundreds of 3D CT scans with pathological knee joints.
The problem of learning a proper distance or similarity metric arises in many applications, such as content-based image retrieval. In this work, we propose a boosting algorithm, MetricBoost, to learn a distance metric that preserves proximity relationships among object triplets: object i is more similar to object j than to object k. MetricBoost constructs a positive semi-definite (PSD) matrix that parameterizes the distance metric by combining rank-one PSD matrices. Different options for the weak models and combination coefficients are derived. Unlike existing proximity-preserving metric learning methods, which are generally not scalable, MetricBoost employs a bipartite strategy to dramatically reduce computation cost by decomposing proximity relationships over triplets into pair-wise constraints. MetricBoost outperforms the state-of-the-art on two real-world medical problems, (1) identifying and quantifying diffuse lung diseases and (2) matching colorectal polyps between different views, as well as on other benchmark datasets.
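The rank-one PSD parameterization can be sketched directly: with nonnegative coefficients and directions u_t, the squared distance sum_t alpha_t * (u_t . (x - y))^2 is automatically a valid PSD metric, since every term is nonnegative. The `rank_one_metric` helper is an illustrative sketch, not MetricBoost's boosting procedure:

```python
def rank_one_metric(alphas, directions):
    """Squared distance under M = sum_t alpha_t * u_t u_t^T, computed
    without materializing M: d2(x, y) = sum_t alpha_t * (u_t.(x-y))**2.
    Nonnegative alphas keep M positive semi-definite by construction."""
    def dist2(x, y):
        diff = [a - b for a, b in zip(x, y)]
        return sum(alpha * sum(u_i * d_i for u_i, d_i in zip(u, diff)) ** 2
                   for alpha, u in zip(alphas, directions))
    return dist2
```

A triplet constraint "i is more similar to j than to k" is then simply checked as dist2(x_i, x_j) < dist2(x_i, x_k) under the learned metric.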