Highlights

Radiographic chest images can be used to detect COVID-19 and assess disease severity more accurately. Among imaging modalities, chest X-ray radiography offers low cost, low radiation dose, wide accessibility, and ease of operation in general or community hospitals. This study aims to develop and test a new deep learning model of chest X-ray images to detect COVID-19 induced pneumonia. For this purpose, we assembled a relatively large chest X-ray image dataset of 8,474 cases, divided into three groups: COVID-19 infected pneumonia, other community-acquired non-COVID-19 pneumonia, and normal (non-pneumonia) cases. After applying a preprocessing algorithm to detect and remove the diaphragm regions depicted on the images, a histogram equalization algorithm and a bilateral filter are applied to the original images to generate two sets of filtered images. The original image plus these two filtered images then serve as the three input channels of the CNN deep learning model, which increases the information available for learning. To take full advantage of pre-optimized CNN models, this study uses a transfer learning method to build a new model to detect and classify COVID-19 infected pneumonia: a VGG16-based CNN model originally trained on ImageNet is fine-tuned using the chest X-ray images in this study. To reduce bias in training and testing the CNN model, the dataset is randomly divided into three subsets, namely training, validation, and testing, preserving the same frequency of cases per class across the COVID-19 infected pneumonia, non-COVID-19 pneumonia, and normal (non-pneumonia) groups.
Tested on a subset of 2,544 cases, the CNN model yields 94.5% accuracy in classifying the three groups of cases and 98.1% accuracy in detecting COVID-19 infected pneumonia cases, both significantly higher than those of a model trained directly on the original images without the two preprocessing steps (diaphragm removal and generation of the two filtered images).
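The three-channel input construction described above (original image, histogram-equalized image, bilateral-filtered image) can be sketched in plain NumPy. This is an illustrative re-implementation rather than the authors' code: the filter parameters (`radius`, `sigma_s`, `sigma_r`) are placeholder values, and the diaphragm-removal step is omitted.

```python
import numpy as np

def hist_equalize(img):
    # img: 2-D uint8 array; classic histogram equalization via the CDF
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255.0
    return cdf[img].astype(np.uint8)

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    # brute-force bilateral filter: edge-preserving smoothing that weights
    # neighbors by both spatial distance and intensity difference
    imgf = img.astype(np.float64)
    pad = np.pad(imgf, radius, mode="reflect")
    out = np.zeros_like(imgf)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2.0 * sigma_s ** 2))
    h, w = imgf.shape
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-((patch - imgf[i, j]) ** 2) / (2.0 * sigma_r ** 2))
            weights = spatial * range_w
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out.astype(np.uint8)

def three_channel_input(img):
    # stack original, equalized, and filtered images as the CNN's 3 channels
    return np.stack([img, hist_equalize(img), bilateral_filter(img)], axis=-1)
```

For a chest radiograph resized to the CNN's input resolution, `three_channel_input` yields an (H, W, 3) array that can be fed to a VGG16-style model in place of an RGB image.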
challenge. This paper outlines the challenge, its organization, the dataset used, the evaluation methods, and the results of the top-performing solutions. We observe that the top-performing approaches combine clinical information, data augmentation, and model ensembling. These findings have the potential to enable new developments in retinal image analysis and image-based DR screening in particular.
To automatically identify a set of effective mammographic image features and build an optimal breast cancer risk stratification model, this study investigates the advantages of a machine learning approach embedded with a locality preserving projection (LPP) based feature combination and regeneration algorithm to predict short-term breast cancer risk. A dataset of negative mammograms acquired from 500 women was assembled and divided into two age-matched classes: 250 high-risk cases, in which cancer was detected at the next subsequent mammography screening, and 250 low-risk cases, which remained negative. First, a computer-aided image processing scheme was applied to segment fibro-glandular tissue depicted on the mammograms and compute 44 initial features related to the bilateral asymmetry of mammographic tissue density distribution between the left and right breasts. Next, a multi-feature fusion based machine learning classifier was built to predict the risk of cancer detection at the next mammography screening. A leave-one-case-out (LOCO) cross-validation method was applied to train and test the classifier embedded with the LPP algorithm, which generated a new operational vector of 4 features using a maximal variance approach in each LOCO iteration. Results showed a 9.7% increase in risk prediction accuracy when using this LPP-embedded machine learning approach. Adjusted odds ratios also showed an increasing trend, rising from 1.0 to 11.2. This study demonstrated that applying the LPP algorithm effectively reduced feature dimensionality and yielded higher and potentially more robust performance in predicting short-term breast cancer risk.
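The LPP projection step can be sketched as follows. This is a minimal version of the standard locality preserving projections formulation (a k-nearest-neighbor heat-kernel graph followed by a generalized eigenproblem); the neighborhood size `k`, the adaptive kernel width, and the small regularization term are illustrative assumptions, not the study's settings.

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=4, k=5):
    # X: (n_samples, n_features); returns a projection matrix P of shape
    # (n_features, n_components) so that X @ P is the reduced feature vector
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]             # k nearest (skip self)
    t = d2[np.arange(n)[:, None], idx].mean() + 1e-12    # heat-kernel width
    W = np.zeros((n, n))
    for i in range(n):
        W[i, idx[i]] = np.exp(-d2[i, idx[i]] / t)
    W = np.maximum(W, W.T)                               # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                            # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])          # regularize for PD
    # smallest generalized eigenvectors define the locality-preserving axes
    _, vecs = eigh(A, B)
    return vecs[:, :n_components]
```

In a LOCO setting, `lpp` would be refit on each training fold of 499 cases, and the held-out case projected with the resulting matrix before classification.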
Contrast-enhanced digital mammography (CEDM) is a promising imaging modality in breast cancer diagnosis. This study aims to investigate how to optimally develop a computer-aided diagnosis (CAD) scheme of CEDM images to classify breast masses. A CEDM dataset of 111 patients was assembled, including 33 benign and 78 malignant cases. Each CEDM exam includes two types of images, namely low-energy (LE) and dual-energy subtracted (DES) images. A CAD scheme was applied to segment mass regions depicted on the LE and DES images separately. Optimal segmentation results generated from DES images were also mapped to LE images, and vice versa. After computing image features, multilayer perceptron based machine learning classifiers, integrated with a correlation-based feature subset evaluator and a leave-one-case-out cross-validation method, were built to classify mass regions. When applying the CAD scheme to DES and LE images with original segmentation, the areas under the ROC curve (AUC) were 0.759 ± 0.053 and 0.753 ± 0.047, respectively. After mapping the mass regions optimally segmented on DES images to the LE images, the AUC significantly increased to 0.848 ± 0.038 (p < 0.01). The study demonstrated that DES images eliminate the overlapping effect of dense breast tissue, which improves mass segmentation accuracy, and that optimally mapping mass regions segmented from DES images to LE images enabled the CAD scheme to yield significantly improved performance.
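Because the LE and DES images of a CEDM exam are acquired in the same breast compression, they share pixel geometry, so mapping a DES segmentation to the LE image amounts to reusing the mask. The toy sketch below illustrates that idea; the mean/standard-deviation features are placeholders standing in for the study's fuller feature set.

```python
import numpy as np

def map_mask_and_features(des_img, le_img, des_mask):
    # des_mask: boolean mask segmented on the DES image; since LE and DES
    # are co-registered by acquisition, the same mask indexes both images
    assert des_img.shape == le_img.shape == des_mask.shape
    feats = {}
    for name, img in (("DES", des_img), ("LE", le_img)):
        region = img[des_mask]  # pixels inside the mapped mass region
        feats[f"{name}_mean"] = float(region.mean())
        feats[f"{name}_std"] = float(region.std())
    return feats
```

Features computed this way from the DES-derived mask applied to the LE image correspond to the mapped-segmentation configuration that yielded the highest AUC above.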
Objective This study aimed to investigate the role of quantitative image features computed from CT images in early prediction of tumor response to chemotherapy in clinical trials treating ovarian cancer patients. Materials and Methods A dataset involving 91 patients was retrospectively assembled. Each patient had two sets of pre- and post-therapy CT images. A computer-aided detection scheme was applied to segment metastatic tumors previously tracked by radiologists on the CT images and to compute image features. Two initial feature pools were built, using image features computed from pre-therapy CT images only and feature differences computed between pre- and post-therapy images. A feature selection method was applied to select optimal features, and an equal-weighted fusion method was used to generate a new quantitative imaging marker from each pool to predict 6-month progression-free survival. Prediction accuracy was also compared between the quantitative imaging markers and RECIST criteria. Results The highest areas under the ROC curve (AUC) were 0.684±0.056 and 0.771±0.050 when using a single image feature computed from pre-therapy CT images and a single feature difference computed between pre- and post-therapy CT images, respectively. Using the two corresponding fusion-based imaging markers, the AUCs significantly increased to 0.810±0.045 and 0.829±0.043 (p < 0.05), respectively. Overall prediction accuracy was 71.4% and 80.2% for the two imaging markers and 74.7% for RECIST. Conclusion This study demonstrated the feasibility of predicting patients' response to chemotherapy using quantitative imaging markers computed from pre-therapy CT images. However, using feature differences computed between pre- and post-therapy CT images yielded higher prediction accuracy.
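The equal-weighted fusion step can be sketched as min-max normalizing the selected features and averaging them into a single marker, with marker performance scored by a rank-based (Mann-Whitney) AUC estimate. Both functions are illustrative assumptions rather than the study's exact implementation, and the rank-based AUC below does not handle tied marker values.

```python
import numpy as np

def fusion_marker(features):
    # features: (n_cases, n_selected) matrix of selected image features;
    # min-max normalize each feature, then average with equal weights
    lo, hi = features.min(axis=0), features.max(axis=0)
    norm = (features - lo) / np.where(hi > lo, hi - lo, 1.0)
    return norm.mean(axis=1)

def auc(marker, labels):
    # Mann-Whitney U statistic divided by n_pos * n_neg (ties ignored)
    order = np.argsort(marker)
    ranks = np.empty(len(marker), dtype=float)
    ranks[order] = np.arange(1, len(marker) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

In this sketch, one marker would be fused from the pre-therapy feature pool and another from the pre/post difference pool, and each scored against the 6-month progression-free survival label.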
Objective: Radiomics and deep transfer learning are two popular technologies used to develop computer-aided detection and diagnosis (CAD) schemes of medical images. This study aims to investigate and compare the advantages and potential limitations of applying these two technologies to developing CAD schemes. Methods: A relatively large and diverse retrospective dataset of 3000 digital mammograms was assembled, in which 1496 images depicted malignant lesions and 1504 images depicted benign lesions. Two CAD schemes were developed to classify breast lesions. The first scheme comprised four steps: applying an adaptive multi-layer topographic region growing algorithm to segment lesions, computing initial radiomics features, applying a principal component algorithm to generate an optimal feature vector, and building a support vector machine classifier. The second CAD scheme was built on a pre-trained residual network architecture (ResNet50) as a transfer learning model to classify breast lesions. Both CAD schemes were trained and tested using a 10-fold cross-validation method. Several score fusion methods were also investigated. CAD performance was evaluated and compared by the areas under the ROC curve (AUC). Results: The ResNet50-based CAD scheme yielded AUC = 0.85 ± 0.02, significantly higher than the radiomics feature-based CAD scheme with AUC = 0.77 ± 0.02 (p < 0.01). Additionally, fusing the classification scores generated by the two CAD schemes did not further improve classification performance. Conclusion: This study demonstrates that deep transfer learning is more efficient for developing CAD schemes and yields higher lesion classification performance than the radiomics-based approach.
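The last two steps of the first (radiomics) scheme, principal component feature reduction followed by a support vector machine, can be sketched as a scikit-learn pipeline. The component count, SVM kernel, and standardization step are placeholder choices for illustration, not the study's settings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def radiomics_classifier(n_components=10):
    # standardize radiomics features, reduce them with PCA to an optimal
    # feature vector, then classify with an RBF-kernel SVM (steps 3-4 above)
    return make_pipeline(
        StandardScaler(),
        PCA(n_components=n_components),
        SVC(kernel="rbf"),
    )
```

With a feature matrix `X` (one row of radiomics features per lesion) and labels `y`, `cross_val_score(radiomics_classifier(), X, y, cv=10)` mirrors the 10-fold cross-validation protocol used to evaluate both schemes.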