Highlights

Radiographic chest images can be used to more accurately detect COVID-19 and assess disease severity. Among imaging modalities, chest X-ray radiography has the advantages of low cost, low radiation dose, wide accessibility, and ease of operation in general or community hospitals. This study aims to develop and test a new deep learning model of chest X-ray images to detect COVID-19 induced pneumonia. For this purpose, we assembled a relatively large chest X-ray image dataset of 8,474 cases, divided into three groups: COVID-19 infected pneumonia, other community-acquired non-COVID-19 pneumonia, and normal (non-pneumonia) cases. After applying a preprocessing algorithm to detect and remove the diaphragm regions depicted on the images, a histogram equalization algorithm and a bilateral filter are applied to the original images to generate two sets of filtered images. The original image plus these two filtered images are then used as the three input channels of the CNN deep learning model, which increases the information available for learning. To take full advantage of pre-optimized CNN models, this study uses a transfer learning method to build a new model to detect and classify COVID-19 infected pneumonia: a VGG16-based CNN model originally trained on ImageNet is fine-tuned using the chest X-ray images. To reduce bias in training and testing the CNN model, the dataset is randomly divided into three subsets, namely training, validation, and testing, while preserving the class frequencies across the three groups of COVID-19 infected pneumonia, other community-acquired non-COVID-19 pneumonia, and normal (non-pneumonia) cases.
Testing on a subset of 2,544 cases, the CNN model yields 94.5% accuracy in classifying the three classes of cases and 98.1% accuracy in detecting COVID-19 infected pneumonia cases, both significantly higher than those of a model trained directly on the original images without the two preprocessing steps (diaphragm removal and generation of the two filtered images).
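The three-channel input described above (original, histogram-equalized, and bilaterally filtered images) can be sketched with NumPy alone. This is a minimal illustration, not the authors' implementation: the filter radius and sigma values are assumed, and the diaphragm-removal step is omitted.

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Map intensities so the cumulative distribution becomes roughly uniform.
    cdf_norm = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf_norm[img].astype(np.uint8)

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=30.0):
    """Naive (loop-based) bilateral filter: edge-preserving smoothing."""
    img_f = img.astype(np.float64)
    pad = np.pad(img_f, radius, mode="edge")
    out = np.zeros_like(img_f)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    h, w = img_f.shape
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weights: nearby intensities contribute more.
            rangew = np.exp(-(patch - img_f[i, j]) ** 2 / (2 * sigma_r ** 2))
            wgt = spatial * rangew
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out.astype(np.uint8)

def three_channel_input(img):
    """Stack original, equalized, and filtered images into 3 CNN input channels."""
    return np.dstack([img, hist_equalize(img), bilateral_filter(img)])
```

In practice the stacked array would be resized to the CNN's expected input shape (e.g. 224 x 224 x 3 for VGG16) before training.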
In order to automatically identify a set of effective mammographic image features and build an optimal breast cancer risk stratification model, this study investigates the advantages of applying a machine learning approach embedded with a locality preserving projection (LPP) based feature combination and regeneration algorithm to predict short-term breast cancer risk. A dataset of negative mammograms acquired from 500 women was assembled and divided into two age-matched classes: 250 high-risk cases, in which cancer was detected in the subsequent mammography screening, and 250 low-risk cases, which remained negative. First, a computer-aided image processing scheme was applied to segment the fibro-glandular tissue depicted on the mammograms and compute an initial set of 44 features related to the bilateral asymmetry of mammographic tissue density distribution between the left and right breasts. Next, a multi-feature fusion based machine learning classifier was built to predict the risk of cancer detection in the next mammography screening. A leave-one-case-out (LOCO) cross-validation method was applied to train and test the classifier embedded with the LPP algorithm, which generated a new operational vector of 4 features using a maximal variance approach in each LOCO iteration. Results showed a 9.7% increase in risk prediction accuracy when using this LPP-embedded machine learning approach. Adjusted odds ratios also showed an increasing trend, rising from 1.0 to 11.2. This study demonstrated that applying the LPP algorithm effectively reduces feature dimensionality and yields higher, potentially more robust, performance in predicting short-term breast cancer risk.
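The LPP step inside the LOCO loop can be sketched as follows with NumPy only. This is an illustrative assumption of the pipeline, not the authors' exact algorithm: the neighborhood size, heat-kernel width, the pseudo-inverse solution of the generalized eigenproblem, and the nearest-centroid classifier are all stand-ins.

```python
import numpy as np

def lpp_projection(X, n_components=4, k=5):
    """Learn a locality preserving projection from training data X of shape (n, d)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    t = d2[d2 > 0].mean()                                # heat-kernel width (assumed)
    W = np.zeros((n, n))
    for i in range(n):                                   # k-nearest-neighbor graph
        idx = np.argsort(d2[i])[1:k + 1]
        W[i, idx] = np.exp(-d2[i, idx] / t)
    W = np.maximum(W, W.T)                               # symmetrize the graph
    D = np.diag(W.sum(1))
    L = D - W                                            # graph Laplacian
    A, B = X.T @ L @ X, X.T @ D @ X
    # Generalized eigenproblem A a = lambda B a, solved via pseudo-inverse.
    vals, vecs = np.linalg.eig(np.linalg.pinv(B) @ A)
    order = np.argsort(vals.real)                        # smallest eigenvalues first
    return vecs[:, order[:n_components]].real            # projection matrix (d, 4)

def loco_predict(X, y):
    """Leave-one-case-out: fit LPP on n-1 cases, classify the held-out case
    with a nearest-centroid rule in the projected feature space."""
    preds = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        P = lpp_projection(X[mask])
        Z, z = X[mask] @ P, X[i] @ P
        cents = {c: Z[y[mask] == c].mean(0) for c in np.unique(y[mask])}
        preds.append(min(cents, key=lambda c: np.linalg.norm(z - cents[c])))
    return np.array(preds)
```

Re-learning the projection inside every LOCO iteration, as above, keeps the held-out case out of the dimensionality-reduction step and avoids an optimistic bias.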
Contrast-enhanced digital mammography (CEDM) is a promising imaging modality in breast cancer diagnosis. This study investigates how to optimally develop a computer-aided diagnosis (CAD) scheme of CEDM images to classify breast masses. A CEDM dataset of 111 patients was assembled, including 33 benign and 78 malignant cases. Each CEDM exam includes two types of images, namely low-energy (LE) and dual-energy subtracted (DES) images. A CAD scheme was applied to segment the mass regions depicted on the LE and DES images separately. The optimal segmentation results generated from the DES images were also mapped to the LE images, and vice versa. After computing image features, multilayer perceptron based machine learning classifiers, integrated with a correlation-based feature subset evaluator and a leave-one-case-out cross-validation method, were built to classify the mass regions. When applying the CAD scheme to DES and LE images with their original segmentations, the areas under the ROC curve (AUC) were 0.759 ± 0.053 and 0.753 ± 0.047, respectively. After mapping the mass regions optimally segmented on the DES images to the LE images, the AUC significantly increased to 0.848 ± 0.038 (p < 0.01). The study demonstrated that DES images eliminate the overlapping effect of dense breast tissue, which helps improve mass segmentation accuracy, and that optimally mapping the mass regions segmented from DES images to LE images enables the CAD scheme to yield significantly improved performance.
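The mapping step can be illustrated as applying the binary mask segmented on the DES image to the co-registered LE image before computing region features. The tiny feature set below is an assumed example for illustration, not the CAD scheme's actual feature list.

```python
import numpy as np

def map_des_mask_to_le(mask_des, img_le):
    """Apply a mass mask segmented on a DES image to the co-registered
    LE image, then compute simple region features on the LE intensities."""
    assert mask_des.shape == img_le.shape, "images must be co-registered"
    region = np.where(mask_des, img_le, 0)        # masked LE mass region
    vals = img_le[mask_des]                       # LE intensities inside the mask
    features = {
        "area": int(mask_des.sum()),              # mask size in pixels
        "mean_intensity": float(vals.mean()),     # first-order LE statistics
        "std_intensity": float(vals.std()),
    }
    return region, features
```

Because the DES subtraction suppresses overlapping dense tissue, a mask drawn on the DES image tends to follow the true mass boundary more closely, which is what makes the mapped features more discriminative on the LE image.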
This study aimed to investigate the feasibility of integrating image features computed from both the spatial and frequency domains to better describe tumor heterogeneity for precise prediction of tumor response to postsurgical chemotherapy in patients with advanced-stage ovarian cancer. A computer-aided scheme was applied to first compute 133 features from five categories, namely shape and density, fast Fourier transform (FFT), discrete cosine transform (DCT), wavelet, and gray level difference method. An optimal feature cluster was then determined by the scheme using a particle swarm optimization algorithm, aiming to achieve a discrimination power unattainable with single features. The scheme was tested on a balanced dataset (responders and non-responders defined using 6-month progression-free survival) retrospectively collected from 120 ovarian cancer patients. Among the five categories, the individual DCT features achieved higher prediction accuracy than the features in the other groups. By comparison, a quantitative image marker generated from the optimal feature cluster yielded an area under the ROC curve (AUC) of 0.86, while the top-performing single feature had an AUC of only 0.74. Furthermore, the features computed from the frequency domain were observed to be as important as those computed from the spatial domain. In conclusion, this study demonstrates the potential of the proposed quantitative image marker, fused from features computed in both the spatial and frequency domains, for reliable prediction of tumor response to postsurgical chemotherapy.
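The particle swarm optimization step for selecting the optimal feature cluster can be sketched as a binary PSO over feature-subset masks. The inertia and acceleration constants, and the simple Fisher-score fitness used here in place of the study's classifier-based objective, are illustrative assumptions.

```python
import numpy as np

def fisher_fitness(Xs, y):
    """Mean Fisher score of the selected features (a simple stand-in fitness)."""
    m0, m1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    v0, v1 = Xs[y == 0].var(0), Xs[y == 1].var(0)
    return float(((m0 - m1) ** 2 / (v0 + v1 + 1e-12)).mean())

def binary_pso(X, y, fitness, n_particles=20, n_iter=30, seed=0):
    """Binary PSO: each particle is a 0/1 mask over the feature columns."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    pos = (rng.random((n_particles, d)) < 0.5).astype(float)
    vel = rng.normal(0.0, 0.1, (n_particles, d))

    def score(p):
        m = p.astype(bool)
        return fitness(X[:, m], y) if m.any() else -np.inf

    pbest, pbest_fit = pos.copy(), np.array([score(p) for p in pos])
    g = int(pbest_fit.argmax())
    gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, d))
        # Standard velocity update pulled toward personal and global bests.
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        # Sigmoid of velocity gives the probability that each bit is set.
        pos = (rng.random((n_particles, d)) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
        for i in range(n_particles):
            f = score(pos[i])
            if f > pbest_fit[i]:
                pbest_fit[i], pbest[i] = f, pos[i].copy()
        g = int(pbest_fit.argmax())
        if pbest_fit[g] > gbest_fit:
            gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
    return gbest.astype(bool), gbest_fit
```

In the study the fitness would instead be the cross-validated performance of the classifier built on the candidate feature subset; the swarm mechanics are the same.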