Highlights
Radiographic chest images can be used to detect COVID-19 and assess disease severity more accurately. Among imaging modalities, chest X-ray radiography has the advantages of low cost, low radiation dose, wide accessibility, and ease of operation in general or community hospitals.
This study aims to develop and test a new deep learning model that detects COVID-19 induced pneumonia from chest X-ray images. For this purpose, we assembled a relatively large chest X-ray dataset of 8,474 cases, divided into three groups: COVID-19 pneumonia, community-acquired non-COVID-19 pneumonia, and normal (non-pneumonia) cases.
After applying a preprocessing algorithm to detect and remove the diaphragm regions depicted on the images, a histogram equalization algorithm and a bilateral filter are applied to the original images to generate two sets of filtered images. The original image plus these two filtered images are then used as the three input channels of the CNN deep learning model, which increases the information available to the model.
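The three-channel construction described above can be sketched as follows. The paper's exact filter parameters are not given, so the window radius and the Gaussian widths below are hypothetical, and a small random array stands in for a chest X-ray after diaphragm removal:

```python
import numpy as np

def histogram_equalize(img):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[img]

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=30.0):
    """Naive bilateral filter: spatial and intensity (range) Gaussian weights."""
    img_f = img.astype(np.float64)
    h, w = img_f.shape
    out = np.zeros_like(img_f)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    padded = np.pad(img_f, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-(patch - img_f[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng_w
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return np.clip(np.round(out), 0, 255).astype(np.uint8)

def make_three_channel(img):
    """Stack original, equalized, and bilateral-filtered images as 3 channels."""
    return np.stack([img, histogram_equalize(img), bilateral_filter(img)], axis=-1)

rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)  # stand-in image
tensor = make_three_channel(x)  # shape (16, 16, 3), ready as a CNN input
```

Packing the two filtered views into the unused color channels lets a standard RGB-input CNN consume all three representations at once without any architectural change.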
To take full advantage of pre-optimized CNN models, this study uses transfer learning to build a new model that detects and classifies COVID-19 pneumonia. A VGG16-based CNN model, originally trained on ImageNet, was fine-tuned using the chest X-ray images in this study.
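The transfer-learning idea (keep the ImageNet-pretrained feature extractor frozen and train a new classification head on the target data) can be illustrated in miniature. The frozen base is simulated here by a fixed random ReLU projection, and the data, dimensions, and learning rate are all hypothetical; this is a conceptual sketch, not the paper's VGG16 pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Pretrained" feature extractor: frozen weights, never updated below.
W_base = rng.normal(size=(8, 4))
def extract_features(x):
    return np.maximum(x @ W_base, 0.0)  # frozen ReLU projection

# Toy binary dataset: two shifted Gaussian clouds in 8-D.
x0 = rng.normal(loc=-1.0, size=(50, 8))
x1 = rng.normal(loc=+1.0, size=(50, 8))
X = np.vstack([x0, x1])
y = np.array([0] * 50 + [1] * 50)

# New trainable head: logistic regression on the frozen features.
F = extract_features(X)
w = np.zeros(F.shape[1])
b = 0.0
lr = 0.1
for _ in range(200):  # plain gradient descent on the cross-entropy loss
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    w -= lr * (F.T @ (p - y)) / len(y)
    b -= lr * (p - y).mean()

pred = (1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5).astype(int)
accuracy = (pred == y).mean()
```

Only the head parameters `w` and `b` are updated; in the actual study the analogous step is fine-tuning a VGG16 backbone on chest X-rays rather than training on top of a random projection.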
To reduce bias in training and testing the CNN model, the dataset is randomly divided into three subsets (training, validation, and testing) while preserving the same class frequencies across the three groups: COVID-19 pneumonia, community-acquired non-COVID-19 pneumonia, and normal (non-pneumonia) cases.
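A frequency-preserving (stratified) split like this shuffles each class independently and allocates the same fractions from every class. The 60/20/20 split and the per-class counts below are hypothetical, chosen only for illustration:

```python
import random

def stratified_split(cases, labels, fractions=(0.6, 0.2, 0.2), seed=42):
    """Randomly split cases into train/validation/test subsets while keeping
    the same per-class frequency in each subset."""
    rng = random.Random(seed)
    splits = ([], [], [])
    by_class = {}
    for case, label in zip(cases, labels):
        by_class.setdefault(label, []).append(case)
    for label, members in by_class.items():
        rng.shuffle(members)  # shuffle within each class independently
        n = len(members)
        n_train = int(fractions[0] * n)
        n_val = int(fractions[1] * n)
        splits[0].extend(members[:n_train])
        splits[1].extend(members[n_train:n_train + n_val])
        splits[2].extend(members[n_train + n_val:])
    return splits

# Hypothetical class sizes; cases are represented by their indices.
labels = ["covid"] * 200 + ["other_pneumonia"] * 300 + ["normal"] * 500
cases = list(range(len(labels)))
train, val, test = stratified_split(cases, labels)
```

Because each class is partitioned separately, rare classes cannot end up over- or under-represented in any subset, which is the bias the stratification is meant to avoid.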
On a test subset of 2,544 cases, the CNN model yields 94.5% accuracy in classifying the three groups of cases and 98.1% accuracy in detecting COVID-19 pneumonia cases. Both figures are significantly higher than those of a model trained directly on the original images without the two preprocessing steps (diaphragm removal and generation of the two filtered images).
To automatically identify a set of effective mammographic image features and build an optimal breast cancer risk stratification model, this study investigates the advantages of a machine learning approach embedded with a locality preserving projection (LPP) based feature combination and regeneration algorithm to predict short-term breast cancer risk. A dataset of negative mammograms acquired from 500 women was assembled and divided into two age-matched classes: 250 high-risk cases, in which cancer was detected at the subsequent mammography screening, and 250 low-risk cases, which remained negative. First, a computer-aided image processing scheme was applied to segment the fibro-glandular tissue depicted on the mammograms and compute 44 initial features related to the bilateral asymmetry of mammographic tissue density distribution between the left and right breasts. Next, a multi-feature fusion based machine learning classifier was built to predict the risk of cancer detection at the next mammography screening. A leave-one-case-out (LOCO) cross-validation method was applied to train and test the classifier embedded with the LPP algorithm, which generated a new operational vector of 4 features using a maximal variance approach in each LOCO iteration. Results showed a 9.7% increase in risk prediction accuracy when using this LPP-embedded machine learning approach. An increasing trend of adjusted odds ratios was also detected, with odds ratios rising from 1.0 to 11.2. This study demonstrates that applying the LPP algorithm effectively reduces feature dimensionality and yields higher, potentially more robust, performance in predicting short-term breast cancer risk.
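The LPP reduction from 44 asymmetry features to a 4-feature operational vector can be sketched generically. The abstract does not specify the affinity graph or kernel width, so the fully connected heat-kernel graph, the adaptive kernel width, and the random stand-in data below are all assumptions:

```python
import numpy as np

def lpp_fit(X, n_components=4, sigma=None, reg=1e-6):
    """Minimal Locality Preserving Projection.

    X: (n_samples, n_features). Returns a projection matrix (n_features,
    n_components) that maps each case onto a low-dimensional vector while
    preserving local neighborhood structure of the feature space.
    """
    # Heat-kernel affinity between all pairs of samples (fully connected graph).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    if sigma is None:
        sigma = np.sqrt(0.5 * d2.mean() + 1e-12)  # assumed adaptive width
    W = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(1))
    L = D - W  # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + reg * np.eye(X.shape[1])  # regularized for stability
    # Generalized eigenproblem A a = lambda B a via Cholesky whitening of B.
    C = np.linalg.cholesky(B)
    Ci = np.linalg.inv(C)
    vals, vecs = np.linalg.eigh(Ci @ A @ Ci.T)
    return Ci.T @ vecs[:, :n_components]  # smallest eigenvalues first

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 44))   # stand-in for 44 asymmetry features per case
P = lpp_fit(X, n_components=4)
Z = X @ P                       # 4-feature operational vector per case
```

In a LOCO protocol the projection `P` would be re-fitted on each training fold so the held-out case never influences its own low-dimensional representation.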
BACKGROUND: Endovascular mechanical thrombectomy (EMT) is an effective method to treat acute ischemic stroke (AIS) patients with large vessel occlusion (LVO). However, stratifying AIS patients who can and cannot benefit from EMT remains a clinical challenge. OBJECTIVE: To develop a new quantitative image marker computed from pre-intervention computed tomography perfusion (CTP) images and evaluate its feasibility for predicting clinical outcome among AIS patients undergoing EMT after diagnosis of LVO. METHODS: A retrospective dataset of 31 AIS patients with pre-intervention CTP images was assembled. A computer-aided detection (CAD) scheme was developed to pre-process the CTP image series of each study case, perform image segmentation, quantify contrast-enhanced blood volumes in the bilateral cerebral hemispheres, and compute features related to asymmetrical cerebral blood flow patterns from the cumulative cerebral blood flow curves of the two hemispheres. Next, image markers based on a single optimal feature and machine learning (ML) models fusing multiple features were developed and tested to classify AIS cases into two classes, good and poor prognosis, based on the Modified Rankin Scale. The performance of the image markers was evaluated using the area under the ROC curve (AUC) and the accuracy computed from the confusion matrix. RESULTS: The ML model using neuroimaging features computed from the slopes of the subtracted cumulative blood flow curves between the two cerebral hemispheres yields a classification performance of AUC = 0.878±0.077 with an overall accuracy of 90.3%. CONCLUSIONS: This study demonstrates the feasibility of developing a new quantitative imaging method and marker to predict AIS patients' prognosis in the hyperacute stage, which can help clinicians optimally treat and manage AIS patients.
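The slope feature at the heart of the best-performing ML model above can be sketched directly: accumulate the per-frame enhanced blood volume of each hemisphere, subtract the two cumulative curves, and fit a line to the difference. The time points and per-frame volumes below are hypothetical placeholders for real CTP measurements:

```python
import numpy as np

def slope_feature(left_flow, right_flow, times):
    """Asymmetry feature: slope of the subtracted cumulative blood-flow
    curves of the two cerebral hemispheres (least-squares linear fit)."""
    diff = np.cumsum(left_flow) - np.cumsum(right_flow)
    slope, _intercept = np.polyfit(times, diff, 1)
    return slope

times = np.arange(20, dtype=float)  # hypothetical CTP frame times (s)
left = np.full(20, 5.0)             # hypothetical per-frame enhanced volume
right = np.full(20, 4.0)            # hypoperfused hemisphere lags behind
f = slope_feature(left, right, times)  # positive slope: left fills faster
```

A near-zero slope indicates symmetric filling of the two hemispheres, while a large magnitude reflects the persistent perfusion asymmetry that the study links to poorer prognosis.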