The early detection and rapid quantification of acute ischemic lesions play pivotal roles in stroke management. We developed a deep learning algorithm for the automatic binary classification of the Alberta Stroke Program Early Computed Tomographic Score (ASPECTS) using diffusion-weighted imaging (DWI) in acute stroke patients. Three hundred and ninety DWI datasets with acute anterior circulation stroke were included. A classifier algorithm utilizing a recurrent residual convolutional neural network (RRCNN) was developed for classification between low (1–6) and high (7–10) DWI-ASPECTS groups. The model performance was compared with a pre-trained VGG16, Inception V3, and a 3D convolutional neural network (3DCNN). The proposed RRCNN model demonstrated higher performance than the pre-trained models and 3DCNN with an accuracy of 87.3%, AUC of 0.941, and F1-score of 0.888 for classification between the low and high DWI-ASPECTS groups. These results suggest that the deep learning algorithm developed in this study can provide a rapid assessment of DWI-ASPECTS and may serve as an ancillary tool that can assist physicians in making urgent clinical decisions.
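The low (1–6) versus high (7–10) grouping used as the classification target can be sketched as a simple thresholding step. This is an illustrative sketch only; the function name is hypothetical and not from the study:

```python
def dwi_aspects_group(score: int) -> str:
    """Binarize a DWI-ASPECTS score into the low (1-6) / high (7-10)
    groups used as classification targets."""
    if not 1 <= score <= 10:
        raise ValueError("DWI-ASPECTS must be between 1 and 10")
    return "low" if score <= 6 else "high"
```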
In this study, we present a fusion model for emotion recognition based on visual data. The proposed model takes video as its input and generates an emotion label for each video sample. From the video data, we first choose the most significant face regions using a face detection and selection step. We then employ three CNN-based architectures to extract high-level features from the face image sequence, and append an additional module to each architecture to capture the sequential information of each video. Combining the three CNN-based models in a late-fusion approach yields a result competitive with the baseline approach on two public datasets: AFEW 2016 and SAVEE.
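One common late-fusion strategy is to average the per-class probability vectors produced by the individual models. The paper's exact fusion rule is not detailed in the abstract, so the following is a minimal sketch under that averaging assumption:

```python
from typing import List, Optional


def late_fusion(prob_sets: List[List[float]],
                weights: Optional[List[float]] = None) -> List[float]:
    """Combine per-class probability vectors from several models by
    (optionally weighted) averaging -- one common late-fusion rule.
    With no weights given, every model contributes equally."""
    n_models = len(prob_sets)
    n_classes = len(prob_sets[0])
    if weights is None:
        weights = [1.0 / n_models] * n_models
    fused = [0.0] * n_classes
    for w, probs in zip(weights, prob_sets):
        for i, p in enumerate(probs):
            fused[i] += w * p
    return fused
```

The final predicted label is then the argmax over the fused vector; weighting lets a stronger backbone contribute more to the decision.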
Objective: This study was conducted to investigate the feasibility of using radiomics analysis (RA) with machine learning algorithms based on breast magnetic resonance (MR) images for discriminating malignant from benign MR-detected additional lesions in patients with primary breast cancer.
Materials and Methods: One hundred seventy-four MR-detected additional lesions (benign, n = 86; malignant, n = 88) from 158 patients with ipsilateral primary breast cancer at a tertiary medical center were included in this retrospective study. The entire dataset was randomly split into training (80%) and independent test (20%) sets. In addition, 25 patients (benign, n = 21; malignant, n = 15) from another tertiary medical center were included for the external test. Radiomics features extracted from three regions of interest (ROIs: intratumor, peritumor, combined) on fat-saturated T1-weighted images obtained by subtracting pre- from postcontrast images (SUB) and on T2-weighted images (T2) were used to train a support vector machine for binary classification. A decision tree method was used to build a classifier model from clinical imaging interpretation (CII) features assessed by radiologists. Area under the receiver operating characteristic curve (AUROC), accuracy, sensitivity, and specificity were used to compare diagnostic performance.
Results: The RA models trained with radiomics features from the intratumor ROI showed performance comparable to the CII model (accuracy, AUROC: 73.3%, 69.6% for the SUB RA model; 70.0%, 75.1% for the T2 RA model; 73.3%, 72.0% for the CII model). Diagnostic performance increased when the radiomics and CII features were combined in a fusion model. The fusion model combining the CII features with radiomics features from multiparametric MRI data demonstrated the highest performance, with an accuracy of 86.7% and an AUROC of 91.1%. The external test showed a similar pattern, with the fusion models outperforming the RA-only and CII-only models. The accuracy and AUROC of the SUB+T2 RA+CII model in the external test were 80.6% and 91.4%, respectively.
Conclusion: Our study demonstrated the feasibility of using RA with a machine learning approach based on multiparametric MRI for quantitatively characterizing MR-detected additional lesions. The fusion model demonstrated improved diagnostic performance over models trained with either RA or CII features alone.
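A straightforward way to build such a fusion model is to concatenate the radiomics and CII feature sets into a single vector before training the classifier. Whether the study fused at the feature level or the decision level is not stated in the abstract, so this early-fusion sketch is an assumption, and the feature names are hypothetical placeholders:

```python
from typing import Dict, List


def build_fusion_vector(radiomics: Dict[str, float],
                        cii: Dict[str, float]) -> List[float]:
    """Concatenate radiomics and clinical-imaging-interpretation (CII)
    features into one vector for a downstream classifier.
    Sorting the keys keeps the feature order deterministic."""
    return ([radiomics[k] for k in sorted(radiomics)]
            + [cii[k] for k in sorted(cii)])
```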
Objective: To investigate whether a support vector machine (SVM) trained with radiomics features based on breast magnetic resonance imaging (MRI) can predict the upgrade of ductal carcinoma in situ (DCIS) diagnosed by core needle biopsy (CNB) after surgical excision.
Materials and Methods: This retrospective study included 349 lesions from 346 female patients (mean age, 54 years) diagnosed with DCIS by CNB between January 2011 and December 2017. Based on histological confirmation after surgery, the patients were divided into pure (n = 198, 56.7%) and upgraded (n = 151, 43.3%) DCIS groups. The entire dataset was randomly split into training (80%) and test (20%) sets. Radiomics features were extracted from the intratumor region of interest, semi-automatically drawn by two radiologists, on the first subtraction images from dynamic contrast-enhanced T1-weighted MRI. The least absolute shrinkage and selection operator (LASSO) was used for feature selection. Four-fold cross-validation was applied to the training set to determine the combination of features used to train the SVM for classification between pure and upgraded DCIS. Sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC) were calculated to evaluate model performance on the hold-out test set.
Results: The model trained with nine features (Energy, Skewness, Surface Area to Volume Ratio, Gray Level Non-Uniformity, Kurtosis, Dependence Variance, Maximum 2D Diameter Column, Sphericity, and Large Area Emphasis) demonstrated the highest four-fold mean validation accuracy and AUC of 0.724 (95% CI, 0.619–0.829) and 0.742 (0.623–0.860), respectively. Sensitivity, specificity, accuracy, and AUC on the test set were 0.733 (0.575–0.892), 0.700 (0.558–0.842), 0.714 (0.608–0.820), and 0.767 (0.651–0.882), respectively.
Conclusion: Our study suggests that a combined radiomics and machine learning approach based on preoperative breast MRI may provide an assisting tool to predict the histologic upgrade of DCIS.
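The test-set metrics reported above follow directly from a binary confusion matrix. As a minimal sketch of how such metrics are computed (the counts below are illustrative, not the study's data):

```python
from typing import Dict


def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> Dict[str, float]:
    """Sensitivity, specificity, and accuracy from a binary confusion
    matrix: tp/fp/tn/fn are true-positive, false-positive,
    true-negative, and false-negative counts."""
    return {
        "sensitivity": tp / (tp + fn),          # recall on the positive class
        "specificity": tn / (tn + fp),          # recall on the negative class
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

AUC, by contrast, is computed from the classifier's continuous scores rather than hard predictions, so it is not derivable from these four counts alone.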