Nowadays, motor imagery (MI) electroencephalogram (EEG) signal classification has become a research hotspot in the field of brain-computer interfaces (BCI). More recently, deep learning has emerged as a promising technique for automatically extracting features from raw MI EEG signals and then classifying them. However, deep learning-based methods still face two challenging problems in practical MI EEG signal classification: (1) Training a deep learning model successfully generally requires a large amount of labeled data, yet most EEG signal data is unlabeled, and it is quite difficult or even impossible for human experts to label all the signal samples manually. (2) Training a deep learning model from scratch is extremely time-consuming and computationally expensive. To cope with these two challenges, a deep transfer convolutional neural network (CNN) framework based on VGG-16 is proposed for EEG signal classification. The proposed framework consists of a VGG-16 CNN model pre-trained on ImageNet and a target CNN model that shares the same structure as VGG-16 except for the softmax output layer. The parameters of the pre-trained VGG-16 model are transferred directly to the target CNN model used for MI EEG signal classification. The front-layer parameters of the target model are then frozen, while the later-layer parameters are fine-tuned on the target MI dataset, which is composed of time-frequency spectrum images of EEG signals. The performance of the proposed framework is verified on the public benchmark dataset 2b from BCI competition IV. The experimental results show that the proposed framework improves both the accuracy and the efficiency of EEG signal classification compared with traditional methods, including support vector machine (SVM), artificial neural network (ANN), and standard CNN.
INDEX TERMS Motor imagery (MI), electroencephalogram (EEG), signal classification, short time Fourier transform (STFT), VGG-16, transfer learning.
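The freeze-and-fine-tune scheme described above can be sketched in a few lines. The tiny two-layer NumPy network below is purely illustrative (it stands in for the transferred VGG-16 backbone; all names, sizes, and the toy regression objective are invented): the "transferred" front-layer weights are held fixed, while only the later-layer weights are updated on target-domain data.

```python
import numpy as np

rng = np.random.default_rng(0)
W_front = rng.standard_normal((4, 3))   # "transferred" front-layer weights (frozen)
W_back = rng.standard_normal((3, 2))    # later-layer weights (to be fine-tuned)
W_front_before = W_front.copy()
W_back_before = W_back.copy()

x = rng.standard_normal((5, 4))         # a small batch of target-domain inputs
y = rng.standard_normal((5, 2))         # toy regression targets, for illustration

lr = 0.1
for _ in range(10):
    h = np.maximum(x @ W_front, 0.0)        # frozen feature extractor (ReLU)
    pred = h @ W_back                       # trainable head
    grad_back = h.T @ (pred - y) / len(x)   # gradient w.r.t. the later layer only
    W_back -= lr * grad_back                # no update ever touches W_front
```

In the actual framework, the frozen part corresponds to the early VGG-16 convolutional blocks and the fine-tuned part to the later layers plus the new softmax output layer.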
Motivation: Spatially resolved transcriptomics (SRT) has shown impressive power in yielding biological insights in neuroscience, disease studies, and even plant biology. However, current methods do not sufficiently exploit the expressiveness of multi-modal SRT data, leaving considerable room for performance improvement. Moreover, current deep learning-based methods lack interpretability due to their "black box" nature, impeding their further application in areas that require explanation. Results: We propose conST, a powerful and flexible SRT data analysis framework based on contrastive learning. conST learns low-dimensional embeddings by effectively integrating multi-modal SRT data, i.e. gene expression, spatial information, and morphology (if applicable). The learned embeddings can then be used for various downstream tasks, including clustering, trajectory and pseudotime inference, and cell-to-cell interaction (CCI) analysis. Extensive experiments on various datasets demonstrate the effectiveness and robustness of conST, which achieves up to a 10% improvement in clustering ARI on a commonly used benchmark dataset. We also show that the learned embeddings can be used in complicated scenarios, such as predicting cancer progression by analyzing the tumour microenvironment and CCI of breast cancer. Our framework is interpretable in that it can identify the correlated spots that support the clustering, which also match the CCI pairs, giving clinicians more confidence when making clinical decisions.
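The abstract does not spell out the contrastive objective, but an InfoNCE-style loss is a common choice in contrastive representation learning and serves as a hedged illustration of the idea: matched views of the same spot (positive pairs) should score a lower loss than mismatched embeddings. Everything below (dimensions, temperature, the loss variant itself) is an assumption for illustration, not taken from the paper.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss; row i of z1 and z2 form a positive pair."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)  # L2-normalize embeddings
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / temperature          # pairwise cosine similarities, scaled
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    # The diagonal holds the positive-pair similarities; off-diagonals are negatives.
    return float(np.mean(logsumexp - np.diag(sim)))

rng = np.random.default_rng(1)
z = rng.standard_normal((8, 16))             # 8 spots, 16-dim embeddings
aligned = info_nce(z, z + 0.01 * rng.standard_normal((8, 16)))  # matched views
random_pairs = info_nce(z, rng.standard_normal((8, 16)))        # mismatched views
```

As expected, the loss for the nearly identical views is much smaller than for the mismatched ones, which is what drives embeddings of related spots together.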
Three-dimensional convolutional neural networks (3DCNNs), a rapidly evolving modality of deep learning, have gained popularity in many fields. For oral cancers, CT images are traditionally processed as two-dimensional input, without considering information between lesion slices. In this paper, we established a 3DCNN-based image processing algorithm for the early diagnosis of oral cancers and compared it with a 2DCNN-based algorithm. The 3D and 2D CNNs were constructed with the same hierarchical structure to profile oral tumors as benign or malignant. Our results showed that 3DCNNs with dynamic characteristics of the enhancement-rate image performed better than 2DCNNs with a single enhancement sequence for the discrimination of oral cancer lesions. Our data indicate that the spatial features and spatial dynamics extracted by 3DCNNs may inform the future design of CT-assisted diagnosis systems.
INDEX TERMS 2DCNNs, 3DCNNs, CT images, spatial features, spatial dynamics.
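The key difference from the 2D case is that each 3D output activation pools information across neighbouring lesion slices as well as within a slice. A minimal "valid" 3D cross-correlation (the operation CNN libraries call convolution), written naively in NumPy for illustration only (the toy volume and averaging kernel are invented, not the paper's architecture):

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 'valid' 3D cross-correlation over a (depth, height, width) volume."""
    kd, kh, kw = kernel.shape
    D, H, W = volume.shape
    out = np.zeros((D - kd + 1, H - kh + 1, W - kw + 1))
    for d in range(out.shape[0]):          # slides across slices (the 3D part)
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[d, i, j] = np.sum(volume[d:d+kd, i:i+kh, j:j+kw] * kernel)
    return out

ct = np.arange(4 * 5 * 5, dtype=float).reshape(4, 5, 5)  # toy 4-slice "CT volume"
k = np.ones((3, 3, 3)) / 27.0                            # averaging kernel spanning 3 slices
feat = conv3d_valid(ct, k)                               # shape (2, 3, 3)
```

A 2D kernel applied slice by slice would never mix values from adjacent slices; the 3x3x3 kernel above does, which is exactly the inter-slice information the 2D pipeline discards.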
Diabetic retinopathy (DR) is an eye abnormality caused by chronic diabetes that affects patients worldwide. Hard exudates are an important and observable sign of DR and can be used for early diagnosis. In this paper, an automatic hard exudate segmentation method is proposed to aid ophthalmologists in diagnosing DR at an early stage. We utilize the SLIC superpixel algorithm to generate sample patches, thus overcoming the difficulty of a limited and imbalanced dataset. Furthermore, a U-net-based network architecture with inception modules and residual connections is proposed to conduct end-to-end hard exudate segmentation, with focal loss as the loss function. Extensive experiments have been conducted on the IDRiD dataset to evaluate the performance of the proposed method. The reported sensitivity, specificity, and accuracy reach 96.38%, 97.14%, and 97.95%, respectively, demonstrating the effectiveness and superiority of our method. The achieved segmentation results prove the potential of the method for clinical diagnosis.
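Focal loss is what lets such a network concentrate on the rare hard-exudate pixels rather than the overwhelming background. A minimal NumPy sketch of the standard binary focal loss, FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t); the alpha and gamma values below are the common defaults, not necessarily those used by the authors:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for predicted foreground probabilities p and labels y."""
    p_t = np.where(y == 1, p, 1.0 - p)            # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# An easy, well-classified exudate pixel (p = 0.95) is down-weighted by the
# (1 - p_t)^gamma factor, while a hard, misclassified one (p = 0.10) keeps a
# large loss, so gradients focus on the rare, difficult foreground pixels.
easy = focal_loss(np.array([0.95]), np.array([1]))
hard = focal_loss(np.array([0.10]), np.array([1]))
```

With gamma = 2, the easy pixel's loss is several orders of magnitude smaller than the hard pixel's, which is precisely the rebalancing effect that plain cross-entropy lacks.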