The use of a support vector machine (SVM) as the component classifier in AdaBoost may seem to go against the grain of the Boosting principle, since an SVM is not an easy classifier to train. Moreover, Wickramaratna et al. [2001. Performance degradation in boosting. In: Proceedings of the Second International Workshop on Multiple Classifier Systems] show that AdaBoost with strong component classifiers is not viable. In this paper, we show that AdaBoost incorporating properly designed RBFSVM (SVM with the RBF kernel) component classifiers, which we call AdaBoostSVM, can perform as well as SVM. Furthermore, the proposed AdaBoostSVM demonstrates better generalization performance than SVM on imbalanced classification problems. The key idea of AdaBoostSVM is that, for the sequence of trained RBFSVM component classifiers, the RBF kernel width σ starts at a large value (implying weak learning) and is reduced progressively as the Boosting iteration proceeds. This effectively produces a set of RBFSVM component classifiers whose model parameters are adaptively different, which manifests in better generalization compared with an AdaBoost approach using SVM component classifiers with a fixed (optimal) σ value. On benchmark data sets, we show that our AdaBoostSVM approach outperforms other AdaBoost approaches using component classifiers such as decision trees and neural networks. AdaBoostSVM can be seen as a proof of concept of the idea proposed in Valentini and Dietterich [2004. Bias-variance analysis of support vector machines for the development of SVM-based ensemble methods. Journal of Machine Learning Research 5] that AdaBoost with heterogeneous SVMs could work well. Moreover, we extend AdaBoostSVM to the Diverse AdaBoostSVM to address the reported accuracy/diversity dilemma of the original AdaBoost.
By designing parameter-adjusting strategies, the distributions of accuracy and diversity over the RBFSVM component classifiers are tuned to maintain a good balance between them, and promising results have been obtained on benchmark data sets.
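The core loop described above can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it uses scikit-learn's `SVC`, a hypothetical `adaboost_svm` helper, the standard discrete-AdaBoost weight update, and the common convention `gamma = 1 / (2σ²)` to map the shrinking kernel width σ onto the RBF parameter.

```python
# Sketch of AdaBoostSVM: boosting RBF-SVMs whose kernel width sigma
# shrinks over iterations (large sigma = weak component classifier).
import numpy as np
from sklearn.svm import SVC

def adaboost_svm(X, y, sigmas):
    """y in {-1, +1}; sigmas is a decreasing sequence of RBF widths."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # sample weight distribution
    learners, alphas = [], []
    for sigma in sigmas:
        clf = SVC(kernel="rbf", gamma=1.0 / (2 * sigma**2))
        clf.fit(X, y, sample_weight=w)
        pred = clf.predict(X)
        err = np.sum(w * (pred != y))    # weighted training error
        if err >= 0.5:                   # worse than random: skip this sigma
            continue
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        w *= np.exp(-alpha * y * pred)   # reweight misclassified samples up
        w /= w.sum()
        learners.append(clf)
        alphas.append(alpha)

    def predict(Xq):
        scores = sum(a * c.predict(Xq) for a, c in zip(alphas, learners))
        return np.sign(scores)
    return predict

# Toy usage on two well-separated Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
predict = adaboost_svm(X, y, sigmas=[8.0, 4.0, 2.0, 1.0])
print((predict(X) == y).mean())
```

Note that the σ schedule (here a fixed halving sequence) is exactly where the paper's parameter-adjusting strategies would plug in.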
Efficient human epithelial-2 (HEp-2) cell image classification can facilitate the diagnosis of many autoimmune diseases. This paper proposes an automatic framework for this classification task, utilizing deep convolutional neural networks (CNNs), which have recently attracted intensive attention in visual recognition. In addition to describing the proposed classification framework, this paper elaborates on several interesting observations and findings obtained in our investigation. They include the important factors that impact network design and training, the role of rotation-based data augmentation for cell images, the effectiveness of cell image masks for classification, and the adaptability of the CNN-based classification system across different datasets. An extensive experimental study is conducted to verify the above findings and to compare the proposed framework with well-established image classification models in the literature. The results on benchmark datasets demonstrate that 1) the proposed framework can effectively outperform existing models by properly applying data augmentation, and 2) our CNN-based framework has excellent adaptability across different datasets, which is highly desirable for cell image classification under varying laboratory settings. Our system was ranked highly in the cell image classification competition hosted by ICPR 2014.
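Rotation-based augmentation exploits the fact that cell images have no canonical orientation. A minimal sketch of the idea, with an illustrative helper name `augment_with_rotations` (the paper's exact pipeline and rotation angles are not specified here):

```python
# Rotation-based data augmentation sketch: every training image is
# duplicated in several rotated orientations so the CNN sees cells
# at all rotations. 90-degree steps avoid interpolation artifacts.
import numpy as np

def augment_with_rotations(images, labels, k_list=(0, 1, 2, 3)):
    """Return k*90-degree rotated copies (k in k_list) of every image."""
    aug_x, aug_y = [], []
    for img, lab in zip(images, labels):
        for k in k_list:
            aug_x.append(np.rot90(img, k))
            aug_y.append(lab)        # the label is rotation-invariant
    return np.stack(aug_x), np.array(aug_y)

# Usage: 5 single-channel 32x32 "cell" images become 20 after augmentation
x = np.random.rand(5, 32, 32)
y = np.arange(5)
xa, ya = augment_with_rotations(x, y)
print(xa.shape)  # (20, 32, 32)
```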
In the feature selection literature, different criteria have been proposed to evaluate the goodness of features. In our investigation, we notice that a number of existing selection criteria implicitly select features that preserve sample similarity, and they can be unified under a common framework. We further point out that any feature selection criterion covered by this framework cannot handle redundant features, a common drawback of these criteria. Motivated by these observations, we propose a new "Similarity Preserving Feature Selection" framework in an explicit and rigorous way. We show, through theoretical analysis, that the proposed framework not only encompasses many widely used feature selection criteria but also naturally overcomes their common weakness in handling feature redundancy. In developing this new framework, we begin with a conventional combinatorial optimization formulation for similarity preserving feature selection, and then extend it with a sparse multiple-output regression formulation to improve its efficiency and effectiveness. Three algorithms are devised to efficiently solve the proposed formulations, each of which has its own advantages in terms of computational complexity and selection performance. As exhibited by our extensive experimental study, the proposed framework achieves superior feature selection performance and attractive properties.
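The similarity-preserving idea can be illustrated with a simple per-feature score. The alignment criterion and the helper name `similarity_preserving_scores` below are illustrative assumptions, not the paper's formulation: each feature is scored by how well the similarity it induces agrees with a target sample-similarity matrix.

```python
# Sketch: score each feature by alignment between the target pairwise
# sample-similarity matrix S and the similarity induced by that feature
# alone, then keep the top-scoring features.
import numpy as np

def similarity_preserving_scores(X, S):
    """X: (n, d) data matrix; S: (n, n) target sample-similarity matrix."""
    scores = []
    for j in range(X.shape[1]):
        f = X[:, j] - X[:, j].mean()     # center the feature
        norm = np.linalg.norm(f)
        if norm == 0:
            scores.append(0.0)
            continue
        f /= norm
        scores.append(f @ S @ f)         # alignment of outer(f, f) with S
    return np.array(scores)

# Toy check: feature 0 carries a two-cluster structure, feature 1 is noise
rng = np.random.default_rng(1)
labels = np.array([0] * 10 + [1] * 10)
X = np.column_stack([labels + 0.1 * rng.normal(size=20),
                     rng.normal(size=20)])
S = (labels[:, None] == labels[None, :]).astype(float)  # block similarity
scores = similarity_preserving_scores(X, S)
print(scores.argmax())  # the structured feature should score highest
```

A per-feature score like this is exactly what cannot see redundancy between features, which is the weakness the paper's joint (combinatorial and sparse-regression) formulations are designed to fix.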
Magnetic resonance (MR) imaging is a widely used medical imaging protocol that can be configured to provide different contrasts between the tissues in the human body. By setting different scanning parameters, each MR imaging modality reflects unique visual characteristics of the scanned body part, benefiting the subsequent analysis from multiple perspectives. To utilize the complementary information from multiple imaging modalities, cross-modality MR image synthesis has attracted increasing research interest recently. However, most existing methods only focus on minimizing pixel/voxel-wise intensity differences and ignore the textural details of image content structure, which affects the quality of the synthesized images. In this paper, we propose edge-aware generative adversarial networks (Ea-GANs) for cross-modality MR image synthesis. Specifically, we integrate edge information, which reflects the textural structure of image content and depicts the boundaries of different objects in images, to reduce this gap. Corresponding to different learning strategies, two frameworks are proposed: a generator-induced Ea-GAN (gEa-GAN) and a discriminator-induced Ea-GAN (dEa-GAN). The gEa-GAN incorporates the edge information via its generator, while the dEa-GAN does so in both the generator and the discriminator, so that edge similarity is also adversarially learned. In addition, the proposed Ea-GANs are 3D-based and utilize hierarchical features to capture contextual information. The experimental results demonstrate that the proposed Ea-GANs, especially the dEa-GAN, outperform multiple state-of-the-art methods for cross-modality MR image synthesis in both qualitative and quantitative measures. Moreover, the dEa-GAN also shows excellent generality on generic image synthesis tasks using benchmark datasets of facades, maps, and cityscapes.
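The edge-aware term can be sketched in 2D with Sobel filters. This is a simplified numpy illustration under assumed details (the paper's networks are 3D and the edge extractor, weighting `lam`, and helper names here are illustrative): the generator loss adds an L1 penalty between the edge maps of the synthesized and target images on top of the usual intensity loss.

```python
# Sketch of an edge-aware L1 loss: intensity L1 plus an L1 penalty
# between Sobel edge maps of the synthesized and target images.
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2d(img, kernel):
    """Plain 'valid' 2-D correlation; enough for a loss sketch."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def edge_map(img):
    gx, gy = conv2d(img, SOBEL_X), conv2d(img, SOBEL_Y)
    return np.sqrt(gx**2 + gy**2)    # gradient magnitude = edge strength

def edge_aware_l1(fake, real, lam=1.0):
    """Intensity L1 plus lam-weighted L1 between edge maps."""
    return (np.abs(fake - real).mean()
            + lam * np.abs(edge_map(fake) - edge_map(real)).mean())

# Identical images give zero loss; a shifted copy does not
img = np.random.rand(16, 16)
print(edge_aware_l1(img, img))  # 0.0
```

In the dEa-GAN, the discriminator would additionally receive the edge maps as input, so edge fidelity is learned adversarially rather than only through this fixed penalty.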
In this paper, we propose a novel electroencephalograph (EEG) emotion recognition method inspired by neuroscience findings on the brain's response to different emotions. The proposed method, denoted R2G-STNN, consists of spatial and temporal neural network models with a regional-to-global hierarchical feature learning process to learn discriminative spatial-temporal EEG features. To learn the spatial features, a bidirectional long short-term memory (BiLSTM) network is adopted to capture the intrinsic spatial relationships of EEG electrodes within and between brain regions. Considering that different brain regions play different roles in EEG emotion recognition, a region-attention layer is also introduced into the R2G-STNN model to learn a set of weights that strengthen or weaken the contributions of brain regions. Based on the spatial feature sequences, BiLSTM is adopted to learn both regional and global spatial-temporal features, and the features are fed into a classifier layer for learning emotion-discriminative features, in which a domain discriminator working cooperatively with the classifier is used to decrease the domain shift between training and testing data. Finally, to evaluate the proposed method, we conduct both subject-dependent and subject-independent EEG emotion recognition experiments on the SEED database, and the experimental results show that the proposed method achieves state-of-the-art performance.
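The region-attention idea can be sketched with a softmax-weighted layer. The shapes, the scoring projection, and the helper name `region_attention` below are assumptions for illustration, not the paper's exact layer: each brain region's feature vector receives a learned scalar weight, normalized over regions, that strengthens or weakens its contribution.

```python
# Sketch of a region-attention layer: one learned score per brain
# region, softmax-normalized, used to re-weight the region features.
import numpy as np

def region_attention(region_feats, w, b):
    """region_feats: (R, D), one feature vector per brain region;
    w: (D,) and b: scalar are the learnable projection parameters."""
    scores = region_feats @ w + b                   # (R,) score per region
    scores -= scores.max()                          # numerical stability
    attn = np.exp(scores) / np.exp(scores).sum()    # softmax over regions
    return attn[:, None] * region_feats, attn       # re-weighted features

# Usage: 16 brain regions, 32-dimensional features each
rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 32))
weighted, attn = region_attention(feats, rng.normal(size=32), 0.0)
print(attn.sum())  # attention weights sum to 1
```

In training, `w` and `b` would be optimized jointly with the BiLSTM and classifier by backpropagation; here they are fixed random values only to show the shapes.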