Nowadays, brain signals are employed in various scientific and practical fields such as medical science, cognitive science, neuroscience, and brain-computer interfaces. Hence, robust signal analysis methods with adequate accuracy and generalizability are indispensable. Brain signal analysis faces complex challenges, including small sample sizes, high dimensionality and noisy signals. Moreover, because of the non-stationarity of brain signals and the impact of mental states on brain function, brain signals carry an inherent uncertainty. In this paper, an evidence-based combining-classifiers method is proposed for brain signal analysis. This method exploits the power of combining classifiers for solving complex problems and the ability of evidence theory both to model and to reduce the existing uncertainty. The proposed method models the uncertainty in the labels of training samples in each feature space by assigning them soft and crisp labels. Then, classifiers are employed to approximate the belief function corresponding to each feature space. By combining the evidence produced by each classifier through evidence theory, more confident decisions can be made about test samples. Results obtained with the proposed method, compared with other evidence-based and fixed-rule combining methods on artificial and real datasets, demonstrate its ability to deal with complex and uncertain classification problems.
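The core fusion step the abstract describes, combining evidence from several classifiers through evidence theory, can be sketched with Dempster's rule of combination. This is a minimal, generic illustration (not the paper's exact belief-function construction): each classifier emits a mass function over subsets of the class labels, with mass on larger subsets expressing uncertainty, and the rule fuses them while redistributing conflicting mass.

```python
# Minimal sketch of Dempster's rule of combination. Mass functions are
# dicts keyed by frozensets of class labels; mass on a larger subset
# expresses ignorance between its members. Illustrative only.

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass)."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # product mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    # Normalize by 1 - K to redistribute the conflicting mass
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Two classifiers: one confident in class A, one undecided between A and B
A, B = frozenset({"A"}), frozenset({"B"})
AB = A | B  # mass on the whole frame = ignorance
m1 = {A: 0.7, AB: 0.3}
m2 = {A: 0.4, B: 0.4, AB: 0.2}
fused = dempster_combine(m1, m2)  # fused[A] == 0.75: agreement reinforces A
```

Note how the fused mass on A (0.75) exceeds either source's individual commitment, which is the sense in which combining concordant evidence yields "more confident decisions".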
“…We will present a framework in which the combination method is updated on-line in a sample-wise, strictly incremental manner. On-line extensions of various well-known batch ensemble (classifier fusion) methods are presented as the combination method, such as Fuzzy Integral [17,4], Decision Templates [26,30] and ensembles based on Dempster-Shafer theory [48,53]. In this sense, it can be seen as an extension of the work in [33], where Naive Bayes [69] and BKS [21] were used as combination rule.…”
Section: Sannen et al. / Towards Incremental Classifier Fusion
Citation type: mentioning (confidence: 99%)
“…Batch Training In [48] a way to apply the Dempster-Shafer theory of evidence to the problem of classifier fusion is described. The Dempster-Shafer combination training is like the Decision Templates training: the c decision templates are calculated from the data set in the same way -see Eq.…”
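The quoted passage notes that Dempster-Shafer combination training proceeds like Decision Templates training: for each of the c classes, a template is computed as the mean "decision profile" (the stacked soft outputs of all base classifiers) over that class's training samples. A sketch of that batch computation, with illustrative names and a Euclidean-distance matching rule (one common choice; the exact similarity measure varies across the cited works):

```python
import numpy as np

# Batch Decision Templates training: one template per class, computed as
# the class-conditional mean of decision profiles. Names and the
# Euclidean matching rule are illustrative assumptions.

def train_decision_templates(profiles, labels, n_classes):
    """profiles: (n_samples, n_classifiers, n_classes) soft outputs.
    Returns templates of shape (n_classes, n_classifiers, n_classes)."""
    templates = np.zeros((n_classes,) + profiles.shape[1:])
    for c in range(n_classes):
        templates[c] = profiles[labels == c].mean(axis=0)
    return templates

def classify(profile, templates):
    """Assign the class whose template is nearest in Frobenius norm."""
    dists = np.linalg.norm(templates - profile, axis=(1, 2))
    return int(np.argmin(dists))
```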
To process the large amounts of data industrial systems are producing nowadays, machine learning techniques have shown their usefulness in many applications. As the amounts of data being generated grow huge, machine learning methods that can deal with them appropriately, i.e. methods that can be adapted incrementally, become very important. Ensembles of classifiers have been shown to improve both the predictive accuracy and the robustness of single classification methods. In this work, novel incremental variants of several well-known classifier fusion methods (Fuzzy Integral, Decision Templates, Dempster-Shafer Combination and Discounted Dempster-Shafer Combination) are presented. Furthermore, a novel incremental classifier fusion method called Incremental Direct Cluster-based ensemble is introduced, which exploits an evolving clustering approach. A flexible and interactive framework for on-line learning is introduced, in which the ensemble methods are adapted incrementally in a sample-wise manner together with their base classifiers. The performance of this framework and the proposed incremental classifier fusion methods is evaluated on five real-world visual quality inspection tasks, captured on-line from an industrial CD imprint production process, together with five data sets from the UCI repository. These methods can be updated in a sample-wise manner, are strictly incremental, and can incorporate new target classes which might not be available in the initial training data.
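The sample-wise, strictly incremental adaptation the abstract describes can be sketched for the Decision Templates combiner: because each class template is a mean of decision profiles, it admits an exact running-mean update from a single new sample, with no need to revisit past data. The class and method names below are illustrative, not the paper's API.

```python
import numpy as np

# Sample-wise incremental Decision Templates combiner: each class
# template is maintained as a running mean, updated one profile at a
# time. A sketch of the batch-to-incremental conversion; details and
# naming are assumptions.

class IncrementalDecisionTemplates:
    def __init__(self, n_classifiers, n_classes):
        self.templates = np.zeros((n_classes, n_classifiers, n_classes))
        self.counts = np.zeros(n_classes, dtype=int)

    def update(self, profile, label):
        """Fold one (n_classifiers, n_classes) decision profile into
        the running mean for its class (exact incremental mean)."""
        self.counts[label] += 1
        n = self.counts[label]
        self.templates[label] += (profile - self.templates[label]) / n

    def predict(self, profile):
        """Return the class whose template is nearest to the profile."""
        dists = np.linalg.norm(self.templates - profile, axis=(1, 2))
        return int(np.argmin(dists))
```

After any number of updates, the stored templates equal exactly what batch training on the same samples would produce, which is why such mean-based combiners convert cleanly to the incremental setting.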
A number of strategies to update an ensemble on-line have been explored [28], such as:
Dynamic combiners: The ensemble members are trained off-line beforehand; only the combination rule is adapted on-line. An example of this category is the Mixture of Experts method [22]. Trainable classifier fusion methods that can be updated incrementally, such as Naive Bayes [69] and BKS [21], can also be used to combine the classifier outputs, as is done in [33].

Updating the ensemble members: New incoming data is used to update the classifiers on-line; the combination rule may or may not change. Usually the data is sampled appropriately for the different classifiers, which are then updated accordingly. Algorithms in this category include the online Bagging and Boosting algorithms [42], the Pasting Small Votes system [2] and Learn++ [45].

Structural changes: The classifiers are re-evaluated and removed or replaced by a newly trained one. An example of this type of algorithm can be found in [63], where the classifiers are evaluated on the most recent block of data.
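The online Bagging algorithm cited in the second category rests on one observation: in batch bagging, a bootstrap replicate contains a given sample K ~ Binomial(n, 1/n) times, which tends to Poisson(1) as n grows. The online variant therefore presents each incoming sample to each base learner k ~ Poisson(1) times. A sketch under that idea; the `partial_fit` base-learner interface is an assumption, not taken from the cited works.

```python
import math
import random

# Online Bagging sketch: each arriving sample is shown to each base
# learner a Poisson(1)-distributed number of times, approximating the
# bootstrap resampling of batch bagging without storing past data.

def poisson1(rng=random):
    """Draw k ~ Poisson(lambda=1) via Knuth's multiplication method."""
    k, p = 0, rng.random()
    while p > math.exp(-1):
        k += 1
        p *= rng.random()
    return k

def online_bagging_update(learners, x, y, rng=random):
    """Feed one labelled sample (x, y) to every base learner
    k ~ Poisson(1) times. `partial_fit` is an assumed incremental API."""
    for learner in learners:
        for _ in range(poisson1(rng)):
            learner.partial_fit(x, y)  # incremental training step
```

Because each update touches only the current sample, this fits the strictly incremental, sample-wise setting discussed above.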
“…[Flattened table in the quoted passage: a taxonomy of multiple-classifier-system design approaches at the decision and representation levels (different principles; different parameters or initializations; multiple features and multistage classifiers; random selection; "Stacked generalization"; "Adaptive selection"; "Test and Select"), with citations to Xu et al,66 Ho,32 Giacinto and Roli,20 Giacinto et al,22 Woods et al,65 Ho et al,30 Ng and Singh,45 Cao et al,7 Rogova,48 Sharkey et al,54 Cho and Kim9 and Wolpert.64 The original row/column layout is not recoverable.]…”
When several classifiers contribute to the same recognition task, various decision strategies, involving these classifiers in different ways, are possible. A first strategy consists in deciding on the basis of different opinions: this corresponds to the combination of classifiers. A second strategy consists in using one or more opinions to better guide other classifiers in their training stages, and/or to improve the decision-making of other classifiers in the classification stage: this corresponds to the cooperation of classifiers. The third and last strategy consists in giving more importance to one or more classifiers according to various criteria or situations: this corresponds to the selection of classifiers. The temporal aspect of Pattern Recognition (PR), i.e. the possible evolution of the classes to be recognized, can be treated by the selection strategy.