2009
DOI: 10.1016/j.ins.2008.12.024

Incremental construction of classifier and discriminant ensembles

Abstract: We discuss approaches to incrementally construct an ensemble. The first constructs an ensemble of classifiers by choosing a subset from a larger set, and the second constructs an ensemble of discriminants, where a classifier is used for only some of the classes. We investigate criteria including accuracy, significant improvement, diversity, correlation, and the role of search direction. For discriminant ensembles, we test subset selection and trees. Fusion is by voting or by a linear model. Using 14 class…
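The abstract names two fusion schemes, voting and a linear model. The sketch below illustrates both with scikit-learn; it is not the authors' code, and the synthetic dataset, the three base classifiers, and the choice of logistic regression as the linear combiner are illustrative assumptions.

# Sketch of the two fusion schemes named in the abstract: majority
# voting and a linear (logistic-regression) combiner over the base
# classifiers' class probabilities. Dataset and base classifiers are
# illustrative assumptions, not those used in the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [("tree", DecisionTreeClassifier(random_state=0)),
        ("nb", GaussianNB()),
        ("knn", KNeighborsClassifier())]

# Fusion by majority voting over the base classifiers' predictions.
vote = VotingClassifier(base, voting="hard").fit(X_tr, y_tr)
print("voting accuracy:", vote.score(X_te, y_te))

# Fusion by a linear model: stack each classifier's predicted class
# probabilities and let a logistic regression learn the weights.
# (A proper setup would fit the combiner on held-out data.)
P_tr = np.hstack([clf.fit(X_tr, y_tr).predict_proba(X_tr) for _, clf in base])
P_te = np.hstack([clf.predict_proba(X_te) for _, clf in base])
linear = LogisticRegression(max_iter=1000).fit(P_tr, y_tr)
print("linear-fusion accuracy:", linear.score(P_te, y_te))

Hard voting uses only the predicted labels, whereas the linear combiner can weight classifiers unequally, which is what makes the paper's selection and weighting criteria relevant.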

Cited by 63 publications (30 citation statements). References 54 publications (73 reference statements).
“…Here, kernels are combined globally; that is, the kernels are assigned the same weights over the whole input space. Many researchers have shown that using a subset of the given classification algorithms yields higher accuracy than using all of them [12,14]. Keeping this in mind, we apply the same idea to incrementally adding kernels to the MKL framework and compare the results.…”
Section: Methods (mentioning)
confidence: 99%
“…This procedure continues until all kernels (classifiers) have been used or the average validation accuracy no longer increases [14]. The algorithm starts with E_0 ← ∅; then, at each step t, all the kernels (classifiers) M_j ∈ E^(t−1) are combined with …”
Section: Methods (mentioning)
confidence: 99%
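The quoted passage describes a standard forward-selection loop over ensemble members. A minimal sketch of that loop follows, assuming majority-vote fusion and integer class labels; the function and variable names are hypothetical, not taken from the cited implementation.

# Forward selection as quoted: start with E_0 <- empty set, at each
# step t try adding every remaining model to E^(t-1), keep the best
# addition, and stop when all models are used or the validation
# accuracy no longer increases. Hypothetical helpers; majority voting
# and non-negative integer class labels are assumed.
import numpy as np

def ensemble_accuracy(models, X_val, y_val):
    """Validation accuracy of the ensemble under majority voting."""
    votes = np.stack([m.predict(X_val) for m in models])  # (k, n) int labels
    fused = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)
    return float(np.mean(fused == y_val))

def forward_select(candidates, X_val, y_val):
    ensemble, best_acc = [], 0.0                # E_0 <- empty set
    remaining = list(candidates)
    while remaining:                            # step t
        acc, j = max((ensemble_accuracy(ensemble + [m], X_val, y_val), i)
                     for i, m in enumerate(remaining))
        if acc <= best_acc:                     # no improvement: stop
            break
        best_acc = acc
        ensemble.append(remaining.pop(j))       # E^t = E^(t-1) plus best M_j
    return ensemble, best_acc

Searching forward from the empty set is one of the search directions the paper investigates; the stopping rule here is the "validation accuracy does not increase" criterion from the quote.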
“…One can see that the best accuracy of Con improves from 71.93% to 74.56%. It is known that combining a subset of models may lead to better accuracy than using all of the models [10,12]. Bearing this in mind, we performed another experiment in which we selected the best seven ROIs.…”
Section: Exp. 2: Integration of ROIs (mentioning)
confidence: 99%
“…Many authors have proposed one-pass versions of batch learning algorithms. Later, ensembles of classifiers were also applied to problems that had originally been assumed to be solvable by single-classifier incremental learning algorithms [1], [4,5].…”
Section: Introduction (mentioning)
confidence: 99%