2018
DOI: 10.1016/j.engappai.2018.04.019
Evidential framework for Error Correcting Output Code classification

Cited by 27 publications
(17 citation statements)
References 32 publications
“…Such an approach is especially suitable when the number of samples is small and when there is no overlap between the scores of the two considered classes. Otherwise, [45] shows that, in such difficult applications, it is hard for an SVM to find a very large margin between the two classes and there can be consistent overlap between samples with different labels for the same score. However, since the number of samples per score would then be high, we would paradoxically not assign a high imprecision value to them.…”
Section: BBA Definition Based On Calibrated Scores
confidence: 99%
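The statement above discusses turning a calibrated SVM score into a bba whose ignorance mass reflects imprecision. A minimal sketch of that idea follows; the linear mapping and the names `score_to_bba`, `A0`, `A1`, `A0uA1` are illustrative assumptions, not the exact formula of [12] or [45]:

```python
def score_to_bba(score, imprecision):
    """Convert a calibrated binary score in [0, 1] into an elementary bba.

    `score` is read as the calibrated probability of hypothesis A1;
    `imprecision` is the mass reserved for the ignorance set A0 ∪ A1.
    Both the mapping and the parameter are illustrative assumptions.
    """
    if not (0.0 <= score <= 1.0 and 0.0 <= imprecision <= 1.0):
        raise ValueError("score and imprecision must lie in [0, 1]")
    return {
        "A0": (1.0 - imprecision) * (1.0 - score),    # mass for hypothesis A0
        "A1": (1.0 - imprecision) * score,            # mass for hypothesis A1
        "A0uA1": imprecision,                         # ignorance mass
    }
```

Under this sketch, a score backed by many overlapping samples would receive a low `imprecision`, which is exactly the paradox the quoted passage points out.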
“…Popular strategies consist in querying the instance whose predicted output is the least confident or has maximum entropy, but in the context of SVM classification the prevailing method is to select the samples which are closest to the separating hyperplane margin [20,21]. More recently, the DUAL [22] and QUIRE [23] methods have been proposed. The former is based on density-weighted uncertainty sampling, while the latter aims at selecting examples that are both informative and representative on the basis of a prediction of the uncertainty.…”
Section: Introduction
confidence: 99%
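The three query strategies named above (least confident, maximum entropy, closest to the SVM margin) can be sketched in a few lines; the function names and the list-of-lists input format are illustrative assumptions:

```python
import math

def least_confident(probs):
    # index of the sample whose top predicted class probability is lowest
    return min(range(len(probs)), key=lambda i: max(probs[i]))

def max_entropy(probs):
    # index of the sample whose predictive distribution has maximum entropy
    def entropy(p):
        return -sum(q * math.log(q) for q in p if q > 0)
    return max(range(len(probs)), key=lambda i: entropy(probs[i]))

def closest_to_margin(decision_values):
    # for an SVM, pick the sample with the smallest |decision value|,
    # i.e. the one nearest the separating hyperplane
    return min(range(len(decision_values)), key=lambda i: abs(decision_values[i]))
```

For binary problems the three criteria often agree, since the least confident sample, the highest-entropy sample, and the sample nearest the hyperplane all sit where the two classes are hardest to tell apart.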
“…In order to handle both the uncertainty and the imprecision during the classification, we recently proposed an evidential ECOC approach [12]. Here we are interested in the ECOC decoding part of this work.…”
Section: Experimental Settings
confidence: 99%
“…In brief, this part works as follows. An ECOC can be viewed as a set of l dichotomizers (i.e., binary classifiers, SVMs in our case) for which the two considered hypotheses are non-overlapping subsets of Ω. Denoting these two hypotheses A_{i,0} and A_{i,1} for the i-th SVM, its output score for a given sample p is converted (see [12] for details) into an elementary bba m_i^b having A_{i,0}, A_{i,1} and A_{i,0} ∪ A_{i,1} as focal elements. For a non-dense ECOC, a deconditioning on A_{i,0} ∪ A_{i,1} allows us to model the absence of information on classes outside A_{i,0} ∪ A_{i,1} provided by this i-th dichotomizer.…”
Section: Experimental Settings
confidence: 99%
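The deconditioning step described above, and the subsequent pooling of the per-dichotomizer bbas, can be sketched with bbas represented as dicts from frozensets to masses. This is a simplified illustration, not the exact decoding of [12]; the function names `decondition` and `conjunctive` are assumptions, and the combination shown is the plain unnormalized conjunctive rule:

```python
from itertools import product

def decondition(bba, omega):
    """Ballooning extension of a bba conditioned on A = A_{i,0} ∪ A_{i,1}.

    Each focal element B ⊆ A is mapped to B ∪ (Ω \ A): the mass keeps its
    support inside A but stays vacuous on the classes this dichotomizer
    never saw, which models its absence of information outside A.
    """
    cond = frozenset().union(*bba)       # conditioning set A = A_{i,0} ∪ A_{i,1}
    rest = frozenset(omega) - cond       # classes unseen by this dichotomizer
    out = {}
    for focal, mass in bba.items():
        target = focal | rest
        out[target] = out.get(target, 0.0) + mass
    return out

def conjunctive(m1, m2):
    """Unnormalized conjunctive combination of two bbas on the same frame."""
    out = {}
    for (f1, w1), (f2, w2) in product(m1.items(), m2.items()):
        inter = f1 & f2                  # mass flows to the intersection
        out[inter] = out.get(inter, 0.0) + w1 * w2
    return out
```

After deconditioning each m_i^b to the full frame Ω, the l extended bbas can be folded together with `conjunctive`, and the mass landing on the empty set then measures the conflict between dichotomizers.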