2012
DOI: 10.1016/j.knosys.2011.10.010

Combining complementary information sources in the Dempster–Shafer framework for solving classification problems with imperfect labels

Cited by 31 publications (13 citation statements)
References 22 publications
“…In order to assign labels to the uncertain data in each of the selected feature spaces the approach proposed in [6] is employed. Let Ω = {ω 1 ,…,ω M } be a set of M classes and x be an unlabeled data point described by n features.…”
Section: Label Assignment
confidence: 99%
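The label-assignment statement above works with Dempster–Shafer mass functions (basic probability assignments) over the frame of classes Ω = {ω1, …, ωM}. As an illustrative sketch only, not the exact procedure of the cited approach [6], combining two such mass functions with Dempster's rule of combination looks like this (the focal elements and mass values below are made up):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (mass functions) whose
    focal elements are frozensets of class labels, using Dempster's rule:
    intersect focal elements, multiply masses, renormalize by 1 - conflict."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: the sources cannot be combined")
    k = 1.0 - conflict
    return {s: m / k for s, m in combined.items()}

# Two hypothetical sources over a frame of two classes
omega = frozenset({"w1", "w2"})
m1 = {frozenset({"w1"}): 0.6, omega: 0.4}
m2 = {frozenset({"w1"}): 0.5, frozenset({"w2"}): 0.3, omega: 0.2}
m = dempster_combine(m1, m2)  # combined masses sum to 1
```

Mass left on the full frame Ω expresses ignorance, which is what makes the framework suitable for the imperfect-label setting the paper addresses.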
“…The range of examined K-nearest neighbors for classifiers based on the LMB was 1 to 20, and the best test results of the classifiers are shown. Clearly, the overall performance of the proposed method is better than that of the other listed classifiers, and it is able to provide more satisfactory results than the GDA+FLD used in the initial structure of the rtCAB system.…”
Section: Performance Comparison
confidence: 99%
“…Finally, there are also some approaches, referred to here as hybrid ones, which combine features and properties of the abovementioned classes, e.g. by creating probabilistic models of label noise and then using this information to improve the noise-tolerance of the classifier during its training (Bouveyron & Girard, 2009; Rebbapragada & Brodley, 2007; Tabassian, Ghaderi, & Ebrahimpour, 2012a; Tabassian, Ghaderi, & Ebrahimpour, 2012b; Wang et al., 2012).…”
Section: Introduction
confidence: 99%
“…Attribute reduction is regarded as a special type of feature selection in rough set theory [48,52]. Much has been made of rough set theory for this purpose as it is completely data-driven and no additional information is required such as probability distributions in statistics [47], basic probability assignments in Dempster-Shafer theory [23], or a grade of membership in fuzzy set theory [21,22,27]. Rough set theory is an extension of the classical set theory to deal with imprecise, uncertain and vague information [5,38].…”
Section: Introduction
confidence: 99%
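The statement above notes that rough set theory, unlike Dempster–Shafer or fuzzy set theory, needs no external parameters: a concept is characterized purely by the lower and upper approximations induced by the data's equivalence classes. A minimal sketch of those two approximations, with made-up data and not taken from any cited paper:

```python
def approximations(equiv_classes, target):
    """Lower and upper approximations of a target concept, given a
    partition of the universe into equivalence classes (rough set theory)."""
    lower, upper = set(), set()
    for block in equiv_classes:
        if block <= target:   # block lies entirely inside the concept
            lower |= block
        if block & target:    # block overlaps the concept at all
            upper |= block
    return lower, upper

# Hypothetical universe {1..6}, partitioned by some attribute into 3 blocks
blocks = [{1, 2}, {3, 4}, {5, 6}]
lower, upper = approximations(blocks, target={1, 2, 3})
boundary = upper - lower  # objects that cannot be classified with certainty
```

The nonempty boundary region is exactly what makes the concept "rough", and is what attribute reduction in the quoted papers tries to preserve while discarding redundant features.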