2001 · DOI: 10.1111/0824-7935.00162
Incomplete Information Tables and Rough Classification

Abstract: The rough set theory, based on the original definition of the indiscernibility relation, is not useful for analysing incomplete information tables where some values of attributes are unknown. In this paper we distinguish two different semantics for incomplete information: the "missing value" semantics and the "absent value" semantics. The already known approaches, e.g. based on the tolerance relations, deal with the missing value case. We introduce two generalisations of the rough sets theory to handle these s…
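Under the "missing value" semantics mentioned in the abstract, an unknown value is treated as compatible with any value, which replaces the indiscernibility relation with a tolerance relation. A minimal sketch of that idea, with hypothetical names and `"*"` standing for a missing value:

```python
# Sketch of a tolerance relation for an incomplete information table under
# the "missing value" semantics: a missing value ("*") matches anything.
# Object and attribute names here are illustrative, not from the paper.

MISSING = "*"

def tolerant(x, y, attributes, table):
    """x and y are tolerant if, on every attribute, their values are
    equal or at least one of them is missing."""
    return all(
        table[x][a] == table[y][a]
        or table[x][a] == MISSING
        or table[y][a] == MISSING
        for a in attributes
    )

def tolerance_class(x, universe, attributes, table):
    """All objects tolerant with x. The relation is reflexive and
    symmetric but, unlike indiscernibility, not transitive."""
    return {y for y in universe if tolerant(x, y, attributes, table)}

# Toy incomplete table: three objects, two attributes.
table = {
    "o1": {"a": 1, "b": MISSING},
    "o2": {"a": 1, "b": 2},
    "o3": {"a": 3, "b": 2},
}
universe = set(table)
attrs = ["a", "b"]

print(sorted(tolerance_class("o1", universe, attrs, table)))  # ['o1', 'o2']
```

Note how `o1` and `o2` are tolerant through the missing value of `b`, while `o1` and `o3` are separated by attribute `a`; lower and upper approximations are then built from these tolerance classes instead of equivalence classes.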

Cited by 317 publications (140 citation statements) · References 14 publications
“…To test and compare the performances of the MRPR algorithms and traditional heuristic attribute reduction algorithms based on positive region (RPR) [17] and A general improved feature selection algorithm based on the positive region (FSPR) [18], we download six data sets from UCI [21]. All these data sets are outlined in Table 3.…”
Section: Algorithm A Structure Discernibility Matrix Reduction Algorithm
confidence: 99%
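The positive-region-based reduction algorithms this citation compares against all optimise the same quantity: the fraction of objects whose indiscernibility class falls entirely inside one decision class. A hedged sketch for a complete table, with illustrative names not taken from the cited papers:

```python
# Sketch of the positive region and the dependency degree gamma(B) that
# heuristic attribute-reduction algorithms preserve. Complete table assumed.

def ind_class(x, B, table):
    """B-indiscernibility class of x: objects agreeing with x on all of B."""
    return {y for y in table if all(table[y][a] == table[x][a] for a in B)}

def positive_region(B, table, decision):
    """Objects whose B-class lies wholly inside one decision class."""
    pos = set()
    for x in table:
        cls = ind_class(x, B, table)
        if len({decision[y] for y in cls}) == 1:
            pos |= cls
    return pos

def dependency(B, table, decision):
    """gamma(B) = |POS_B(d)| / |U|; a reduct is a minimal B preserving it."""
    return len(positive_region(B, table, decision)) / len(table)

# Toy table: attribute "a" alone cannot separate the decisions.
table = {
    "o1": {"a": 0, "b": 0},
    "o2": {"a": 0, "b": 1},
    "o3": {"a": 0, "b": 0},
}
decision = {"o1": "yes", "o2": "no", "o3": "no"}

print(dependency(["a"], table, decision))       # 0.0
print(positive_region(["a", "b"], table, decision))  # only o2 is consistent
```

Here `{a, b}` still leaves `o1` and `o3` indiscernible with conflicting decisions, so only `o2` enters the positive region; reduction algorithms search for the smallest attribute subset that keeps gamma unchanged.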
“…Decision theoretic approaches have also been used in order to induce classification rules from (partially) inconsistent and/or incomplete data bases [333]. For other examples, see [400,401,402,403]. How can we take advantage of new procedures for combination to build better learning algorithms?…”
Section: Large Databases and Inference
confidence: 99%
“…The most convenient way to define the characteristic relation is through the characteristic sets. For decision tables, in which all missing attribute values are lost, a special characteristic relation was defined in [18], see also, e.g., [17,19].…”
Section: Blocks of Attribute-Value Pairs
confidence: 99%
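The characteristic sets this citation refers to handle "lost" values, which constrain nothing: a lost value contributes the whole universe for its attribute, and an object with a lost value for `a` never appears in any block of `a`. A minimal sketch under those assumptions, with hypothetical names and `"?"` marking a lost value:

```python
# Sketch of characteristic sets for "lost" attribute values ("?").
# K(x, B) intersects, over the attributes in B, the block of (a, a(x)),
# using the whole universe when a(x) is lost.

LOST = "?"

def block(a, v, table):
    """Block of the pair (a, v): objects with value v on attribute a.
    Objects whose value of a is lost belong to no block of a."""
    return {y for y in table if table[y][a] == v}

def characteristic_set(x, B, table):
    universe = set(table)
    K = set(universe)
    for a in B:
        v = table[x][a]
        K &= universe if v == LOST else block(a, v, table)
    return K

# Toy table: o2 has a lost value on attribute "a".
table = {
    "o1": {"a": 1, "b": 2},
    "o2": {"a": LOST, "b": 2},
    "o3": {"a": 1, "b": 3},
}

print(sorted(characteristic_set("o1", ["a", "b"], table)))  # ['o1']
print(sorted(characteristic_set("o2", ["a", "b"], table)))  # ['o1', 'o2']
```

The asymmetry is visible above: `o2`'s characteristic set contains `o1`, but not vice versa, which is why the resulting characteristic relation is reflexive but generally neither symmetric nor transitive.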
“…Recently rough set theory was extended to handle incomplete data sets (with missing attribute values) [1][2][3][4][5][6][7][8][9][17][18][19][20].…”
Section: Introduction
confidence: 99%