2009
DOI: 10.1109/tfuzz.2009.2026639
Are More Features Better? A Response to "Attributes Reduction Using Fuzzy Rough Sets"

Abstract: A recent TRANSACTIONS ON FUZZY SYSTEMS paper proposing a new fuzzy-rough feature selector (FRFS) has claimed that the more attributes remain in datasets, the better the approximations and hence resulting models. [Tsang et al., IEEE Trans.

Cited by 41 publications (16 citation statements)
References 25 publications (42 reference statements)
“…Therefore, less storage capacity and lower computational cost were required in the proposed approach compared to the traditional condensing-by-clustering approach [53][54][55]. To assess the representative power of the extracted generalized exemplars from the employed dataset, unpaired t-tests were employed and the achieved P-values are summarized in Table 12.…”
Section: Time-Ordered Datasets
Confidence: 99%
“…Just adding features to a model does not guarantee that it will get better. Additional features might represent redundant information, which would not translate into more accurate classifiers for certain machine learning models, or worse, they would contribute to the curse of dimensionality [25]. In order to make sure we are adding meaningful information we further analyzed our features.…”
Section: Measuring the Quality of Sparcfire Features
Confidence: 99%
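The effect this citation statement describes — that appending uninformative features can actively hurt a classifier — is easy to reproduce. The following is a minimal sketch, not from any of the cited papers: synthetic two-class data with one informative feature, a plain 1-nearest-neighbour classifier, and 50 appended pure-noise features (all data and dimensions chosen for illustration only).

```python
import numpy as np

rng = np.random.default_rng(0)

# Two classes separated along a single informative feature.
n = 200
X_info = np.vstack([rng.normal(-1.0, 1.0, (n, 1)),
                    rng.normal(+1.0, 1.0, (n, 1))])
y = np.array([0] * n + [1] * n)

def nn_accuracy(X, y, n_test=100):
    """Hold out n_test points and score a 1-nearest-neighbour classifier."""
    idx = rng.permutation(len(y))
    te, tr = idx[:n_test], idx[n_test:]
    # Squared Euclidean distances between every test and training point.
    d = ((X[te][:, None, :] - X[tr][None, :, :]) ** 2).sum(-1)
    return (y[tr][d.argmin(axis=1)] == y[te]).mean()

acc_clean = nn_accuracy(X_info, y)

# Append 50 pure-noise features: same labels, no new information.
X_noisy = np.hstack([X_info, rng.normal(0.0, 1.0, (2 * n, 50))])
acc_noisy = nn_accuracy(X_noisy, y)

print(f"informative only: {acc_clean:.2f}, with 50 noise features: {acc_noisy:.2f}")
```

With the noise features included, the nearest-neighbour distances are dominated by irrelevant dimensions and accuracy falls toward chance, illustrating why "more attributes" does not imply better approximations or models.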
“…In addition, not all such features may be useful to perform classification [12], [15], [23], [25]. Due to measurement noise, the use of extra features may even reduce the overall representational potential of the feature set and hence the classification accuracy [16]. It is therefore often necessary to employ a method that can determine the most significant features, based on sample measurements, to simplify the classification process while ensuring high classification performance.…”
Section: Introduction
Confidence: 99%
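The kind of method this statement calls for — ranking features by significance from sample measurements — can be sketched with a simple Fisher-style filter score. This is a generic illustration, not the fuzzy-rough selector discussed in the paper; the data, the 1.5-unit class shift, and the score formula are all assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-class data: columns 0-2 are informative, columns 3-22 are noise.
n = 300
y = rng.integers(0, 2, n)
X = rng.normal(0.0, 1.0, (n, 23))
X[:, :3] += y[:, None] * 1.5  # shift the informative columns per class

def fisher_scores(X, y):
    """Per-feature score: squared between-class mean gap over within-class variance."""
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    v0, v1 = X[y == 0].var(0), X[y == 1].var(0)
    return (m0 - m1) ** 2 / (v0 + v1 + 1e-12)

scores = fisher_scores(X, y)
top3 = np.argsort(scores)[::-1][:3]
print(sorted(top3.tolist()))  # the informative columns should rank highest
```

Keeping only the top-ranked features gives a smaller input space for the classifier, which is the simplification the cited introduction motivates; more sophisticated selectors (including fuzzy-rough ones) replace the score, not the overall filter-then-classify structure.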