2008 3rd International Conference on Innovative Computing Information and Control 2008
DOI: 10.1109/icicic.2008.7
A Approach for Text Classification Feature Dimensionality Reduction and Rule Generation on Rough Set

Cited by 6 publications (3 citation statements)
References 16 publications
“…First, storing so many features requires a vast amount of memory storage. Second, high dimensionality might lead to worse performance or even overfitting, when dimensionality increases while training samples remain fixed [ 37 , 38 ]. Overfitting might be a direct result of this so-called curse of dimensionality.…”
Section: Methodsmentioning
confidence: 99%
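The memory point above can be made concrete with a back-of-envelope calculation (the corpus and vocabulary sizes below are hypothetical, chosen only for illustration): a dense term-document matrix grows linearly with the vocabulary, so even a modest corpus becomes expensive to store without dimensionality reduction.

```python
# Hypothetical sizes: 10,000 documents over a 100,000-term vocabulary,
# stored as a dense float64 matrix (8 bytes per entry).
docs, vocab = 10_000, 100_000
bytes_dense = docs * vocab * 8      # total bytes for one dense matrix
gib = bytes_dense / 2**30           # convert to GiB
print(round(gib, 2))                # ≈ 7.45 GiB for a single dense matrix
```

Sparse storage or a reduced feature set shrinks this by orders of magnitude, which is the practical motivation the quoted passage gives for feature dimensionality reduction.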
“…For S=(U,A,V,f), if the research object attribute set A consists of a conditional attribute set C and a decision attribute set D, that is, A = C ∪ D and C ∩ D = ∅, then the information system S is called a decision table [12].…”
Section: Rough Set Theorymentioning
confidence: 99%
“…It is advantageous in improving the efficiency of selecting a feature subset and is suitable for high-volume text classification. Its accuracy is higher and its classification speed faster than classification based on vector-space comparison [6].…”
Section: Related Workmentioning
confidence: 99%
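The rough-set approach to feature subset selection praised above can be sketched as follows. This is a simplified illustration (not the paper's exact algorithm, and the toy data is hypothetical): it searches for the smallest condition-attribute subset that still determines the decision, which is the core idea behind a rough-set reduct.

```python
from itertools import combinations

# Toy decision table: each row maps hypothetical features f1..f3 to a label.
rows = [
    {"f1": 1, "f2": 0, "f3": 1, "label": "spam"},
    {"f1": 1, "f2": 1, "f3": 1, "label": "spam"},
    {"f1": 0, "f2": 0, "f3": 0, "label": "ham"},
    {"f1": 0, "f2": 1, "f3": 0, "label": "ham"},
]
features = ["f1", "f2", "f3"]

def consistent(attrs):
    """True if objects indiscernible on attrs always share one label."""
    seen = {}
    for r in rows:
        key = tuple(r[a] for a in attrs)
        if seen.setdefault(key, r["label"]) != r["label"]:
            return False
    return True

# Search subsets smallest-first; the first consistent one is a reduct-like set.
reduct = next(
    list(s)
    for k in range(1, len(features) + 1)
    for s in combinations(features, k)
    if consistent(s)
)
print(reduct)  # f1 alone already determines the label here → ['f1']
```

Dropping the remaining features loses no classification information on this table, which is how rough-set reduction shrinks a text-classification feature space before rule generation.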