Maximizing text-mining performance (1999)
DOI: 10.1109/5254.784086

Cited by 157 publications (63 citation statements)
References 9 publications
“…They obtained results comparable to the best results of Support Vector Machines and k-Nearest Neighbor methods [26,25,20], and performed better than Sleeping-experts, Rocchio, Naive Bayes and PrIF/DF [22].…”
Section: Introduction (supporting; confidence: 64%)
“…The parameter k can be obtained through cross-validation on the training portion, maximizing classification performance indicators such as sensitivity and specificity, among others. Further details on KNN can be found in Duda et al. [17], while examples of applications appear in Golub et al. [18], Weiss et al. [19] and Chaovalitwongse et al. [20].…”
Section: Theoretical Framework (unclassified)
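The k-selection procedure this excerpt describes is ordinary cross-validated hyperparameter search. Below is a minimal sketch assuming scikit-learn; the synthetic data, the candidate grid of k values, and the balanced-accuracy metric (used here as a stand-in for jointly tracking sensitivity and specificity) are illustrative assumptions, not details taken from the cited works.

```python
# Sketch of choosing k for k-NN by cross-validation on the training portion.
# Library, data, and metric are illustrative assumptions (see lead-in).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Toy training portion; in practice this is the study's training split.
X_train, y_train = make_classification(n_samples=200, n_features=10,
                                       random_state=0)

# Cross-validate over candidate k values, maximizing balanced accuracy
# (a proxy for optimizing sensitivity and specificity together).
search = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": [1, 3, 5, 7, 9, 11]},
    scoring="balanced_accuracy",
    cv=5,
)
search.fit(X_train, y_train)
print("best k:", search.best_params_["n_neighbors"])
```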
“…Every classifier has its own inductive bias, which affects its predictive performance. Research results indicate that ensembles perform better than a single classifier in fields such as text categorization [11] and data classification. Several rules can be used to combine the members' decisions into a final prediction: majority voting, as in bagging [4]; weighted combination, where the weights represent the effectiveness of the member classifiers, as in boosting [4]; dynamic classifier selection [12], [13]; and veto voting [9], [10].…”
Section: Veto-based Classification (mentioning; confidence: 99%)
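Of the combination rules this excerpt lists, majority voting is the simplest to illustrate. Below is a hedged sketch using scikit-learn's VotingClassifier; the member classifiers and synthetic data are illustrative assumptions, not those of the cited studies.

```python
# Sketch of majority (hard) voting over classifiers with different
# inductive biases; members and data are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Hard voting outputs the class predicted by the majority of members.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("nb", GaussianNB()),
                ("dt", DecisionTreeClassifier(random_state=0))],
    voting="hard",
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```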