Proceedings of the 23rd International Conference on Machine Learning (ICML '06), 2006
DOI: 10.1145/1143844.1143984
Efficient lazy elimination for averaged one-dependence estimators

Abstract: Semi-naive Bayesian classifiers seek to retain the numerous strengths of naive Bayes while reducing error by relaxing the attribute independence assumption. Backwards Sequential Elimination (BSE) is a wrapper technique for attribute elimination that has proved effective at this task. We explore a new technique, Lazy Elimination (LE), which eliminates highly related attribute-values at classification time without the computational overheads inherent in wrapper techniques. We analyze the effect of LE and BSE on …
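As a rough illustration of the LE rule described in the abstract, the sketch below drops an attribute value x_i whenever another value x_j in the same instance empirically entails it (P(x_i | x_j) = 1 in the training data), so only the more specific value takes part in classification. This is a minimal sketch, not the paper's implementation: the count structures pair_count and value_count and the min_support threshold are assumptions.

```python
from collections import Counter

def lazy_eliminate(instance, pair_count, value_count, min_support=30):
    """Sketch of Lazy Elimination (LE) at classification time.

    instance     -- list of attribute values (index = attribute).
    value_count  -- Counter: (attr_j, x_j) -> training frequency (assumed).
    pair_count   -- Counter: (attr_i, x_i, attr_j, x_j) -> co-occurrence
                    frequency (assumed).
    min_support  -- hypothetical frequency floor to avoid acting on
                    unreliable estimates.
    """
    keep = set(range(len(instance)))
    for i, xi in enumerate(instance):
        for j, xj in enumerate(instance):
            if i == j or j not in keep:
                continue
            n_xj = value_count[(j, xj)]
            # x_i is a generalization of x_j if it occurs in every
            # training example in which x_j occurs.
            if n_xj >= min_support and pair_count[(i, xi, j, xj)] == n_xj:
                keep.discard(i)  # drop the generalization, keep x_j
                break
    return sorted(keep)  # indices of attribute values to classify with
```

Because the test runs per instance, no retraining is needed: the classifier is simply evaluated over the surviving attribute values, which is the source of LE's cost advantage over wrapper methods.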

Cited by 50 publications (63 citation statements) · References 16 publications
“…Only selected SPODEs will be included in the ensemble. Previous research has suggested that cross validation, forward sequential addition and lazy elimination are more effective than alternative selection methods for SPODEs [22,23].…”
Section: Model Selection Schemes (mentioning)
Confidence: 99%
“…When there are many more attributes than those that participate in a particular inter-dependency, the majority of ODEs will not factor out the inter-dependency, and hence it is credible that deleting one of the attributes should be beneficial. Why then have previous attempts [15,19] to apply attribute-selection to AODE proved unfruitful? One difference between applying attribute selection in NB compared to AODE may be the greater complexity of an AODE model, resulting in greater variance in estimates of performance as the model is manipulated through attribute elimination and hence reduced reliability in these estimates.…”
Section: Attribute Selection for AODE (mentioning)
Confidence: 99%
“…The second approach, called child elimination (CE), deletes attribute indexes from c, effectively deleting an attribute from within every ODE at each step. Parent and child elimination (P∧CE) [15] at each step deletes the same value from both p and c, thus eliminating it from use in any role in the classifier. Parent or child elimination (P∨CE) performs any one of the other types of attribute eliminations in each iteration, selecting the option that best reduces error.…”
Section: BSE for AODE (mentioning)
Confidence: 99%
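To make the elimination moves in this excerpt concrete, the sketch below performs one step of parent-or-child elimination (P∨CE) over a parent index set p and child index set c, generating every PE, CE, and P∧CE candidate and keeping the one with the lowest estimated error. The evaluate(p, c) callable (e.g., leave-one-out error of the restricted AODE ensemble) is a hypothetical placeholder, not an API from the paper.

```python
def pvce_step(p, c, evaluate):
    """One parent-or-child elimination (P∨CE) step.

    p, c     -- sets of attribute indexes usable as SPODE parents / children.
    evaluate -- hypothetical callable (p, c) -> estimated error, e.g.
                leave-one-out error over the training data.
    Returns the (p, c) pair, among no change and all PE, CE, and P∧CE
    moves, that minimizes estimated error.
    """
    candidates = [(p, c)]  # baseline: no elimination (stopping condition)
    for a in p:
        candidates.append((p - {a}, c))        # PE: delete a parent role
    for a in c:
        candidates.append((p, c - {a}))        # CE: delete a child role
    for a in p & c:
        candidates.append((p - {a}, c - {a}))  # P∧CE: delete both roles
    return min(candidates, key=lambda pc: evaluate(*pc))
```

Repeating this step until the baseline wins reproduces the backwards search the excerpt describes, with each iteration deleting the attribute role whose removal best reduces error.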