2014
DOI: 10.1007/978-3-319-06605-9_8
Highly Scalable Attribute Selection for Averaged One-Dependence Estimators

Abstract: Averaged One-Dependence Estimators (AODE) is a popular and effective approach to Bayesian learning. In this paper, a new attribute selection approach is proposed for AODE. It can search a large model space while requiring only a single extra pass through the training data, resulting in a computationally efficient two-pass learning algorithm. The experimental results indicate that the new technique significantly reduces AODE's bias at the cost of a modest increase in training time. Its low bias …
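For readers unfamiliar with the base model, the AODE classifier that the paper's attribute selection builds on can be sketched as follows. This is a minimal illustrative implementation for discrete attributes with Laplace smoothing, not the authors' code; class and method names are my own, and the standard minimum-frequency threshold on super-parents is omitted for brevity.

```python
from collections import defaultdict

class AODE:
    """Minimal Averaged One-Dependence Estimators sketch for discrete data.

    P(y, x) is estimated as the average, over super-parent attributes i, of
    P(y, x_i) * prod_{j != i} P(x_j | y, x_i), from Laplace-smoothed counts.
    """

    def fit(self, X, y):
        self.n = len(X)
        self.n_attrs = len(X[0])
        self.classes = sorted(set(y))
        self.c_yiv = defaultdict(int)    # counts of (class, attr i, value)
        self.c_yivjw = defaultdict(int)  # counts of (class, i, v_i, j, v_j)
        self.vals = [set() for _ in range(self.n_attrs)]
        for xi, yi in zip(X, y):
            for i, v in enumerate(xi):
                self.vals[i].add(v)
                self.c_yiv[(yi, i, v)] += 1
                for j, w in enumerate(xi):
                    self.c_yivjw[(yi, i, v, j, w)] += 1
        return self

    def predict(self, x):
        best, best_score = None, -1.0
        for c in self.classes:
            score = 0.0
            for i, v in enumerate(x):  # each attribute acts as super-parent
                # Laplace-smoothed estimate of P(y, x_i)
                p = (self.c_yiv[(c, i, v)] + 1.0) / (
                    self.n + len(self.classes) * len(self.vals[i]))
                for j, w in enumerate(x):
                    if j == i:
                        continue
                    # smoothed estimate of P(x_j | y, x_i)
                    p *= (self.c_yivjw[(c, i, v, j, w)] + 1.0) / (
                        self.c_yiv[(c, i, v)] + len(self.vals[j]))
                score += p
            if score > best_score:
                best, best_score = c, score
        return best
```

Averaging over all super-parents is what makes plain AODE a one-pass, single-parameter-scan learner; the paper's contribution is selecting among these attributes with one extra pass.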

Cited by 12 publications (6 citation statements)
References 11 publications
“…On the one hand, it is worthwhile to explore the technique of selecting both parents and children. On the other hand, we can also combine the fast attribute selection technique based on leave one out cross validation proposed in [4] to further improve the classification accuracy.…”
Section: Results
confidence: 99%
“…At the same time, we want to obtain a thorough idea of how selective AnDE compares with other improvements of AODE. As the study in [4] shows that ASAODE outperforms such improvements of AODE as weightily AODE, AODE with Subsumption Resolution, and AODE with BSE, we add ASAODE here to give a thorough comparison.…”
Section: Selective AnDE Compared to AnDE and ASAODE
confidence: 98%
“…Because the main purpose of this paper is to design a good weighting model for ensembling SPODEs, and our analysis on the 56 benchmark data sets has already demonstrated the performance of the proposed SODE, we use the text and image learning tasks to demonstrate the generality of SODE for different applications. Moreover, the SPODE model (e.g., AODE) has already been improved for the highly scalable attribute problem in [10]. Also, SPODE models can be trained incrementally [5].…”
Section: Text Categorization Task
confidence: 99%
“…• Attribute selection [16] performs attribute (parent, children, or both) eliminations in each iteration to reduce zero-one loss.…”
Section: Introduction
confidence: 99%
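The per-iteration attribute elimination mentioned in the last statement is, in general form, a greedy backward search driven by zero-one loss. A generic sketch is below; the `train_error` callback is a hypothetical interface standing in for "evaluate the model restricted to this attribute subset", and nothing here is taken from [16]:

```python
def backward_eliminate(train_error, attrs):
    """Greedy backward elimination over an attribute set.

    In each iteration, drop the first attribute whose removal lowers the
    zero-one loss reported by `train_error(subset)` (a hypothetical
    evaluation callback); stop when no single removal helps.
    Returns the selected subset and its loss.
    """
    current = set(attrs)
    best_err = train_error(current)
    improved = True
    while improved and current:
        improved = False
        for a in sorted(current):
            candidate = current - {a}
            err = train_error(candidate)
            if err < best_err:  # removal reduced zero-one loss: keep it
                best_err, current, improved = err, candidate, True
                break
    return current, best_err
```

For AODE-family models, `train_error` would typically be a leave-one-out cross-validation estimate, which the incremental count tables make cheap to compute; that is the efficiency point the first citation statement alludes to.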