2007
DOI: 10.1007/s10844-006-0016-x

Bayesian networks for imputation in classification problems

Cited by 55 publications (20 citation statements). References 24 publications.

“…Although they stress that MC and CMC are the least useful approaches, our results indicate that, in the case of K-NN, the contrary holds when more data sets are analyzed. On the other hand, Hruschka et al. [28] find that the best imputation method depends on the data set for C4.5, PART, and NB. However, due to the limited number of data sets used, these results are not as general as ours.…”
Section: Results for all the classification methods
Citation type: mentioning (confidence: 99%)
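
For reference, MC (most common value imputation) fills a missing entry with the attribute's overall mode, while CMC (concept most common) restricts the mode to records of the same class. The following is a minimal pandas sketch of both baselines; the function names and toy data are illustrative and not taken from the cited papers.

```python
import pandas as pd

def mc_impute(df: pd.DataFrame, class_col: str) -> pd.DataFrame:
    # MC (Most Common): replace missing entries of each attribute
    # with that attribute's overall mode, ignoring the class label.
    out = df.copy()
    for col in out.columns.drop(class_col):
        out[col] = out[col].fillna(out[col].mode().iloc[0])
    return out

def cmc_impute(df: pd.DataFrame, class_col: str) -> pd.DataFrame:
    # CMC (Concept Most Common): replace missing entries with the mode
    # computed only over records sharing the same class label.
    out = df.copy()
    for col in out.columns.drop(class_col):
        out[col] = out.groupby(class_col)[col].transform(
            lambda s: s.fillna(s.mode().iloc[0]) if s.notna().any() else s)
    return out
```

For example, with pd.DataFrame({"outlook": ["sunny", None, "rain"], "play": ["yes", "yes", "no"]}), cmc_impute(df, "play") fills the missing outlook with "sunny", the mode among the "yes" records.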

“…Hruschka et al. [28] propose two imputation methods based on Bayesian networks. They compare them with four classical imputation methods (EM, Data Augmentation, C4.5, and the CMC method) using four nominal data sets from the UCI repository [3] with natural MVs, while also inducing MVs in them.…”
Section: An overview of the analysis of imputation methods in the literature
Citation type: mentioning (confidence: 99%)

“…Because the Bayesian network models the dependencies among all the features, it has proved to be effective in handling missing values in the dataset [35, 36]. Bayesian statistical methods can smooth the model so that all available data can be used for training; thus, this process efficiently addresses the overfitting issue.…”
Section: Image processing and texture analysis
Citation type: mentioning (confidence: 99%)
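
As a concrete illustration of this idea, a Bayesian network fitted to the discrete attributes can impute a missing entry through a MAP query conditioned on the values that were observed. The sketch below assumes the pgmpy library; the toy structure, column names, and data are hypothetical and not taken from the cited study.

```python
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Toy complete records used to learn the conditional probability tables.
train = pd.DataFrame({
    "outlook": ["sunny", "sunny", "rain", "rain", "overcast", "rain"],
    "windy":   ["no", "yes", "no", "yes", "no", "no"],
    "play":    ["yes", "no", "yes", "no", "yes", "yes"],
})

# Hand-chosen structure: each attribute depends on the class variable,
# so the network encodes the dependencies exploited for imputation.
model = BayesianNetwork([("play", "outlook"), ("play", "windy")])
model.fit(train, estimator=MaximumLikelihoodEstimator)

# Impute the missing 'outlook' of a new record via a MAP query
# conditioned on its observed attributes.
inference = VariableElimination(model)
imputed = inference.map_query(variables=["outlook"],
                              evidence={"windy": "no", "play": "yes"},
                              show_progress=False)
print(imputed)  # {'outlook': 'rain'} for this toy data
```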

“…The vast majority of MV studies in classification analyze and compare one imputation method against a few others under controlled amounts of MVs, inducing them artificially with known mechanisms and probability distributions (Acuna and Rodriguez 2004; Batista and Monard 2003; Farhangfar et al. 2008; Hruschka Jr. et al. 2007; Li et al. 2004; Luengo et al. 2010).…”
Section: Introduction
Citation type: mentioning (confidence: 99%)