2010
DOI: 10.1016/j.neunet.2009.07.029
Asymptotic analysis of Bayesian generalization error with Newton diagram

Cited by 43 publications (19 citation statements). References 17 publications.
“…For several statistical models, the coefficient λ or its upper bound was evaluated by analyzing the pole of the zeta function (Yamazaki and Watanabe 2003a, 2003b; Rusakov and Geiger 2005; Aoyagi and Watanabe 2005; Watanabe 2009; Yamazaki et al. 2010). The condition that the true distribution is contained in the model is natural and essential for dealing with model selection problems, which is in fact assumed in these analyses (Schwarz 1978; Yamazaki and Watanabe 2003a, 2003b; Rusakov and Geiger 2005; Aoyagi and Watanabe 2005; Yamazaki et al. 2010).…”
Section: Bayesian Learning (mentioning)
confidence: 99%
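For context, the zeta-function method referenced in this excerpt can be sketched as follows; this is the standard setting of singular learning theory, not a derivation specific to the cited paper. The zeta function of a learning problem is

  \zeta(z) = \int K(w)^z \varphi(w) \, dw,

where K(w) is the Kullback-Leibler divergence from the true distribution to the model at parameter w and \varphi(w) is the prior. Its largest pole z = -\lambda, of multiplicity m, determines the learning coefficient, and the Bayesian generalization error satisfies E[G_n] = \lambda/n + o(1/n). As a minimal worked example, for K(w) = w_1^2 w_2^2 with a uniform prior on [0,1]^2,

  \zeta(z) = \int_0^1 \int_0^1 w_1^{2z} w_2^{2z} \, dw_1 \, dw_2 = \frac{1}{(2z+1)^2},

so the largest pole is z = -1/2 with multiplicity 2, i.e. \lambda = 1/2 and m = 2.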
“…The singularities in the parameter space play an important role in determining the performance. Mathematical approaches for revealing the structures of singularities have been developed [7], [8]. Based on the algebraic geometrical method, singularities in many models have been analyzed [9]-[17].…”
Section: Introduction (mentioning)
confidence: 99%
“…The reason why we focus only on such singularities is that the Vandermonde matrix type is generic and essential in learning theory. These log canonical thresholds give the learning coefficients of normal mixture models, three-layered neural networks and mixtures of binomial distributions, which are widely used as effective learning models (Sections 3.1 and 3.2 and [13]). Moreover, we prove Theorem 2 (the method for finding a deepest singular point) and Theorem 3 (the method to add variables), which are very useful for obtaining the log canonical threshold in the homogeneous case.…”
Section: Introduction (mentioning)
confidence: 99%
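To make the link between log canonical thresholds and learning coefficients concrete, here is a minimal illustrative sketch in Python; it is not code from the cited paper, the function name learning_coefficient is ours, and it assumes the Kullback function has already been resolved into normal-crossing (monomial) form, so that the zeta function factorizes and the largest pole can be read off coordinate-wise.

# Illustrative sketch only (assumptions stated above): learning coefficient of a
# normal-crossing Kullback function K(w) = prod_j w_j^(2*k_j) with prior
# phi(w) proportional to prod_j w_j^(h_j) on [0,1]^d. The zeta function factorizes as
#   zeta(z) = prod_j 1/(2*k_j*z + h_j + 1),
# so its largest pole is z = -min_j (h_j + 1)/(2*k_j).
from fractions import Fraction

def learning_coefficient(k, h):
    """Return (lambda, multiplicity) for the monomial case described above."""
    candidates = [Fraction(hj + 1, 2 * kj) for kj, hj in zip(k, h)]
    lam = min(candidates)
    return lam, candidates.count(lam)

# Toy check: K(w) = w1^4 * w2^2 with prior proportional to w2 gives
# zeta(z) = 1/((4z + 1)(2z + 2)), hence lambda = 1/4 with multiplicity 1.
print(learning_coefficient(k=[2, 1], h=[0, 1]))  # -> (Fraction(1, 4), 1)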