Proceedings of the Twelfth Annual Conference on Computational Learning Theory 1999
DOI: 10.1145/307400.307435

PAC-Bayesian model averaging

Cited by 236 publications (219 citation statements)
References 9 publications
“…The appealing aspect of integrating features of PAC-learning with Bayesian inference is that PAC-bounds are robust even when the true hypothesis is not included in the hypothesis space. However, known PAC-Bayesian results only imply that probability matching on the posterior is optimal (i.e., provides the tightest PAC-Bayesian bounds) for tasks that already contain in their description some aspect of probability matching, such as estimating a distribution, selecting a hypothesis stochastically, or providing a weighted average (McAllester, 1999, 2003). In fact, for the task of interest to us, selecting the true hypothesis, PAC-Bayesian considerations imply the same response as rational choice:…”
Section: Changing the Circumstances
confidence: 99%
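
For context, the bound this excerpt refers to can be sketched as follows. This is one common variant of McAllester's PAC-Bayes bound; the exact constants differ between the 1999 and 2003 papers cited above, so treat it as illustrative rather than either paper's exact statement. Here P is the prior over hypotheses, Q any posterior, err the true error, and \widehat{err} the empirical error on an i.i.d. sample of size m. With probability at least 1 - \delta, simultaneously for all Q:

    % One common form of the PAC-Bayes bound (constants vary across versions)
    \mathbb{E}_{h \sim Q}[\mathrm{err}(h)]
      \;\le\;
    \mathbb{E}_{h \sim Q}[\widehat{\mathrm{err}}(h)]
      + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln(m/\delta)}{2(m-1)}}

The KL(Q||P) term is what makes probability matching natural here: the bound governs the Gibbs classifier that samples h from Q, not any single deterministically chosen hypothesis.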
“…the generalization error. Well-known examples of this type of bound are the Vapnik-Chervonenkis bounds (VC bounds) [39], Probably Approximately Correct Bayes bounds (PAC-Bayes bounds) [23], Occam's Razor bounds [7], Sample Compression bounds [11] and Rademacher Complexity bounds [4]. The author infers from the comparison of these two categories that test set bounds are generally much tighter than training set bounds and are a superior tool for reporting error rates.…”
Section: Preliminaries and Motivation
confidence: 99%
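
To make the tightness comparison concrete, here is a minimal Python sketch, not drawn from the cited papers, contrasting a Hoeffding-style test set bound with an Occam's-razor-style training set bound over a finite hypothesis class. The function names and the sample numbers at the bottom are illustrative assumptions.

    import math

    def test_set_bound(test_errors, n_test, delta):
        # Hoeffding-style test set bound: with probability >= 1 - delta,
        # true_error <= empirical_error + sqrt(ln(1/delta) / (2 * n_test)).
        emp = test_errors / n_test
        return emp + math.sqrt(math.log(1 / delta) / (2 * n_test))

    def occam_training_bound(train_errors, n_train, delta, n_hypotheses):
        # Occam's-razor-style training set bound: a union bound over a
        # finite class of n_hypotheses hypotheses inflates the confidence
        # term to ln(n_hypotheses / delta).
        emp = train_errors / n_train
        return emp + math.sqrt(math.log(n_hypotheses / delta) / (2 * n_train))

    # Same empirical error rate and sample size; the training set bound
    # pays an extra ln(n_hypotheses) and comes out looser:
    print(test_set_bound(5, 1000, 0.05))               # ~0.044
    print(occam_training_bound(5, 1000, 0.05, 10**6))  # ~0.097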
“…In [3] improved PAC-Bayes bounds are provided for a class of linear classifiers. These bounds are tighter than the ones previously introduced in [23]. In [6] bounds are provided for a validation technique called progressive validation, which are tighter than those for hold-out-set validation.…”
Section: Related Work
confidence: 99%
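
As a rough illustration of progressive validation, here is a short Python sketch assuming a generic online learner with predict and update hooks; these names are placeholders for illustration, not an API from [6].

    def progressive_validation(examples, predict, update):
        # Progressive validation: each example is first scored by the model
        # trained on all earlier examples, then added to the training set.
        # The mean of these one-step errors estimates generalization error
        # while still letting every example contribute to training.
        errors, n = 0, 0
        for x, y in examples:
            errors += int(predict(x) != y)  # test on (x, y) before training
            update(x, y)                    # then train on (x, y)
            n += 1
        return errors / max(n, 1)

Unlike hold-out validation, no data is permanently withheld from training, which is what makes the tighter bounds in [6] possible.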
“…The algorithm stated here is in a suboptimal form, which is good enough for our purposes (see [McAllester, 1999] for more sophisticated versions): is achieved for at least one c ∈ C = {c_0, c_1, . .…”
Section: A Consistent Algorithm: Proof of Theorem
confidence: 99%