2018
DOI: 10.1111/sjos.12335

Learning from a lot: Empirical Bayes for high‐dimensional model‐based prediction

Abstract: Empirical Bayes is a versatile approach to “learn from a lot” in two ways: first, from a large number of variables and, second, from a potentially large amount of prior information, for example, stored in public repositories. We review applications of a variety of empirical Bayes methods to several well‐known model‐based prediction methods, including penalized regression, linear discriminant analysis, and Bayesian models with sparse or dense priors. We discuss “formal” empirical Bayes methods that maximize the…

Cited by 18 publications (15 citation statements)
References 65 publications
“…Another aspect is that "pile-up" effects can be avoided by choosing a small hotspot propensity variance at the risk of giving up substantial hotspot selection performance. The very sparse nature of molecular QTL analyses also rules out the use of simple empirical Bayes estimates, which typically collapse to the degenerate case σ̂²_ω = 0; see, for example, Scott and Berger (2010) and van de Wiel, Te Beest and Münch (2019). Thus, a tailored solution is needed.…”
Section: Problem Statement
Confidence: 99%
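The degenerate behaviour described in the quote above can be reproduced in a toy normal-means model (a hypothetical sketch with unit noise variance, not the QTL setting of the cited paper): when the signal is very sparse, the sample second moment barely exceeds the noise level, so the simple marginal-MLE of the prior variance is pulled toward, and often exactly onto, zero.

```python
import numpy as np

def eb_prior_variance(x, noise_var=1.0):
    """Marginal MLE of the prior variance s2 in the model
    x_i ~ N(theta_i, noise_var), theta_i ~ N(0, s2).
    Marginally x_i ~ N(0, noise_var + s2), so the MLE is
    max(0, mean(x^2) - noise_var): it collapses to the
    degenerate value 0 whenever the sample second moment
    does not exceed the noise variance."""
    return max(0.0, float(np.mean(np.square(x))) - noise_var)

rng = np.random.default_rng(7)
n = 2000
theta = np.zeros(n)
theta[:5] = 3.0                    # only 5 of 2000 effects are nonzero
x = theta + rng.normal(size=n)     # observed effect estimates
print(eb_prior_variance(x))        # near 0: the sparse signal barely
                                   # lifts the second moment above noise
```

With denser or stronger signals the same estimator behaves sensibly, which is why the quoted paper singles out the very sparse regime as the problematic one.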
“…The intuition behind empirical Bayes and cross-validation is similar: empirical Bayes aims to choose the value of λ that is best at predicting the full data set, while cross-validation aims to choose the value of λ that is best at predicting the validation set given a training set. A possible disadvantage of empirical Bayes and cross-validation is that the (marginal) likelihood can be flat or multimodal when there are multiple penalty parameters (van de Wiel et al, 2017). Throughout this paper, we will focus on the full and empirical Bayes approaches to determining λ, and only consider cross-validation for the frequentist penalization methods we will compare the priors to.…”
Section: Bayesian Penalized Regression
Confidence: 99%
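The parallel drawn in this quote can be made concrete in a minimal ridge-regression sketch (hypothetical simulated data; the noise variance is fixed at 1 for simplicity, and a simple grid search stands in for proper optimization): λ is selected once by maximizing the marginal likelihood (empirical Bayes) and once by minimizing the exact leave-one-out cross-validation error.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 20
X = rng.normal(size=(n, p))
y = X @ rng.normal(scale=0.5, size=p) + rng.normal(size=n)

lambdas = np.logspace(-2, 3, 60)

def log_marginal(lam, sigma2=1.0):
    # Under the EB prior beta ~ N(0, sigma2/lam * I_p), marginally
    # y ~ N(0, sigma2 * (X X^T / lam + I_n)).
    K = sigma2 * (X @ X.T / lam + np.eye(n))
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (logdet + y @ np.linalg.solve(K, y) + n * np.log(2 * np.pi))

def loocv_mse(lam):
    # Exact leave-one-out residuals for ridge via the hat-matrix shortcut.
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - H @ y
    return float(np.mean((resid / (1.0 - np.diag(H))) ** 2))

lam_eb = lambdas[np.argmax([log_marginal(l) for l in lambdas])]
lam_cv = lambdas[np.argmin([loocv_mse(l) for l in lambdas])]
print(lam_eb, lam_cv)
```

Both criteria are one-dimensional and well behaved here; the flatness and multimodality the quote warns about arise when several penalty parameters are tuned jointly.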
“…Empirical Bayes. Empirical Bayes (EB) methods, also known as the "evidence" procedure (see e.g., Wolpert and Strauss, 1996), first estimate the penalty parameter λ from the data and then plug in this EB estimate for λ in the model (see van de Wiel et al, 2017, for an overview of EB methodology in high-dimensional data). The resulting prior is called an EB prior.…”
Section: Bayesian Penalized Regression
Confidence: 99%
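The two-step "estimate, then plug in" recipe in this quote can be sketched for ridge regression (a hypothetical illustration with simulated data and known unit noise variance): step 1 estimates λ̂ by maximizing the marginal likelihood on a grid, and step 2 plugs λ̂ into the model, where the posterior mean under the resulting EB prior is exactly the ridge estimate with penalty λ̂.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 40, 10
X = rng.normal(size=(n, p))
y = X @ rng.normal(scale=0.7, size=p) + rng.normal(size=n)

def neg_log_marginal(lam, sigma2=1.0):
    # Negative log marginal likelihood of lam (constants dropped):
    # marginally y ~ N(0, sigma2 * (X X^T / lam + I_n)).
    K = sigma2 * (X @ X.T / lam + np.eye(n))
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * (logdet + y @ np.linalg.solve(K, y))

# Step 1: EB estimate of the penalty parameter on a log-spaced grid.
grid = np.logspace(-2, 3, 200)
lam_hat = grid[np.argmin([neg_log_marginal(l) for l in grid])]

# Step 2: plug lam_hat in. Under the EB prior beta ~ N(0, sigma2/lam_hat * I),
# the posterior mean of beta is the ridge estimate with penalty lam_hat.
post_mean = np.linalg.solve(X.T @ X + lam_hat * np.eye(p), X.T @ y)
```

The plug-in posterior mean satisfies the penalized normal equations X^T(y − X·post_mean) = λ̂·post_mean, which is the sense in which the EB prior and the penalty are two views of the same fit.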
“…A Bayesian could argue that endowing α with a hyperprior results in propagation of uncertainty about α and, as a result, improved regression parameter uncertainty quantification. Van de Wiel, te Beest, and Münch (2019) show in a similar setting that EB estimation of hyperparameters does not necessarily lead to worse uncertainty quantification, as measured by frequentist coverage of Bayesian credible intervals, compared to a full Bayes treatment of the hyperparameters.…”
Section: Discussion
Confidence: 97%
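The coverage point in this quote can be checked in a toy normal-means experiment (a hypothetical sketch with known unit noise variance, not the analysis of the cited paper): estimate the prior variance by EB, build 95% credible intervals from the plug-in posterior, and measure their frequentist coverage of the true effects.

```python
import numpy as np

rng = np.random.default_rng(11)
n, tau2 = 5000, 1.0
theta = rng.normal(scale=np.sqrt(tau2), size=n)   # true effects
x = theta + rng.normal(size=n)                    # observations, noise var 1

# EB plug-in: marginal MLE of the prior variance tau2.
tau2_hat = max(0.0, float(np.mean(x**2)) - 1.0)
w = tau2_hat / (1.0 + tau2_hat)                   # shrinkage weight

# Plug-in posterior theta_i | x ~ N(w * x_i, w); 95% credible intervals.
half = 1.96 * np.sqrt(w)
lo, hi = w * x - half, w * x + half
coverage = float(np.mean((theta >= lo) & (theta <= hi)))
print(coverage)   # close to the nominal 0.95
```

With many variables the hyperparameter is estimated precisely, so the plug-in intervals cover at close to the nominal rate, which is consistent with the quoted finding that EB need not sacrifice uncertainty quantification relative to full Bayes.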