2014
DOI: 10.1038/hdy.2014.79

Empirical Bayesian elastic net for multiple quantitative trait locus mapping

Abstract: In multiple quantitative trait locus (QTL) mapping, a high-dimensional sparse regression model is usually employed to account for possible multiple linked QTLs. The QTL model may include closely linked, and thus highly correlated, genetic markers, especially when high-density marker maps are used in QTL mapping because of advances in sequencing technology. Although existing algorithms, such as Lasso, empirical Bayesian Lasso (EBlasso) and elastic net (EN), are available to infer such QTL models, more power…

Cited by 29 publications (29 citation statements) | References 32 publications
“…Conversely, marginal posterior estimates (i.e., MMAP) of θ followed by joint posterior modal inference of g (and β) conditional on MMAP(θ) are typical of a more stable EB-based approach to inference with hierarchical models, similar to using REML followed by BLUP (Robinson 1991). Other researchers have taken a completely different approach by treating elements of θ as augmented variables whose uncertainty is accounted for by integrating them out of the joint posterior density, whereas SNP-specific variances (i.e., τ) are considered as parameters to be estimated (Cai et al 2011; Huang et al 2015; Xu 2007). Given that each element of τ defines the relative variance of a single element of g, we are not sure that this is particularly advisable; nevertheless, more rigorous comparisons of their approach with our proposed strategy may be warranted.…”
Section: Discussion
confidence: 99%
“…For example, both EBlasso-NEG [12,13] and our recently developed EBEN [14] have more parameters and require much more computation in cross-validation to identify their optimal values. It would be very useful to have more efficient proximal algorithms for these methods.…”
Section: Discussion
confidence: 99%
“…Recently, we developed an efficient empirical Bayesian Lasso (EBlasso) algorithm using a two-level hierarchical model with normal and exponential priors (EBlasso-NE) or a three-level hierarchical model with normal, exponential and Gamma priors (EBlasso-NEG) [12,13], and an empirical Bayesian elastic net (EBEN) using a two-level hierarchical model with normal and generalized gamma priors [14] for multiple QTL mapping. Both EBlasso and EBEN outperform other shrinkage methods, including Lasso and MCMC-based Bayesian shrinkage methods, in terms of PD and FDR.…”
Section: Introduction
confidence: 99%
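For background on the two-level normal–exponential hierarchy quoted above: the Laplace (Lasso) prior is exactly a scale mixture of normals, which is what makes the EB treatment of the τ variances tractable. With a normal prior on each coefficient given its variance and an exponential prior on that variance, integrating the variance out recovers the Laplace density (a standard identity; notation here is generic rather than the paper's):

```latex
\beta_j \mid \tau_j \sim \mathcal{N}(0, \tau_j), \qquad
\tau_j \sim \mathrm{Exp}\!\left(\tfrac{\lambda^2}{2}\right)
\;\Longrightarrow\;
p(\beta_j) \;=\; \frac{\lambda^2}{2}\int_0^\infty
\frac{1}{\sqrt{2\pi\tau_j}}\,
e^{-\beta_j^2/(2\tau_j)}\,
e^{-\lambda^2 \tau_j/2}\, d\tau_j
\;=\; \frac{\lambda}{2}\, e^{-\lambda\lvert\beta_j\rvert}.
```

The MAP estimate under this marginal prior coincides with the Lasso penalty, which is why the two-level hierarchy is labeled EBlasso-NE.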
“…Thus, the problem of variable selection reduces to the identification of the nonzero regression coefficients (Alhamzawi and Taha Mohammad Ali (2018)). In particular, shrinkage approaches such as the lasso (Tibshirani (1996)), the adaptive lasso (Zou (2006)), the elastic net (Zou and Hastie (2005)) and their Bayesian analogues (Park and Casella (2008); Alhamzawi et al (2012); Leng et al (2014); Huang et al (2015)), which simultaneously perform variable selection and coefficient estimation, have been shown to be effective and are often the methods of choice in linear regression. These methods estimate β as the minimizer of the objective function L(β) + P(β, λ), where L denotes the quadratic loss function (negative log-likelihood) Σ_{i=1}^{n} (y_i − x_i^T β)² and P denotes a method-specific penalty function that encourages a sparse solution.…”
Section: Introduction
confidence: 99%
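The penalized objective L(β) + P(β, λ) quoted above can be made concrete with a minimal numpy sketch: coordinate descent for the elastic-net penalty, the non-Bayesian analogue of the EBEN prior. This is a generic textbook solver, not the paper's algorithm; all names, data, and hyperparameter values are illustrative.

```python
import numpy as np

def soft_threshold(rho, t):
    # Soft-thresholding operator: the proximal map of the L1 penalty.
    return np.sign(rho) * max(abs(rho) - t, 0.0)

def elastic_net_cd(X, y, lam=0.05, alpha=0.5, n_iter=200):
    """Coordinate descent for the elastic-net objective
    (1/2n)||y - X b||^2 + lam * (alpha * ||b||_1 + (1 - alpha)/2 * ||b||_2^2).
    """
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j's contribution added back.
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j / n
            beta[j] = soft_threshold(rho, lam * alpha) / (col_sq[j] + lam * (1 - alpha))
    return beta

# Toy data mimicking two tightly linked markers (columns 0 and 1).
rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)  # nearly collinear pair
true_beta = np.zeros(p)
true_beta[0], true_beta[5] = 2.0, -1.5
y = X @ true_beta + 0.5 * rng.normal(size=n)

beta_hat = elastic_net_cd(X, y)
```

With highly correlated columns, the L2 part of the penalty spreads the signal across the linked pair rather than arbitrarily keeping one marker and zeroing the other, which is precisely the property that motivates elastic-net-type priors for high-density marker maps.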