2012
DOI: 10.1007/s11222-012-9359-z

Variable selection for generalized linear mixed models by L1-penalized estimation

Abstract: Generalized linear mixed models are a widely used tool for modeling longitudinal data. However, their use is typically restricted to few covariates, because the presence of many predictors yields unstable estimates. The presented approach to the fitting of generalized linear mixed models includes an L1-penalty term that enforces variable selection and shrinkage simultaneously. A gradient ascent algorithm is proposed that maximizes the penalized log-likelihood, yielding models with reduced complexity.
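The abstract's core idea — maximizing a log-likelihood with an L1 penalty that shrinks coefficients and sets some exactly to zero — can be sketched for the fixed-effects-only case with a proximal-gradient (soft-thresholding) update. This is an illustrative sketch of the penalized-likelihood principle, not the paper's actual GLMM algorithm, which additionally handles random effects:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t*|.|: shrink toward zero; |x| <= t becomes exactly 0
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_logistic(X, y, lam, step=0.1, n_iter=2000):
    """Proximal-gradient ascent on the L1-penalized mean log-likelihood
    (1/n) * sum_i [y_i*eta_i - log(1 + e^{eta_i})] - lam * ||beta||_1
    for a plain logistic model (no random effects)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))   # fitted probabilities
        grad = X.T @ (y - mu) / n                # gradient of mean log-likelihood
        beta = soft_threshold(beta + step * grad, step * lam)
    return beta
```

On data with only a few truly active covariates, the returned coefficient vector is sparse: irrelevant predictors come out exactly zero, which is the "selection and shrinkage simultaneously" property the abstract refers to.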

Cited by 222 publications (253 citation statements)
References 69 publications (59 reference statements)
“…In the famous lasso the log-likelihood is replaced by a penalized log-likelihood that includes a penalty term of the form λ Σ_{i=1}^p |γ_i|; see Tibshirani (1996), Park and Hastie (2007), and Zou (2006). As shown by Groll and Tutz (2014), for simple binary random effects models the inclusion of a lasso penalty can be used within the framework of penalized quasi-likelihood, yielding selection procedures for random effects models. Groll and Tutz (2014) gave a detailed algorithm based on an approximate EM method.…”
Section: Variable Selection By Regularization
confidence: 99%
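A minimal illustration (not taken from the cited papers) of why the penalty λ Σ_{i=1}^p |γ_i| performs selection and not just shrinkage: its proximal operator is soft-thresholding, which sets coefficients below the threshold exactly to zero while shifting the rest toward zero:

```python
import numpy as np

def soft_threshold(gamma, lam):
    # Proximal operator of lam * sum|gamma_i|: coefficients with
    # |gamma_i| <= lam are set exactly to zero (selection);
    # the remaining ones are moved toward zero by lam (shrinkage).
    return np.sign(gamma) * np.maximum(np.abs(gamma) - lam, 0.0)

gamma = np.array([2.5, -0.3, 0.0, 1.1, -4.0])
print(soft_threshold(gamma, 1.0))  # small entries become exactly 0
```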
“…Classical methods for variable selection, such as those based on hypothesis testing or subset selection, are restricted to a few covariates. Notable works are two recent papers by Groll and Tutz (2012) and Schelldorfer et al. (2013), which perform variable selection for GLMMs in high dimensions. Their approach first estimates the likelihood by approximating the integrals over the random effects using the Laplace method, then minimizes the sum of this estimated likelihood and a lasso-type penalty, which is the ℓ1-norm of the fixed-effect coefficients.…”
Section: Introduction
confidence: 99%
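The Laplace step mentioned above — approximating the integral over the random effects — can be sketched for the simplest case: one cluster of binary responses with a single random intercept. This is an illustrative sketch under simplified assumptions (scalar random effect, fixed-effect part folded into an offset), not the cited implementations:

```python
import numpy as np

def laplace_cluster_lik(y, offset, sigma2, n_newton=30):
    """Laplace approximation of one cluster's marginal likelihood
        L = ∫ [prod_j p(y_j | b)] * N(b; 0, sigma2) db
    for a logistic model with linear predictor eta_j = offset_j + b."""
    # Newton iterations for the mode b_hat of
    # g(b) = sum_j [y_j*eta_j - log(1+e^{eta_j})] - b^2/(2*sigma2)
    b = 0.0
    for _ in range(n_newton):
        mu = 1.0 / (1.0 + np.exp(-(offset + b)))
        g1 = np.sum(y - mu) - b / sigma2               # g'(b)
        g2 = -np.sum(mu * (1.0 - mu)) - 1.0 / sigma2   # g''(b) < 0 (g is concave)
        b -= g1 / g2
    mu = 1.0 / (1.0 + np.exp(-(offset + b)))
    g = np.sum(y * (offset + b) - np.log1p(np.exp(offset + b))) - b**2 / (2*sigma2)
    g2 = -np.sum(mu * (1.0 - mu)) - 1.0 / sigma2
    # Laplace: ∫ e^{g(b)} db ≈ e^{g(b_hat)} * sqrt(2π / (-g''(b_hat)));
    # divide by the Gaussian normalizer sqrt(2π*sigma2) left out of g.
    return np.exp(g) * np.sqrt(2*np.pi / (-g2)) / np.sqrt(2*np.pi*sigma2)
```

Summing the log of this quantity over clusters gives the approximate log-likelihood to which the ℓ1 penalty on the fixed-effect coefficients is then added.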