2014
DOI: 10.1371/journal.pone.0084601
Generalized Linear Mixed Models for Binary Data: Are Matching Results from Penalized Quasi-Likelihood and Numerical Integration Less Biased?

Abstract: Background: Over time, adaptive Gaussian Hermite quadrature (QUAD) has become the preferred method for estimating generalized linear mixed models with binary outcomes. However, penalized quasi-likelihood (PQL) is still used frequently. In this work, we systematically evaluated via simulation whether matching results from PQL and QUAD indicate less bias in estimated regression coefficients and variance parameters. Methods: We performed a simulation study in which we varied the size of the data set, probability of th…

Cited by 19 publications (12 citation statements)
References 19 publications
“…We could see that a bias was introduced already in the first step, where the mixed model was fitted, and thus an NMA based on the estimated treatment effects from the first step cannot be unbiased. This is in line with the results presented, for example, in Benedetti et al (), where the issue of biased estimates from mixed models for binary data is discussed. As a side note, in very few situations the mixed model led to estimated (log) odds ratios with very high estimated variances, which casts doubt on the convergence of the model.…”
Section: Discussion (supporting)
confidence: 92%
“…To estimate the (adjusted) log odds ratio in the first step of the two‐step approach, that is, the log odds ratio obtained from the mixed‐effects model, two main approaches can be used: Gaussian Hermite quadrature or the penalized quasi‐likelihood method. An abundant discussion of these estimation procedures exists in the literature (for example, Benedetti et al, ; Austin, ). We decided to use the penalized quasi‐likelihood procedure implemented in the R package MASS (Venables and Ripley, , version 7.3‐22) because it is less CPU‐intensive.…”
Section: Simulation Study (mentioning)
confidence: 99%
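As a minimal sketch of what the quadrature approach discussed above computes: in a random-intercept logistic model, the marginal likelihood of each cluster integrates the Bernoulli likelihood over a normal random effect, and a Gauss-Hermite rule approximates that integral. This illustration uses a non-adaptive rule for simplicity (the QUAD method in the paper is adaptive, which recenters and rescales the nodes per cluster); the function name and data shapes are illustrative, not taken from any of the cited works.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss


def cluster_loglik(y, X, beta, sigma, n_points=15):
    """Marginal log-likelihood of one cluster in a random-intercept
    logistic model, with the random effect integrated out by a
    (non-adaptive) Gauss-Hermite quadrature rule."""
    nodes, weights = hermgauss(n_points)
    # Change of variables: b = sqrt(2) * sigma * node maps the standard
    # Gauss-Hermite rule (weight exp(-x^2)) onto a N(0, sigma^2) integral.
    b = np.sqrt(2.0) * sigma * nodes
    eta = X @ beta  # fixed-effects linear predictor, shape (n_obs,)
    # Logit link: P(y=1 | b) for every observation at every node.
    p = 1.0 / (1.0 + np.exp(-(eta[:, None] + b[None, :])))
    # Bernoulli likelihood of the whole cluster at each quadrature node.
    lik = np.prod(np.where(y[:, None] == 1, p, 1.0 - p), axis=0)
    # Weighted sum approximates the integral; 1/sqrt(pi) normalizes
    # the Gauss-Hermite weights to the Gaussian density.
    return np.log(np.sum(weights * lik) / np.sqrt(np.pi))
```

With `sigma = 0` the random effect vanishes and the value reduces to the ordinary logistic log-likelihood of the cluster, which gives a quick sanity check; increasing `n_points` trades CPU time for integration accuracy, which is the cost the quoted study avoided by choosing PQL.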
“…The approaches yielded similar results with respect to statistical significance and direction. The results of the QUAD function were retained for analysis because some literature suggests it may yield less biased results with small numbers of observations within clusters or with binary response variables, as is the case here (36). A traditional generalized linear model with no random effects was also examined, and an attempt was made to account for spatial bias by removing entirely people who walked or rode a bicycle, as they were tightly clustered in areas directly adjacent to campus.…”
Section: Methods (mentioning)
confidence: 99%
“…According to previous literature, these factors have all been identified as influencing model performance [6]. While several simulation studies have been published, they have mainly limited their attention to simple models with only random intercepts [13, 16]. Thus, the performance of random-effects models including both a random intercept and a random slope is less well known.…”
Section: Introduction (mentioning)
confidence: 99%