2016
DOI: 10.20982/tqmp.12.3.p175

Bayesian linear mixed models using Stan: A tutorial for psychologists, linguists, and cognitive scientists

Abstract: With the arrival of the R packages nlme and lme4, linear mixed models (LMMs) have come to be widely used in experimentally-driven areas like psychology, linguistics, and cognitive science. This tutorial provides a practical introduction to fitting LMMs in a Bayesian framework using the probabilistic programming language Stan. We choose Stan (rather than WinBUGS or JAGS) because it provides an elegant and scalable framework for fitting models in most of the standard applications of LMMs. We ease the reader into…

Cited by 153 publications (141 citation statements)
References 31 publications (57 reference statements)

“…The data analysis was conducted in the R programming environment (R Core Team), using Bayesian hierarchical models in Stan (Stan Development Team) with the R package RStan (Stan Development Team). For details on fitting Stan models, see Nicenboim and Vasishth (2016) and Sorensen, Hohenstein, and Vasishth (2016). In all the models, the interference condition was sum coded (−1 for the low-interference condition and 1 for the high-interference condition), and the covariates presented in Supplementary Material A were scaled and centered.…”
Section: Experiment 1: Exploratory Analysis
confidence: 99%
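
A minimal R sketch of the preprocessing this excerpt describes, assuming a hypothetical data frame dat with a two-level factor interference and a numeric covariate freq (all names are illustrative, not taken from the cited study):

    ## sum-code the interference condition: -1 = low, 1 = high
    dat$interference <- ifelse(dat$interference == "high", 1, -1)
    ## center and scale a covariate, as described for the supplementary covariates
    dat$freq <- as.numeric(scale(dat$freq))
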
“…One advantage of Bayesian inference in Stan is that a hierarchical linear model can almost always be fit with full variance-covariance matrices for subject and item random effects (Sorensen et al 2016); this is often difficult to achieve with the lme4 package (Bates et al 2015b; see Bates et al 2015a for further discussion). Another advantage is the more straightforward interpretation of results in a Bayesian setting.…”
Section: Discussion
confidence: 99%
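
To make the contrast drawn in this excerpt concrete, the frequentist counterpart would be a maximal lme4 specification with full variance-covariance matrices for subject and item random effects, as in the hypothetical sketch below (formula and variable names are illustrative); fits of this form often fail to converge in lme4, whereas the equivalent Bayesian model in Stan can usually be estimated:

    library(lme4)
    ## maximal random-effect structure: correlated varying intercepts and
    ## slopes for both subjects and items
    m <- lmer(log_rt ~ cond + (1 + cond | subj) + (1 + cond | item), data = dat)
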
“…The prior distribution for each estimated parameter was a normal distribution with mean zero and a standard deviation of 2.5, except for the intercept, for which a standard deviation of 10 was used. The LKJ prior (Lewandowski et al 2009) with parameter 1 was used for the variance-covariance matrices of the random effects for subjects and items; this imposes a regularization on the prior distribution of the variance-covariance matrix (see Stan Development Team 2016 for details, and Sorensen et al 2016 for a tutorial intended for psycholinguists). Besides fitting models to individual regions of interest, as is commonly done in psycholinguistics, we also fitted a model that took into consideration all data points from the second-to-last region leading up to the ellipsis site (crit-2) to the second region after the ellipsis site (crit+2).…”
Section: Discussion
confidence: 99%
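
The prior setup this excerpt describes can be sketched in Stan via RStan roughly as follows. This is a minimal illustration under stated assumptions, not the cited study's exact model: it has by-subject random effects only, normal(0, 2.5) priors on the slope and standard deviations, normal(0, 10) on the intercept, and an LKJ prior with parameter 1 on the random-effect correlation matrix; all variable names are illustrative.

    library(rstan)

    model_code <- "
    data {
      int<lower=1> N;                // observations
      int<lower=1> J;                // subjects
      int<lower=1,upper=J> subj[N];  // subject index per observation
      vector[N] x;                   // sum-coded condition (-1/1)
      vector[N] y;                   // response, e.g., log reading time
    }
    parameters {
      real alpha;                    // intercept
      real beta;                     // condition effect
      vector<lower=0>[2] tau;        // random-effect standard deviations
      cholesky_factor_corr[2] L;     // Cholesky factor of the correlation matrix
      matrix[2, J] z;                // standardized by-subject effects
      real<lower=0> sigma;           // residual standard deviation
    }
    transformed parameters {
      // by-subject intercept and slope adjustments
      matrix[J, 2] u = (diag_pre_multiply(tau, L) * z)';
    }
    model {
      alpha ~ normal(0, 10);
      beta ~ normal(0, 2.5);
      tau ~ normal(0, 2.5);
      L ~ lkj_corr_cholesky(1);      // LKJ prior with parameter 1
      to_vector(z) ~ normal(0, 1);
      sigma ~ normal(0, 2.5);
      y ~ normal(alpha + beta * x + u[subj, 1] + u[subj, 2] .* x, sigma);
    }
    "
    ## fit <- stan(model_code = model_code, data = stan_data)  # stan_data: a named list

The non-centered parameterization (standardized effects z scaled by diag_pre_multiply) is used here because it typically samples more efficiently for hierarchical models than sampling the random effects directly.
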
“…For example, the list includes articles that editors and reviewers might consult as a reference while reviewing manuscripts that apply advanced Bayesian methods such as structural equation models (Kaplan & Depaoli, 2012), hierarchical models (Rouder & Lu, 2005), linear mixed models (Sorensen, Hohenstein, & Vasishth, 2016), and design (i.e., power) analyses (Schönbrodt et al., 2015). The list also includes books that may serve as accessible introductory texts (e.g., Dienes, 2008) or as more advanced textbooks (e.g., …).…”
Section: Appendix
confidence: 99%