2020
DOI: 10.1016/j.jml.2019.104038

How to capitalize on a priori contrasts in linear (mixed) models: A tutorial

Abstract: Factorial experiments in research on memory, language, and in other areas are often analyzed using analysis of variance (ANOVA). However, for effects with more than one numerator degree of freedom, e.g., for experimental factors with more than two levels, the ANOVA omnibus F-test is not informative about the source of a main effect or interaction. Because researchers typically have specific hypotheses about which condition means differ from each other, a priori contrasts (i.e., comparisons planned before the s…
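As a rough illustration of the point made in the abstract (this is not code from the tutorial itself), the sketch below builds a hypothesis matrix for a hypothetical three-level factor and shows that each planned contrast is simply a weighted sum of condition means, so the comparisons of interest are estimated directly instead of being hidden behind an omnibus F-test. All labels and numbers are invented for illustration.

# Minimal sketch: planned (a priori) contrasts as weighted sums of cell means.
# The condition labels and means are hypothetical, not from any study.
import numpy as np

means = {"low": 0.40, "medium": 0.48, "high": 0.62}   # hypothetical cell means
mu = np.array(list(means.values()))

# Hypothesis matrix: one row per planned comparison, weights summing to zero.
# Row 1: medium vs. low; Row 2: high vs. the average of low and medium.
H = np.array([
    [-1.0,  1.0,  0.0],
    [-0.5, -0.5,  1.0],
])

# Each planned contrast is the corresponding weighted sum of condition means.
print(H @ mu)   # [0.08, 0.18]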

Cited by 431 publications (330 citation statements)
References: 17 publications
“…Raincloud plots were produced to visualise behavioural data using the code provided by Allen et al. (2019). For linear models, contrasts for categorical variables were sum-to-zero contrast coded, with coefficients reflecting differences from the grand mean (Schad et al., 2020).…”
Section: Results (mentioning)
confidence: 99%
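As a minimal sketch of the sum-to-zero (deviation) coding described in the quoted Results section, the example below uses simulated data in Python rather than the R workflow of Schad et al. (2020): with this coding, the intercept estimates the grand mean of the condition means and each slope estimates one condition's deviation from it. The condition means, sample sizes, and seed are made up.

# Sum-to-zero (deviation) coding for a single three-level factor, simulated data.
import numpy as np

rng = np.random.default_rng(0)
levels = np.repeat([0, 1, 2], 50)                     # 3 conditions, 50 obs each
true_means = np.array([10.0, 12.0, 15.0])
y = true_means[levels] + rng.normal(0, 1, levels.size)

# Sum-to-zero contrast matrix for 3 levels (the last level gets -1 in every column).
C = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0, -1.0]])
X = np.column_stack([np.ones(levels.size), C[levels]])  # intercept + 2 contrasts

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[0] ~ grand mean of the condition means; beta[1], beta[2] ~ deviations
# of conditions 1 and 2 from that grand mean.
print(beta)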
“…A beta regression was used to assess the relationship between Statistical Learning Ability and behavioural performance on the sentence judgement tasks, while effects were plotted using the package effects (Fox et al., 2019) and ggplot2 (Wickham, 2016). Categorical factors were sum-to-zero contrast coded, meaning that factor level estimates were compared to the grand mean (Schad et al., 2020). Further, an 83% confidence interval (CI) threshold was used given that this approach is more conservative than the traditional 95% CI threshold and corresponds to the 5% significance level with non-overlapping estimates (Austin & Hux, 2002; MacGregor-Fors & Payton, 2013).…”
Section: Results (mentioning)
confidence: 99%
“…Parameters were estimated with restricted maximum likelihood. In all models, contrasts for the fixed effects were computed from the generalised inverse function (Schad, Hohenstein, Vasishth, & Kliegl, 2020).…”
Section: Results (mentioning)
confidence: 99%
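As a minimal sketch of the generalised-inverse workflow the quoted study attributes to Schad et al. (2020), the example below writes down a hypothesis matrix (one row per comparison, weights over the condition means) and obtains the contrast matrix as its Moore-Penrose generalised inverse, here with numpy.linalg.pinv rather than R's ginv. The factor levels and hypotheses are illustrative, not taken from any study.

# From hypothesis matrix to contrast matrix via the generalised inverse.
import numpy as np

# Hypotheses for a 3-level factor: intercept = grand mean,
# c1 = level B minus level A, c2 = level C minus level B.
Hc = np.array([
    [ 1/3,  1/3,  1/3],   # grand mean
    [-1.0,  1.0,  0.0],   # B - A
    [ 0.0, -1.0,  1.0],   # C - B
])

# Contrast matrix = generalised (Moore-Penrose) inverse of the hypothesis matrix.
Xc = np.linalg.pinv(Hc)
print(np.round(Xc, 3))
# Columns 2 and 3 of Xc are the contrast codes one would assign to the factor
# (e.g., via contrasts<-() in R, or as columns of a custom design matrix).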