2015
DOI: 10.3389/fpsyg.2015.00652

A new approach for modeling generalization gradients: a case for hierarchical models

Abstract: A case is made for the use of hierarchical models in the analysis of generalization gradients. Hierarchical models overcome several restrictions that are imposed by repeated measures analysis of variance (rANOVA), the default statistical method in current generalization research. More specifically, hierarchical models allow the inclusion of continuous independent variables and overcome problematic assumptions such as sphericity. We focus on how generalization research can benefit from this added flexibility. In a s…
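As a rough, non-authoritative illustration of the approach the abstract argues for, the sketch below fits a hierarchical (mixed-effects) model to simulated generalization data in R with lme4. The stimulus dimension enters as a continuous predictor, and intercepts and slopes vary across participants; the data and variable names (participant, stim_distance, resp) are assumptions for illustration, not the paper's own code.

```r
library(lme4)

# Simulated example: 30 participants x 7 generalization stimuli, with
# responding declining as distance from the conditioned stimulus (CS+) grows.
set.seed(1)
dat <- expand.grid(participant = factor(1:30),
                   stim_distance = 0:6)              # 0 = CS+, 6 = most distant
a_i <- rnorm(30, 0, 1)     # participant-specific intercept deviations
b_i <- rnorm(30, 0, 0.3)   # participant-specific slope deviations
dat$resp <- 8 + a_i[dat$participant] +
  (-1 + b_i[dat$participant]) * dat$stim_distance +
  rnorm(nrow(dat), 0, 1)                             # trial-level noise

# Hierarchical model: the stimulus dimension is continuous, and by-participant
# random intercepts and slopes capture the dependencies among repeated measures.
fit <- lmer(resp ~ stim_distance + (1 + stim_distance | participant), data = dat)
summary(fit)
```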

Cited by 29 publications (32 citation statements) | References 38 publications
“…To deal with these dependencies in the data we used a hierarchical modeling approach [27]. Use of hierarchical models is a more appropriate way of analyzing generalization data than using repeated measures ANOVA, as the latter treats the generalization stimuli as a categorical dimension whereas a dimensional approach seems more appropriate.…”
Section: Results (mentioning; confidence: 99%)
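To make the contrast in the statement above concrete, here is a small, hedged sketch of the two ways of entering the generalization stimuli into a model. It reuses the simulated dat and variable names from the sketch under the abstract; none of this is the cited paper's code.

```r
library(lme4)
# dat: simulated generalization data from the earlier sketch
# (columns participant, stim_distance, resp).

# Categorical treatment: each stimulus gets its own mean and the ordering
# along the dimension is ignored, analogous to the rANOVA cell-means view.
fit_cat <- lmer(resp ~ factor(stim_distance) + (1 | participant), data = dat)

# Dimensional treatment: the gradient is modeled as a function of distance
# from the CS+, using the continuous, ordered nature of the stimuli.
fit_dim <- lmer(resp ~ stim_distance + (1 + stim_distance | participant), data = dat)
```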
“…Moreover, the assumption of sphericity in a repeated measures ANOVA, that is, equality of variances within stimuli and of correlations between stimuli, is not realistic when working with generalization data, resulting in an increased probability of Type I error. For a full discussion of the benefits of the used approach over a repeated measures ANOVA, see [27]. All models were fitted in R [28] by means of the lme4 package [29].…”
Section: Results (mentioning; confidence: 99%)
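In lme4 terms, the sphericity point can be made concrete: a random-intercept-only model implies a compound-symmetric covariance (equal variances and equal correlations across stimuli), while adding a random slope relaxes that. The comparison below is a hedged sketch, again reusing the simulated dat and names from the first sketch rather than any of the cited papers' code.

```r
library(lme4)
# dat: simulated generalization data from the first sketch.

# Random intercepts only: implies equal variances and equal pairwise
# correlations across the repeated measures (compound symmetry).
m0 <- lmer(resp ~ stim_distance + (1 | participant), data = dat, REML = FALSE)

# Random slopes let variances and correlations vary along the stimulus
# dimension, which is usually more realistic for generalization gradients.
m1 <- lmer(resp ~ stim_distance + (1 + stim_distance | participant),
           data = dat, REML = FALSE)

# Likelihood-ratio test of the additional random-effects structure.
anova(m0, m1)
```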
“…Accordingly, we investigated two key questions: (i) Is perceptual similarity of the morphs implicitly used to guide novel decisions to trust unfamiliar others, and (ii) do these putative generalization gradients evoke structurally similar behavioral tuning profiles (e.g., adaptively refraining from or choosing to trust at the same rate)? To answer these questions, we ran a hierarchical logistic regression (14), where both trustworthiness type (whether faces were morphed with the original trustworthy, untrustworthy, or neutral player) and perceptual similarity (increasing similarity to the original players) were entered as predictors of choosing to play with the morph. We found that as perceptual resemblance to the original trustworthy player increased, subjects were significantly more likely to choose the morph as a partner for a future trust game (trust type × perceptual similarity: P < 0.001; Fig.…”
Mentioning (confidence: 99%)
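In lme4 terms, the analysis described in this statement corresponds to a mixed-effects (hierarchical) logistic regression with a trust type by similarity interaction. The sketch below is an assumed reconstruction with simulated data and hypothetical variable names (choice, trust_type, similarity, subject), not the cited study's code.

```r
library(lme4)

# Simulated stand-in for the trust-game choice data.
set.seed(2)
trust_dat <- expand.grid(subject    = factor(1:40),
                         trust_type = factor(c("trustworthy", "neutral", "untrustworthy")),
                         similarity = seq(0, 1, length.out = 5))
slope <- c(trustworthy = 3, neutral = 0, untrustworthy = -3)   # assumed gradients
eta <- slope[as.character(trust_dat$trust_type)] * trust_dat$similarity +
  rnorm(40, 0, 0.5)[trust_dat$subject]
trust_dat$choice <- rbinom(nrow(trust_dat), 1, plogis(eta))

# Hierarchical logistic regression: does the similarity gradient differ by
# the trustworthiness of the original player? (trust_type x similarity)
fit_trust <- glmer(choice ~ trust_type * similarity + (1 | subject),
                   family = binomial, data = trust_dat)
summary(fit_trust)
```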
“…The training and test data were analyzed in hierarchical mixed effects regressions using the ‘lme4’ package ([37]) in R ([38]). In the context of our experiments, hierarchical linear regression is more appropriate and powerful than traditional analysis of variance (ANOVA) (see [39]). Firstly, both our training and test data consist of multiple trial-level observations for each level of each factor.…”
Section: Results (mentioning; confidence: 99%)
“…Hierarchical linear models utilize the raw trial-level responses (e.g., correct/incorrect), minimizing information loss ([40]). Mixed effects models are especially useful in analyzing categorization test data consisting of binary (correct/incorrect) responses, which otherwise would need to be averaged to form continuous data to analyze in a repeated-measures ANOVA (see [39] for other arguments against using repeated-measures ANOVA to analyze generalization gradients).…”
Section: Results (mentioning; confidence: 99%)
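As a concrete version of this last point, the sketch below feeds raw trial-level correct/incorrect responses into a mixed-effects logistic regression instead of averaging them into proportions for a repeated-measures ANOVA. The data and names (test_dat, condition, correct) are hypothetical, not taken from the cited study.

```r
library(lme4)

# Hypothetical categorization test data: binary accuracy on every trial.
set.seed(3)
test_dat <- expand.grid(subject   = factor(1:25),
                        condition = factor(c("trained", "novel")),
                        trial     = 1:20)
p <- ifelse(test_dat$condition == "trained", 0.85, 0.65)
test_dat$correct <- rbinom(nrow(test_dat), 1, p)

# Trial-level logistic mixed model: no information is lost to averaging and
# the binomial error structure of correct/incorrect responses is respected.
fit_acc <- glmer(correct ~ condition + (1 | subject),
                 family = binomial, data = test_dat)
summary(fit_acc)
```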