2021
DOI: 10.1016/j.ijpsycho.2021.01.006

Using generalizability theory and the ERP Reliability Analysis (ERA) Toolbox for assessing test-retest reliability of ERP scores part 1: Algorithms, framework, and implementation

Abstract: The reliability of event-related brain potential (ERP) scores depends on study context and how those scores will be used, and reliability must be routinely evaluated. Many factors can influence ERP score reliability; generalizability (G) theory provides a multifaceted approach to estimating the internal consistency and temporal stability of scores that is well suited for ERPs. G-theory's approach possesses a number of advantages over classical test theory that make it ideal for pinpointing sources of error in …
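As a hedged illustration of the kind of estimate the G-theory framework yields (the single-facet design below is an expository assumption, not necessarily the paper's exact specification), consider a fully crossed persons (p) × trials (t) design. Internal consistency can then be summarized by a generalizability coefficient (relative decisions) or a dependability coefficient (absolute decisions):

$$
E\rho^{2}=\frac{\sigma^{2}_{p}}{\sigma^{2}_{p}+\sigma^{2}_{pt,e}/n_{t}},
\qquad
\Phi=\frac{\sigma^{2}_{p}}{\sigma^{2}_{p}+\left(\sigma^{2}_{t}+\sigma^{2}_{pt,e}\right)/n_{t}},
$$

where $\sigma^{2}_{p}$ is between-person variance, $\sigma^{2}_{t}$ is trial variance, $\sigma^{2}_{pt,e}$ is the person-by-trial interaction confounded with residual error, and $n_{t}$ is the number of trials averaged. Temporal stability (test-retest reliability) is handled by adding an occasion facet, so occasion-related variance components enter the error term as well.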


Cited by 39 publications (44 citation statements)
References 20 publications
“…Subject-level internal consistency estimates characterize whether a person's ERP data is of high enough quality to be used in an investigation of individual differences. Notably, the tools for estimating subject-level reliability are openly accessible via the ERP Reliability Analysis Toolbox (Clayson & Miller, 2017a), which also includes the capability to estimate test-retest reliability (Clayson et al., 2021e) and the internal consistency of difference scores (Clayson et al., 2021a). Estimates of data quality and psychometric reliability can guide decisions about analysis and processing pipelines (Clayson, 2020; Clayson et al., 2021c; Sandre et al., 2020).…”
Section: Data Analysis and Pipeline Sharing
Citation type: mentioning (confidence: 99%)
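The quoted passage describes capabilities of a MATLAB toolbox; the snippet below is not that toolbox and does not reproduce its estimation approach. It is a minimal NumPy sketch (the function name dependability_p_by_t and the simulated data are hypothetical) showing the variance-component arithmetic behind a group-level dependability estimate for an assumed fully crossed persons × trials design, using expected mean squares from a two-way ANOVA without replication.

```python
import numpy as np

def dependability_p_by_t(scores):
    """Group-level dependability for a fully crossed persons x trials design.

    scores: 2-D array (n_persons, n_trials) of single-trial ERP scores.
    Variance components come from the expected mean squares of a two-way
    ANOVA without replication; negative estimates are truncated at zero.
    """
    scores = np.asarray(scores, dtype=float)
    n_p, n_t = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    trial_means = scores.mean(axis=0)

    ss_p = n_t * np.sum((person_means - grand) ** 2)
    ss_t = n_p * np.sum((trial_means - grand) ** 2)
    ss_pt = np.sum((scores - grand) ** 2) - ss_p - ss_t   # interaction + error

    ms_p = ss_p / (n_p - 1)
    ms_t = ss_t / (n_t - 1)
    ms_pt = ss_pt / ((n_p - 1) * (n_t - 1))

    var_pt = ms_pt                              # sigma^2_{pt,e}
    var_p = max((ms_p - ms_pt) / n_t, 0.0)      # sigma^2_p
    var_t = max((ms_t - ms_pt) / n_p, 0.0)      # sigma^2_t

    # Dependability of the n_t-trial average (absolute error term).
    return var_p / (var_p + (var_t + var_pt) / n_t)

# Hypothetical data: 40 participants x 30 trials of simulated amplitudes.
rng = np.random.default_rng(0)
true_scores = rng.normal(5.0, 2.0, size=(40, 1))
trials = true_scores + rng.normal(0.0, 4.0, size=(40, 30))
print(round(dependability_p_by_t(trials), 3))
```

Subject-level variants condition on participant-specific error variance and trial count; the toolbox's actual models differ from this illustration.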
“…Another obvious restriction of the study is that only estimators from classical test theory were discussed. A relevant question is how applicable the results would be with estimators of reliability within Generalizability Theory (G-Theory; chronologically, e.g., Cronbach et al., 1972; Shavelson et al., 1989; Shavelson and Webb, 1991; Brennan, 2001, 2010; Vispoel et al., 2018a, b; Clayson et al., 2021), confirmatory factor analysis (CFA) or structural equation modeling (SEM; refer to, e.g., Raykov and Marcoulides, 2006; Green and Yang, 2009b), and IRT and Rasch modeling (refer to estimators in, e.g., Verhelst et al., 1995; Holland and Hoskens, 2003; Kim and Feldt, 2010; Cheng et al., 2012; Kim, 2012; Milanzi et al., 2015)? Except for the estimators developed for CFA and SEM analysis, in all cases the possible deflation in the estimates is not as obvious as with the classical estimators, because the latter can be expressed using Rit and principal and factor loadings that are obviously deflated.…”
Section: Conclusion, Discussion, and Restrictions
Citation type: mentioning (confidence: 99%)
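As general background for the contrast the quoted passage draws between classical and G-theory estimators (a standard psychometric equivalence, not a claim of the cited study): for a fully crossed persons × items design scored with relative error, Cronbach's alpha is numerically the ANOVA-based estimate of the generalizability coefficient,

$$
\alpha \;=\; \widehat{E\rho^{2}} \;=\; \frac{\hat{\sigma}^{2}_{p}}{\hat{\sigma}^{2}_{p}+\hat{\sigma}^{2}_{pi,e}/n_{i}},
$$

so the frameworks differ less in this single-facet case than in G-theory's ability to separate additional error sources (trials, occasions, raters) rather than folding them into one undifferentiated error term.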
“…To our knowledge, currently, there are only two metrics that can be used to quantify ERP data quality for individual participants: subject-level internal consistency (Clayson et al., 2021) and the SME (Luck et al., 2021). They index two different sources of variability and therefore are not interchangeable in their use.…”
Section: 5
Citation type: mentioning (confidence: 99%)
“…As a measure of psychometric reliability, internal consistency provides a metric of how well ERP measurements can capture true differences between people (Clayson et al., 2021). Measurements of internal consistency are particularly important for studies of individual differences, where the links between ERP scores and variables of interest, such as anxiety symptoms or math performance, are examined (Thigpen et al., 2017).…”
Section: 5
Citation type: mentioning (confidence: 99%)
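To make the distinction concrete (sketched here under assumed forms, not quoted from either paper): subject-level internal consistency is a between-person variance ratio evaluated at a given participant's error variance and trial count, whereas the SME is a within-participant standard error of that participant's averaged score; for a simple trial-mean amplitude an analytic estimate is the single-trial standard deviation divided by the square root of the trial count,

$$
\hat{\Phi}_{i} \approx \frac{\hat{\sigma}^{2}_{p}}{\hat{\sigma}^{2}_{p}+\hat{\sigma}^{2}_{e,i}/n_{i}},
\qquad
\widehat{\mathrm{SME}}_{i}=\frac{s_{i}}{\sqrt{n_{i}}},
$$

where $s_{i}$ and $\hat{\sigma}^{2}_{e,i}$ are participant $i$'s single-trial standard deviation and error variance, $n_{i}$ is that participant's trial count, and $\hat{\sigma}^{2}_{p}$ is the between-person variance. The first is a unitless ratio that depends on the spread of the sample; the second is expressed in the score's units (e.g., microvolts) and does not.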