Response Time Reduction Due to Retesting in Mental Speed Tests: A Meta-Analysis
2018 | DOI: 10.3390/jintelligence6010006

Abstract: Although retest effects in cognitive ability tests have been investigated in numerous primary and meta-analytic studies, most work in this area focuses on score gains as a result of retesting. To the best of our knowledge, no meta-analytic study has been reported that estimates the size of response time (RT) reductions due to retesting. This multilevel meta-analysis focuses on mental speed tasks, for which outcome measures often consist of RTs. The size of RT reduction due to retesting in mental speed task…
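The abstract is truncated before the effect-size metric is named; as an assumption, RT reductions of this kind are commonly summarized as a standardized mean change between the first and a later test administration, sketched below (not necessarily the exact metric used in this meta-analysis):

\[
d_{\mathrm{RT}} = \frac{\overline{RT}_{1} - \overline{RT}_{t}}{SD_{1}},
\]

where \(\overline{RT}_{1}\) and \(\overline{RT}_{t}\) are the mean response times at the first and at a later administration and \(SD_{1}\) is the standard deviation at the first administration; positive values indicate faster responding at retest.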

Cited by 15 publications (18 citation statements). References 139 publications.
“…including outcome measures such as scores, hits minus false alarms, percentage of correct responses or errors, or number of errors. Tests were not included if the only reported outcome was a reaction time measure, as reaction time reductions due to retesting do not necessarily reflect an improvement in test performance and should be differentiated from score change according to Ackerman (1987) and Scharfen, Blum and Holling (2018). (e) The mean scores of the first test administration must be below the maximum score reachable (e.g., accuracy below 100%) to ensure the absence of ceiling effects.…”
Section: Inclusion and Exclusion Criteria (mentioning)
confidence: 99%
“…For example, after eleven test administrations, Westhoff and Dewald [13] revealed large practice effects of nearly three standard deviations for a figural and around two and a half standard deviations for a numerical sustained attention test. Even after this many repetitions, test scores had not yet reached a plateau, although they increased more slowly (see also [16]). Additionally, while retest effects have been shown to decline with longer retest intervals, they have also been reported to decline rather slowly, so much so that it takes five years for them to vanish [10].…”
Section: Introduction (mentioning)
confidence: 99%
“…Though various factors are being discussed [8,9,18], the causes and locus (that is, the specific processes that become more efficient through practice) of the practice effect in sustained attention tests are not understood in detail [6,16]. In fact, there is a paucity of studies that investigate how retesting affects the specific processes and mechanisms involved in these tests.…”
Section: Introduction (mentioning)
confidence: 99%
“…The progression of retest effects over multiple test sessions and the causes of retest effects are not yet fully understood. Meta-analytic evidence for increasing test performance in various cognitive ability tests due to retesting has been reported by several authors [2][3][4][5][6]. The size of the effect is moderated by several variables, such as the equivalence of test forms, the test-retest interval, participant age, and the cognitive ability operation and content.…”
Section: Introduction (mentioning)
confidence: 92%
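The moderator variables listed in this excerpt (test form equivalence, retest interval, participant age, operation and content) are typically examined with a mixed-effects meta-regression; the following is a generic sketch under that assumption, not the specific model used in the cited meta-analyses:

\[
d_i = \beta_0 + \sum_{j=1}^{p} \beta_j x_{ij} + u_i + e_i, \qquad u_i \sim \mathcal{N}(0, \tau^2), \quad e_i \sim \mathcal{N}(0, v_i),
\]

where \(d_i\) is the observed effect size of study \(i\), the \(x_{ij}\) code the moderators, \(\tau^2\) captures residual between-study heterogeneity, and \(v_i\) is the known sampling variance of \(d_i\).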
“…This implies a loss of construct validity in later test sessions [22]. Yet, the literature suggests that retest effects also occur when parallel test forms are provided [4][5][6]. Accordingly, attempts should be made to create and use different but parallel test versions.…”
Section: Deliberations On Measurement Invariance In Multiple Test Administrations (mentioning)
confidence: 99%