2020
DOI: 10.1080/13803611.2021.1977152
The impact of students’ test-taking effort on growth estimates in low-stakes educational assessments

Cited by 10 publications (9 citation statements) · References 36 publications
“…However, neither attentiveness nor established response‐pattern‐based indicators showed a substantive correlation with time spent on the questionnaire. A possible explanation for this result may be that in unproctored online administration of self‐report measures as considered in the empirical example, the relationship between C/IER and timing data may be more complex and both extremely short times and extremely long times may be indicative of C/IER (Gorgun & Bulut, 2021; Yildirim‐Erbasli & Bulut, 2021). While extremely short times may stem from rushing through the items, extremely long times may go back to deficient attention of respondents (i.e., due to getting distracted by other browser tabs or applications on their devices).…”
Section: Discussion
confidence: 99%
“…Examples of response‐pattern‐based indicators include the long string index, which is constructed by examining the longest sequence of subsequently occurring identical responses for each respondent (Johnson, 2005), or Mahalanobis distance, following the rationale that C/IE responses are outliers that deviate from typical response patterns (Curran, 2016; Huang et al., 2012). When response times are employed, unrealistically short (i.e., rapid responses; Kroehne, Buchholz, & Goldhammer, 2019) or unrealistically long response times (i.e., slow responding; see Gorgun & Bulut, 2021; Yildirim‐Erbasli & Bulut, 2021, for applications in the context of cognitive assessment), times spent on screens, or total survey time (Huang et al., 2012; Meade & Craig, 2012) are used as indicators of C/IER.…”
Section: Detecting Careless and Insufficient Effort Responding
confidence: 99%
“…They may, however, also indicate that respondents were very sure of their answer, thus requiring only a very short amount of time to generate an attentive response (referred to as distance-difficulty effect; Kuncel & Fiske, 1974). Further, not only very short but also outrageously long times entail ambiguity as to whether they stem from C/IER due to respondents getting distracted and not focusing on the administered items (Meade & Craig, 2012;Yildirim-Erbasli & Bulut, 2021), or go back to time-consuming attentive response processes, e.g., when respondents are indecisive between competing response options or require a large amount of time for processing the item content. Requiring a clear-cut decision for such cases neglects the uncertainty in classification and inevitably results in misclassifications.…”
Section: Shared Strengths and Limitations of Indicator-Based Approaches
confidence: 99%
“…For example, under certain conditions, when RG responses comprise only 6% of a data matrix, aggregated examinee ability can be negatively biased by 0.20 standard deviations (Rios et al, 2017). This degree of bias can potentially undermine a number of measurement property and score-based inferences, such as item parameter estimates (van Barnevald, 2007), measurement invariance (Rios, 2021a), proficiency classifications (Rios & Soland, 2021b), treatment effects (e.g., Osborne & Blanchard, 2011), achievement gains (e.g., Wise & DeMars, 2010), and growth estimates (Yildirim-Erbasli & Bulut, 2020).…”
Section: Consequences of Not Accounting for RG
confidence: 99%