2009
DOI: 10.3102/1076998609332107

Abstract: A bivariate lognormal model for the distribution of the response times on a test by a pair of test takers is presented. As the model has parameters for the item effects on the response times, its correlation parameter automatically corrects for the spuriousness in the observed correlation between the response times of different test takers because of variation in the time intensities of the items. This feature suggests using the model in a routine check of response-time patterns for possible collusion between …
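As a rough illustration of the spuriousness correction described in the abstract, the following Python sketch simulates log response times for two test takers who work independently but answer the same items. All variable names and parameter values here are illustrative assumptions, not quantities from the paper: the point is only that a raw correlation between the two time vectors is inflated by shared item time intensities, while the correlation of the residuals after removing the item effects stays near the true value.

```python
import numpy as np

rng = np.random.default_rng(0)

n_items = 60
beta = rng.normal(4.0, 0.5, n_items)   # hypothetical item time intensities (log-seconds)
alpha = 2.0                            # common time discrimination (1 / residual SD)

# Speed parameters of two test takers who work independently
tau_j, tau_k = rng.normal(0.0, 0.3, 2)

# Residual correlation between the pair; 0 means no collusion
rho = 0.0
cov = np.array([[1.0, rho], [rho, 1.0]]) / alpha**2
eps = rng.multivariate_normal([0.0, 0.0], cov, n_items)

log_t_j = beta - tau_j + eps[:, 0]
log_t_k = beta - tau_k + eps[:, 1]

# Raw correlation is inflated because both vectors share the item effects beta
print("raw log-RT correlation:     ", np.corrcoef(log_t_j, log_t_k)[0, 1])

# Removing the item effects recovers a value close to rho
print("item-corrected correlation: ", np.corrcoef(log_t_j - beta, log_t_k - beta)[0, 1])
```

Because both test takers share the same item time intensities, the raw correlation comes out substantial even though the residual, collusion-relevant correlation is essentially zero; this is the spurious component the model's correlation parameter is meant to correct for.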

Cited by 28 publications (20 citation statements)
References 26 publications (35 reference statements)
“…Additionally, if a respondent spent a great deal of time at the beginning of the assessment, but then answered items unduly quickly at the end of the assessment, such a pattern (which could only be detected from item-level response times) could be useful in identifying potentially problematic answering. Indeed, item-level response times have been employed in identifying aberrant answering9–11 and would allow more diverse modeling approaches to be considered in the context of predicting ADB. Data on person-level confounding variables (eg, reading speed and cognitive skills) were unavailable; adjusting for these variables, as well as item-level variables (eg, item complexity and length), could also improve the signal-to-noise ratio of the data through a richer statistical model.28,52–56,58–61…”
Section: Discussion
confidence: 99%
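The within-test speed-up described in this citation statement could, in principle, be screened for with item-level residual response times. The sketch below is only a hypothetical illustration; the simulated item intensities, the magnitude of the trend, and the flagging cutoff are assumptions, not a method taken from the cited work.

```python
import numpy as np

rng = np.random.default_rng(1)

n_items = 40
beta = rng.normal(4.0, 0.4, n_items)        # hypothetical item time intensities
position = np.arange(n_items)

# A respondent who takes ample time early but rushes through the final items
log_t = beta - 0.03 * position + rng.normal(0.0, 0.3, n_items)

# Residual log response times after removing the item effects
resid = log_t - beta

# A clearly negative slope across item position suggests a within-test speed-up
slope, _ = np.polyfit(position, resid, 1)
print(f"slope of residual log-RT on item position: {slope:.3f}")
if slope < -0.02:   # illustrative cutoff, not an established threshold
    print("flag: response times decline markedly toward the end of the test")
```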
“…van der Linden discussed this type of issue in the context of educational assessment and concluded that information on speed nevertheless provides useful information.9 In the current context of screening for ADB, respondents could potentially make a conscious effort to provide answers more quickly if they learned that longer completion times were associated with greater perceived risk.…”
Section: Discussion
confidence: 99%
“…One of the most popular models for responses and response times from psychological tests is the hierarchical model of van der Linden (2007). This model has numerous applications in psychological assessment and can be used for the incorporation of response time into trait estimation (van der Linden, Klein Entink, & Fox, 2010), the selection of items (Fan, Wang, Chang, & Douglas, 2012; van der Linden, 2008) and the detection of rapid guessing, item leakage or answer copying (Boughton, Smith, & Ren, 2017; Chan, Lu, & Tsai, 2014; Marianti, Fox, Avetisyan, Veldkamp, & Tijmstra, 2014; van der Linden, 2009; van der Linden & Guo, 2008). A successful application of the model, however, requires that the model is well calibrated, that is, that the parameters of the model are estimated correctly.…”
Section: Introduction
confidence: 99%
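For readers unfamiliar with the hierarchical framework mentioned in this statement, the following sketch generates data in its general spirit: an IRT-type model for responses at one level and a lognormal model for response times at another, with person ability and speed correlated across the population. The specific distributions and parameter values are assumptions chosen for illustration, not the calibrated model from the literature.

```python
import numpy as np

rng = np.random.default_rng(2)
n_persons, n_items = 500, 30

# Second level: person ability (theta) and speed (tau) correlated in the population
sd_tau = 0.3
rho_person = 0.4
person_cov = [[1.0, rho_person * sd_tau], [rho_person * sd_tau, sd_tau**2]]
theta, tau = rng.multivariate_normal([0.0, 0.0], person_cov, n_persons).T

# Item parameters: difficulty b and time intensity beta (log-seconds)
b = rng.normal(0.0, 1.0, n_items)
beta = rng.normal(4.0, 0.5, n_items)
alpha = 2.0                                  # time discrimination

# First level: responses from a Rasch-type model ...
p_correct = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
responses = rng.binomial(1, p_correct)

# ... and response times from a lognormal model
log_rt = beta[None, :] - tau[:, None] + rng.normal(0.0, 1.0 / alpha, (n_persons, n_items))
rt = np.exp(log_rt)

print(responses.shape, rt.shape)   # (500, 30) each
```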
“…Initially, the majority of statistical procedures and techniques proposed for RT applications were used in posttest scoring procedures, such as calibration (i.e., Ingrisone II, 2008; Ranger & Kuhn, 2012b; Thissen, 1983; van der Linden, 2009a; Wang et al., 2013; Wang, 2005). These procedures are rarely employed for K-12 tests because these achievement tests are typically intended to be power tests, and students' scores are typically only based on the number or points of correct responses.…”
Section: Current Research On Item RT
confidence: 99%