2017
DOI: 10.1002/ets2.12150
A Statistical Procedure for Testing Unusually Frequent Exactly Matching Responses and Nearly Matching Responses

Abstract: In investigations of unusual testing behavior, a common question is whether a specific pattern of responses occurs unusually often within a group of examinees. In many current tests, modern communication techniques can permit quite large numbers of examinees to share keys, or common response patterns, to the entire test. To address this issue, statistical methods are provided to identify examinees in a test with answers that exactly match and to assess whether such exact matches are unusual. In addition, metho…
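To make the idea in the abstract concrete, here is a minimal sketch of grouping examinees whose response strings exactly match and benchmarking how surprising a large match group is. This is illustrative only: it is not the Haberman and Lee (2017) procedure, which rests on a specialized IRT model; the binomial benchmark and all names below are assumptions made for this example.

from collections import Counter
from math import comb

def exact_match_groups(responses):
    """Group examinees whose full response strings are identical.

    responses: dict mapping examinee id -> response string (e.g., "ABDCA").
    Returns a dict mapping each response string shared by 2+ examinees
    to the list of examinee ids that produced it.
    """
    groups = {}
    for examinee, pattern in responses.items():
        groups.setdefault(pattern, []).append(examinee)
    return {p: ids for p, ids in groups.items() if len(ids) >= 2}

def binomial_upper_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): a crude benchmark for how surprising
    it is that k of n examinees share one pattern, under the (strong)
    assumption that each examinee independently produces that pattern with
    probability p."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Hypothetical example: 3 of 5 examinees share the same 5-item pattern.
responses = {"e1": "ABDCA", "e2": "ABDCA", "e3": "ABDCA",
             "e4": "CBDAA", "e5": "ABDCB"}
for pattern, ids in exact_match_groups(responses).items():
    print(pattern, ids, binomial_upper_tail(len(responses), len(ids), p=0.01))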

Cited by 13 publications (12 citation statements)
References 27 publications (44 reference statements)
“…In contrast to work on test security by Holland (1996) and Haberman and Lee (2017), the authors do not appear to exploit distractors, despite the availability of an unauthorized answer key with errors. Simply comparing scores based on use of the unauthorized answer key with scores based on the true key can be remarkably informative.…”
Section: Comments on the Specific Papers
confidence: 71%
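The score comparison mentioned in the quotation above can be illustrated with a short sketch: score each examinee once against the true key and once against a compromised ("unauthorized") key and compare. The keys, names, and flagging rule below are hypothetical, not taken from the cited work.

def score(responses, key):
    """Number of items on which the response agrees with the key."""
    return sum(r == k for r, k in zip(responses, key))

true_key = "ABDCA"
unauthorized_key = "ABDCB"  # same key, but with an error on the last item

examinees = {"e1": "ABDCB", "e2": "ACDCA"}
for eid, resp in examinees.items():
    s_true = score(resp, true_key)
    s_unauth = score(resp, unauthorized_key)
    # A score that is higher against the flawed key than against the true key
    # (as for e1 here) is a red flag that the examinee may have used that key.
    print(eid, s_true, s_unauth)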
“…Other limitations include the nonparametric approach in estimating OCCs and response time; that is, we could not use a parametric item response model for item analysis because of the restriction imposed by the large numbers of response selections for the MC.m items—and for the same reason, use of item response time was limited in our study. A modified NRM approach as in Haberman and Lee (2017) could be explored, in which distractor selection and proficiency are conditionally independent given that the test taker's selection was incorrect, even though this approach does not result in the test taker gaining more credit with fewer options. Further studies may also investigate parametric modeling of response selections and response time with appropriate data.…”
Section: Discussion
confidence: 99%
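A rough formal reading of the conditional-independence assumption described in the quotation above (our paraphrase, not the notation of Haberman and Lee, 2017): for a test taker with proficiency θ,

P(selects distractor d | response incorrect, θ) = P(selects distractor d | response incorrect),

that is, once a response is known to be incorrect, which distractor was chosen carries no further information about θ.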
“…The answer-copying indices that are most popular include the K-index (e.g., Holland, 1996), the closely related probability of matching incorrect responses (PMIR; e.g., Lewis & Thayer, 1998), the ω index (Wollack, 1997), and the generalized binomial model approach (van der Linden & Sotaridona, 2006). The popular similarity indices include the M4 index (Maynes, 2014) for detecting pairs of examinees one or more of whom may have been involved in test fraud, the index of Wollack and Maynes (2017) that employs cluster-analysis methods (e.g., Everitt, Landau, Leese, & Stahl, 2011) in conjunction with the M4 index (Maynes, 2014) to detect groups of test-takers engaged in collusion, and the analysis for matching of response patterns based on a specialized IRT model (Haberman & Lee, 2017).…”
Section: The Five Types of Statistical/Psychometric Approaches to De…
confidence: 99%