Since its 1947 founding, ETS has conducted and disseminated scientific research to support its products and services, and to advance the measurement and education fields. In keeping with these goals, ETS is committed to making its research freely available to the professional community and to the general public. Published accounts of ETS research, including papers in the ETS Research Report series, undergo a formal peer-review process by ETS staff to ensure that they meet established scientific and professional standards. All such ETS-conducted peer reviews are in addition to any reviews that outside organizations may provide as part of their own publication processes. Peer review notwithstanding, the positions expressed in the ETS Research Report series and other published accounts of ETS research are those of the authors and not necessarily those of the Officers and Trustees of Educational Testing Service.

The Daniel Eignor Editorship is named in honor of Dr. Daniel R. Eignor, who from 2001 until 2011 served the Research and Development division as Editor for the ETS Research Report series. The Eignor Editorship has been created to recognize the pivotal leadership role that Dr. Eignor played in the research publication process at ETS.
ETS Research Report Series ISSN 2330-8516

RESEARCH REPORT
Articulation of Cut Scores in the Context of the Next-Generation Assessments
Priya Kannan & Adrienne Sgammato
Educational Testing Service, Princeton, NJ

Logistic regression (LR)-based methods have become increasingly popular for predicting and articulating cut scores. However, the precision of predictive relationships is largely dependent on the underlying correlations between the predictor and the criterion. In two simulation studies, we evaluated the impact of varying the underlying grade-level correlations on the resultant bias in cut scores articulated using the LR method. In Study 1, we compared different articulation methods (LR and equipercentile smoothing), and in Study 2, we evaluated different criteria for linking (e.g., adjacent grade or end of course). The collective results indicate that as correlations became smaller, cut scores articulated using LR-based predictions became increasingly biased when compared to a true value obtained under perfect correlation. Predicted values were significantly biased for lower achievement levels, irrespective of the linking criteria used. Results from these studies suggest that the LR method must be used with caution, particularly when articulating cut scores for multiple achievement levels.
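The core mechanism the abstract describes can be illustrated with a small simulation. The sketch below is not the authors' actual study design; it is a minimal, hypothetical example assuming standard bivariate normal grade-level scores, a fixed criterion cut, and a one-predictor logistic regression fit by Newton-Raphson. The articulated cut is taken as the predictor score at which the predicted probability of meeting the criterion equals .5; as the grade-level correlation shrinks, that cut drifts away from the value implied by a perfect correlation.

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_logistic(x, y, iters=50):
    """Fit a one-predictor logistic regression by Newton-Raphson."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))       # predicted P(pass)
        grad = X.T @ (y - p)                       # score vector
        H = X.T @ (X * (p * (1.0 - p))[:, None])   # observed information
        beta += np.linalg.solve(H, grad)
    return beta

def articulated_cut(rho, criterion_cut=-1.0, n=20_000):
    """LR-articulated cut: predictor score where P(meet criterion) = .5.

    Scores for the two grades are simulated as standard bivariate
    normal with correlation rho; the criterion cut is hypothetical.
    """
    x = rng.standard_normal(n)                     # predictor-grade score
    y = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
    passed = (y >= criterion_cut).astype(float)    # met the criterion cut
    b0, b1 = fit_logistic(x, passed)
    return -b0 / b1                                # solve logit = 0, i.e. P = .5

# Under perfect correlation the articulated cut equals the criterion cut;
# analytically, with these normal assumptions it equals criterion_cut / rho,
# so the bias grows as rho moves away from 1.
for rho in (0.9, 0.7, 0.5):
    print(f"rho = {rho:.1f}  articulated cut = {articulated_cut(rho):.2f}")
```

Because the articulated cut behaves like criterion_cut / rho under these assumptions, a more extreme criterion cut (e.g., the boundary of a low achievement level) produces a larger absolute bias at the same correlation, consistent with the abstract's finding that lower achievement levels are affected most.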