2013
DOI: 10.14507/epaa.v21n6.2013
Legal Issues in the Use of Student Test Scores and Value-added Models (VAM) to Determine Educational Quality

Abstract: A growing number of states and local schools across the country have adopted educator evaluation and accountability programs based on the use of student test scores and value-added models (VAM). A wide array of potential legal issues could arise from the implementation of these programs. This article uses legal analysis and social science evidence to discuss potential legal challenges by educators to the use of VAM that should be considered by public policy makers. It also discusses potential ways VAM might be…

Cited by 21 publications (27 citation statements) | References 20 publications
“…This strategy, that employs repeated individual performance evaluations in relation to so-called value added measures of student achievement, is a technical, moral and legal nightmare. There are notorious cases of teachers who teach so few (tested) students that scores and associated judgments fluctuate wildly from year to year and of teachers whose performance is evaluated on the basis of students they don't actually teach (Pullin, 2013). These issues are not only unfair to teachers who are often judged unreasonably, and to students who are tested (rather than taught) pervasively in order to serve administrative purposes.…”
Section: External Accountability
confidence: 99%
“…Since the implementation of TVAAS a large variety of value added statistical models (i.e., the Value-Added Research Center (VARC) model, the RAND Corporation model, the American Institute for Research (AIR) model, and the Student Growth Percentiles (SGP) model) have been developed and applied (Amrein-Beardsley & Collins, 2012). In addition to the development and widespread adoption of these growth models there has been a surge in the research base providing analysis of the benefits, drawbacks, costs, and implications of these new methods (Darling-Hammond et al., 2012; Hewitt, 2015; Pullin, 2013; Sparks, 2011). In general these growth models are very complex and highly technical, and there are concerns that policymakers, administrators, teachers and other stakeholders will struggle to understand the pros and cons of so many different and complex approaches.…”
Section: Use of VAMs in Policy Context
confidence: 99%
“…While judges are generally reluctant to second-guess educators' judgments of educational performance based on purely subjective evaluations, they will review decisions involving standardized multiple-choice tests or requirements for a diploma, licensure, or continued participation in a program of professional education (Baker, Oluwole & Green, 2013; Pullin, 2013, 2001). They engage in more rigorous review in high stakes contexts, as when a license is already held and might be taken away or where a teacher has an ongoing contract or tenure and might lose his employment.…”
Section: Fair Treatment and Defensible Decision-Making
confidence: 99%
“…Claims of the denial of fairness in educator assessment systems could arise from a variety of implementation practices, such as issues of technical quality, inadequate decision-making processes, or failure to provide useful information to guide improvement (Baker, Oluwole, & Green, 2013; Duckor et al., 2014; Gulino v. Board of Education, 2014; Nordberg v. Massachusetts, 2011; Pullin, 2013; Sato, 2014; Wilkerson, 2015). In addition, social scientists have raised concerns about methods for collecting and verifying data about an individual educator's performance, the potential misattribution of a teacher's scores from other teachers, and the fairness, reliability, and validity of the scoring system used to evaluate educators (American Educational Research Association, American Psychological Association and National Council for Measurement in Education, 2014; American Statistical Association; Amrein-Beardsley, 2014; Baker et al., 2013; Darling-Hammond, Amrein-Beardsley, Haertel, & Rothstein, 2012; Pullin, 2013).…”
Section: Fair Treatment and Defensible Decision-Making
confidence: 99%