2004
DOI: 10.14507/epaa.v12n32.2004

Interrogating the Generalizability of Portfolio Assessments of Beginning Teachers: A Qualitative Study

Abstract: This qualitative study is intended to illuminate factors that affect the generalizability of portfolio assessments of beginning teachers. By generalizability, we refer here to the extent to which the portfolio assessment supports generalizations from the particular evidence reflected in the portfolio to the conception of competent teaching reflected in the standards on which the assessment is based. Or, more practically, “The key question is, ‘How likely is it that this finding would be reversed or substantial…

Cited by 9 publications (10 citation statements)
References 52 publications
“…The reliability of ratings was then formally assessed using Generalizability (G) Theory (Shavelson & Webb, 1991). G-theory is particularly suitable as a framework for investigating the reliability of measures of instruction because it can assess the relative importance of multiple sources of error simultaneously (e.g., raters, tasks, occasions; see e.g., Moss et al, 2004). In the year 1 study each notebook was scored by multiple raters on each dimension; this is a crossed Teacher × Rater design with one facet of error (raters), which identifies three sources of score variance: true differences in instructional practice across teachers ($\sigma_{\mathrm{T}}^{2}$), mean differences between raters (i.e., variance in rater severity, $\sigma_{\mathrm{R}}^{2}$), and a term combining interaction and residual error ($\sigma_{\mathrm{TR},e}^{2}$).…”
Section: Methods
confidence: 99%
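As a point of reference only (this is not text from the citing study), the decomposition described above follows the standard G-theory treatment of a fully crossed Teacher × Rater design (Shavelson & Webb, 1991). A minimal sketch, with $n_R$ denoting the assumed number of raters scoring each teacher:

$$X_{tr} = \mu + \tau_t + \rho_r + \varepsilon_{tr}, \qquad \operatorname{Var}(X_{tr}) = \sigma_{\mathrm{T}}^{2} + \sigma_{\mathrm{R}}^{2} + \sigma_{\mathrm{TR},e}^{2}$$

$$E\rho^{2} = \frac{\sigma_{\mathrm{T}}^{2}}{\sigma_{\mathrm{T}}^{2} + \sigma_{\mathrm{TR},e}^{2}/n_R}, \qquad \Phi = \frac{\sigma_{\mathrm{T}}^{2}}{\sigma_{\mathrm{T}}^{2} + \left(\sigma_{\mathrm{R}}^{2} + \sigma_{\mathrm{TR},e}^{2}\right)/n_R}$$

Here $E\rho^{2}$ is the generalizability coefficient for relative decisions (rank-ordering teachers), to which only the interaction/residual term contributes error, while $\Phi$ also charges rater-severity differences ($\sigma_{\mathrm{R}}^{2}$) to error and is appropriate for absolute decisions.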
“…Systematically collected artifacts, assembled into portfolios or collected in other forms, can be used to measure various features of instructional practice, including some that are difficult to capture through surveys or observations (e.g., use of written feedback); moreover, because they contain direct evidence of classroom practice, artifacts are less susceptible to biases and social desirability effects. In addition to its potential for measuring instructional practice, the process of collecting artifacts can have value for teacher professional development (see e.g., Gerard, Spitulnik, & Linn, 2010; Moss et al, 2004). However, this method is not without limitations: collecting artifacts places a significant burden on teachers, who must save, copy, assemble, and even annotate and reflect on the materials.…”
Section: Features Of Instructional Practice In Middle School Science
confidence: 99%
“…Revealed also in this literature is the possibility that the certification examination may identify more than the usual numbers of false negatives and false positives, an issue that we think deserves much more research. There is a growing belief, based on recent research (Shutz and Moss et al, 2004), that the kinds of assessments used by the NBPTS give only a brief glimpse of what a teacher is capable of under restrictions and controls. Typical, everyday classroom performance must necessarily differ from the performance displayed and judged from portfolios and at assessment centers.…”
Section: Research On National Board Certified Teachers
confidence: 99%
“…Most of the evidence-based research looking at these developments has focused on evaluating the quality of the evidence (Tillema, 2001), the need to ensure validity and reliability (Moss et al, 2004), and the technical infrastructure required to facilitate this migration (Gill, 2003). Beyond these, however, there are questions of control and ownership over the content of the e-portfolio: issues of access to the data and questions of identity and privacy, on which the literature is less well defined.…”
Section: Advantages and Identified Risks Associated With E-portfolios
confidence: 99%