2003
DOI: 10.1177/1094428102239427
The Restriction of Variance Hypothesis and Interrater Reliability and Agreement: Are Ratings from Multiple Sources Really Dissimilar?

Abstract: The fundamental assumption underlying the use of 360-degree assessments is that ratings from different sources provide unique and meaningful information about the target manager’s performance. Extant research appears to support this assumption by demonstrating low correlations between rating sources. This article reexamines the support of this assumption, suggesting that past research has been distorted by a statistical artifact—restriction of variance in job performance. This artifact reduces the amount of be…

Cited by 223 publications (247 citation statements); references 91 publications.
“…Prior to aggregating, we first assessed within-team agreement in intrinsic and extrinsic work values, by means of the r_wg(J) index (James, Demaree, & Wolf, 1984), using a uniform null distribution. We obtained r_wg(J) values of .95 and .92, both of which are above the conventionally acceptable value of .70 (LeBreton, Burgess, Kaiser, Atchley, & James, 2003). Next, we computed the intraclass correlation coefficient ICC(1) (Bliese, 2000) in order to examine the relative consistency of responses among team members.…”
Section: Preliminary Analyses (mentioning, confidence: 99%)
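The r_wg(J) index quoted above can be sketched in a few lines. This is an illustrative implementation of the James, Demaree, & Wolf (1984) multi-item formula under a uniform null distribution (expected error variance (A² − 1)/12 for an A-point scale); the function name, the rater-by-item input layout, and the use of the sample (n − 1) variance are assumptions of this sketch, not code from the cited studies.

```python
from statistics import variance

def rwg_j(ratings, n_anchors):
    """Within-group agreement r_wg(J) for one group (uniform null distribution).

    ratings:   list of rating vectors, one per rater, each of length J
               (one score per item) -- layout assumed for this sketch.
    n_anchors: number of scale points A; null variance = (A**2 - 1) / 12.
    """
    J = len(ratings[0])
    sigma_eu = (n_anchors ** 2 - 1) / 12.0   # expected variance of a uniform null
    # Mean observed variance across the J items (sample variance over raters).
    mean_var = sum(variance([r[j] for r in ratings]) for j in range(J)) / J
    ratio = mean_var / sigma_eu
    term = J * (1.0 - ratio)
    return term / (term + ratio)
```

With perfect agreement the observed variance is zero and r_wg(J) is 1.0; as the observed variance approaches the null variance, the index falls toward 0.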
“…With regard to fluency, the second rater arrived at the same number of non-redundant ideas per participant as the first rater (i.e., 100% agreement). In addition, interrater reliability coefficients for originality (τ = .93) and flexibility (κ = .90) confirm that there was near-perfect agreement amongst raters (Landis & Koch, 1977; LeBreton et al., 2003). Following recommendations by Runco et al. (1987) …”
Section: Methods (mentioning, confidence: 61%)
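The Cohen's κ reported in the excerpt above can be computed from two raters' nominal codes as observed agreement corrected for chance agreement. A minimal sketch (the function name and list-based data layout are illustrative assumptions, not taken from the cited work):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning nominal categories.

    rater_a, rater_b: equal-length lists of category labels, one per subject.
    Assumes chance agreement p_e < 1 (otherwise kappa is undefined).
    """
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category proportions.
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1.0 - p_e)
```

By the Landis & Koch (1977) benchmarks cited above, values of .81–1.00 are conventionally read as "almost perfect" agreement.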
“…The average r_wg(J) for our respondents was 0.80, indicating strong inter-rater agreement and a large reduction of error variance. This is higher than the traditional cut point of 0.70 (Lance et al., 2006; LeBreton et al., 2003). Also, we computed ICC(1) for all society cultural norms, which was 0.16.…”
Section: Aggregation Verification, Reliability and Consistency (mentioning, confidence: 99%)
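The ICC(1) values reported in the excerpts above come from a one-way random-effects ANOVA: ICC(1) = (MSB − MSW) / (MSB + (k − 1)·MSW), where MSB and MSW are the between- and within-group mean squares and k is the group size (the formulation popularized by Bliese, 2000). A hedged sketch, assuming equal group sizes; the function name and nested-list input layout are illustrative assumptions:

```python
from statistics import mean

def icc1(groups):
    """ICC(1) from a one-way random-effects ANOVA (equal group sizes assumed).

    groups: list of groups, each a list of individual ratings.
    """
    k = len(groups[0])                       # members per group
    K = len(groups)                          # number of groups
    grand = mean(x for g in groups for x in g)
    # Between-group mean square: group-mean deviations from the grand mean.
    msb = k * sum((mean(g) - grand) ** 2 for g in groups) / (K - 1)
    # Within-group mean square: individual deviations from their group mean.
    msw = sum((x - mean(g)) ** 2 for g in groups for x in g) / (K * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

An ICC(1) of 0.16, as in the last excerpt, means roughly 16% of the variance in individual responses is attributable to group membership; note that the estimator can go negative when groups agree less than chance.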