2007
DOI: 10.1037/0021-9010.92.3.812

Can training improve the quality of inferences made by raters in competency modeling? A quasi-experiment.

Abstract: A quasi-experiment was conducted to investigate the effects of frame-of-reference training on the quality of competency modeling ratings made by consultants. Human resources consultants from a large consulting firm were randomly assigned to either a training or a control condition. The discriminant validity, interrater reliability, and accuracy of the competency ratings were significantly higher in the training group than in the control group. Further, the discriminant validity and interrater reliability of co…

Cited by 55 publications (63 citation statements)
References: 32 publications
“…Even though it has been shown for mini-CEX that rater training had no effect on the reliability of the exam scores [30], it is an unconditional requirement to reach solid reliability for the assessment of competences [11]. Another study also showed that rater training had a positive effect on the quality of inferences made by raters in competence modeling [31]. Additionally, a qualitative study pointed out that participation of clinicians in a performance dimension rater training and in a frame-of-reference rater training equipped participants with assessment skills, which were congruent with principles of criterion-referenced assessment and entrustment, and basic principles of competency-based education [32].…”
Section: Discussion
Confidence: 94%
“…These recommendations are not meant to be exhaustive; we urge researchers and practitioners to build on them and conduct much-needed empirical research. In the past, we successfully engaged in similar endeavors to formulate evidence-based recommendations for increasing the quality of the data gathered in unproctored Internet testing (e.g., Bartram, 2008; Lievens & Burke, 2011) and in competency determinations (e.g., Campion et al., 2011; Lievens & Sanchez, 2007). There is no reason why we could not do this again.…”
Section: Epilogue
Confidence: 99%
“…From these weights, one can also determine the relative evidence for Hypothesis m compared to Hypothesis m′. For instance, in the example of Lievens and Sanchez (2007), H_1 is 0.44/0.14 ≈ 3.18 times more likely than H_u. Therefore, it is not a weak hypothesis.…”
Section: The GORIC Weights
Confidence: 99%
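The ratio quoted above can be written out explicitly. A minimal sketch, assuming the standard reading of GORIC weights, in which each candidate hypothesis H_m receives a weight w_m and the ratio of two weights expresses their relative support (the values 0.44 and 0.14 are the weights quoted above):

\[
\frac{w_{m}}{w_{m'}} = \text{relative support for } H_{m} \text{ over } H_{m'},
\qquad \text{e.g.}\quad \frac{w_{1}}{w_{u}} = \frac{0.44}{0.14} \approx 3.1 .
\]

With the rounded weights shown, the ratio is roughly 3.1; the ≈ 3.18 quoted above presumably reflects the unrounded weights.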
“…In this section, we will illustrate the GORIC supported by real data for which the descriptive statistics are available in Lievens and Sanchez (2007). They investigated the effect of training on the quality of ratings made by consultants.…”
Section: Analysis of Variance (ANOVA)
Confidence: 99%