2002
DOI: 10.1509/jmkr.39.4.469.19117

Informants in Organizational Marketing Research: Why Use Multiple Informants and how to Aggregate Responses

Abstract: Organizational research frequently involves seeking judgmental response data from informants within organizations. This article discusses why using multiple informants improves the quality of response data and thereby the validity of research findings. The authors show that when there are multiple informants who disagree, responses aggregated with confidence- or competence-based weights outperform those with response data-based weights, which in turn provide significant gains in estimation accuracy over simply…

Cited by 268 publications (225 citation statements), 2006–2022
References 45 publications (92 reference statements)
“…First, there is the potential for differences in the reliability of ratings across the raters (Van Bruggen, Lilien, and Kacker 2002). To account for this bias, we computed a weighted mean of the raters' ratings; the weight assigned to a rater was the reciprocal of the rater's absolute distance from the unweighted mean rating compared with other raters, as Van Bruggen, Lilien, and Kacker (2002) propose. Second, there may be a measurement error bias because the rating instrument may have induced each rater to provide a rating that differs from its "true" value.…”
Section: Methods (mentioning, confidence: 99%)
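The weighting rule quoted above can be sketched as follows. This is a minimal reconstruction assuming the plain reading of the passage: each rater's weight is the reciprocal of that rater's absolute distance from the unweighted mean, so outlying informants are down-weighted. The function name and the `eps` guard are my own additions, not from the cited study.

```python
def distance_weighted_mean(ratings, eps=1e-9):
    """Response data-based aggregation (sketch): weight each rater by the
    reciprocal of the absolute distance between that rater's rating and
    the unweighted mean. `eps` (a hypothetical guard, not in the cited
    procedure) avoids division by zero when a rating equals the mean."""
    mean = sum(ratings) / len(ratings)
    weights = [1.0 / max(abs(r - mean), eps) for r in ratings]
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, ratings)) / total
```

With ratings `[4.0, 5.0, 7.0]`, the middle rater sits closest to the unweighted mean (≈5.33) and therefore dominates, pulling the aggregate toward 5.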
“…For each variable and for each rater, we drew a random number from N(0, σ) and added that value to the rating of each product provided by that rater. We then computed the combined ratings across the raters as Van Bruggen, Lilien, and Kacker (2002) propose. We reestimated the split hazard model, once for each of the 30 random data sets generated, to obtain the mean and standard deviation of the estimates across the 30 simulations.…”
Section: Robustness Tests (mentioning, confidence: 99%)
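The perturbation step of that robustness test can be sketched as below, assuming one N(0, σ) offset is drawn per rater and added to every rating that rater provided. The function name, seed, and data layout (`ratings` as product rows × rater columns) are my own assumptions; the cited study then re-estimated its split hazard model on each perturbed data set, which is omitted here.

```python
import random

def perturb_ratings(ratings, sigma, n_sims=30, seed=42):
    """Generate n_sims perturbed copies of a ratings matrix (list of
    rows: one row per product, one column per rater). For each copy,
    draw one N(0, sigma) offset per rater and add it to all of that
    rater's ratings. Sketch of the robustness test described above."""
    rng = random.Random(seed)
    n_raters = len(ratings[0])
    sims = []
    for _ in range(n_sims):
        offsets = [rng.gauss(0.0, sigma) for _ in range(n_raters)]
        sims.append([[r + offsets[j] for j, r in enumerate(row)]
                     for row in ratings])
    return sims
```

Each of the 30 copies would then be aggregated and fed through the model, and the spread of the resulting estimates indicates sensitivity to rater-level measurement error.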
“…Van Bruggen et al. (2002) have suggested two alternative approaches to the simple average: the "Response Data-Based Weighted Mean" approach and the "Confidence-Based Weighted Mean" approach.…”
Section: Measurements (mentioning, confidence: 99%)
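The second alternative named above, the confidence-based weighted mean, can be sketched in one line: weight each informant's rating by that informant's self-reported confidence. This is a generic illustration of the idea, not the exact formula from the article; the function name is hypothetical.

```python
def confidence_weighted_mean(ratings, confidences):
    """Confidence-based aggregation (sketch): weight each informant's
    rating by that informant's stated confidence, so more confident
    informants contribute more to the combined response."""
    total = sum(confidences)
    return sum(r * c for r, c in zip(ratings, confidences)) / total
```

For example, ratings `[4.0, 6.0]` with confidences `[1.0, 3.0]` yield 5.5, three-quarters of the way toward the more confident informant.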
“…Subsequently, the study followed Van Bruggen, Lilien, and Kacker's (2002) interrater agreement index (rWG), computing it for each of the export performance measures from the two informant groups. The lowest rWG index for the entire set of items was 0.80.…”
Section: Sample and Data Collection (mentioning, confidence: 99%)
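The rWG index mentioned above is conventionally computed as one minus the ratio of the observed variance in ratings to the variance expected under a uniform (no-agreement) null, which for an A-point scale is (A² − 1)/12 (James, Demaree, and Wolf's single-item form). The sketch below assumes that standard form; the exact variant used in the citing study is not specified in the excerpt.

```python
def rwg(ratings, scale_points):
    """Single-item interrater agreement index r_WG (sketch):
    1 - s^2 / sigma_E^2, where s^2 is the observed sample variance of
    the informants' ratings and sigma_E^2 = (A^2 - 1) / 12 is the
    variance of a uniform null distribution on an A-point scale.
    Values near 1 indicate strong agreement among informants."""
    n = len(ratings)
    mean = sum(ratings) / n
    obs_var = sum((r - mean) ** 2 for r in ratings) / (n - 1)
    exp_var = (scale_points ** 2 - 1) / 12.0
    return 1.0 - obs_var / exp_var
```

A floor such as the 0.80 reported above is a common rule-of-thumb threshold for treating the informant groups as in agreement.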