Purpose This study expands upon existing knowledge of response rates by conducting a large-scale quantitative review of published response rates, which allowed a fine-grained comparison of response rates across respondent groups. Other unique features of this study are the analysis of response-enhancing techniques across respondent groups and of response rate trends over time. To aid researchers in designing surveys, we provide expected response rate percentiles for different survey modalities.
Design We analyzed 2,037 surveys, covering 1,251,651 individual respondents, published in 12 journals in I/O Psychology, Management, and Marketing during the period 1995-2008. Expected response rate levels were summarized for different types of respondents, and the use of response-enhancing techniques was coded for each study.
Findings First, differences in mean response rate were found across respondent types, with the lowest response rates reported for executive respondents and the highest for non-working respondents and non-managerial employees. Second, moderator analyses suggested that the effectiveness of response-enhancing techniques depended on the type of respondent. Evidence for differential prediction across respondent types was found for incentives, salience, identification numbers, sponsorship, and administration mode. When controlling for the increased use of response-enhancing techniques, a small decline in response rates over time was found.
Implications Our findings suggest that existing guidelines for designing effective survey research may not always offer the most accurate information available. Survey researchers should be aware that they may obtain lower or higher response rates depending on the respondent type surveyed and that some response-enhancing techniques may be more or less effective in specific samples.
Originality/value This study, analyzing the largest set of published response rates to date, offers the first evidence for different response rates and differential functioning of response-enhancing techniques across respondent types.
Abstract. This article presents two studies examining the reliability (internal consistency, equivalence, and stability), construct validity, and gender discrimination of merit ratings as a personnel selection procedure. The first study (N=72) found that merit ratings showed high test-retest reliability and interrater agreement (rxx=.93) but low internal consistency (α=.53). Evidence of indirect discrimination against women was also observed. In the second study, two samples (N=42 and N=98) were used to examine internal consistency, construct validity, and gender discrimination. Results showed alpha coefficients smaller than those in the first study and greater gender discrimination. Finally, the implications of these results for research on, and the practical application of, this procedure in personnel selection are discussed. Keywords: merit ratings, experience, training, reliability, validity, discrimination.
Abstract. This paper analyzes whether the use of the behavior description interview (BDI) may involve indirect discrimination. Twelve interviewees (6 with prior work experience and 6 without, half of them men and half women) were assessed using a BDI. The interviewees' mean scores, as well as the interrater reliability of the interview, were calculated using panels of 12 and of 6 raters. No statistically significant differences were found between interviewees with and without prior work experience, or between men and women. The implications of these findings for the use of the BDI in personnel selection processes are discussed. Keywords: behavioral interview, discrimination, selection, experience, gender.