What are the effects of revealing dimensions to candidates in an assessment centre? This question is addressed in two independent studies using individual exercises. Results in Study 1 showed no difference in construct-related validity between a transparent (N = 99) and a non-transparent (N = 50) group of university students, contrary to previous findings by Kleinmann, Kuptsch, and Köller (1996) and Kleinmann (1997), who used group exercises. Mean ratings also did not change, with the exception of the dimension "Sensitivity", which increased slightly under the transparency treatment. Conversely, results in Study 2, which drew on a sample of actual job applicants, showed a significant improvement in construct-related validity for the transparent group (N = 297) compared with the non-transparent group (N = 393). Again, mean ratings did not differ between the two groups. Implications of these findings for practice and suggestions for future research are discussed.
The aim of this article was to examine interviewers' perceptions of applicant personality and to assess how these personality perceptions were related to employment recommendations. In a field setting, personality adjectives spontaneously written down by eight interviewers and referring to 720 applicants were analyzed. The AB5C model was used to classify the adjectives and determine the applicants' personality profile scores. The results showed that interviewers used descriptors referring to all five personality dimensions, with a preference for extraversion and agreeableness. Relationships were found between employment recommendations and three dimensions: emotional stability, openness to experience, and conscientiousness. Interviewers, however, differed in judgment standards and in the weights they assigned to trait perceptions when deciding on applicant hirability.
This study examined the coachability of two situational judgment tests, the College Student Questionnaire (CSQ) and the Situational Judgment Inventory (SJI), developed for consideration as selection instruments in the college admission process. Strategies for raising scores on each test were generated, and undergraduates were trained in the use of the strategies using a video-based training program. Results indicated that the CSQ was susceptible to coaching. In addition, the scoring format of the CSQ was found to be easily exploited, such that trainees could increase their scores by greater than 1 SD simply by avoiding extreme responses on that test. The results as a whole sounded a note of caution for the potential use of the CSQ in the college admission process.
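The exploit described above can be illustrated with a short sketch. This is a hypothetical scoring scheme, not the actual CSQ key: many situational judgment tests score each item by how closely a Likert-scale response matches a keyed effectiveness rating, and when keyed ratings cluster near the middle of the scale, always answering mid-scale beats confident extreme responding regardless of judgment.

```python
def sjt_score(responses, keys):
    """Sum of negative absolute distances from the keyed ratings.

    Higher (closer to zero) is better; a perfect match scores 0.
    """
    return -sum(abs(r - k) for r, k in zip(responses, keys))

# Hypothetical keys clustering mid-scale on a 1-5 Likert scale,
# a common empirical pattern for SJT effectiveness ratings.
keys = [3, 4, 2, 3, 3, 4, 3, 2]

central = [3] * len(keys)            # "avoid extreme responses" strategy
extreme = [1, 5, 1, 5, 1, 5, 1, 5]   # confident extreme responding

print(sjt_score(central, keys))      # at most 1 point lost per item
print(sjt_score(extreme, keys))      # large penalties accumulate
```

Under such a format, the central-tendency strategy dominates without any situational judgment at all, which is consistent with the reported score gain of more than 1 SD from simply avoiding extreme responses.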
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.
Copyright © 2024 scite LLC. All rights reserved.