OBJECTIVES This study aimed to evaluate the validity and utility of, and candidate reactions towards, cognitive ability tests and current selection methods, including a clinical problem-solving test (CPST) and a situational judgement test (SJT), for postgraduate selection.

METHODS This was an exploratory, longitudinal study to evaluate the validities of two cognitive ability tests (measuring general intelligence) compared with current selection tests, including a CPST and an SJT, in predicting performance at a subsequent selection centre (SC). Candidate reactions were evaluated immediately after test administration to examine face validity. Data were collected from candidates applying for entry into training in UK general practice (GP) during the 2009 recruitment process. Participants were junior doctors (n = 260). The mean age of participants was 30.9 years and 53.1% were female. Outcome measures were participants' scores on three job simulation exercises at the SC.

RESULTS Findings indicate that all tests measure overlapping constructs. Both the CPST and SJT independently predicted more variance than the cognitive ability test measuring non-verbal mental ability. The other cognitive ability test (measuring verbal, numerical and diagrammatic reasoning) had a predictive value similar to that of the CPST and added significant incremental validity in predicting performance on job simulations in an SC. The best single predictor of performance at the SC was the SJT. Candidate reactions were more positive towards the CPST and SJT than towards the cognitive ability tests.

CONCLUSIONS In terms of operational validity and candidate acceptance, the combination of the current CPST and SJT proved to be the most effective combination of tests in predicting selection outcomes. In terms of construct validity, the SJT measures procedural knowledge in addition to aspects of declarative knowledge and fluid abilities, and is the best single predictor of performance in the SC.
Further research should consider the validity of the tests in this study in predicting subsequent performance in training.
Background The selection methodology for UK general practice is designed to accommodate several thousand applicants per year and targets six core attributes identified in a multi-method job-analysis study.

Aim To evaluate the predictive validity of selection methods for entry into postgraduate training, comprising a clinical problem-solving test, a situational judgement test, and a selection centre.

Design and setting A three-part longitudinal predictive validity study of selection into training for UK general practice.

Method In sample 1, participants were junior doctors applying for training in general practice (n = 6824). In sample 2, participants were GP registrars 1 year into training (n = 196). In sample 3, participants were GP registrars sitting the licensing examination after 3 years, at the end of training (n = 2292). The outcome measures include: assessor ratings of performance in a selection centre comprising job simulation exercises (sample 1); supervisor ratings of trainee job performance 1 year into training (sample 2); and licensing examination results, including an applied knowledge examination and a 12-station clinical skills objective structured clinical examination (OSCE; sample 3).

Results Performance ratings at selection predicted subsequent supervisor ratings of job performance 1 year later. Selection results also significantly predicted performance on both the clinical skills OSCE and the applied knowledge examination for licensing at the end of training.

Conclusion In combination, these longitudinal findings provide good evidence of the predictive validity of the selection methods, and are the first reported for entry into postgraduate training. Results show that the best predictor of work performance and training outcomes is a combination of a clinical problem-solving test, a situational judgement test, and a selection centre. Implications for selection methods for all postgraduate specialties are considered.
Medical Education 2011: 45: 289–297

Objectives This study aimed to examine candidate reactions to selection practices in postgraduate medical training using organisational justice theory.

Methods We carried out three independent cross-sectional studies using samples from three consecutive annual recruitment rounds. Data were gathered from candidates applying for entry into UK general practice (GP) training during 2007, 2008 and 2009. Participants completed an evaluation questionnaire immediately after the short-listing stage and after the selection centre (interview) stage. Participants were doctors applying for GP training in the UK. Main outcome measures were participants' evaluations of the selection methods and perceptions of the overall fairness of each selection stage (short-listing and selection centre).

Results A total of 23 855 evaluation questionnaires were completed (6893 in 2007, 10 497 in 2008 and 6465 in 2009). Absolute levels of perceptions of fairness of all the selection methods at both the short-listing and selection centre stages were consistently high over the 3 years. Similarly, all selection methods were considered to be job-related by candidates. However, in general, candidates considered the selection centre stage to be significantly fairer than the short-listing stage. Of all the selection methods, the simulated patient consultation completed at the selection centre stage was rated as the most job-relevant.

Conclusions This is the first study to use a model of organisational justice theory to evaluate candidate reactions during selection into postgraduate specialty training. The high-fidelity selection methods are consistently viewed as more job-relevant and fairer by candidates. This has important implications for the design of recruitment systems for all specialties and, potentially, for medical school admissions.
Using this approach, recruiters can systematically compare perceptions of the fairness and job relevance of various selection methods.