The AKT is a high-stakes computer-based test for licensing UK general practitioners (GPs) and forms part of the Membership of the Royal College of General Practitioners (MRCGP) examination. The AKT uses different question formats and consistently demonstrates high reliability. Pre-trialling of new questions has been shown to be unnecessary due to a systematic process of test construction.
WHAT THIS WORK ADDS
There was a high response rate from candidates asked for their views immediately after completing the test.
A computer-based evaluation questionnaire administered immediately after each test enables candidates' views to be measured easily over time.
Feedback from candidates completing the test suggested the assessment was valid and highlighted areas for improvement.
Candidates identified training and knowledge needs, particularly around research and practice administration.
We are not aware of any similar evaluation of a high-stakes postgraduate licensing examination.
SUGGESTIONS FOR FURTHER RESEARCH
The relationship between changes in the AKT and candidates' views of the assessment.
Comparison of AKT performance in candidates who have had different experiences of general practice prior to or during specialty training.
Abstract
The Applied Knowledge Test (AKT) of the MRCGP examination is a computer-based assessment delivered three times a year. A computerised questionnaire, administered immediately after the test, sought candidates' views as part of the test evaluation.
Of 1681 candidates taking the test, 1418 (84%) responded. Most candidates believed that the test assessed their knowledge of problems relevant to general practice. Their feedback highlighted areas where improvements could be made.
Candidates' views of postgraduate specialty medical examinations in the UK are rarely sought or published. We are not aware of other published evidence.
The use of computer-based testing enables immediate candidate feedback and can be used routinely to evaluate the test's validity and formats.
The views of candidates are an important component of quality assurance in reviewing the content, format, and educational experience of a high-stakes examination.
Background
Patients often seek doctors of the same sex, particularly for sex-specific complaints, and also because of a perception that doctors have greater knowledge of complaints relating to their own sex. Few studies have investigated differences in knowledge by sex of candidate on sex-specific questions in medical examinations.
Aim
The aim was to compare the performance of males and females on sex-specific questions in a 200-item computer-based applied knowledge test for licensing UK GPs.
Design and setting
A cross-sectional design using routinely collected performance and demographic data from the first three versions of the Applied Knowledge Test, MRCGP, UK.
Method
Questions were classified as female specific, male specific, or sex neutral. The performance of males and females was analysed using multiple analysis of covariance after adjusting for sex-neutral score and demographic confounders.
Results
Data were included from 3627 candidates. After adjusting for sex-neutral score, age, time since qualification, year of speciality training, ethnicity, and country of primary medical qualification, there were differences in performance on sex-specific questions. Males performed worse than females on female-specific questions (-4.2%, 95% confidence interval [CI] = -5.7 to -2.6) but did not perform significantly better than females on male-specific questions (0.3%, 95% CI = -2.6 to 3.2).
Conclusion
There was evidence of better performance by females on female-specific questions, but this was small relative to the size of the test. Differential performance of males and females on sex-specific questions in a licensing examination may have implications for vocational and post-qualification general practice training.
Keywords: assessment; general practice; learning; medical education; primary health care; sex.
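The adjusted score differences above are reported as point estimates with 95% confidence intervals. As an illustration only (the scores below are invented, and the study's actual analysis used multiple analysis of covariance rather than a simple two-group comparison), a normal-approximation interval for a difference in mean percentage scores can be sketched as:

```python
import math
import statistics as st

def mean_diff_ci(a, b, z=1.96):
    """95% CI for the difference in mean scores (a - b), using a normal
    approximation with a Welch-style (unequal-variance) standard error.
    Illustrative only: real analyses would also adjust for covariates."""
    diff = st.mean(a) - st.mean(b)
    se = math.sqrt(st.variance(a) / len(a) + st.variance(b) / len(b))
    return diff, diff - z * se, diff + z * se

# Hypothetical percentage scores for two candidate groups:
group_a = [70, 72, 74, 76, 78]
group_b = [75, 77, 79, 81, 83]
d, lo, hi = mean_diff_ci(group_a, group_b)
print(f"difference {d:.1f}%, 95% CI {lo:.2f} to {hi:.2f}")
```

The interval is reported around the point estimate, matching the "(−4.2%, 95% CI = −5.7 to −2.6)" style used in the Results above.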
Method
Self-administered postal questionnaires were sent to examiners who were not involved in developing the test, after they had completed it. Their performance scores were compared with those of candidates.
Results
The majority of participants (80.9%) were satisfied with the new computer-based test. Responses relating to content and attitudes to the test were also positive overall, but some problems with content were highlighted. Fewer examiners (61.9%) were positive about the physical comfort of the test centre, including seating, heating, and lighting. Examiners had significantly higher scores (mean 83.3%, range 69 to 93%, 95% confidence interval [CI] = 81.9 to 84.7%) than 'real' candidates (mean 75.0%, range 45 to 94%, 95% CI = 74.6 to 75.5%), who subsequently took an identical test.
Conclusion
The new computer-based licensing test (the AKT) was found to be acceptable to the majority of examiners. The pass-fail standard, determined by routine methods including an Angoff procedure, was supported by the higher success rate of examiners compared with candidates. The use of selected groups to assess high-stakes (licensing) examinations can be useful for assessing test validity.
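The Angoff procedure mentioned above sets a pass mark from judges' item-level estimates of how a minimally competent candidate would perform. A minimal sketch of the core calculation, with invented judge names and ratings (the actual MRCGP procedure will differ in detail):

```python
from statistics import mean

def angoff_cut_score(ratings):
    """ratings: dict mapping each judge to a list of per-item probabilities
    (0-1) that a minimally competent candidate answers the item correctly.
    The cut score is the mean expected score across judges, as a percentage."""
    judge_scores = [mean(items) * 100 for items in ratings.values()]
    return mean(judge_scores)

# Hypothetical ratings from two judges on a two-item test:
ratings = {"judge_a": [0.5, 0.75], "judge_b": [0.25, 0.75]}
print(angoff_cut_score(ratings))  # → 56.25
```

In practice many judges rate every item, estimates are often discussed and revised between rounds, and the resulting cut score is checked against candidate performance data, as the higher examiner scores reported above were used to do.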