Background: The incidence of oral cancer is increasing. Guidance for oral cancer from the National Institute for Health and Care Excellence (NICE) is unique in recommending cross-primary care referral from GPs to dentists. Aim: This review investigates knowledge about delays in the diagnosis of symptomatic oral squamous cell carcinoma (OSCC) in primary care. Design and setting: An independent, multi-investigator literature search strategy and an analysis of study methodologies using a modified data extraction tool based on Aarhus checklist criteria relevant to primary care. Method: The authors conducted a focused systematic review involving document retrieval from five databases up to March 2018. Included were studies looking at OSCC diagnosis from when patients first accessed primary care up to referral, including length of delay and stage of disease at time of definitive diagnosis. Results: From 538 records, 16 articles were eligible for full-text review. In the UK, more than 55% of patients with OSCC were referred by their GP, and 44% by their dentist. Rates of prescribing between dentists and GPs were similar, and both had similar delays in referral, though one study found greater delays attributed to dentists as they had undertaken dental procedures. On average, patients had two to three consultations before referral. Less than 50% of studies described the primary care aspect of referral in detail. There was no information on inter-GP–dentist referrals. Conclusion: There is a need for primary care studies on OSCC diagnosis. There was no evidence that GPs performed less well than dentists, which calls into question the NICE cancer guidance option to refer to dentists, particularly in the absence of robust auditable pathways.
APPENDIX 1: Search Strategy
PubMed: (covid-19[tw] OR COVID19[tw] OR severe acute respiratory syndrome coronavirus 2[nm] OR severe acute respiratory syndrome coronavirus 2[tw] OR 2019-nCoV[tw] OR 2019nCoV[tw] OR coronavirus[tw] OR coronavirus[mh] OR pandemic[tw]) AND ("Internship and Residency"[Mesh] OR "Students, Medical"[Mesh] OR "Education, Medical"[Mesh] OR "Schools, Medical"[Mesh] OR Intern[tiab] OR interns[tiab] OR "House officer"[tw] OR "house officers"[tw] OR Resident[ti] OR residents[ti] OR residency[ti] OR "medical education"[tw] OR fellow[tiab] OR fellows[tiab] OR "junior doctor"[tw] OR "junior doctors"[tw] OR "postgraduate"[tw] OR "foundation year"[tw] OR "foundation program"[tw] OR "medical student"[tw] OR "medical students"[tw] OR "Curriculum"[mesh] OR curricul*[tiab] OR "medical school"[tw] OR "medical schools"[tw] OR "medical training"[tw] OR "undergraduate"[tw] OR "graduate"[tw] OR Learn*[tw] OR training[tw] OR trainer[tw] OR trainee*[tw] OR instructor*[tw] OR instructional[tw] OR educat*[tw] OR classroom*[tw] OR simulat*[tw] OR virtual[tw] OR ZOOM[tw]) AND ("2020/05/01"[Date - Publication] : "3000"[Date - Publication])
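For readers who want to run or adapt this strategy programmatically, below is a minimal sketch of submitting such a query to PubMed through NCBI's E-utilities via Biopython. The query string is abbreviated here, and the email address and retmax value are placeholder assumptions; in practice the full boolean string from the appendix would be pasted in.

    # Minimal sketch: running an abbreviated form of the Appendix 1
    # PubMed strategy via NCBI E-utilities (Biopython's Entrez module).
    from Bio import Entrez

    Entrez.email = "your.name@example.org"  # placeholder; NCBI requires a contact email

    # Abbreviated stand-in for the full boolean string above.
    query = (
        '(covid-19[tw] OR coronavirus[mh] OR pandemic[tw]) AND '
        '("Education, Medical"[Mesh] OR "Students, Medical"[Mesh])'
    )

    handle = Entrez.esearch(
        db="pubmed",
        term=query,
        datetype="pdat",       # filter on publication date
        mindate="2020/05/01",  # matches the date window in the strategy
        maxdate="3000",
        retmax=100,            # illustrative cap on returned PMIDs
    )
    record = Entrez.read(handle)
    handle.close()

    print(record["Count"])    # total number of matching records
    print(record["IdList"])   # up to the first 100 PMIDs

Note that esearch passes datetype, mindate, and maxdate straight through to the E-utilities API, which is how the "[Date - Publication]" range in the strategy is expressed programmatically.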
Background: Medical schools differ, particularly in their teaching, but it is unclear whether such differences matter, although influential claims are often made. The Medical School Differences (MedDifs) study brings together a wide range of measures of UK medical schools, including postgraduate performance, fitness to practise issues, specialty choice, preparedness, satisfaction, teaching styles, entry criteria and institutional factors. Method: Aggregated data were collected for 50 measures across 29 UK medical schools. Data include institutional history (e.g. rate of production of hospital and GP specialists in the past), curricular influences (e.g.
Background: Which subjects UK medical schools teach, how they teach them, and how much time they devote to each is unclear. Whether teaching differences matter is a separate, important question. This study provides a detailed picture of timetabled undergraduate teaching activity at 25 UK medical schools, particularly in relation to problem-based learning (PBL). Method: The Analysis of Teaching of Medical Schools (AToMS) survey used detailed timetables provided by 25 schools with standard 5-year courses. Timetabled teaching events were coded in terms of course year, duration, teaching format, and teaching content. Ten schools used PBL. Teaching times from timetables were validated against two other studies that had assessed GP teaching and lecture, seminar, and tutorial times. Results: A total of 47,258 timetabled teaching events in the academic year 2014/2015 were analysed, including SSCs (student-selected components) and elective studies. A typical UK medical student receives 3960 timetabled hours of teaching during their 5-year course. There was a clear difference between the initial 2 years, which mostly contained basic medical science content, and the later 3 years, which mostly consisted of clinical teaching, although some clinical teaching occurs in the first 2 years. Medical schools differed in duration, format, and content of teaching. Two main factors underlay most of the variation between schools: Traditional vs PBL teaching and Structured vs Unstructured teaching. A curriculum map comparing medical schools was constructed using those factors. PBL schools differed on a number of measures, having more PBL teaching time, fewer lectures, more GP teaching, less surgery, less formal teaching of basic science, and more sessions with unspecified content. Discussion: UK medical schools differ in both format and content of teaching. PBL and non-PBL schools clearly differ, albeit with substantial variation within groups and overlap in the middle. The important question of whether differences in teaching matter in terms of outcomes is analysed in a companion study (MedDifs), which examines how teaching differences relate to university infrastructure, entry requirements, student perceptions, and outcomes in Foundation Programme and postgraduate training.
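To make the factor-based curriculum map concrete, here is a minimal, hypothetical sketch of extracting two latent factors from a school-by-teaching-measure matrix and projecting each school onto them. The random data, the number of measures, and the use of scikit-learn's FactorAnalysis are illustrative assumptions, not the AToMS authors' actual analysis.

    # Hypothetical sketch: two latent factors from a school-by-measure
    # matrix, loosely analogous to the "Traditional vs PBL" and
    # "Structured vs Unstructured" axes described above.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)

    # Rows = medical schools, columns = teaching measures
    # (e.g. hours of PBL, lectures, GP teaching, basic science).
    n_schools, n_measures = 25, 8
    X = rng.normal(size=(n_schools, n_measures))  # placeholder data

    # Standardise so hours-based and count-based columns
    # contribute on a comparable scale.
    X_std = StandardScaler().fit_transform(X)

    # Fit a two-factor model; each school's factor scores give its
    # coordinates on a two-dimensional curriculum map.
    fa = FactorAnalysis(n_components=2, random_state=0)
    scores = fa.fit_transform(X_std)    # shape: (25, 2)

    print(scores[:3])                   # first three schools' factor scores
    print(fa.components_.shape)         # (2, 8): loadings of measures on factors

The loadings (fa.components_) indicate which teaching measures pull a school toward each pole of an axis, which is the sense in which two factors can "underlie most of the variation" between schools.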
Background: Many medical schools have implemented curricula to teach non-technical skills, a personal set of complex social and cognitive skills grounded in human factors research in safety-critical industries both within and outside health care. Consensus on how to assess these skills is lacking. This systematic review aimed to evaluate the evidence regarding non-technical skills assessments in undergraduate medical education, and to describe the tools used, the learning outcomes, and the validity, reliability, and psychometrics of the instruments. Given the discrete context, a focussed review model was deployed. Methods: Studies describing assessment methods as either the focus of the study or having non-technical skills assessment as an outcome measure of the research were considered. A standardized search of online databases was conducted and consensus reached on included studies. Data extraction, quality assessment, and content analysis were conducted per Best Evidence in Medical Education guidelines. Results: Nine papers met the inclusion criteria. Assessment methods broadly fell into three categories: simulated clinical scenarios, objective structured clinical examinations (OSCEs), and questionnaires or written assessments. Details of methodology were synthesised to support readers developing their own materials. Tools to assess non-technical skills were often developed locally, in response to specific educational interventions, without reference to conceptual frameworks. Consequently, the tools were rarely validated, limiting dissemination and replication. The majority of studies achieved outcomes modifying the knowledge and skills of participants. Two studies resulted in behavioural change and one resulted in change in practice. Conclusions: There were clear themes in content and broad categories in methods of assessment employed, with the OSCE identified as most able to assess multiple related skills at once. The quality of this evidence was poor owing to a lack of theoretical underpinning, with most assessments not part of normal practice but rather produced as a specific outcome measure for a teaching-based study. Data on validity, reliability, and learning outcomes were not available, so these questions cannot be addressed at this time. Whilst the current literature forms a good starting position for educators developing materials, future work is needed to address these weaknesses, as such tools are required across health education.