Objectives: To explore how undergraduate health care students use digital technology to deliver patient care during their clinical placements.
Design: A scoping review of primary research was conducted using the extended PRISMA guidelines.
Data sources: A subject specialist librarian assisted in searching the academic literature in four electronic databases: CINAHL, PubMed, Scopus and ERIC.
Review methods: Four reviewers, working in pairs, independently reviewed a total of 332 potentially relevant articles against set inclusion and exclusion criteria. All included papers then underwent an independent quality review by two reviewers.
Results: Seven studies involving medical or nursing/midwifery students were included in the review. Three studies evaluated the use of mobile learning devices in patient care, and four evaluated the use of digital systems in practice. Because of the heterogeneity of the studies, which used differing digital systems and instruments, the researchers decided that a narrative review was the most suitable method of analysis. The results are presented under four key themes: student learning needs when using technology in practice; access to technology in placements; perceptions of using technology in placements; and the impact of technology on patient care.
Conclusion: The use of digital systems in clinical settings creates both challenges and benefits for student learning in delivering patient care. When students are prepared and supported to use digital systems, a sense of confidence and of belonging to the team is fostered. Lack of availability of, and access to, these systems may, however, impede students' ability to be involved in all aspects of patient care. Limitations of the current review include the relatively low quality of the educational research conducted in this field. Further high-quality research is needed to explore how students in the health care professions are supported in digital environments and how higher education institutions are adapting their curricula to meet the digital learning needs of health care students.
Highlights
- Digitalisation of health care has become a widespread practice.
- Preparation and support are essential for using digital systems on placements.
- Students feel a sense of belonging when provided with access to patient information.
- Further support is required for students to integrate the use of technology during the patient encounter.
Our findings suggest that fair clinical assessment is important to both medical students and clinical teachers. Interspecialty discussions about assessment may have the potential to enrich intraspecialty perspectives, enhance interspecialty engagement and collaboration, and improve the quality of clinical teacher assessment. Better alignment of university and hospital systems, a face-to-face component, and other modifications may have enhanced clinician engagement with this project. Findings suggest that specialty assessment cultures and content expertise may not be barriers to pursuing more integrated approaches to assessment.
Background
Robust and defensible clinical assessments attempt to minimise differences in student grades that are due to differences in examiner severity (stringency and leniency). Unfortunately, there is little evidence to date that examiner training and feedback interventions are effective; “physician raters” have indeed been deemed “impervious to feedback”. Our aim was to investigate the effectiveness of a general practitioner examiner feedback intervention, and to explore examiner attitudes to it.
Methods
Sixteen examiners were provided with a written summary of all examiner ratings in medical student clinical case examinations over the preceding 18 months, enabling them to identify their own rating data and compare it with that of other examiners. Examiner ratings and examiner severity self-estimates were analysed pre- and post-intervention using non-parametric bootstrapping, multivariable linear regression, intra-class correlation and Spearman’s correlation analyses. Examiners completed a survey exploring their perceptions of the usefulness and acceptability of the intervention, including what (if anything) they planned to do differently as a result of the feedback.
Results
Examiner severity self-estimates were relatively poorly correlated with measured severity on the two clinical case examination types pre-intervention (0.29 and 0.67) and were less accurate post-intervention. No significant effect of the intervention was identified when differences in case difficulty were controlled for, although there were fewer outlier examiners post-intervention. Drift in examiner severity over time prior to the intervention was observed.
Participants rated the intervention as interesting and useful, and survey comments indicated that fairness, reassurance, and understanding examiner colleagues are important to examiners.
Conclusions
Despite our participants being receptive to our feedback and wanting to be “on the same page”, we did not demonstrate effective use of the feedback to change their rating behaviours. Calibration of severity appears to be difficult for examiners, and further research into better ways of providing more effective feedback is indicated.