Context The increasing use of Internet-based learning in health professions education may be informed by a timely, comprehensive synthesis of evidence of effectiveness.
Objectives To summarize the effect of Internet-based instruction for health professions learners compared with no intervention and with non-Internet interventions.
Conclusions Interactivity, practice exercises, repetition, and feedback seem to be associated with improved learning outcomes, although inconsistency across studies tempers conclusions. Evidence for other instructional variations remains inconclusive.
Context Computer‐aided instruction is used increasingly in medical education and anatomy instruction with limited research evidence to guide its design and deployment.
Objectives To determine the effects of (a) learner control over the e‐learning environment and (b) key views of the brain versus multiple views in the learning of brain surface anatomy.
Design Randomised trial conducted in 2 phases.
Participants Volunteer sample of 1st‐year psychology students (phase 1, n = 120; phase 2, n = 120).
Interventions Phase 1: computer‐based instruction in brain surface anatomy under 4 conditions: (1) learner control/multiple views (LMV); (2) learner control/key views (LKV); (3) programme control/multiple views (PMV); (4) programme control/key views (PKV). Phase 2: 2 conditions: programme control/key views (PKV, low learner control) versus a static presentation of key views with no learner control (SKV). All participants completed a pre‐test, a post‐test and a test of visuospatial ability.
Main outcome measures A 30‐item post‐test of brain surface anatomy structure identification.
Results The PKV group attained the best post‐test score (57.7%) and the PMV group the worst (42.2%), with the 2 high learner control groups performing in between. For students with low spatial ability, estimated scores were 20% lower for those who saw multiple views during learning. In phase 2, students in the most static condition, with no learner control (SKV), performed similarly to students in the PKV group.
Conclusions Multiple views may impede learning, particularly for those with relatively poor spatial ability. High degrees of learner control may reduce the effectiveness of learning.
OBJECTIVES Educators often speak of web-based learning (WBL) as a single entity or a cluster of similar activities with homogeneous effects. Yet a recent systematic review demonstrated large heterogeneity among results from individual studies. Our purpose is to describe the variation in configurations, instructional methods and presentation formats in WBL.
METHODS We systematically searched MEDLINE, EMBASE, ERIC, CINAHL and other databases (last search November 2008) for studies comparing a WBL intervention with no intervention or another educational activity. From eligible studies we abstracted information on course participants, topic, configuration and instructional methods. We summarised this information and then purposively selected and described several WBL interventions that illustrate specific technologies and design features.
RESULTS We identified 266 eligible studies. Nearly all courses (89%) used written text and most (55%) used multimedia. A total of 32% used online communication via e-mail, threaded discussion, chat or videoconferencing, and 9% implemented synchronous components. Overall, 24% blended web-based and non-computer-based instruction. Most web-based courses (77%) employed specific instructional methods, other than text alone, to enhance the learning process. The most common instructional methods (each used in nearly 50% of courses) were patient cases, self-assessment questions and feedback. We describe several studies to illustrate the range of instructional designs.
CONCLUSIONS On average, Internet-based instruction and non-computer instruction require similar time. Instructional strategies to enhance feedback and interactivity typically prolong learning time, but in many cases also enhance learning outcomes. Isolated examples suggest potential for improving efficiency in Internet-based instruction.
CONTEXT Studies evaluating reporting quality in health professions education (HPE) research have demonstrated deficiencies, but none have used comprehensive reporting standards. Additionally, the relationship between study methods and effect size (ES) in HPE research is unknown.
OBJECTIVES This review aimed to evaluate, in a sample of experimental studies of Internet-based instruction, the quality of reporting, the relationship between reporting and methodological quality, and associations between ES and study methods.
METHODS We conducted a systematic search of databases including MEDLINE, Scopus, CINAHL, EMBASE and ERIC, for articles published during 1990-2008. Studies (in any language) quantifying the effect of Internet-based instruction in HPE compared with no intervention or other instruction were included. Working independently and in duplicate, we coded reporting quality using the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement, and coded study methods using a modified Newcastle-Ottawa Scale (m-NOS), the Medical Education Research Study Quality Instrument (MERSQI), and the Best Evidence in Medical Education (BEME) global scale.
RESULTS For reporting quality, articles scored a mean ± standard deviation (SD) of 51 ± 25% of STROBE elements for the Introduction, 58 ± 20% for the Methods, 50 ± 18% for the Results and 41 ± 26% for the Discussion sections. We found positive associations (all p < 0.0001) between reporting quality and MERSQI (ρ = 0.64), m-NOS (ρ = 0.57) and BEME (ρ = 0.58) scores. We explored associations between study methods and knowledge ES by subtracting each study's ES from the pooled ES for studies using that method and comparing these differences between subgroups. Effect sizes in single-group pretest/post-test studies differed from the pooled estimate more than ESs in two-group studies (p = 0.013). No difference was found between other study methods (yes/no: representative sample, comparison group from same community, randomised, allocation concealed, participants blinded, assessor blinded, objective assessment, high follow-up).
CONCLUSIONS Information is missing from all sections of reports of HPE experiments. Single-group pretest/post-test studies may overestimate ES compared with two-group designs. Other methodological variations did not bias study results in this sample.
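To make the ES-deviation analysis described in the RESULTS above concrete, the following is a minimal sketch, not the authors' analysis code: the effect sizes are hypothetical, the pooled estimate is a simple unweighted mean rather than a weighted meta-analytic pool, and the Mann-Whitney U test used to compare deviations between subgroups is an assumption about how such a comparison might be implemented.

```python
# Minimal sketch of the effect-size (ES) deviation comparison described above.
# All data are hypothetical; the unweighted pooling and the Mann-Whitney U
# test are illustrative assumptions, not the paper's exact method.
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical knowledge effect sizes, grouped by study design.
es_by_design = {
    "single_group_pre_post": np.array([1.9, 2.4, 1.2, 2.8, 1.6]),
    "two_group": np.array([0.9, 1.1, 0.7, 1.3, 1.0]),
}

# Pooled ES per design (a real meta-analysis would weight by inverse variance).
pooled = {design: es.mean() for design, es in es_by_design.items()}

# Each study's absolute deviation from its subgroup's pooled estimate.
dev = {design: np.abs(es - pooled[design])
       for design, es in es_by_design.items()}

# Compare deviations between subgroups: systematically larger deviations in
# single-group studies would indicate their ESs sit farther from the pooled
# estimate, in the spirit of the reported p = 0.013 finding.
stat, p = mannwhitneyu(dev["single_group_pre_post"], dev["two_group"],
                       alternative="two-sided")
print(f"mean deviation, single-group: {dev['single_group_pre_post'].mean():.2f}")
print(f"mean deviation, two-group:    {dev['two_group'].mean():.2f}")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")
```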