BACKGROUND: Locomotor training, including the use of body-weight support in treadmill stepping, is a physical therapy intervention used to improve recovery of the ability to walk after stroke. The effectiveness and appropriate timing of this intervention have not been established.
METHODS: We stratified 408 participants who had had a stroke 2 months earlier according to the extent of walking impairment — moderate (able to walk 0.4 to <0.8 m per second) or severe (able to walk <0.4 m per second) — and randomly assigned them to one of three training groups. One group received training on a treadmill with the use of body-weight support 2 months after the stroke had occurred (early locomotor training), the second group received this training 6 months after the stroke had occurred (late locomotor training), and the third group participated in an exercise program at home managed by a physical therapist 2 months after the stroke (home-exercise program). Each intervention included 36 sessions of 90 minutes each for 12 to 16 weeks. The primary outcome was the proportion of participants in each group who had an improvement in functional walking ability 1 year after the stroke.
RESULTS: At 1 year, 52.0% of all participants had increased functional walking ability. No significant differences in improvement were found between early locomotor training and home exercise (adjusted odds ratio for the primary outcome, 0.83; 95% confidence interval [CI], 0.50 to 1.39) or between late locomotor training and home exercise (adjusted odds ratio, 1.19; 95% CI, 0.72 to 1.99). All groups had similar improvements in walking speed, motor recovery, balance, functional status, and quality of life. Neither the delay in initiating the late locomotor training nor the severity of the initial impairment affected the outcome at 1 year.
Ten related serious adverse events were reported (occurring in 2.2% of participants undergoing early locomotor training, 3.5% of those undergoing late locomotor training, and 1.6% of those engaging in home exercise). As compared with the home-exercise group, each of the groups receiving locomotor training had a higher frequency of dizziness or faintness during treatment (P=0.008). Among patients with severe walking impairment, multiple falls were more common in the group receiving early locomotor training than in the other two groups (P=0.02).
CONCLUSIONS: Locomotor training, including the use of body-weight support in stepping on a treadmill, was not shown to be superior to progressive exercise at home managed by a physical therapist. (Funded by the National Institute of Neurological Disorders and Stroke and the National Center for Medical Rehabilitation Research; LEAPS ClinicalTrials.gov number, NCT00243919.)
We estimate that the MCID for gait speed among patients with subacute stroke and severe gait speed impairments is 0.16 m/s. Patients with subacute stroke who increase gait speed ≥0.16 m/s are more likely to experience a meaningful improvement in disability level than those who do not. Clinicians can use this reference value to develop goals and interpret progress in patients with subacute stroke.
Background: Teaching the steps of evidence-based practice (EBP) has become standard curriculum for health professions at both student and professional levels. Determining the best methods for evaluating EBP learning is hampered by a dearth of valid and practical assessment tools and by the absence of guidelines for classifying the purpose of those that exist. Conceived and developed by delegates of the Fifth International Conference of Evidence-Based Health Care Teachers and Developers, the aim of this statement is to provide guidance for purposeful classification and development of tools to assess EBP learning.
Discussion: This paper identifies key principles for designing EBP learning assessment tools, recommends a common taxonomy for new and existing tools, and presents the Classification Rubric for EBP Assessment Tools in Education (CREATE) framework for classifying such tools. Recommendations are provided for developers of EBP learning assessments, and priorities are suggested for the types of assessments that are needed. Examples place existing EBP assessments into the CREATE framework to demonstrate how a common taxonomy might facilitate purposeful development and use of EBP learning assessment tools.
Summary: The widespread adoption of EBP into professional education requires valid and reliable measures of learning. Limited tools exist with established psychometrics. This international consensus statement strives to provide direction for developers of new EBP learning assessment tools and a framework for classifying the purposes of such tools.
Background: Few studies have been performed to inform how best to teach evidence-based medicine (EBM) to medical trainees. Current evidence can only conclude that any form of teaching increases EBM competency, but cannot distinguish which form of teaching is most effective at increasing student competency in EBM. This study compared the effectiveness of a blended learning (BL) versus didactic learning (DL) approach of teaching EBM to medical students with respect to competency, self-efficacy, attitudes and behaviour toward EBM.
Methods: A mixed methods study consisting of a randomised controlled trial (RCT) and qualitative case study was performed with medical students undertaking their first clinical year of training in EBM. Students were randomly assigned to receive EBM teaching via either a BL approach or the incumbent DL approach. Competency in EBM was assessed using the Berlin questionnaire and the ‘Assessing Competency in EBM’ (ACE) tool. Students’ self-efficacy, attitudes and behaviour were also assessed. A series of focus groups was also performed to contextualise the quantitative results.
Results: A total of 147 students completed the RCT, and a further 29 students participated in six focus group discussions. Students who received the BL approach to teaching EBM had significantly higher scores in 5 out of 6 behaviour domains, 3 out of 4 attitude domains and 10 out of 14 self-efficacy domains. Competency in EBM did not differ significantly between students receiving the BL approach and those receiving the DL approach (mean difference [MD] = −0.68; 95% CI, −1.71 to 0.34; p = 0.19). No significant difference was observed between sites (p = 0.89) or by student type (p = 0.58).
Focus group discussions suggested a strong student preference for teaching using a BL approach, which integrates lectures, online learning and small group activities.
Conclusions: BL was no more effective than DL at increasing medical students’ knowledge and skills in EBM, but was significantly more effective at increasing student attitudes toward EBM and self-reported use of EBM in clinical practice. Given the various learning styles preferred by students, a multifaceted approach (incorporating BL) may be best suited when teaching EBM to medical students. Further research on the cost-effectiveness of EBM teaching modalities is required.
Electronic supplementary material: The online version of this article (doi:10.1186/s12909-015-0321-6) contains supplementary material, which is available to authorized users.
Background: Locomotor training using body weight support and a treadmill as a therapeutic modality for rehabilitation of walking post-stroke is being rapidly adopted into clinical practice. There is an urgent need for a well-designed trial to determine the effectiveness of this intervention.
Background: The majority of reporting guidelines assist researchers to report consistent information concerning study design; however, they contain limited information for describing study interventions. Using a three-stage development process, the Guideline for Reporting Evidence-based practice Educational interventions and Teaching (GREET) checklist and accompanying explanatory paper were developed to provide guidance for the reporting of educational interventions for evidence-based practice (EBP). The aim of this study was to complete the final development for the GREET checklist, incorporating psychometric testing to determine inter-rater reliability and criterion validity.
Methods: The final development for the GREET checklist incorporated the results of a prior systematic review and Delphi survey. Thirty-nine items, including all items from the prior systematic review, were proposed for inclusion in the GREET checklist. These 39 items were considered over a series of consensus discussions to determine the inclusion of items in the GREET checklist. The GREET checklist and explanatory paper were then developed and underwent psychometric testing with tertiary health professional students, who evaluated the completeness of the reporting in a published study using the GREET checklist. For each GREET checklist item, consistency of agreement (%) both between participants and with the consensus criterion reference measure was calculated. Criterion validity and inter-rater reliability were analysed using intra-class correlation coefficients (ICC).
Results: Three consensus discussions were undertaken, with 14 items identified for inclusion in the GREET checklist. Following further expert review by the Delphi panelists, three items were added and minor wording changes were completed, resulting in 17 checklist items. Psychometric testing for the updated GREET checklist was completed by 31 participants (n = 11 undergraduate, n = 20 postgraduate).
The consistency of agreement between the participant ratings for completeness of reporting and the consensus criterion ratings ranged from 19% for item 4 (Steps of EBP) to 94% for item 16 (Planned delivery). The overall consistency of agreement, for criterion validity (ICC 0.73) and inter-rater reliability (ICC 0.96), was good to almost perfect.
Conclusion: The final GREET checklist comprises 17 items which are recommended for reporting EBP educational interventions. Further validation of the GREET checklist with experts in EBP research and education is recommended.
Electronic supplementary material: The online version of this article (doi:10.1186/s12909-016-0759-1) contains supplementary material, which is available to authorized users.
Background: Handheld computers and mobile devices provide instant access to vast amounts and types of useful information for health care professionals. Their reduced size and increased processing speed have led to rapid adoption in health care. Thus, it is important to identify whether handheld computers are actually effective in clinical practice.
Objective: A scoping review of systematic reviews was designed to provide a quick overview of the documented evidence of effectiveness for health care professionals using handheld computers in their clinical work.
Methods: A detailed search, sensitive for systematic reviews, was applied to the Cochrane, Medline, EMBASE, PsycINFO, Allied and Complementary Medicine Database (AMED), Global Health, and Cumulative Index to Nursing and Allied Health Literature (CINAHL) databases. All outcomes that demonstrated effectiveness in clinical practice were included. Classroom learning and patient use of handheld computers were excluded. Quality was assessed using the Assessment of Multiple Systematic Reviews (AMSTAR) tool. A previously published conceptual framework was used as the basis for dual data extraction. Reported outcomes were summarized according to the primary function of the handheld computer.
Results: Five systematic reviews met the inclusion and quality criteria. Together, they reviewed 138 unique primary studies. Most reviewed descriptive intervention studies, where physicians, pharmacists, or medical students used personal digital assistants. Effectiveness was demonstrated across four distinct functions of handheld computers: patient documentation, patient care, information seeking, and professional work patterns. Within each of these functions, a range of positive outcomes were reported using both objective and self-report measures. The use of handheld computers improved patient documentation through more complete recording, fewer documentation errors, and increased efficiency.
Handheld computers provided easy access to clinical decision support systems and patient management systems, which improved decision making for patient care. Handheld computers saved time and gave earlier access to new information. There were also reports that handheld computers enhanced work patterns and efficiency.
Conclusions: This scoping review summarizes the secondary evidence for the effectiveness of handheld computers and mHealth. It provides a snapshot of effective use by health care professionals across four key functions. We identified evidence to suggest that handheld computers provide easy and timely access to information and enable accurate and complete documentation. Further, they can give health care professionals instant access to evidence-based decision support and patient management systems to improve clinical decision making. Finally, there is evidence that handheld computers allow health professionals to be more efficient in their work practices. It is anticipated that this evidence will guide clinicians and managers in implementing handheld computers in clinical practice and in designing futu...