Psychosocial, emotional, and physical problems can emerge after traumatic brain injury (TBI), potentially impacting health-related quality of life (HRQoL). Until now, however, neither the discriminatory power of disease-specific (QOLIBRI) and generic (SF-36) HRQoL instruments nor their correlates have been compared in detail. These aspects, as well as selected psychometric item characteristics, were studied in a sample of 795 TBI survivors. The Shannon H' index of absolute informativity, an indicator of an instrument's power to differentiate between individuals within a specific group or health state, was investigated. Psychometric performance of the two instruments was predominantly good, and was generally higher and more homogeneous for the QOLIBRI than for the SF-36 subscales. Notably, the SF-36 "Role Physical," "Role Emotional," and "Social Functioning" subscales showed less satisfactory discriminatory power than all other dimensions or the sum scores of either instrument. The absolute informativity of the disease-specific and the generic HRQoL instrument differed significantly across groups defined by the different correlates. When the focus is on how well a given subscale or sum score differentiates between individuals within a specific dimension or health state, the QOLIBRI can be recommended as the preferable instrument.
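The Shannon H index mentioned above is the entropy of a scale's response distribution: the more evenly responses spread across score categories, the higher H and the better the scale separates individuals. A minimal sketch of that computation, assuming simple integer score categories (the function name and toy data below are illustrative, not taken from the study):

```python
import math
from collections import Counter

def shannon_h(responses):
    """Shannon H index: entropy (in bits) of the response distribution.

    Higher H means scores are spread more evenly across categories,
    i.e. the scale discriminates better between respondents.
    """
    counts = Counter(responses)
    n = len(responses)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A scale whose scores cluster in one category carries less
# information than one whose scores spread across categories.
clustered = [3, 3, 3, 3, 3, 3, 2, 3]
spread = [1, 2, 3, 4, 5, 1, 2, 3]
```

Here `shannon_h(spread)` exceeds `shannon_h(clustered)`, mirroring the paper's use of informativity as a measure of discriminatory power; the published analyses additionally compute H separately within groups defined by each correlate.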
The new questionnaire is short and yet based on a widely used framework for clinical teaching. The analyses presented here indicate good reliability and validity of the instrument. Future research needs to investigate whether feedback generated from this tool helps to improve teaching quality and student learning outcomes.
Background: The seven categories of the Stanford Faculty Development Program (SFDP) represent a framework for planning and assessing medical teaching. However, so far there is no specific evaluation tool for large-group lectures that is based on these categories. This paper reports the development and psychometric validation of a short German evaluation tool for large-group lectures in medical education (SETMED-L: 'Student Evaluation of Teaching in MEDical Lectures') based on the SFDP categories.
Methods: Data were collected at two German medical schools. In Study 1, a full-information factor analysis of the new 14-item questionnaire was performed. In Study 2, following cognitive debriefings and adjustments, a confirmatory factor analysis was performed. The model was tested for invariance across medical schools and student gender. Convergent validity was assessed by comparison with results of the FEVOR questionnaire.
Results: Study 1 (n = 922) yielded a three-factor solution with one major factor (10 items) and two minor factors (2 items each). In Study 2 (n = 2740), this factor structure was confirmed. Scale reliability ranged between α = 0.71 and α = 0.88. Measurement invariance was given across student gender but not across medical schools. Convergent validity in the subsample tested (n = 246) yielded acceptable results.
Conclusion: The SETMED-L showed satisfactory to very good psychometric characteristics. Its main advantages are its short yet comprehensive form, the integration of the SFDP categories, and its focus on medical education.
Electronic supplementary material: The online version of this article (doi:10.1186/s12909-017-0970-8) contains supplementary material, which is available to authorized users.
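The scale reliabilities reported above (α = 0.71 to α = 0.88) are Cronbach's alpha values, computed from the item variances and the variance of the summed scale. A minimal sketch of that formula, with hypothetical toy data (not the SETMED-L items):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: list of per-item score lists, one score per respondent,
    all of equal length. alpha = k/(k-1) * (1 - sum(item variances)
    / variance of the total score).
    """
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def var(xs):            # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))
```

When items covary strongly, the total-score variance dominates the item variances and alpha approaches 1; values around 0.7 or higher are conventionally read as acceptable internal consistency, which matches the range reported for the SETMED-L scales.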
Abstract: Research on personalized prompting has shown that prompts can yield significant learning benefits. The current paper outlines and examines a personalized prompting approach aimed at eliminating performance differences arising from a number of learner characteristics (capturing learning strategies and traits). The learner characteristics of interest were need for cognition, work effort, computer self-efficacy, use of surface learning, and the learner's confidence in their learning. The approach was tested in two e-modules using similar assessment forms (experimental group n = 413; control group n = 243). Several prompts corresponding to the learner characteristics were implemented, including an explanation prompt, a motivation prompt, a strategy prompt, and an assessment prompt. All learner characteristics were significant correlates of at least one of the outcome measures (test performance, errors, and omissions). However, only the assessment prompt increased test performance. On this basis, and drawing upon the testing effect, this prompt may be a particularly promising option for increasing performance in e-learning and similar personalized systems.