Background: Didactics play a key role in medical education, yet there is no standardized evaluation tool to assess didactic quality and provide feedback to instructors. Cognitive load theory offers a framework for lecture evaluation. We sought to develop an evaluation tool, rooted in cognitive load theory, to assess the quality of didactic lectures.
Methods: We used a modified Delphi method to achieve expert consensus on items for a lecture evaluation tool. Nine emergency medicine educators with expertise in cognitive load participated in three modified Delphi rounds. In the first two rounds, experts rated the importance of including each item in the evaluation rubric on a 1-to-9 Likert scale, with 1 labeled "not at all important" and 9 labeled "extremely important." In the third round, experts made a binary choice of whether each item should be included in the final evaluation tool. In every round, experts were invited to provide written comments, edits, and suggested additional items. Modifications were made between rounds based on item scores and expert feedback. We calculated descriptive statistics for item scores.
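The between-round analysis described above can be sketched in a few lines of Python. The descriptive statistics (mean, median, range) follow the Methods; the retention threshold is a hypothetical illustration, since the abstract does not specify the study's decision rule.

```python
from statistics import mean, median

def item_stats(ratings):
    """Descriptive statistics for one item's 1-9 importance ratings."""
    return {
        "n": len(ratings),
        "mean": round(mean(ratings), 2),
        "median": median(ratings),
        "range": (min(ratings), max(ratings)),
    }

def retained(ratings, threshold=7):
    """Illustrative retention rule (assumption, not the study's actual
    criterion): keep an item if its median rating meets the cutoff."""
    return median(ratings) >= threshold

# Example: nine panelists rate one candidate item
panel = [8, 9, 7, 8, 6, 9, 8, 7, 8]
print(item_stats(panel))  # {'n': 9, 'mean': 7.78, 'median': 8, 'range': (6, 9)}
print(retained(panel))    # True
```

In practice such summaries would be computed per item per round, with low-scoring items flagged for revision or removal before the next round.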
Results: We completed three Delphi rounds, each with a 100% response rate. After Round 1, we removed one item, made major changes to two items, made minor wording changes to nine items, and modified the scale of one item. Following Round 2, we eliminated three items, made major wording changes to one item, and made minor wording changes to another. After Round 3, we made minor wording changes to two items and reordered and categorized the items for ease of use. The final evaluation tool consisted of nine items.
Conclusions: We developed a lecture assessment tool rooted in cognitive load theory and specific to medical education. This tool can be applied to assess the quality of instruction and provide important feedback to speakers.