A Mastery Rubric is a curriculum development and evaluation tool for higher, graduate, professional, and post-graduate education. It brings structure to a curriculum by specifying the knowledge, skills, and abilities (KSAs) that the curriculum is intended to develop, together with performance levels that characterize the learner on each KSA as the individual moves through stages of development. The tool unites a developmental implementation of Bloom's taxonomy with curriculum objectives to move learners along an articulated path from novice toward independence and expertise, with features of psychometric assessment validity built in. It promotes development of the target KSAs, supports assessment that demonstrates this development, and encourages reflection and self-monitoring by learners and instructors throughout individual courses and the entire curriculum. A Mastery Rubric represents flexible, criterion-referenced definitions of "success" for both individuals and the program itself, promotes alignment between the intended and the actual curricula, and fosters the generation of actionable evidence for learners, instructors, and institutions. These properties are described through the seven examples completed to date. The methods used to create a Mastery Rubric highlight its theoretical and practical features, the effort required, and the potential benefits to learners, instructors, and the institution.
Curriculum development in higher education should follow a formal process. Although formal curriculum theory focuses on long-term programs of study, its theoretical and practical considerations also apply to shorter-form learning experiences (single courses, lessons, or training sessions). With these considerations in mind, we discuss an iterative model of curriculum design whose starting point (indeed, the starting point of any learning experience) is the articulation of the target learning outcomes: everything follows from these, including the selection of learning experiences and content, the development of assessments, and the evaluation of the resulting curriculum. We discuss how this iterative process can be used in curriculum and instructional development, and provide a set of practical guidelines for curriculum and course preparation.
It is common to create courses for the higher education context that accomplish content-driven teaching goals and then to develop assessments (quizzes and exams) based on the target content. However, content-driven assessment tends to support teaching- or teacher-centered instruction. Adult learning and educational psychology theories suggest that assessment should instead be aligned with learning objectives, not teaching objectives. To support the alignment of assessments with instruction in higher education, the Assessment Evaluation Rubric (AER) was developed. The AER can guide both the development of new assessments and the evaluation and revision of assessments already in use. It describes, or permits the evaluation of, four features of an assessment: its general alignment with learning goal(s); whether it is intended to be, and is effective as, formative or summative; whether it reflects some systematic approach to cognitive complexity; and whether the assessment itself (its instructions as well as its results) is clearly interpretable. Each dimension (alignment, utility, complexity, and clarity) has four questions that can be rated as present/absent; other rating methods can also be conceptualized for the AER's 16 questions, depending on the user's intent. Any instructor can use the AER to ensure that their existing assessments, or new assessments in development, promote learning and learner-centered teaching. As instructors shift from face-to-face toward virtual or hybrid teaching models, or shift online instruction (back) to face-to-face teaching, these transitions create an ideal opportunity to ensure that assessment optimizes learning and is valid for instructional decision-making.
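To make the rubric's structure concrete, the following is a minimal sketch, in Python, of how an instructor might tabulate present/absent ratings across the AER's four dimensions. The dimension names and the 4 x 4 layout come from the abstract above; the scoring helper, variable names, and example ratings are hypothetical illustrations, not the published instrument.

```python
# Sketch of tallying Assessment Evaluation Rubric (AER) ratings.
# Dimension names are from the abstract; the question-level ratings and
# the summarize_aer helper are hypothetical illustrations.

AER_DIMENSIONS = ("alignment", "utility", "complexity", "clarity")
QUESTIONS_PER_DIMENSION = 4  # 4 dimensions x 4 questions = 16 questions


def summarize_aer(ratings: dict[str, list[bool]]) -> dict[str, str]:
    """Report how many of each dimension's four questions were rated present."""
    summary = {}
    for dim in AER_DIMENSIONS:
        answers = ratings[dim]
        if len(answers) != QUESTIONS_PER_DIMENSION:
            raise ValueError(f"{dim}: expected {QUESTIONS_PER_DIMENSION} ratings")
        summary[dim] = f"{sum(answers)}/{QUESTIONS_PER_DIMENSION} present"
    return summary


# Example: one instructor's present/absent ratings for a single quiz.
quiz_ratings = {
    "alignment":  [True, True, True, False],
    "utility":    [True, False, True, True],
    "complexity": [False, False, True, False],
    "clarity":    [True, True, True, True],
}
print(summarize_aer(quiz_ratings))
```

A tabulation like this would make it easy to see, at a glance, which dimension (here, cognitive complexity) an assessment under revision is weakest on.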
This manuscript is in press (November 2017) at Briefings in Bioinformatics. Qualitative data are commonly collected in higher, graduate, and post-graduate education; however, perhaps especially in the quantitative sciences, using these qualitative data for decision-making can be challenging. One method for the analysis of qualitative data is the degrees-of-freedom analysis (DoFA), published in 1975. Given its origins in political science and its application mainly in business contexts, the DoFA is unlikely to be discovered or used to understand survey or other educational data obtained from teaching, training, or evaluation. This paper therefore introduces and demonstrates the DoFA, with modifications specifically to support educational research and decision-making, using examples in bioinformatics. The DoFA identifies and aligns theoretical or applied principles with qualitative evidence. The demonstrations include two hypothetical examples and a case study of the role of scaffolding in an independent project ("capstone") in a graduate course in biostatistics. Included to promote inquiry, inquiry-based learning, and the development of research skills, the capstone is often scaffolded (instructor-supported and therefore formative), although it is actually intended to be summative. The case analysis addresses whether the scaffolding provided for a capstone assignment affects its utility for formative or summative assessment. The DoFA is also used to evaluate the relative efficacy of other models for scaffolding the capstone project. These examples are intended both to explain the method and to demonstrate how it can be used to make decisions within a curriculum or for bioinformatics training.
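As a rough illustration of the bookkeeping a DoFA involves, the sketch below (in Python) tallies how many pieces of qualitative evidence are consistent with each principle's predictions. Everything here (the principle names, predictions, and evidence items) is a hypothetical stand-in for the paper's case study, assuming the core operation is matching evidence against predictions derived from competing principles.

```python
# Sketch of degrees-of-freedom analysis (DoFA) bookkeeping: each principle
# generates a prediction, and each piece of qualitative evidence is scored
# as consistent or not with each prediction. All names below are
# hypothetical illustrations, not the paper's actual case data.

predictions = {
    "scaffolding-is-formative": "learners revise work after instructor feedback",
    "capstone-is-summative": "final product is graded without further revision",
}

# Each evidence item: (source, {prediction name: consistent with evidence?}).
evidence = [
    ("student reflection 1", {"scaffolding-is-formative": True,
                              "capstone-is-summative": False}),
    ("syllabus excerpt",     {"scaffolding-is-formative": True,
                              "capstone-is-summative": True}),
]

# Tally how many pieces of evidence are consistent with each prediction.
support = {name: 0 for name in predictions}
for _source, matches in evidence:
    for name, consistent in matches.items():
        if consistent:
            support[name] += 1

for name, count in support.items():
    print(f"{name}: {count}/{len(evidence)} pieces of evidence consistent")
```

The principle whose predictions are consistent with the most evidence is the one the qualitative record best supports, which is the kind of comparison the case study uses to weigh scaffolding models.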
A 2017 article in the Proceedings of the National Academy of Sciences (PNAS) reported that short-term, intensive "bootcamp" and other training opportunities ("train the user") did not yield the results the training was intended to achieve. These results are predicted by cognitive psychological theories and findings from educational psychology. However, many "bootcamp" (short and intense preparation) training opportunities, especially train-the-trainer programs, have some anecdotal evidence of success and impact. Data and Software Carpentries ("The Carpentries") is an organization that offers short, intensive training workshops around the world similar to some of those discussed in the 2017 paper. They train "users" of data and software, and they also train trainers of these users. In their response to the 2017 paper, The Carpentries acknowledged one point raised there that cannot be circumvented: training spaced over time is more successful than shorter, more intense training. However, short and intense training is popular and is sometimes all that is feasible; both their train-the-user and train-the-trainer sessions are short and intense. In their response, The Carpentries described their own strategies for achieving more positive outcomes for short and intense training than were described in the 2017 PNAS article. Two of these strategies are to "meet learners where they are" and to "explicitly address motivation and self-efficacy". These strategies may not be functioning as well as they could be. To clarify what might be impeding them, this white paper compares and contrasts features of training those who will train others ("train the trainer") with training for "new users". Both types of programs are short and intense, but they differ in fundamental ways. Understanding these differences can be leveraged to improve the outcomes of train-the-user training; recommendations for doing so are presented. The recommendations are embedded in descriptions of the features of training opportunities that can be purposefully leveraged to promote sustainable learning, even when training is short and intense. It is hoped that the model can support the success of the two Carpentries strategies and thereby promote the achievement of Carpentries goals, and those of all who offer short, intensive training opportunities.