terminology is new, some of the approaches being labelled 'flipped' are actually much older. In this paper we provide a catch-all definition for the flipped classroom, and attempt to retrofit it with a pedagogical rationale, which we articulate through six testable propositions. These propositions provide a potential agenda for research about flipped approaches and form the structure of our investigation. We construct a theoretical argument that flipped approaches might improve student motivation, and help manage cognitive load. We conclude with a call for more specific types of research into the effectiveness of the flipped classroom approach.
Evaluative judgement is the capability to make decisions about the quality of work of oneself and others. In this paper, we propose that developing students' evaluative judgement should be a goal of higher education, to enable students to improve their work and to meet their future learning needs: a necessary capability of graduates. We explore evaluative judgement within a discourse of pedagogy rather than primarily within an assessment discourse, as a way of encompassing and integrating a range of pedagogical practices. We trace the origins and development of the term 'evaluative judgement' to form a concise definition, then recommend refinements to existing higher education practices of self-assessment, peer assessment, feedback, rubrics, and use of exemplars to contribute to the development of evaluative judgement. Considering pedagogical practices in light of evaluative judgement may lead to fruitful methods of developing this capability in students.
Since the early 2010s, the literature has shifted to view feedback as a process in which students make sense of information about work they have done and use it to improve the quality of their subsequent work. In this view, effective feedback needs to demonstrate effects. However, it is unclear whether educators and students share this understanding of feedback. This paper reports a qualitative investigation of what educators and students think the purpose of feedback is, and what they think makes feedback effective. We administered a survey on feedback that was completed by 406 staff and 4514 students from two Australian universities. Inductive thematic analysis was conducted on data from a sample of 323 staff with assessment responsibilities and 400 students. Staff and students largely thought the purpose of feedback was improvement. With respect to what makes feedback effective, staff mostly discussed feedback design matters such as timing, modalities and connected tasks. In contrast, students mostly wrote that high-quality feedback comments make feedback effective, especially comments that are usable, detailed, considerate of affect and personalised to the student's own work. This study may assist researchers, educators and academic developers in refocusing their efforts to improve feedback.
CONTEXT Formal qualitative synthesis is the process of pooling qualitative and mixed-method research data, and then drawing conclusions regarding the collective meaning of the research. Qualitative synthesis is regularly used within systematic reviews in the health professions literature, although such use has been heavily debated in the general literature. This controversy arises in part from the inherent tensions found when generalisations are derived from in-depth studies that are heavily context-dependent. METHODS We explore three representative qualitative synthesis methodologies: thematic analysis, meta-ethnography, and realist synthesis. These can be understood across two dimensions: integrative to interpretative, and idealist to realist. Three examples are used to illustrate the relative strengths and limitations of these approaches. DISCUSSION Against a backdrop of controversy and diverse methodologies, readers must take a critical stand when reading literature reviews that use qualitative synthesis to derive their findings. We argue that notions of qualitative rigour, such as transparency and acknowledgment of the researchers' stance, should be applied to qualitative synthesis.
'Rubric' is a term with a variety of meanings. As the use of rubrics has increased both in research and practice, the term has come to represent divergent practices. These range from secret scoring sheets held by teachers to holistic student-developed articulations of quality. Rubrics are evaluated, mandated, embraced and resisted based on often imprecise and inconsistent understandings of the term. This paper provides a synthesis of the diversity of rubrics, and a framework for researchers and practitioners to be clearer about what they mean when they say 'rubric'. Fourteen design elements or decision points are identified that make one rubric different from another. This framework subsumes previous attempts to categorise rubrics, and should provide more precision to rubric discussions and debate, as well as supporting more replicable research and practice.
A wide range of technologies has been developed to enhance assessment, but adoption has been inconsistent. This is despite assessment being critical to student learning and certification. To understand why this is the case and how it can be addressed, we need to explore the perspectives of academics responsible for designing and implementing technology-supported assessment strategies. This paper reports on the experience of designing technology-supported assessment based on interviews with 33 Australian university teachers. The findings reveal the desire to achieve greater efficiencies and to be contemporary and innovative as key drivers of technology adoption for assessment. Participants sought to shape student behaviours through their designs and made adaptations in response to positive feedback and undesirable outcomes. Many designs required modification because of a lack of appropriate support, leading to compromise and, in some cases, abandonment. These findings highlight the challenges to effective technology-supported assessment design and demonstrate the difficulties university teachers face when attempting to negotiate mixed messages within institutions and the demands of design work. We use these findings to suggest opportunities to improve support by offering pedagogical guidance and technical help at critical stages of the design process and encouraging an iterative approach to design.
There are many excellent publications outlining features of assessment and feedback design in higher education. However, university educators often find these ideas challenging to realise in practice, as much of the literature focuses on institutional change rather than supporting academics. This paper describes the conceptual development of a practical framework designed to stimulate educators' thinking when creating or modifying assessments. We explain the concepts that underpin this practical support, including the notions of 'assessment decisions' and 'assessment design phases', as informed by relevant literature and empirical data. We also present the outcome of this work: the Assessment Design Decisions Framework. This provides key considerations in six categories: purposes, contexts, tasks, interactions, feedback processes and learning outcomes. By tracing the development of the Framework, we highlight complex ways of thinking about assessment that are relevant to those who design and deliver assessment to tertiary students.