One of the implicit aims of higher education is to enable students to become better judges of their own work. This paper examines whether students who voluntarily engage in self-assessment improve in their capacity to make those judgements. The study draws on data from a web-based marking system that gives students the opportunity to assess themselves on each criterion of each assessment task throughout a program of study. Student marks were compared with those of tutors to plot changes over time. The findings suggest that students' judgements do, overall, converge with those of tutors, but that there is considerable variation across achievement levels, with weaker students showing little improvement. While the study is limited by the exigencies of voluntary participation and the consequent gaps in the data set, it shows how change in judgement over time can be demonstrated and points to the potential for more systematic interventions to improve students' judgements. It also illustrates the use of web-based marking and feedback software (ReView) that has considerable utility in aiding self-assessment research.
Can extended opportunities for self-assessment over time help students develop the capacity to make better judgements about their work? Using evidence gathered through students' voluntary self-assessment of their performance on assessment tasks in two disciplines at two Australian universities, the paper focuses on the effects of sequences of units of study and of different types of assessment task (written, oral, analysis, project) on the development of student judgement. Convergence between students' criteria-based gradings of their own performance in units of study and the grades allocated by tutors was analysed to explore the calibration of students' judgement over time. First, the paper seeks to replicate analyses from an earlier, smaller-scale study to confirm that students' judgements can be calibrated through continuing opportunities for self-assessment and feedback. Second, it extends the analysis to coherently designed sequences of units of study and explores the effects of different types of assessment. It finds that disruptive patterns of assessment within a sequence of subjects can reduce convergence between student and tutor judgements.
This work was part of an ALTC-funded project, Facilitating Staff and Student Engagement with Graduate Attribute Development, Assessment and Standards. The project team would like to acknowledge the teaching team of the subject (Dr Peter Docherty and Mr Harry Tse) for their contribution to this study.

Abstract: Self-assessment can be conceptualised as the involvement of students in identifying the assessment criteria and standards applicable to their work and in making judgements about whether they have met those criteria (Boud, 1995). It is a process that promotes student learning rather than mere grade allocation. However, self-assessment does not have obvious face validity for students, and many students find it difficult to make an objective assessment of their own work (Lindblom-Ylanne, Pihlajamaki & Kotkas, 2006). Previous business education research has also found that self-assessment does not closely reflect either peer or instructor assessments (Campbell et al., 2001). The current study aimed to explore: (a) the relationship between self-assessment grading and teacher assessment; and (b) the effect of self-assessment in engaging students with graduate attributes. This process of self-assessment was investigated through an online assessment system, ReView, used to encourage more effective self-assessment in business education. Data collected from the two groups (students and teachers) demonstrated that: (1) initial self-assessment results differed significantly from the teaching academics' assessments, with students overestimating their ability on every criterion; (2) this variation diminished over time, to the point that there was no significant difference between the two assessments; and (3) students' awareness of the graduate attributes for their degree program increased from the beginning to the end of the subject (Note 1).
Purpose - Group-based tasks or assignments, if well designed, can yield benefits for student employability and the development of other important attributes. However, there is a fundamental problem when all members of a group receive the same mark and feedback: disregarding the quality and level of individual contributions can seriously undermine many of the educational benefits that groupwork can potentially provide. This paper aims to describe the authors' research and practical experiences of using self and peer assessment in an attempt to retain these benefits.

Design/methodology/approach - The authors separately used different paper-based methods of self and peer assessment and then used the same web-based assessment tool. Case studies of their use of the online tool in Business Faculty and Design School subjects are described. Student comments and tabular data from their self and peer assessment ratings in the two faculties were compared.

Findings - Anonymity in the online system was found to be important for students. The automatic calculation of student ratings facilitated the self and peer assessment process for large classes in both design and business subjects. Students using the online system felt they were fairly treated in the assessment process, provided it was explained to them beforehand. Students exercised responsibility in the online ratings process by not over-using the lowest rating category. Student comments and analysis of the ratings suggested that a more careful and reflective evaluation of group engagement was achieved online than in the paper-based examples quoted.

Research limitations/implications - This was not a control-group study, as the subjects in business and design differed between the paper-based and online systems. Although the online system used (SPARK) was the same, the group sizes, rating scales, and self and peer assessment criteria differed in the design and business cases.

Originality/value - Paper-based approaches to calculating a fair distribution of marks to individual group members were not viable for the reasons identified. The article shows that the online system is a very viable option, particularly for large student cohorts where students are unlikely to know one another.