This paper evaluates the use of peer and self-assessment as part of the learning process in an open-ended, essay-based coursework exercise within a second-year engineering degree module in Brunei Darussalam. The essays were marked using a rubric by the student, a peer, and the lecturer, with students trained on the use of the rubric prior to the exercise. Comparison of the marks awarded by the different markers (student, peer, lecturer) showed that whilst there might be correlations between pairs of markers (i.e. peer-self or lecturer-self) for marks on certain sub-sections of the work, there was no overall correlation between marks for this open-ended problem. This lack of consistency highlights the subjective nature of marking essay-based work, even with the use of a rubric. Feedback on the students' experiences was obtained using a questionnaire, and most students felt that the peer assessment exercise was a worthwhile activity which aided both their learning and their motivation to learn. Analysis of student performance in the exam, after the exercise, identified that almost all students did better in the question linked to the exercise than in the others, further reinforcing this student view. The poor mark concordance in this study indicates that neither technique is suitable for quantitatively evaluating student performance; however, both had a positive impact on student learning. It is recommended that this approach be incorporated into other open-ended assessments as a form of formative feedback, with adequate tutor and student preparation.
This paper presents findings on the application of Google Docs as a collaborative writing platform for a research report assignment. In a mixed-methods study, 34 first-year students were put into eight groups of four to five and tasked with writing a research report. Four groups adopted Google Docs to discuss and develop the assignment, whilst the other four adopted a more traditional face-to-face approach. Two sets of questionnaires, administered before and after the assignment, were distributed to investigate students' attitudes and preferences towards both approaches. Findings indicate that students had mixed feelings towards Google Docs, although students without prior experience found it a positive experience and useful for their learning. Students preferred the real-time accessibility and time-saving features of the platform over face-to-face meetings, with results indicating that a blended approach of online and face-to-face meetings is the best way to maximise student learning.
This article presents an evaluation of the use of peer and self-assessment as part of the learning process in a public speaking coursework assessment, with students from two departments taking part. Students were assessed by themselves, their peers and the lecturer using an online platform, Google Forms, utilizing a set of rubrics. The marks were compared between markers to identify similarities and differences. After the process, student feedback on the experience was obtained using a questionnaire that rated different questions on a seven-point Likert scale. Analysis of the marks awarded found that whilst there might be correlations between different markers (i.e. peer-self) for marks on certain subsections of the work, there was no overall correlation between marks. Student perceptions of the exercise indicated that the use of rubrics was well received; students considered it a fair assessment method that provided information on how to perform well in the assessment.
This study investigates using technology to promote authentic and meaningful learning through a peer assessment rubric for a public speaking assessment in a higher education institution in Brunei Darussalam. Three hundred and six undergraduates from Universiti Teknologi Brunei's Schools of Business and Computing and its Engineering Faculty conducted the assessments in real time using online rubrics accessible via their smartphones or laptops. The lecturers' and students' marks were compared for each rubric criterion, and a set of questionnaires was distributed afterwards to investigate students' perceptions of the peer assessment. The results indicated a variable discrepancy between the lecturers' and students' assessments across the rubric criteria. In some disciplines, peer markers overmarked by more than 15% relative to the lecturer, while in other cases the marks were similar. Comparison between peer and lecturer assessment indicated that the level of agreement was sensitive to the lecturer, but less so between student cohorts assessed by the same lecturer. Where differences were observed, there was no apparent distinction in agreement between the rubric aspects evaluating content and those evaluating delivery. Students' feedback revealed a positive response towards peer assessment but highlighted issues surrounding the technological aspects of the implementation process.
This paper sets out to answer a fundamental question: 'How do tutors hedge their comments using modal verbs?' A total of 126 feedback reports comprising 35,941 words were collected from two Humanities departments in a UK higher education institution. Although this is a relatively small corpus, it is a specialised one: the research focuses on a specific genre, written feedback, so the findings should be justifiable in relation to the hedging expressions realised through modal verbs in feedback. A wordlist search of the nine core modal verbs (can, could, may, might, must, shall, should, will and would) was carried out with WordSmith Tools 5. The results show that could, might and would are the top three modal verbs, followed by can, may, must, should and will, all of which are used as hedges, although some convey higher levels of certainty than others. Shall was not found in the written feedback, as it is more commonly used in legal texts. The modal verbs could, might and would were used most often because of their lower levels of certainty; must, should and will indicate higher certainty, are more direct, and were opted for less. The concordances for each modal verb were further examined for their functions. The modal verbs were used to indicate criticism (can, could, may, might, will and would), suggestions (could, may, might and would), possibility (may, might and can) and necessity (must and should). Other functions included permission (can), certainty (will) and advice (would), all of which were of very low frequency. The results show that tutors tend to be more assertive or direct when commenting on mechanical aspects of writing (through must and should) and to hedge more when criticising or offering suggestions. The findings of this research aim to provide a feedback framework as a reference guide for teacher training programmes.
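The wordlist step described in this study can be approximated with a short script. The actual instrument was WordSmith Tools 5; the Python sketch below (with an invented sample `feedback` string) only illustrates the frequency count over the nine core modal verbs, not the concordance or functional analysis.

```python
import re
from collections import Counter

# The nine core English modal verbs examined in the study.
CORE_MODALS = ["can", "could", "may", "might", "must",
               "shall", "should", "will", "would"]

def modal_frequencies(text):
    """Count occurrences of each core modal verb in `text`.

    Tokenisation here is a simple lowercase word match; a full
    concordancer (e.g. WordSmith Tools) would also surface the
    surrounding context needed to classify each use by function.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t in CORE_MODALS)
    # Report every modal, including those that never occur
    # (in the study's corpus, "shall" had zero occurrences).
    return {m: counts.get(m, 0) for m in CORE_MODALS}

# Hypothetical feedback snippet, for illustration only.
feedback = ("You could expand this section. The argument might be "
            "clearer if examples were added. References must follow "
            "the style guide.")
print(modal_frequencies(feedback))
```

Running the sketch on a real corpus would simply mean concatenating the feedback reports into `text`; ranking the resulting dictionary by value reproduces the frequency ordering the study reports.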