Excellence in academic performance at the graduate level requires a strong command of writing skills. Teachers' written feedback can help students develop their writing; however, several personal and contextual factors may influence feedback processes and students' use of feedback. Understanding these factors is therefore essential to improving the practice of written feedback. This study aimed to appraise the quality of written feedback in graduate programmes at a private university in Pakistan and to ascertain students' perceptions of it. The study comprised a purposive sample of 15 participants. Data were collected through in-depth student interviews and teachers' written comments on students' assignments, then coded and categorized to identify patterns of similarity and difference. Analysis of the comments on students' assignments indicated that the amount of feedback varied greatly. Although some feedback addressed form and style, most comments focused on content. Moreover, the tone of the comments lacked a balance of praise, criticism and suggestions. Data from the student interviews were categorized as: variations in experiences, functions of written feedback, effectiveness of feedback and utilization of feedback. With some exceptions, students' perceptions of feedback quality corroborated the analysis of teachers' comments. The study highlights several factors that affect students' receptivity to and utilization of feedback; teachers therefore need to be made aware of these factors and trained to enhance the quality of their feedback.
This article draws together two linked studies on formal teaching spaces within one university. The first consisted of a multi-method analysis, including observations of four teaching events, interviews with academics and estates staff, analysis of architectural plans, and a talking campus tour. The second study surveyed 166 students about their perceptions of existing teaching spaces and their dreams of ideal spaces, eliciting qualitative comments. Researchers used a comparative analysis of the data to generate themes. Academics and students held differing conceptions of space. For students, a functional view prevailed, with dominant teacher-centred approaches (lectures, seminars, tutorials) constraining their imagination of fresh possibilities. Academics reflected on the limits and potential of spaces, surfacing more abstract concepts about familiarity, invisibility, space-time dimensions, territoriality and collegiality. The article explores the constraints that space may place on imagined and alternative pedagogies, and concludes that familiar, computer-networked and conventional spaces may re-inscribe hierarchical, teacher-centred approaches.
This article explores the relationship between the lack of visible attention to formative assessment in degree specifications and its marginalization in practice. Degree specification documents form part of the quality apparatus emphasizing the accountability and certification duties of assessment. Ironically, a framework designed to assure quality may work to the exclusion of a pedagogic duty to students. This study draws on interview and documentary evidence from 14 programmes at a single UK university, supported by data from a national research project. The authors found that institutional quality frameworks focused programme leaders' attention on summative assessment, usually atomized to the modular unit. The invisibility of formative assessment in documentation reinforced the tendency of modular programmes to have high summative demands, with optional, fragmented and infrequent formative assessment. Heavy workloads, modularity and pedagogic uncertainties compounded the problem. The article concludes with reflections about facilitating a more pervasive culture of formative assessment to improve student learning.
Evidence from 73 programmes in 14 UK universities sheds light on the typical student experience of assessment over a three-year undergraduate degree. A previous small-scale study in three universities characterised programme assessment environments using a similar method. The current study analyses data about assessment patterns using descriptive statistical methods, drawing on a large sample from a wider range of universities than the original study. Findings demonstrate a wide range of practice across programmes: from 12 summative assessments on one programme to 227 on another, and from 87% of assessment by examination on some programmes to none on others. While such variations cast doubt on the comparability of UK degrees, programme assessment patterns are complex. Further analysis distinguishes common assessment patterns across the sample: typically, students encounter eight times as much summative as formative assessment, a dozen different types of assessment, and more than three quarters of assessment by coursework. High summative and low formative assessment diets are likely to compound students' grade orientation, reinforcing narrow and instrumental approaches to learning. A high variety of assessment types is a probable contributor to student confusion about goals and standards. Making systematic headway in improving student learning from assessment requires a programmatic and evidence-led approach to design, characterised by dialogue and social practice.
This paper explores disciplinary patterns of assessment and feedback, using data from the Transforming the Experience of Students through Assessment project. Its central research question concerns the effect of disciplinary assessment patterns on student learning. Audit data from 18 degree programmes at eight UK universities showed variations in assessment patterns across three disciplinary fields: Humanities, Professional and Science courses. There were variations in assessment demands, in the quantity of feedback, and in the proportion of examinations. Statistical analysis of Assessment Experience Questionnaire data (n = 762) explored whether these differences influenced students' perceptions of learning across the disciplines. Findings showed no significant differences in students' perceptions of learning from examinations. Humanities students rated the appropriateness of their assessment lower than other discipline groups, and Professional students were less clear about goals and standards. The researchers propose explanations for these findings and suggest avenues for further research.
Lecture capture is used increasingly in the UK and has become a normal feature of higher education. Most studies on the impact of lecture capture have focused on benefits to student learning, the flipped classroom or student non-attendance at lectures following its introduction. It is less clear how the use of lecture capture has affected lecturers' own academic practice. In this study, we use a mixed-methods approach to explore the impact of this intrusive yet invisible technology on the quality of teaching, and we map our findings to the UK Professional Standards Framework (UKPSF). In doing so, our data paint a mixed picture of lecture capture's Janus-faced reality. On the one hand, it enhances lecturer self-awareness, planning and conscious 'performance'; on the other, it crushes spontaneity, impairs interaction and breeds wariness through constant surveillance. While the Teaching Excellence Framework rewards institutions for providing state-of-the-art technology and lecture recording systems, our findings pose awkward questions as to whether lecture capture is making teaching more bland and instrumental, albeit neatly aligned to dimensions of the UKPSF. We provide contradictory evidence about lecture capture technology, which is embraced by students yet only tentatively adopted by most academics. The implications of our study are not straightforward, except to proceed with caution, valuing the benefits but ensuring that learning is not dehumanised through blind acceptance at the moment we press the record button.