Student engagement is an important factor in learning outcomes in higher education. Engagement with learning at campus-based higher education institutions is difficult to quantify because of the variety of forms engagement can take (e.g. lecture attendance, self-study, use of online/digital systems). Meanwhile, there are increasing concerns about student wellbeing within higher education, but the relationship between engagement and wellbeing is not well understood. Here we analyse results from a longitudinal survey of undergraduate students at a campus-based university in the UK, aiming to understand how engagement and wellbeing vary dynamically during an academic term. The survey covered multiple dimensions of student engagement and wellbeing, with a deliberate focus on self-report measures to capture students' subjective experience. The results show a wide range of engagement with different systems and study activities, giving a broad view of student learning behaviour over time. Engagement and wellbeing vary during the term, with clear behavioural changes caused by assessments. Results indicate a positive interaction between engagement and happiness, alongside an unexpected negative relationship between engagement and academic outcomes. This study provides important insights into subjective aspects of the student experience and offers a contrast to the increasing focus on analysing educational processes through digital records.
The question "What is an appropriate role for AI?" is the subject of much discussion and interest. Arguments about whether AI should be a human-replacing technology or a human-assisting technology frequently take centre stage. Education is no exception when it comes to questions about the role that AI should play, and as with many other professional areas, the exact role of AI in education is not easy to predict. Here, we argue that one potential role for AI in education is to provide opportunities for human intelligence augmentation, with AI supporting us in decision-making processes rather than replacing us through automation. To provide empirical evidence for our argument, we present a case study in the context of debate tutoring, in which we use prediction and classification models to increase the transparency of expert tutors' intuitive decision-making processes, enabling more advanced reflection and feedback. Furthermore, we compare the accuracy of unimodal and multimodal models for classifying expert human tutors' decisions about the social and emotional aspects of tutoring while evaluating trainees. Our results show that multimodal data leads to more accurate classification models in the context we studied.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.