Affective computing is an emerging research area that provides insights into humans' mental states through human-machine interaction. During this interaction, bio-signal analysis is essential for detecting affective changes. Machine learning methods are currently the state of the art for detecting affective states from bio-signals, but most empirical work deploys traditional machine learning methods rather than deep learning models because of the need for explainability. In this paper, we propose a deep learning model that processes multimodal, multisensory bio-signals for affect recognition. It supports batch training of signals with different sampling rates at the same time, and our results show a significant improvement over the state of the art. Furthermore, the results are interpreted at the sensor and signal levels to improve the explainability of our deep learning model.
CCS CONCEPTS: • Information systems → Data stream mining; • Applied computing → Bioinformatics.
This research investigates whether students' level of domain expertise can be detected during authentic learning activities by analyzing their physical activity patterns. More expert students reduced their manual activity by a substantial 50%, which was evident in fine-grained signal analyses and in the total rate of gesturing. Experts' discrete hand movements also averaged shorter distances, briefer durations, and slower velocities than those of non-experts. Interestingly, experts adapted by nearly eliminating gestures on easier problems while selectively increasing them on harder ones. They also strategically produced 62% more iconic gestures, which serve to retain spatial information in working memory while extracting the inferences required to solve problems correctly. These findings highlight the close relation between hand movements and mental state and, more specifically, show that hand movements provide an unusually clear window on students' level of domain expertise. Embodied Cognition and Limited Resource theories only partially account for the present findings, which specify future directions for theoretical work.
Educational feedback has been widely acknowledged as an effective approach to improving student learning. However, scaling effective practices can be laborious and costly, which has motivated researchers to work on automated feedback systems (AFS). Inspired by recent advancements in pre-trained language models (e.g., ChatGPT), we posit that such models might advance the existing knowledge of textual feedback generation in AFS because of their capability to offer natural-sounding and detailed responses. Therefore, we aimed to investigate the feasibility of using ChatGPT to provide students with feedback that helps them learn better. Specifically, we first examined the readability of ChatGPT-generated feedback. Then, we measured the agreement between ChatGPT and the instructor when assessing students' assignments according to the marking rubric. Finally, we used a well-known theoretical feedback framework to further investigate the effectiveness of the feedback generated by ChatGPT. Our results show that i) ChatGPT is capable of generating feedback that is more detailed than that of human instructors and that fluently and coherently summarizes students' performance; ii) ChatGPT achieved high agreement with the instructor when assessing the topic of students' assignments; and iii) ChatGPT could provide feedback on the process by which students completed the task, which benefits students in developing learning skills.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.