Learning engages a wide range of cognitive, social and emotional states. Recognizing and understanding these states in the context of learning is therefore key to designing informed interventions and addressing the needs of the individual student to provide personalized education. In this paper, we explore the automatic detection of learners' nonverbal behaviors during learning, including hand-over-face gestures, head and eye movements, and emotions expressed via facial expressions. The proposed computer vision-based behavior monitoring method uses a low-cost webcam and can easily be integrated with modern tutoring technologies. We investigate these behaviors in depth over a 40-minute classroom session involving reading and problem-solving exercises. The exercises are divided into three categories, covering an easy, a medium and a difficult topic within undergraduate computer science. We found a significant increase in head and eye movements both as time progresses and as the difficulty level increases. We demonstrated a considerable occurrence of hand-over-face gestures (21.35% of the session on average), a behavior that remains unexplored in the education domain. We propose a novel deep learning approach for the automatic detection of hand-over-face gestures in images, achieving a classification accuracy of 86.87%. Hand-over-face gestures increase markedly as the difficulty level of the given exercise rises, and they occur more frequently during problem-solving exercises (easy 23.79%, medium 19.84%, difficult 30.46%) than during reading (easy 16.20%, medium 20.06%, difficult 20.18%).
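The abstract reports gesture occurrence as a percentage of the session. As a rough illustration of how per-frame detector outputs could be aggregated into such percentages (the function name and data layout here are illustrative assumptions, not taken from the paper):

```python
def gesture_occurrence(frame_detections):
    """Percentage of video frames in which a hand-over-face gesture was detected.

    frame_detections: list of booleans, one per frame (hypothetical output
    of a per-frame gesture classifier).
    """
    if not frame_detections:
        return 0.0
    return 100.0 * sum(frame_detections) / len(frame_detections)

# Hypothetical per-frame detections for one exercise segment.
detections = [True, False, False, True, False, False, False, True, False, False]
print(round(gesture_occurrence(detections), 2))  # → 30.0
```

The same aggregation can be applied separately per exercise category (easy, medium, difficult) and per activity (reading, problem-solving) to produce the kind of breakdown the abstract reports.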
This paper examines the possibility of creating an algorithm that combines liveness and coercion modalities with organisational factors such as workforce composition. The hypothesis is that the algorithm can produce a value that can be used to compare different biometric security setups for self-optimisation, taking into account technique compatibility and user requirements. To this end, the algorithm focuses on four main aspects: time, participants, anomalous user bases, and device redundancy within a typical organisation. An experimental methodology was used, focusing on the development of the algorithm, its associated effects, and how its different parameters can be reliably estimated. Testing shows that the algorithm works as intended: it produces an appropriate value, called the security value, which can be used to discover the best combinations of modalities for fusion development or practical installation in a given situation. Some issues remain, primarily concerning data provision, the need for more data to pass through the algorithm, and the need for a suitable interface, without which the algorithm may be too complex for efficient use in a traditional security environment. There are potential implications for general security applications, such as liveness-and-coercion multimodal fusion, autonomous system development and pervasive environments, allowing dynamic security systems to be developed. The main focus of the algorithm, however, is to highlight the fusion of liveness and coercion detection and how it can best be applied to specific security scenarios.
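The abstract does not give the algorithm's formula. A minimal sketch of one way the four aspects could be combined into a single comparable security value, assuming each aspect is normalised to [0, 1] and combined by a weighted sum (the weights, names and normalisation are illustrative assumptions, not the paper's actual method):

```python
def security_value(time_factor, participants, anomalous_users, device_redundancy,
                   weights=(0.25, 0.25, 0.25, 0.25)):
    """Combine four normalised aspects (each in [0, 1]) into one score.

    Hypothetical equal-weight scheme; the paper's actual algorithm is
    not reproduced here.
    """
    aspects = (time_factor, participants, anomalous_users, device_redundancy)
    if not all(0.0 <= a <= 1.0 for a in aspects):
        raise ValueError("aspects must be normalised to [0, 1]")
    return sum(w * a for w, a in zip(weights, aspects))

# Compare two hypothetical biometric setups: the higher value is preferred.
setup_a = security_value(0.8, 0.6, 0.9, 0.5)
setup_b = security_value(0.4, 0.7, 0.6, 0.9)
print(setup_a > setup_b)  # → True
```

Under this sketch, ranking candidate modality combinations reduces to computing the security value for each setup and selecting the maximum, which matches the comparison role the abstract describes.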