Abstract: Two key challenges in education relate to how traditional educational providers can personalise online provision to students' skill levels, optimise the use of tools, and increase both the generation and utilisation of feedback (in terms of timing, content, and subsequent use by students). The application of traditional programmes in the online setting is often complicated by the legacy of traditional universities' infrastructures and knowledge bases (or lack thereof in the human-computer interaction (HCI) realm), …
“…As online learning platforms become more popular, improving the student experience during courses is important. Implementing HCI principles when designing an e-learning website can improve the student experience by optimizing user interaction and creating a system tailored to users' needs and preferences; moreover, applying HCI principles leads to increased motivation and engagement among students, resulting in an improved learning experience [19][20][21][22][23][24][25][26][27][28].…”
Tongue-based Human-Computer Interaction (HCI) systems have surfaced as alternative input devices offering significant benefits to individuals with severe disabilities. However, these systems often employ invasive methods such as dental retainers, tongue piercings, and multiple mouth electrodes. These methods, due to hygiene issues and obtrusiveness, are deemed impractical for daily use. This paper presents a novel non-invasive tongue-based HCI system that utilizes deep learning for microgesture detection. The proposed system overcomes the limitations of previous methods by non-invasively detecting gestures. This is accomplished by measuring tongue vibrations via an accelerometer positioned on the genioglossus muscle, thereby eliminating the need for in-mouth installations. The system's performance was evaluated by comparing the classification results of deep learning with four widely used supervised machine learning algorithms, namely K-Nearest Neighbors (KNN), Support Vector Machines (SVM), Decision Trees, and Random Forests. Raw data were preprocessed in both time and frequency domains to extract relevant patterns before classification. In addition, a deep learning Convolutional Neural Network (CNN) model was trained on the raw data, leveraging its proficiency in processing time-series data and capturing intricate patterns automatically using convolutional and pooling layers. The CNN model demonstrated a 97% success rate in tongue gesture detection, indicating its high accuracy. The proposed system is also low-profile, lightweight, and cost-effective, making it suitable for daily use in various contexts. This study thus introduces a non-invasive, efficient, and practical approach to tongue-based HCI systems.
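The abstract mentions preprocessing the raw accelerometer data in both the time and frequency domains before feeding the classical classifiers. A minimal sketch of that kind of feature extraction is shown below; the specific features, window length, and 100 Hz sampling rate are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def extract_features(window, fs=100):
    """Extract simple time- and frequency-domain features from one
    accelerometer window (hypothetical feature set, for illustration)."""
    # Time domain: mean, standard deviation, RMS, peak-to-peak amplitude
    feats = [window.mean(), window.std(),
             float(np.sqrt(np.mean(window ** 2))), float(np.ptp(window))]
    # Frequency domain: dominant frequency and total spectral energy
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    feats += [float(freqs[spectrum.argmax()]), float((spectrum ** 2).sum())]
    return np.array(feats)

# Synthetic 1-second window: a 25 Hz vibration plus measurement noise
rng = np.random.default_rng(0)
t = np.arange(100) / 100.0
window = np.sin(2 * np.pi * 25 * t) + 0.1 * rng.standard_normal(100)
features = extract_features(window)
```

Feature vectors of this form would then be passed to KNN, SVM, Decision Tree, or Random Forest classifiers, whereas the CNN described in the abstract consumes the raw windows directly.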