“…The bands are frequency ranges and are strongly correlated to cognitive states (Hassib, Khamis, Friedl, Schneegass, & Alt, 2017). For example, the alpha band power has been associated with attention (Huang, Jung, & Makeig, 2007), the lower-beta band is related to memory and theta is related to cognitive load (Kumar & Bhuvaneswari, 2012).…”
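The band powers mentioned above are typically estimated from the power spectral density of the EEG signal. Below is a minimal sketch using Welch's method; the sampling rate, band edges, and synthetic test signal are illustrative assumptions, not values from the cited studies.

```python
# Sketch: estimating EEG band power with Welch's method.
# The sampling rate and band edges below are conventional assumptions.
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "lower_beta": (13, 20)}

def band_powers(eeg, fs=FS, bands=BANDS):
    """Return mean power spectral density per frequency band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}

# Example: a synthetic 10 Hz (alpha-range) oscillation plus noise.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FS)
signal = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
powers = band_powers(signal)
```

For this synthetic signal, the alpha band carries most of the power, which is the kind of band-wise contrast the cited work relates to cognitive states.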
Students' on‐task engagement during adaptive learning activities has a significant effect on their performance, and at the same time, how these activities influence students' behavior is reflected in their effort exertion. Capturing and explaining effortful (or effortless) behavior and aligning it with learning performance within contemporary adaptive learning environments holds the promise of providing timely, proactive and actionable feedback to students. Using sophisticated machine learning (ML) algorithms and rich learner data facilitates inference‐making about several behavioral aspects (including effortful behavior) and about predicting learning performance, in any learning context. Researchers have been using ML methods in a “black‐box” approach, i.e., as a tool where the input is the learner data and the output is a given class from the chosen construct. This work proposes a methodological shift from the “black‐box” approach to a “grey‐box” approach that bridges the hypothesis/literature‐driven (feature extraction) “white‐box” approach with the computation/data‐driven (feature fusion) “black‐box” approach. This allows us to utilize data features that are educationally and contextually meaningful. This paper aims to extend current methodological paradigms and puts into practice the proposed approach in an adaptive self‐assessment case study, taking advantage of new, cutting‐edge, interdisciplinary work on building pipelines for educational data, using innovative tools and techniques.
What is already known about this topic
Capturing and measuring learners' engagement and behavior using physiological data has been explored during the last years and exhibits great potential.
Effortless behavioral patterns commonly exhibited by learners, such as “cheating,” “guessing” or “gaming the system,” distort the learning outcome.
Multimodal data can accurately predict learning engagement, performance and processes.
What this paper adds
Generalizes a methodology for building machine learning pipelines for multimodal educational data, using a modularized approach, namely the “grey‐box” approach.
Showcases that fusion of eye‐tracking, facial expressions and arousal data provides the best prediction of effort and performance in adaptive learning settings.
Highlights the importance of fusing data from different channels to obtain the most suited combinations from the different multimodal data streams, to predict and explain effort and performance in terms of pervasiveness, mobility and ubiquity.
Implications for practice and/or policy
Learning analytics researchers shall be able to use an innovative methodological approach, namely the “grey‐box,” to build machine learning pipelines from multimodal data, taking advantage of artificial intelligence capabilities in any educational context.
Learning design professionals shall have the opportunity to fuse specific features of the multimodal data to drive the interpretation of learning outcomes in terms of physiological learner states.
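The “grey‐box” idea described above — combining hypothesis‐driven feature extraction with data‐driven feature fusion before a single learner — can be sketched as a modular pipeline. The feature choices, data, and column indices below are entirely hypothetical; this only illustrates the structural shape of such a pipeline, not the authors' implementation.

```python
# Sketch of a "grey-box" pipeline: literature-driven features (white-box)
# concatenated with data-driven fused components (black-box) ahead of one
# classifier. All features, data, and labels here are synthetic stand-ins.
import numpy as np
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import StandardScaler, FunctionTransformer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))           # e.g., multimodal signal features
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # e.g., effortful vs. effortless

# White-box branch: keep the columns a hypothesis deems meaningful.
white_box = FunctionTransformer(lambda A: A[:, [0, 3]])
# Black-box branch: let PCA fuse all channels into latent components.
black_box = PCA(n_components=3)

grey_box = Pipeline([
    ("scale", StandardScaler()),
    ("features", FeatureUnion([("hypothesis", white_box),
                               ("fusion", black_box)])),
    ("clf", LogisticRegression()),
])
grey_box.fit(X, y)
accuracy = grey_box.score(X, y)
```

The white‐box branch keeps the model interpretable in educational terms, while the fusion branch lets the data contribute combinations no hypothesis anticipated — the bridging the abstract describes.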
The constraints from th...
“…Maintaining upright posture is a complex task governed by the integration of afferent sensorimotor information with compensatory neuromuscular reactions [1], and the compensation for unpredictable perturbations in balance is essential to retaining stability and avoiding injury from falling. The cerebral cortex and central nervous system (CNS) play integral roles in postural control, incorporating information from visual, somatosensory, and vestibular systems to carry out the corrective motions needed to maintain balance [2][3][4].…”
Objective. Maintaining upright posture is a complex task governed by the integration of afferent sensorimotor and visual information with compensatory neuromuscular reactions. The objective of the present work was to characterize the visual dependency and functional dynamics of cortical activation during postural control. Approach. Proprioceptive vibratory stimulation of the calf muscles at 85 Hz was performed to evoke postural perturbation in open-eye (OE) and closed-eye (CE) experimental trials, with pseudorandom binary stimulation phases divided into four segments of 16 stimuli. 64-channel EEG was recorded at 512 Hz, with perturbation epochs defined using bipolar electrodes placed proximal to each vibrator. Power spectra variation and linearity analyses were performed via fast Fourier transformation into six frequency bands (Δ, θ, α, β, γ_low, and γ_high). Finally, functional connectivity was explored via network segregation and integration analyses. Main Results. Spectral variation showed waveform- and vision-dependent activation within cortical regions specific to both postural adaptation and habituation. Generalized spectral variation yielded significant shifts from low to high frequencies in CE adaptation trials, with overall activity suppressed in habituation; OE trials showed the opposite phenomenon, with both adaptation and habituation yielding increases in spectral power. Finally, our analysis of functional dynamics reveals novel cortical networks implicated in postural control using EEG source-space brain networks. In particular, the reported significant increase in local θ connectivity may signify the planning of corrective steps and/or the analysis of falling consequences, while α-band network integration results reflect an inhibition of error detection within the cingulate cortex, likely due to habituation. Significance. Our findings principally suggest that specific cortical waveforms are dependent upon the availability of visual feedback, and we furthermore present the first evidence that local and global brain networks undergo characteristic modification during postural control.
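The segregation and integration analyses mentioned in the abstract are commonly operationalized as the mean clustering coefficient (segregation) and global efficiency (integration) of a connectivity graph. A minimal NumPy sketch follows; the 4-node adjacency matrix is illustrative only and is not derived from the study's EEG networks.

```python
# Sketch: graph segregation (mean clustering coefficient) and integration
# (global efficiency) for a binary connectivity matrix, in plain NumPy.
# The adjacency matrix below is a toy example, not real EEG connectivity.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

def clustering_coefficients(A):
    """Fraction of closed triangles around each node."""
    deg = A.sum(axis=1)
    triangles = np.diag(A @ A @ A) / 2   # closed triangles per node
    possible = deg * (deg - 1) / 2       # possible triangles per node
    return np.divide(triangles, possible,
                     out=np.zeros_like(possible), where=possible > 0)

def global_efficiency(A):
    """Mean inverse shortest-path length over all node pairs."""
    n = A.shape[0]
    dist = np.where(A > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):                   # Floyd-Warshall shortest paths
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    return (1.0 / dist[~np.eye(n, dtype=bool)]).mean()

segregation = clustering_coefficients(A).mean()
integration = global_efficiency(A)
```

Higher local θ connectivity of the kind the study reports would show up as an increased clustering coefficient, while the α-band integration findings correspond to changes in global efficiency.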
“…Amongst these modalities, EEG is the most common modality employed to drive various BCI systems [9]. It is popular because it is not an invasive approach to record the brain's electrical activities, and because EEG signals have the highest resolution in time-domain [10].…”
Improving hand motor skills in post-stroke patients through rehabilitation based on movement-intention signals derived from the brain, in conjunction with robot-assistive technologies, is explored. The experimental work is conducted using an Electroencephalogram-based Brain-Computer Interface (EEG-BCI) system and the AMADEO hand rehabilitation robotic device. Two protocols, one using visual cues and one using a 2-Dimensional (2D) interactive game presented on a computer screen, are applied to healthy subjects as well as post-stroke patients performing hand movements. The movement-intention signals during hand movement are detected through a Support Vector Machine (SVM) classifier. The intent signals produced at six distinct electrodes are investigated to determine which electrodes contribute most to the SVM classifier's performance. Overall, the game protocol shows better classification results for both healthy and stroke patients compared to the visual-cues protocol. FC3 is found to be the most consistent electrode site for the detection of the motor intention of the hand for both protocols. In the experimental work, average classification accuracies for the visual-cues protocol of 67.56% for healthy subjects and 56.24% for stroke patients were obtained. For the game protocol, the classifier accuracy was 79.7% for healthy participants and 66.64% for post-stroke patients. The results confirm that the intention signal is more pronounced during more engaging activities, such as playing games, for both healthy and stroke subjects. Therefore, the effectiveness of rehabilitation therapy for post-stroke patients could be significantly enhanced using interactive and engaging exercise protocols.
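The classification step described above — an SVM over per-trial EEG features from a handful of electrode sites — can be sketched as follows. The data here are synthetic (the real study used recorded EEG from six electrodes, including FC3), so the numbers only illustrate the workflow, not the paper's accuracies.

```python
# Sketch: SVM classification of movement intention vs. rest from
# per-trial EEG features, evaluated with cross-validation.
# Data are synthetic; one feature per (assumed) electrode site.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n_trials, n_features = 120, 6  # e.g., one feature per electrode
X_rest = rng.normal(0.0, 1.0, (n_trials, n_features))
X_move = rng.normal(0.8, 1.0, (n_trials, n_features))  # intention shifts the mean
X = np.vstack([X_rest, X_move])
y = np.r_[np.zeros(n_trials), np.ones(n_trials)]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
accuracy = cross_val_score(clf, X, y, cv=5).mean()
```

In the study itself, comparing such cross-validated accuracies per electrode is what singled out FC3 as the most consistent site.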