Abstract: An accurate measure of mental workload level has diverse neuroergonomic applications, ranging from brain–computer interfacing to improving the efficiency of human operators. In this study, we integrated electroencephalogram (EEG), functional near-infrared spectroscopy (fNIRS), and physiological measures for the classification of three workload levels in an n-back working memory task. A significantly better than chance level classification was achieved by EEG-alone, fNIRS-alone, physiological-alone, and EEG+fNIR…
“…respectively), hybrid brain data incorporate more information, enabling higher mental decoding accuracy [43] and confirming earlier findings [47]. Specifically, in [43] we showed that body physiological measures (heart rate and breathing) did not contribute any new information to fNIRS + EEG based classification of cognitive workload.…”
Section: Introduction (supporting)
confidence: 79%
“…The measurement of neural correlates of cognitive and affective processes using concurrent EEG and fNIRS, i.e., multimodal functional neuroimaging, has seen growing interest [43–46]. As fNIRS and EEG measure complementary aspects of brain activity (hemodynamic and electrophysiological, respectively), hybrid brain data incorporate more information, enabling higher mental decoding accuracy [43] and confirming earlier findings [47]. Specifically, in [43] we showed that body physiological measures (heart rate and breathing) did not contribute any new information to fNIRS + EEG based classification of cognitive workload.…”
Human facial expressions are regarded as a vital indicator of one’s emotion and intention, and can even reveal the state of health and wellbeing. Emotional states have been associated with information processing within and between subcortical and cortical areas of the brain, including the amygdala and prefrontal cortex. In this study, we evaluated the relationship between spontaneous human facial affective expressions and multimodal brain activity measured via non-invasive, wearable sensors: functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG). The affective states of twelve male participants, detected via fNIRS, EEG, and spontaneous facial expressions, were investigated in response to both image-content and video-content stimuli. We propose a method to jointly evaluate fNIRS and EEG signals for affective state detection (emotional valence as positive or negative). Experimental results reveal a strong correlation between spontaneous facial affective expressions and the perceived emotional valence. Moreover, the affective states were estimated from the fNIRS, EEG, and fNIRS + EEG brain activity measurements. We show that the proposed EEG + fNIRS hybrid method outperforms fNIRS-only and EEG-only approaches. Our findings indicate that dynamic (video-content based) stimuli trigger a larger affective response than static (image-content based) stimuli. These findings also suggest the joint use of facial expressions and wearable neuroimaging (fNIRS and EEG) for improved emotional analysis and affective brain–computer interface applications.
“…An F1 score of 0.811 was achieved when incorporating all sensor modalities. This value falls within the upper range of classification accuracies reported in domains outside of RAS, which varied from 45% to 90% [55, 87–90]. These differences may be due to experimental designs, baseline selections, and task demands.…”
Section: Fusion (supporting)
confidence: 56%
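For context on the F1 score cited in the excerpt above: it is the harmonic mean of precision and recall. A minimal sketch with hypothetical binary labels (not data from the cited study):

```python
# F1 = 2 * precision * recall / (precision + recall), computed from
# true positives (tp), false positives (fp), and false negatives (fn).
def f1_score(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical predictions: 3 tp, 1 fp, 1 fn -> precision = recall = 0.75.
y_true = [1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(f1_score(y_true, y_pred))  # 0.75
```

Because it ignores true negatives, F1 is only loosely comparable to the raw accuracies (45%–90%) quoted from other domains.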
“…This suggests that EEG is the most predictive modality for characterizing workload levels. Other studies in domains outside of RAS [55, 87–90] also concluded that EEG was the salient modality for workload characterization. In RAS, EEG may be especially reliable due to the design of the dVSS.…”
Section: Fusion (mentioning)
confidence: 93%
“…Baseline indicates how ground-truth values for workload were determined. Although the raw ground-truth workload values have a continuous range, many studies discretized the raw scores into two [31], three [53,55], or four levels [34,56] to train classification algorithms to predict workload. Subjects describes the sample size and background of the participants.…”
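The discretization step described in the excerpt above can be sketched as follows. The scores and the tertile cut points are illustrative assumptions, not data or thresholds from any of the cited studies:

```python
import numpy as np

# Hypothetical continuous workload ratings (e.g., NASA-TLX-style, 0-100).
scores = np.array([12.0, 35.0, 48.0, 55.0, 62.0, 71.0, 80.0, 90.0, 20.0])

# One common choice: cut at the tertiles so the three classes are balanced.
cuts = np.quantile(scores, [1 / 3, 2 / 3])

# digitize maps each score to its bin: 0 = low, 1 = medium, 2 = high.
levels = np.digitize(scores, cuts)
print(levels.tolist())  # [0, 0, 1, 1, 1, 2, 2, 2, 0]
```

A two- or four-level scheme, as in some of the studies cited, only changes the list of quantiles passed to `np.quantile`.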
Monitoring surgeon workload during robot-assisted surgery (RAS) can guide the allocation of task demands, adapt system interfaces, and assess the robotic system's usability. Current practices for measuring cognitive load primarily rely on questionnaires, which are subjective and disrupt surgical workflow. To address this limitation, a computational framework is demonstrated to predict user workload during telerobotic surgery. This framework leverages wireless sensors to monitor surgeons' cognitive load and predict their cognitive states. Continuous data across multiple physiological modalities (e.g., heart rate variability, electrodermal activity, and electroencephalogram activity) were recorded simultaneously for twelve surgeons performing surgical skills tasks on the validated da Vinci Skills Simulator (dVSS). These surgical tasks varied in difficulty, e.g., requiring varying visual processing demands and degrees of fine motor control. The collected multimodal physiological signals were fused using independent component analysis, and the predicted results were compared to the ground-truth workload level. Results compare the performance of different classifiers, sensor fusion schemes, and physiological modalities (i.e., prediction with single vs. multiple modalities). It was found that our multisensor approach outperformed individual signals and can correctly predict cognitive workload levels 83.2% of the time during basic and complex surgical skills tasks. CCS Concepts: • Human-centered computing → Human-computer interaction (HCI); Interactive systems and tools; User interface management systems.
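The abstract above names independent component analysis as the fusion step but does not spell out the pipeline. The sketch below shows one plausible feature-level fusion scheme (concatenate per-window features from each modality, unmix with FastICA, then classify) on synthetic stand-in data; the array shapes, the classifier, and the injected class effect are all assumptions for illustration, not the study's method.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for windowed features from three modalities
# (e.g., HRV, electrodermal, EEG band powers); real data would replace these.
n_windows = 200
hrv = rng.normal(size=(n_windows, 4))
eda = rng.normal(size=(n_windows, 2))
eeg = rng.normal(size=(n_windows, 8))

# Hypothetical low/high workload labels, with a class-dependent EEG shift
# injected so there is something to classify.
y = rng.integers(0, 2, size=n_windows)
eeg[y == 1] += 0.8

X = np.hstack([hrv, eda, eeg])       # feature-level fusion: concatenate
ica = FastICA(n_components=6, random_state=0)
X_ica = ica.fit_transform(X)         # unmix into independent components

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X_ica, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

Cross-validating after fusion, as here, is the usual way to make a multi-modality vs. single-modality comparison fair: the same splits can be reused with `X` restricted to one modality's columns.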
Background: Pilots must process multiple streams of information simultaneously. Mental workload is one of the main issues in human-machine interaction when dealing with multiple tasks. This study aimed to combine functional near-infrared spectroscopy (fNIRS) and electrocardiogram (ECG) to detect changes in mental workload during multitasking in a simulated flight. Methods: Twenty-six participants performed three multitasking tasks at different mental workload levels. These mental workload levels were set by varying the number of subtasks. fNIRS and ECG signals were recorded during the tasks. Participants filled in the National Aeronautics and Space Administration Task Load Index (NASA-TLX) scale after each task. The effects of mental workload on NASA-TLX scores, task performance, heart rate (HR), heart rate variability (HRV), and prefrontal cortex (PFC) activation were analyzed. Results: Compared to the lower mental workload conditions, participants exhibited higher NASA-TLX scores, HR, and PFC activation when multitasking in the high mental workload condition. Their performance was also worse in the high mental workload condition, as evidenced by a greater average tracking distance, fewer responses, and longer response times on the meter subtask. The standard deviation of the RR intervals (SDNN) was negatively correlated with subjective mental workload in the low task load condition, and PFC activation was positively correlated with HR and subjective mental workload in the medium task load condition.
Conclusion: HR and PFC activation can be used to detect changes in mental workload during simulated flight multitasking tasks.
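SDNN, the HRV metric referenced in the abstract above, is simply the sample standard deviation of the RR (inter-beat) intervals. A minimal sketch with illustrative values (not data from the study):

```python
import numpy as np

# Hypothetical RR intervals in milliseconds, e.g., from an ECG R-peak detector.
rr_ms = np.array([812.0, 798.0, 840.0, 825.0, 790.0, 860.0, 805.0, 830.0])

sdnn = rr_ms.std(ddof=1)           # SDNN: sample standard deviation of RR
mean_hr = 60000.0 / rr_ms.mean()   # mean heart rate in beats per minute
print(f"SDNN = {sdnn:.1f} ms, mean HR = {mean_hr:.1f} bpm")
```

In practice SDNN is computed over normal-to-normal intervals after artifact and ectopic-beat rejection, and longer recordings give more stable estimates; this sketch skips those preprocessing steps.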
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.