Previous studies indicate that both electroencephalogram (EEG) spectral power (in particular in the alpha and theta bands) and event-related potentials (ERPs) (in particular the P300) can be used as measures of mental workload or memory load. We compare their ability to estimate workload level in a well-controlled task. In addition, we combine both types of measures in a single classification model to examine whether this results in higher classification accuracy than either one alone. Participants watched a sequence of visually presented letters and indicated whether or not the current letter was the same as the one presented n letters before. Workload was varied by varying n. We developed different classification models using ERP features, frequency power features, or a combination of both (fusion). Training and testing of the models simulated an online workload estimation situation. All our ERP, power, and fusion models provide classification accuracies between 80% and 90% when distinguishing between the highest and the lowest workload condition after 2 min. For 32 out of 35 participants, classification by the fusion model was significantly better than chance level after 2.5 s (i.e., one letter). Differences between the models are rather small, though the fusion model outperforms the other models when only short data segments are available for estimating workload.
While studies exist that compare different physiological variables with respect to their association with mental workload, it is still largely unclear which variables supply the best information about the momentary workload of an individual and what the benefit of combining them is. We investigated workload using the n-back task, controlling for body movements and visual input. We recorded EEG, skin conductance, respiration, ECG, pupil size and eye blinks of 14 subjects. Various variables were extracted from these recordings and used as features in individually tuned classification models. Online classification was simulated by using the first part of the data as training set and the last part of the data for testing the models. The results indicate that EEG performs best, followed by eye-related measures and peripheral physiology. Combining variables from different sensors did not significantly improve workload assessment over the best performing sensor alone. The best classification accuracy, a little over 90%, was reached for distinguishing between high and low workload on the basis of 2-min segments of EEG and eye-related variables. A similar and not significantly different performance of 86% was reached using only EEG from the single electrode location Pz.
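The general approach in the two abstracts above (extract band-power features from EEG segments, train a classifier on the early part of a recording, and test it on the later part to simulate online use) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the synthetic "EEG" signals, the 256 Hz sampling rate, the 2-s segments, the theta/alpha band limits, and the nearest-class-mean classifier are not taken from the studies themselves.

```python
import numpy as np

FS = 256  # assumed sampling rate (Hz)

def band_power(segment, fs, lo, hi):
    """Mean FFT power within [lo, hi] Hz."""
    freqs = np.fft.rfftfreq(segment.size, 1 / fs)
    psd = np.abs(np.fft.rfft(segment)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def features(segment, fs=FS):
    """Theta (4-8 Hz) and alpha (8-12 Hz) band power as a feature vector."""
    return np.array([band_power(segment, fs, 4, 8),
                     band_power(segment, fs, 8, 12)])

rng = np.random.default_rng(0)

def fake_segment(load, n_sec=2, fs=FS):
    """Synthetic 'EEG': noise plus a 10 Hz alpha oscillation whose
    amplitude drops under high load (mimicking alpha suppression)."""
    t = np.arange(n_sec * fs) / fs
    alpha_amp = 2.0 if load == 'low' else 0.5
    return rng.normal(size=t.size) + alpha_amp * np.sin(2 * np.pi * 10 * t)

# Simulate the train-on-early / test-on-late split described above:
train = [(features(fake_segment(l)), l) for l in ('low', 'high') for _ in range(20)]
test  = [(features(fake_segment(l)), l) for l in ('low', 'high') for _ in range(20)]

# Nearest-class-mean classifier (a stand-in for the individually tuned models)
means = {l: np.mean([f for f, lab in train if lab == l], axis=0)
         for l in ('low', 'high')}

def predict(f):
    return min(means, key=lambda l: np.linalg.norm(f - means[l]))

acc = np.mean([predict(f) == lab for f, lab in test])
print(f"accuracy: {acc:.2f}")
```

On this toy data the class separation is large, so the simple classifier performs near ceiling; real EEG features are far noisier and require the per-subject tuning the abstracts mention.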
Research on food experience is typically challenged by the way questions are worded. We therefore developed the EmojiGrid: a graphical (language-independent), intuitive self-report tool to measure food-related valence and arousal. In a first experiment, participants rated the valence and the arousing quality of 60 food images, using either the EmojiGrid or two independent visual analog scales (VAS). The valence ratings obtained with both tools strongly agree. However, the arousal ratings only agree for pleasant food items, not for unpleasant ones. Furthermore, the results obtained with the EmojiGrid show the typical U-shaped relation between mean valence and arousal that is commonly observed for a wide range of (visual, auditory, tactile, olfactory) affective stimuli, while the VAS tool yields a positive linear association between valence and arousal. We hypothesized that this disagreement reflects a lack of proper understanding of the arousal concept in the VAS condition. In a second experiment we attempted to clarify the arousal concept by asking participants to rate the valence and intensity of the taste associated with the perceived food items. After this adjustment the VAS and the EmojiGrid yielded similar valence and arousal ratings (both showing the U-shaped relation between valence and arousal). A comparison with the results from the first experiment showed that VAS arousal ratings strongly depended on the actual wording used, while EmojiGrid ratings were not affected by the framing of the associated question. This suggests that the EmojiGrid is largely self-explaining and intuitive. To test this hypothesis, we performed a third experiment in which participants rated food images using the EmojiGrid without an associated question, and we compared the results to those of the first two experiments. The EmojiGrid ratings obtained in all three experiments closely agree. We conclude that the EmojiGrid appears to be a valid and intuitive affective self-report tool that does not rely on written instructions and that can efficiently be used to measure food-related emotions.
Estimating cognitive or affective state from neurophysiological signals, and designing applications that make use of this information, requires expertise in many disciplines such as neurophysiology, machine learning, experimental psychology, and human factors. This makes it difficult to perform research that is strong in all its aspects, as well as to judge a study or application on its merits. On the occasion of the special topic "Using neurophysiological signals that reflect cognitive or affective state", we here summarize frequently occurring pitfalls and give recommendations on how to avoid them, for both authors (researchers) and readers. They relate to defining the state of interest, the neurophysiological processes that are expected to be involved in the state of interest, confounding factors, inadvertently "cheating" with classification analyses, insight into what underlies successful state estimation, and finally, the added value of neurophysiological measures in the context of an application. We hope that this paper will support the community in producing high-quality studies and well-validated, useful applications.
We here introduce a new experimental paradigm to induce mental stress in a quick and easy way while adhering to ethical standards and controlling for potential confounds resulting from sensory input and body movements. In our Sing-a-Song Stress Test, participants are presented with neutral messages on a screen, interleaved with 1-min time intervals. The final message is that the participant should sing a song aloud after the interval has elapsed. Participants sit still during the whole procedure. We found that heart rate and skin conductance during the 1-min intervals following the sing-a-song stress message are substantially higher than during intervals following neutral messages. The order of magnitude of the rise is comparable to that achieved by the Trier Social Stress Test. Skin conductance increase correlates positively with experienced stress level as reported by participants. We also simulated stress detection in real time. When using both skin conductance and heart rate, stress is detected for 18 out of 20 participants, approximately 10 s after onset of the sing-a-song message. In conclusion, the Sing-a-Song Stress Test provides a quick, easy, controlled and potent way to induce mental stress and could be helpful in studies ranging from examining physiological effects of mental stress to evaluating interventions to reduce stress.
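The simulated real-time detection described above (flagging stress roughly 10 s after message onset from skin conductance and heart rate) can be illustrated with a simple baseline-threshold rule: estimate mean and spread during neutral intervals, then flag stress once the incoming signal clearly exceeds that baseline. The sketch below is purely hypothetical; the synthetic skin-conductance trace, the 1 Hz sampling rate, the 5-standard-deviation criterion, and the built-in response latency are invented for illustration and are not the detection algorithm used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def detect_stress(signal, baseline_sec, fs=1, k=5.0):
    """Return the time (s) of the first sample exceeding the baseline
    mean by k baseline standard deviations, or None if never exceeded."""
    n_base = int(baseline_sec * fs)
    base = signal[:n_base]
    thresh = base.mean() + k * base.std()
    above = np.nonzero(signal[n_base:] > thresh)[0]
    return (n_base + above[0]) / fs if above.size else None

# Synthetic skin-conductance trace (1 Hz, microsiemens):
# 60 s neutral baseline, a hypothetical stressor at t = 60 s,
# and a response starting ~10 s later.
scl = np.concatenate([
    rng.normal(5.0, 0.05, 60),   # neutral baseline
    rng.normal(5.0, 0.05, 10),   # latency after stressor onset
    rng.normal(6.0, 0.05, 50),   # elevated conductance level
])
t = detect_stress(scl, baseline_sec=60)
print(f"stress detected at t = {t:.0f} s")
```

A fixed threshold like this is the simplest possible detector; combining several signals (as the study did with skin conductance and heart rate) would typically require per-participant calibration and a fusion rule.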