This paper presents a proof of concept for contactless, nonintrusive estimation of electrodermal activity (EDA) correlates using a camera. RGB video of the palm recorded under three lighting conditions showed that, with a suitably chosen illumination strategy, the camera data are sufficient to estimate EDA correlates that agree with measurements from laboratory-grade physiological sensors. The effects visible in the recorded video can be attributed to sweat gland activity, which in turn is known to be correlated with EDA. These effects are pronounced enough that simple pixel statistics suffice to quantify them. Such a method benefits from advances in computer vision and graphics research and has potential applications in affective computing and psychophysiology research where contact-based sensors may not be suitable.
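The abstract does not specify which pixel statistics were used; a minimal sketch of the general idea, assuming per-frame mean and standard deviation of grayscale intensity as the statistics and synthetic frames standing in for real palm video:

```python
import numpy as np

def frame_statistics(frames):
    """Compute simple per-frame pixel statistics (mean and standard
    deviation of intensity) from a stack of grayscale video frames.

    frames: array of shape (n_frames, height, width)
    Returns two 1-D arrays of length n_frames.
    """
    frames = np.asarray(frames, dtype=float)
    means = frames.mean(axis=(1, 2))   # average brightness per frame
    stds = frames.std(axis=(1, 2))     # intensity spread per frame
    return means, stds

# Synthetic example: 100 frames of 32x32 pixels with a slow brightness
# drift, standing in for sweat-gland-driven intensity changes.
rng = np.random.default_rng(0)
drift = np.linspace(0.0, 10.0, 100)[:, None, None]
frames = rng.normal(loc=100.0, scale=1.0, size=(100, 32, 32)) + drift
means, stds = frame_statistics(frames)
```

A time series such as `means` could then be compared (e.g., via correlation) against a reference EDA signal from a contact sensor; the function name and the choice of statistics here are illustrative, not taken from the paper.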
Recent work in the field of neural speech tracking provided evidence for a cortical representation of speech through superposition of event-related responses to acoustic edges, an idea closely related to the popular linear modeling approach to studying cortical synchronization to speech via magneto- or electroencephalography (M/EEG). However, it is still unclear to what extent speech-evoked event-related potentials (ERPs), including well-established phenomena such as the N1 selective attention effect, contribute to the regression-based analyses. Here, we addressed this question by analyzing an EEG dataset obtained during a simple multispeaker selective attention task in which participants were cued to attend to only one of two competing speakers. Segmenting the ongoing EEG based on acoustic edges, we were able to replicate previous findings of event-related responses to speech in MEG data, with particularly clear P1-N1-P2 complexes. Crucially, speech-evoked ERPs exhibited significant effects of attention in line with the auditory N1 effect. Comparing speech-evoked ERPs to the linear regression results revealed two major findings. First, temporal response functions (TRFs) obtained from forward modeling were strongly correlated, both temporally and spatially, with the corresponding true ERPs. Second, effects of attention demonstrated by the stimulus reconstruction (SR) accuracies obtained from backward modeling appeared to be driven by a consistent generation of speech-evoked ERPs, including the N1 effect. Taken together, our observations reveal a direct link between ERPs to acoustic edges in speech and the linear TRF and SR modeling techniques. We emphasize the enhancement in signal-to-noise ratio provided by repeatedly evoked N1 responses as a critical factor in facilitating the tracking and subsequent higher-order processing of selectively attended speech. Furthermore, the findings imply a cortical speech representation through superimposed speech-evoked ERPs, in accordance with recent arguments promoting the neural evoked-response model of speech tracking.
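The forward (TRF) modeling mentioned above is commonly implemented as regularized linear regression from time-lagged copies of a stimulus feature onto the neural signal. A minimal sketch, assuming ridge regression on a single stimulus channel and a single EEG channel (the specific regularization, lag range, and feature choice of the study are not given in the abstract):

```python
import numpy as np

def lagged_design(stimulus, n_lags):
    """Build a design matrix whose columns are time-lagged copies of
    the stimulus (lag 0 .. n_lags-1 samples)."""
    n = len(stimulus)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:n - lag]
    return X

def fit_trf(stimulus, eeg, n_lags, alpha=1.0):
    """Estimate a temporal response function by ridge regression,
    so that eeg is approximated by lagged_design(stimulus) @ trf."""
    X = lagged_design(stimulus, n_lags)
    trf = np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ eeg)
    return trf

# Synthetic check: EEG generated by convolving a known kernel with the
# stimulus should be (approximately) recovered by the fitted TRF.
rng = np.random.default_rng(1)
stim = rng.normal(size=2000)
true_trf = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
eeg = lagged_design(stim, 5) @ true_trf + 0.01 * rng.normal(size=2000)
est = fit_trf(stim, eeg, n_lags=5, alpha=0.1)
```

Backward (SR) modeling reverses the mapping, reconstructing the stimulus from multichannel EEG; the estimator shape changes but the ridge-regression machinery is the same. All names and parameter values here are illustrative assumptions.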