People respond more slowly if an irrelevant feature of a target stimulus is incompatible with the relevant feature or the correct response. Such compatibility effects are often reduced in trials following an incompatible trial, which has been taken to reflect increased cognitive control. This pattern holds only if two trials share some similarities, however, suggesting that it may be modulated by the episodic context. To look into this possibility, we had participants respond to high- or low-pitched tones by saying "high" or "low," respectively, and ignore the simultaneously presented auditory word "high" or "low." As expected, performance was impaired if the heard word was incompatible with the required response, and this Stroop-like effect was reduced after incompatible trials. This sequential modulation was observed, however, only if the voice in the two successive trials was the same, whereas no modulation was obtained when the speaker changed. The results suggest that sequential modulations are due to the automatic retrieval of episodic event representations that integrate stimuli, actions, and situational and task-specific control information, so that later reactivation of some elements of a given representation tends to retrieve the other elements as well.
Understanding how the human brain integrates features of perceived events calls for the examination of binding processes within and across different modalities and domains. Recent studies of feature-repetition effects have demonstrated interactions between shape, color, and location in the visual modality and between pitch, loudness, and location in the auditory modality: repeating one feature is beneficial if other features are also repeated, but detrimental if not. These partial-repetition costs suggest that co-occurring features are spontaneously bound into temporary event files. Here, we investigated whether these observations can be extended to features from different sensory modalities, combining visual and auditory features in Experiment 1 and auditory and tactile features in Experiment 2. The same types of interactions as for unimodal feature combinations were obtained, including interactions between stimulus and response features. However, the size of the interactions varied with the particular combination of features, suggesting that the salience of features and the temporal overlap between feature-code activations play a mediating role.
Term-Relevance Prediction from Brain Signals (TRPB) is proposed to automatically detect the relevance of text information directly from brain signals. An experiment with forty participants was conducted to record their neural activity while they provided relevance judgments for text stimuli on a given topic. Neural activity was quantified across 32 electroencephalography (EEG) channels. A classifier based on a multi-view EEG feature representation showed an improvement of up to 17% in relevance prediction based on brain signals alone. Relevance judgments were also associated with significant changes in brain activity in certain brain areas. Consequently, TRPB is based on changes identified in specific brain areas and does not require user-specific training or calibration; relevance predictions can therefore be made for unseen content and unseen participants. As an application of TRPB, we demonstrate a high-precision variant of the classifier that constructs sets of relevant terms for a given unknown topic of interest. Our research shows that detecting relevance from brain signals is possible and allows the acquisition of relevance judgments without the need to observe any other user interaction. This suggests that TRPB could be used in combination with, or as an alternative to, conventional implicit feedback signals such as dwell time or click-through activity.
Although previous studies have shown that an emotional context may alter touch processing, it is not clear how visual contextual information modulates the sensory signals, or at what levels this modulation takes place. Therefore, we investigated how a toucher's emotional expressions (anger, happiness, fear, and sadness) modulate the touchee's somatosensory-evoked potentials (SEPs) in different temporal ranges. Participants were presented with tactile stimulation that appeared to originate from expressive characters in virtual reality. Touch processing was indexed using SEPs, and self-reports of the touch experience were collected. Early potentials were found to be amplified after angry, happy, and sad facial expressions, while late potentials were amplified after anger but attenuated after happiness. These effects were related to two stages of emotional modulation of tactile perception: anticipation and interpretation. The findings show that not only does touch affect emotion, but emotional expressions also affect touch perception. The affective modulation of touch emerged as early as 25 ms after touch onset, suggesting that emotional context is integrated into the tactile sensation at a very early stage.
Ideomotor theory considers bidirectional action-effect associations to be the fundamental building blocks for intentional action. The present study employed a novel pupillometric and oculomotor paradigm to study developmental changes in the role of action-effects in the acquisition of voluntary action. Our findings suggest that both 7- and 12-month-olds (and adults) can use acquired action-effect bindings to predict action outcomes but only 12-month-olds (and adults) showed evidence for employing action-effects to select actions. This dissociation supports the idea that infants acquire action-effect knowledge before they have developed the cognitive machinery necessary to make use of that knowledge to perform intentional actions.