This study used event-related brain potentials (ERPs) to compare the time course of emotion processing from non-linguistic vocalizations versus speech prosody, testing whether vocalizations are treated preferentially by the neurocognitive system. Participants passively listened to vocalizations or pseudo-utterances conveying anger, sadness, or happiness while the EEG was recorded. Simultaneous effects of vocal expression type and emotion were analyzed for three ERP components (N100, P200, Late Positive Component). Emotional vocalizations and speech were differentiated very early (N100), and vocalizations elicited stronger, earlier, and more differentiated P200 responses than speech. At later stages (450–700 ms), anger vocalizations evoked a stronger late positivity (LPC) than other vocal expressions; a similar but delayed effect was observed for angry speech. Individuals with high trait anxiety exhibited early, heightened sensitivity to vocal emotions (particularly vocalizations). These data provide new neurophysiological evidence that vocalizations, as evolutionarily primitive signals, are accorded precedence over speech-embedded emotions in the human voice.
Background: In this study, we investigated the influence of two types of emotional auditory primes - vocalizations and pseudo-utterances - on the ability to judge a subsequently presented emotional facial expression in an event-related potential (ERP) study using the facial-affect decision task. We hypothesized that accuracy would be greater for congruent trials than for incongruent trials, because a congruent prime should allow the listener to implicitly identify the particular emotion of the face more effectively. We also hypothesized that the normal priming effect would be observed in the N400 for both prime types, i.e., a greater negativity for incongruent trials than for congruent trials.
Methods: Emotional primes (vocalizations or pseudo-utterances) were presented to participants, who were then asked to judge whether or not a facial expression conveyed an emotion. Behavioural data on participant accuracy and electroencephalogram (EEG) data were collected and subsequently analyzed for six participants.
Results: Behavioural results showed that participants were more accurate in judging faces when primed with vocalizations than with pseudo-utterances. ERP results revealed a normal priming effect for vocalizations in the 150–250 ms temporal window, with greater negativities produced during incongruent trials than during congruent trials, whereas the reverse effect was observed for pseudo-utterances. Few participants were tested (n = 7); hence, this study is a pilot study preceding a further study to be conducted with a greater sample size (n = 25) and slight modifications to the methodology (such as the duration of the auditory primes).
Conclusions: Vocalizations showed the expected priming effect of greater negativities for incongruent trials than for congruent trials, while pseudo-utterances unexpectedly showed the opposite effect. These results suggest that vocalizations may convey more prosodic information in a shorter time and thereby generate the expected congruency effect.