Over time, both the functional and anatomical boundaries of 'Wernicke's area' have become so broad as to be meaningless. We have re-analysed four functional neuroimaging (PET) studies, three previously published and one unpublished, to identify anatomically separable functional subsystems in the left superior temporal cortex posterior to primary auditory cortex. From the results we identified a posterior stream of auditory processing. One part, directed along the supratemporal cortical plane, responded to both non-speech and speech sounds, including the sound of the speaker's own voice. Activity in its most posterior and medial part, at the junction with the inferior parietal lobe, was linked to speech production rather than perception. The second, more lateral and ventral part lay in the posterior left superior temporal sulcus, a region that responded to an external source of speech. In addition, this region was activated by the recall of lists of words during verbal fluency tasks. The results are compatible with a hypothesis that the posterior superior temporal cortex is specialized for processes involved in the mimicry of sounds, including repetition, the specific role of the posterior left superior temporal sulcus being to transiently represent phonetic sequences, whether heard or internally generated and rehearsed. These processes are central to the acquisition of long-term lexical memories of novel words.
Several theorists have proposed that distinctions are needed between different positive emotional states, and that these discriminations may be particularly useful in the domain of vocal signals (Ekman, 1992b, Cognition and Emotion, 6, 169-200; Scherer, 1986, Psychological Bulletin, 99, 143-165). We report an investigation into the hypothesis that positive basic emotions have distinct vocal expressions (Ekman, 1992b). Non-verbal vocalisations were used that map onto five putative positive emotions: Achievement/Triumph, Amusement, Contentment, Sensual Pleasure, and Relief. Data from categorisation and rating tasks indicate that each vocal expression is accurately categorised and consistently rated as expressing the intended emotion. This pattern is replicated across two language groups. These data, we conclude, provide evidence for the existence of robustly recognisable expressions of distinct positive emotions.
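To make the categorisation analysis concrete, here is a minimal Python sketch of how a five-alternative forced-choice task of this kind could be scored, using a confusion matrix and per-category hit rates. The trial data, labels, and function names below are invented for illustration; they are not the study's materials or the authors' code.

import numpy as np

CATEGORIES = ["achievement", "amusement", "contentment", "pleasure", "relief"]

def confusion_matrix(intended, chosen):
    # Rows: intended emotion of the vocalisation; columns: listener's choice.
    idx = {c: i for i, c in enumerate(CATEGORIES)}
    m = np.zeros((len(CATEGORIES), len(CATEGORIES)), dtype=int)
    for i, c in zip(intended, chosen):
        m[idx[i], idx[c]] += 1
    return m

# Toy trials: (intended emotion, listener response).
trials = [("amusement", "amusement"), ("relief", "relief"),
          ("pleasure", "contentment"), ("amusement", "amusement")]
m = confusion_matrix(*zip(*trials))
hit_rates = m.diagonal() / np.maximum(m.sum(axis=1), 1)  # per-category accuracy
print(dict(zip(CATEGORIES, hit_rates.round(2))))

Accurate categorisation of the kind reported above would correspond to hit rates well above the 0.2 chance level for each of the five categories.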
P-centres are the subjective moments of occurrence of acoustic stimuli and capture properties of regularity and synchrony in production and perception. Two experiments are described that compare the effects of onset and offset amplitude variation and of stimulus duration on P-centre location. The discussion is extended to consider the role of P-centres in cross-modal temporal phenomena.
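As a concrete illustration of the kind of account these experiments bear on, the Python sketch below implements a simple linear P-centre model in the spirit of Marcus (1981), in which P-centre location is a weighted sum of onset (rise-time) duration and the remaining stimulus duration. The weights are illustrative placeholders, not values fitted to these data.

# P-centre position (ms after physical onset) as a weighted sum of onset
# duration and the remaining stimulus duration. Weights are illustrative.
def p_centre_ms(onset_ms: float, total_ms: float,
                w_onset: float = 0.65, w_rest: float = 0.25) -> float:
    rest_ms = max(total_ms - onset_ms, 0.0)
    return w_onset * onset_ms + w_rest * rest_ms

# A slow-rising sound has a later P-centre than an abrupt one of equal length.
print(p_centre_ms(onset_ms=10, total_ms=300))   # abrupt onset -> earlier
print(p_centre_ms(onset_ms=120, total_ms=300))  # gradual onset -> later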
Background. The results from recent studies suggest that alexithymia, a disorder characterized by impairments in understanding personal experiences of emotion, is frequently co-morbid with autism spectrum disorder (ASD). However, the extent to which alexithymia is associated with primary deficits in recognizing external emotional cues, characteristic of ASD, has yet to be determined. Method. Twenty high-functioning adults with ASD and 20 age- and intelligence-matched typical controls categorized vocal and verbal expressions of emotion and completed an alexithymia assessment. Results. Emotion recognition scores in the ASD group were significantly poorer than in the control group, and performance was influenced by the severity of alexithymia and the psycho-acoustic complexity of the presented stimuli. For controls, the effect of complexity was significantly smaller than for the ASD group, although the association between total emotion recognition scores and alexithymia was still strong. Conclusions. Higher levels of alexithymia in the ASD group accounted for some, but not all, of the group difference in emotion recognition ability. However, alexithymia was insufficient to explain the different sensitivities of the two groups to the effects of psycho-acoustic complexity on performance. The results, showing strong associations between emotion recognition and alexithymia scores in controls, suggest a potential explanation for variability in emotion recognition in non-clinical populations.
Faces and voices, in isolation, prompt consistent social evaluations. However, most human interactions involve both seeing and talking with another person. Our main goal was to investigate how facial and vocal information are combined to reach an integrated person impression. In Study 1, we asked participants to rate faces and voices separately for perceived trustworthiness, attractiveness, and dominance. Most previous studies relied on stimuli in which extra-vocal information (e.g., verbal content, prosody) may have confounded voice-based effects; to prevent these unwanted influences, we used brief, neutral vowel sounds. Voices, like faces, led to the formation of highly reliable impressions. Voice trustworthiness correlated with voice attractiveness, mirroring the relation between face trustworthiness and attractiveness, but did not correlate with voice dominance. Inconsistent with the possibility that face and voice evaluations are indicative of real character traits, we found no positive correlations between judgments of trustworthiness or dominance based on faces and the same judgments based on voices (there was also no correlation between face attractiveness and voice attractiveness). In Study 2, we asked participants to evaluate male targets after seeing their faces and hearing their voices. Faces and voices contributed equally to judgments of trustworthiness and combined to produce a significant interaction effect. For attractiveness, faces were given more weight than voices, possibly due to the predominantly visual character of the attractiveness concept (there was no interaction effect). For dominance, the reverse pattern was true, with voices having a larger effect than faces on final judgments. In this case the auditory cues may be perceived to be more reliable because of the strong links between voice pitch, masculinity, and dominance.
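The logic of the Study 2 analysis can be sketched as a linear model in which integrated person ratings are regressed on face ratings, voice ratings, and their product; comparable face and voice weights plus a reliable interaction term would correspond to the trustworthiness pattern reported above. All data below are simulated for illustration, and this is not the authors' analysis code.

import numpy as np

rng = np.random.default_rng(0)
n = 40
face = rng.normal(0, 1, n)    # standardised face-only trustworthiness ratings
voice = rng.normal(0, 1, n)   # standardised voice-only trustworthiness ratings
person = 0.5 * face + 0.5 * voice + 0.2 * face * voice + rng.normal(0, 0.3, n)

# Design matrix: intercept, face, voice, and the face-by-voice interaction.
X = np.column_stack([np.ones(n), face, voice, face * voice])
beta, *_ = np.linalg.lstsq(X, person, rcond=None)
print(dict(zip(["intercept", "face", "voice", "face_x_voice"], beta.round(2))))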
Spoken conversations typically take place in noisy environments, and different kinds of masking sounds place differing demands on cognitive resources. Previous studies, examining the modulation of neural activity associated with the properties of competing sounds, have shown that additional speech streams engage the superior temporal gyrus. However, the absence of a condition in which target speech was heard without additional masking made it difficult to identify brain networks specific to masking and to ascertain the extent to which competing speech was processed equivalently to target speech. In this study, we scanned young healthy adults with continuous functional magnetic resonance imaging (fMRI) whilst they listened to stories masked by sounds that differed in their similarity to speech. We show that auditory attention and control networks are activated during attentive listening to masked speech in the absence of an overt behavioural task. We demonstrate that competing speech is processed predominantly in the left hemisphere within the same pathway as target speech but is not treated equivalently within that stream, and that individuals who perform better on speech-in-noise tasks activate the left mid-posterior superior temporal gyrus more. Finally, we identify neural responses associated with the onset of sounds in the auditory environment: activity was found within right-lateralised frontal regions, consistent with a phasic alerting response. Taken together, these results provide a comprehensive account of the neural processes involved in listening in noise.
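For readers unfamiliar with how such condition comparisons are made, here is a conceptual Python sketch of a general linear model contrasting two masker conditions in continuous fMRI data. The block timings, HRF parameterisation, and voxel time series are all invented; this illustrates the analysis logic only, not the authors' pipeline.

import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.0, 200
t = np.arange(n_scans) * TR

def hrf(times):
    # Canonical double-gamma haemodynamic response: peak near 6 s,
    # undershoot near 16 s.
    return gamma.pdf(times, 6) - gamma.pdf(times, 16) / 6.0

def boxcar(onsets, dur):
    # Block regressor convolved with the HRF, trimmed to scan length.
    reg = np.zeros(n_scans)
    for o in onsets:
        reg[(t >= o) & (t < o + dur)] = 1.0
    return np.convolve(reg, hrf(np.arange(0, 32, TR)))[:n_scans]

speech_masker = boxcar(onsets=[20, 140, 260], dur=30)  # invented block design
noise_masker = boxcar(onsets=[80, 200, 320], dur=30)
X = np.column_stack([np.ones(n_scans), speech_masker, noise_masker])

rng = np.random.default_rng(1)
y = X @ np.array([100.0, 2.0, 0.5]) + rng.normal(0, 1, n_scans)  # toy voxel

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
contrast = np.array([0, 1, -1])  # speech masker > noise masker
print("contrast estimate:", (contrast @ beta).round(2))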