Why is it that people cannot keep their hands still when they talk? One reason may be that gesturing actually lightens cognitive load while a person is thinking of what to say. We asked adults and children to remember a list of letters or words while explaining how they solved a math problem. Both groups remembered significantly more items when they gestured during their math explanations than when they did not gesture. Gesturing appeared to save the speakers' cognitive resources on the explanation task, permitting them to allocate more resources to the memory task. It is widely accepted that gesturing reflects a speaker's cognitive state, but our observations suggest that, by reducing cognitive load, gesturing may also play a role in shaping that state.
Humans regularly produce new utterances that are understood by other members of the same language community [1]. Linguistic theories account for this ability through the use of syntactic rules (or generative grammars) that describe the acceptable structure of utterances [2]. The recursive, hierarchical embedding of language units (for example, words or phrases within shorter sentences) that is part of the ability to construct new utterances minimally requires a 'context-free' grammar [2, 3] that is more complex than the 'finite-state' grammars thought sufficient to specify the structure of all non-human communication signals. Recent hypotheses make the central claim that the capacity for syntactic recursion forms the computational core of a uniquely human language faculty [4, 5]. Here we show that European starlings (Sturnus vulgaris) accurately recognize acoustic patterns defined by a recursive, self-embedding, context-free grammar. They are also able to classify new patterns defined by the grammar and reliably exclude agrammatical patterns. Thus, the capacity to classify sequences from recursive, centre-embedded grammars is not uniquely human. This finding opens a new range of complex syntactic processing mechanisms to physiological investigation.

The computational complexity of generative grammars is formally defined [3] such that certain classes of temporally patterned strings can only be produced (or recognized) by specific classes of grammars (Fig. 1). Starlings sing long songs composed of iterated motifs (smaller acoustic units) [6] that form the basic perceptual units of individual song recognition [7–9]. Here we used eight 'rattle' and eight 'warble' motifs (see Methods) to create complete 'languages' (4,096 sequences) for two distinct grammars: a context-free grammar (CFG) of the form A²B², which entails recursive centre-embedding, and a finite-state grammar (FSG) of the form (AB)², which does not (Fig. 2a, b; 'A' refers to rattles and 'B' to warbles).

We trained 11 European starlings, using a go/no-go operant conditioning procedure, to classify subsets of sequences from these languages (see Methods and Supplementary Information). Nine out of eleven starlings learned to classify the FSG and CFG sequences accurately (as assessed by d', which provides an unbiased measure of sensitivity in differentiating between two classes of patterns), but the task was difficult (Fig. 2c). The rate of acquisition varied widely among the starlings that learned the task (303.44 ± 57.11 blocks to reach criterion (mean ± s.e.m.), range 94–562 blocks with 100 trials per block) and was slow by comparison with other operant song-recognition tasks [7].

To assess the possibility that the starlings had learned to classify the CFG and FSG motif patterns through rote memorization of the training exemplars, we further tested the birds on novel sequences from each grammar (Fig. 3a). The mean d' over the first 100 trials with new stimuli (roughly six responses to each exemplar) was 1.08 ± 0.50, which is significantly better than chance performance (d' = 0). Over th...
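To make the grammar notation and the d' measure in this abstract concrete, the sketch below is a hypothetical illustration, not code from the study: the motif labels and the example hit/false-alarm rates are assumptions. It enumerates the two 4,096-sequence 'languages' over eight rattle (A) and eight warble (B) motifs and computes d' as the standard signal-detection statistic, the difference of z-transformed hit and false-alarm rates.

```python
# Illustrative sketch only: builds A^2 B^2 (CFG, AABB) and (AB)^2 (FSG, ABAB)
# motif sequences and computes d'. Motif labels and rates are hypothetical.
from itertools import product
from statistics import NormalDist

A = [f"rattle{i}" for i in range(1, 9)]  # eight 'A' motifs (rattles)
B = [f"warble{i}" for i in range(1, 9)]  # eight 'B' motifs (warbles)

# Context-free pattern A^2 B^2: two A motifs followed by two B motifs (AABB);
# generalising this to A^n B^n requires recursive centre-embedding.
cfg_language = list(product(A, A, B, B))

# Finite-state pattern (AB)^2: alternating A and B motifs (ABAB).
fsg_language = list(product(A, B, A, B))

assert len(cfg_language) == len(fsg_language) == 4096  # 8**4 sequences per grammar

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Example: 70% hits on go trials and 30% false alarms on no-go trials.
print(round(d_prime(0.70, 0.30), 2))  # ~1.05; responding at chance gives d' = 0
```

On this convention, a bird responding at chance produces equal hit and false-alarm rates and hence d' = 0, which is the baseline against which the reported d' of 1.08 ± 0.50 for novel stimuli is compared.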
Observing a speaker's mouth profoundly influences speech perception. For example, listeners perceive an "illusory" "ta" when the video of a face producing /ka/ is dubbed onto an audio /pa/. Here, we show how cortical areas supporting speech production mediate this illusory percept and audiovisual (AV) speech perception more generally. Specifically, cortical activity during AV speech perception occurs in many of the same areas that are active during speech production. We find that different perceptions of the same syllable and the perception of different syllables are associated with different distributions of activity in frontal motor areas involved in speech production. Activity patterns in these frontal motor areas resulting from the illusory "ta" percept are more similar to the activity patterns evoked by AV(/ta/) than they are to patterns evoked by AV(/pa/) or AV(/ka/). In contrast to the activity in frontal motor areas, stimulus-evoked activity for the illusory "ta" in auditory and somatosensory areas and visual areas initially resembles activity evoked by AV(/pa/) and AV(/ka/), respectively. Ultimately, though, activity in these regions comes to resemble activity evoked by AV(/ta/). Together, these results suggest that AV speech elicits in the listener a motor plan for the production of the phoneme that the speaker might have been attempting to produce, and that feedback in the form of efference copy from the motor system ultimately influences the phonetic interpretation.
Memory consolidation resulting from sleep has been seen broadly: in verbal list learning, spatial learning, and skill acquisition in visual and motor tasks. These tasks do not generalize across spatial locations or motor sequences, or to different stimuli in the same location. Although episodic rote learning constitutes a large part of any organism's learning, generalization is a hallmark of adaptive behaviour. In speech, the same phoneme often has different acoustic patterns depending on context. Training on a small set of words improves performance on novel words using the same phonemes but with different acoustic patterns, demonstrating perceptual generalization. Here we show a role of sleep in the consolidation of a naturalistic spoken-language learning task that produces generalization of phonological categories across different acoustic patterns. Recognition performance immediately after training showed a significant improvement that subsequently degraded over the span of a day's retention interval, but completely recovered following sleep. Thus, sleep facilitates the recovery and subsequent retention of material learned opportunistically at any time throughout the day. Performance recovery indicates that representations and mappings associated with generalization are refined and stabilized during sleep.
Background: Recent neuroscientific evidence suggests that empathy for pain activates similar neural representations as the first-hand experience of pain. However, empathy is not an all-or-none phenomenon; it is strongly modulated by interpersonal, intrapersonal and situational factors. This study investigated how two different top-down mechanisms, attention and cognitive appraisal, affect the perception of pain in others and its neural underpinnings.

Methodology/Principal Findings: We performed one behavioral (N = 23) and two functional magnetic resonance imaging (fMRI) experiments (N = 18). In the first fMRI experiment, participants watched photographs displaying painful needle injections and were asked to evaluate either the sensory or the affective consequences of these injections. The role of cognitive appraisal was examined in a second fMRI experiment, in which participants watched injections that only appeared to be painful because they were performed on an anesthetized hand. Perceiving pain in others activated the affective-motivational and sensory-discriminative aspects of the pain matrix. Activity in somatosensory areas was specifically enhanced when participants evaluated the sensory consequences of pain. Perceiving non-painful injections into the anesthetized hand also led to signal increases in large parts of the pain matrix, suggesting an automatic affective response to the putatively harmful stimulus. This automatic response was modulated by areas involved in self/other distinction and valence attribution, including the temporo-parietal junction and medial orbitofrontal cortex.

Conclusions/Significance: Our findings elucidate how top-down control mechanisms and automatic bottom-up processes interact to generate and modulate other-oriented responses. They stress the role of cognitive processing in empathy, and shed light on how emotional and bodily awareness enable us to evaluate the sensory and affective states of others.
Prior research has shown that perceived social isolation (loneliness) motivates people to attend to and connect with others, but to do so in a self-protective and paradoxically self-defeating fashion. Although recent research has shed light on the neural correlates of social perception, cooperation, empathy, rejection and love, little is known about how individual differences in loneliness relate to neural responses to social and emotional stimuli. Using functional MRI, we show that there are at least two neural mechanisms differentiating social perception in lonely and nonlonely young adults. For pleasant depictions, lonely individuals appear to be less rewarded by social stimuli, as evidenced by weaker activation of the ventral striatum to pictures of people than of objects, whereas nonlonely individuals showed stronger activation of the ventral striatum to pictures of people than of objects. For unpleasant depictions, lonely individuals were characterized by greater activation of the visual cortex to pictures of people than of objects, suggesting that their attention is drawn more to the distress of others, whereas nonlonely individuals showed greater activation of the right and left temporoparietal junction to pictures of people than of objects, consistent with the notion that they are more likely to reflect spontaneously on the perspective of distressed others.

As a social species, humans create emergent organizations beyond the individual: structures that range from dyads, families, and groups to cities, civilizations, and cultures. These emergent structures evolved hand in hand with the neural and hormonal mechanisms that support them, because the consequent social behaviors helped these organisms survive, reproduce, and care for offspring sufficiently long that they too reproduced (Cacioppo & Patrick, in press; Dunbar & Shultz, 2007). The multimodal neurophysiological processes involved in the execution of an action, for instance, give rise to parallel neurophysiological sensorimotor processes in the observer of these actions (Rizzolatti & Craighero, 2004). This mirror neuron system appears to play a role in a variety of social processes, including mimicry, synchrony, contagion, coordination, and co-regulation (e.g., Rizzolatti & Fabbri-Destro, in press; Semin & Cacioppo, in press). Empathy for another person's pain is also associated with many of the same neural mechanisms associated with one's personal experience of pain, including activation of the dorsal anterior cingulate cortex (dACC), thalamus, and anterior insula (Decety & Lamm, in press-a; Jackson, Rainville, & Decety, 2006). In an illustrative study, Jackson, Meltzoff, and Decety (2005) found that the level of activity in the dACC was strongly correlated with ratings of the intensity of pain experienced by the observed person, a result reminiscent of Eisenberger, Lieberman, and Williams' (2003) finding that distress produced by social exclusion was strongly correlated with activity in the dACC. In the case of empathy and of social pain, evolutionarily older neural mechanisms appear to have been co-opted to ...
Two talkers' productions of the same phoneme may be quite different acoustically, whereas their productions of different speech sounds may be virtually identical. Despite this lack of invariance in the relationship between the speech signal and linguistic categories, listeners experience phonetic constancy across a wide range of talkers, speaking styles, linguistic contexts, and acoustic environments. The authors present evidence that perceptual sensitivity to talker variability involves an active cognitive mechanism: Listeners expecting to hear 2 different talkers differing only slightly in average pitch showed performance costs typical of adjusting to talker variability, whereas listeners hearing the same materials but expecting a single talker or given no special instructions did not show these performance costs. The authors discuss the implications for understanding phonetic constancy despite variability between talkers (and other sources of variability) and for theories of speech perception. The results provide further evidence for active, controlled processing in real-time speech perception and are consistent with a model of talker normalization that involves contextual tuning.