Do English speakers think about time the way they talk about it? In spoken English, time appears to flow along the sagittal axis (front/back): the future is ahead and the past is behind us. Here we show that when asked to gesture about past and future events deliberately, English speakers often use the sagittal axis, as language suggests they should. By contrast, when producing co-speech gestures spontaneously, they use the lateral axis (left/right) overwhelmingly more often, gesturing leftward for earlier times and rightward for later times. This left-right mapping of time is consistent with the flow of time on calendars and graphs in English-speaking cultures, but is completely absent from conventional spoken metaphors. English speakers gesture on the lateral axis even when they are using front/back metaphors in their co-occurring speech. This speech-gesture dissociation is not due to any lack of lexical or constructional resources to spatialize time laterally in language, nor to any lack of physical resources to spatialize time sagittally in gesture. We propose that when speakers are describing sequences of events, they often use neither the Moving Ego nor the Moving Time perspective. Rather, they adopt a "Moving Attention" perspective, which is grounded in patterns of interaction with cultural artifacts, not in patterns of interaction with the natural environment. We suggest possible pragmatic, kinematic, and mnemonic motivations for the use of a lateral mental timeline in gesture and in thought. Gestures reveal an implicit spatial conceptualization of time that cannot be inferred from language.
Background
According to the body-specificity hypothesis, people with different bodily characteristics should form correspondingly different mental representations, even in highly abstract conceptual domains. In a previous test of this proposal, right- and left-handers were found to associate positive ideas like intelligence, attractiveness, and honesty with their dominant side and negative ideas with their non-dominant side. The goal of the present study was to determine whether 'body-specific' associations of space and valence can be observed beyond the laboratory in spontaneous behavior, and whether these implicit associations have visible consequences.

Methodology and Principal Findings
We analyzed speech and gesture (3012 spoken clauses, 1747 gestures) from the final debates of the 2004 and 2008 US presidential elections, which involved two right-handers (Kerry, Bush) and two left-handers (Obama, McCain). Blind, independent coding of speech and gesture allowed objective hypothesis testing. Right- and left-handed candidates showed contrasting associations between gesture and speech. In both of the left-handed candidates, left-hand gestures were associated more strongly with positive-valence clauses and right-hand gestures with negative-valence clauses; the opposite pattern was found in both right-handed candidates.

Conclusions
Speakers associate positive messages more strongly with dominant-hand gestures and negative messages with non-dominant-hand gestures, revealing a hidden link between action and emotion. This pattern cannot be explained by conventions in language or culture, which associate 'good' with 'right' but not with 'left'; rather, results support and extend the body-specificity hypothesis. Furthermore, results suggest that the hand speakers use to gesture may have unexpected (and probably unintended) communicative value, providing the listener with a subtle index of how the speaker feels about the content of the co-occurring speech.
Perception involves integration of multiple dimensions that often serve overlapping, redundant functions, for example, pitch, duration, and amplitude in speech. Individuals tend to prioritize these dimensions differently (stable, individualized perceptual strategies), but the reason for this has remained unclear. Here we show that perceptual strategies relate to perceptual abilities. In a speech cue weighting experiment (trial N = 990), we first demonstrate that individuals with a severe deficit for pitch perception (congenital amusics; N = 11) categorize linguistic stimuli similarly to controls (N = 11) when the main distinguishing cue is duration, which they perceive normally. In contrast, in a prosodic task where pitch cues are the main distinguishing factor, we show that amusics place less importance on pitch and instead rely more on duration cues, even when pitch differences in the stimuli are large enough for amusics to discern. In a second experiment testing musical and prosodic phrase interpretation (N = 16 amusics; 15 controls), we found that relying on duration allowed amusics to overcome their pitch deficits to perceive speech and music successfully. We conclude that auditory signals, because of their redundant nature, are robust to impairments for specific dimensions, and that optimal speech and music perception strategies depend not only on invariant acoustic dimensions (the physical signal), but on perceptual dimensions whose precision varies across individuals. Computational models of speech perception (indeed, all types of perception involving redundant cues, e.g., vision and touch) should therefore aim to account for the precision of perceptual dimensions and characterize individuals as well as groups.
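The abstract's closing claim, that models should account for the precision of each perceptual dimension, can be illustrated with reliability-weighted cue integration, a standard way to combine redundant cues. This is a minimal sketch, not the authors' actual model: the function names, the [pitch, duration] ordering, and the precision values are illustrative assumptions.

```python
import numpy as np

def cue_weights(precisions):
    """Turn per-dimension precisions (1/variance) into normalized cue weights."""
    p = np.asarray(precisions, dtype=float)
    return p / p.sum()

def combined_estimate(cues, precisions):
    """Reliability-weighted average of per-cue estimates."""
    return float(np.dot(cue_weights(precisions), cues))

# Hypothetical listeners, dimensions ordered [pitch, duration]:
typical = cue_weights([4.0, 4.0])  # both dimensions equally precise
amusic = cue_weights([0.5, 4.0])   # degraded pitch precision -> pitch down-weighted
```

Under this scheme, a listener with reduced pitch precision automatically relies more on duration, mirroring the strategy the amusic participants were found to use.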
The anterior region of the left superior temporal gyrus/superior temporal sulcus (aSTG/STS) has been implicated in two very different cognitive functions: sentence processing and social-emotional processing. However, the vast majority of the sentence stimuli in previous reports have been of a social or social-emotional nature, suggesting that sentence processing may be confounded with semantic content. To evaluate this possibility, we had subjects read word lists that differed in phrase/constituent size (single words, 3-word phrases, 6-word sentences) and semantic content (social-emotional, social, and inanimate objects) while being scanned at 7 T. This allowed us to investigate whether the aSTG/STS responds to increasing constituent structure (with increased activity as a function of constituent size) with or without regard to a specific domain of concepts, i.e., social and/or social-emotional content. Activity in the left aSTG/STS was found to increase with constituent size. This region was also modulated by content, however, such that social-emotional concepts were preferred over social and object stimuli. Reading also induced content-type effects in domain-specific semantic regions. Those preferring social-emotional content included the aSTG/STS, inferior frontal gyrus, posterior STS, lateral fusiform, ventromedial prefrontal cortex, and amygdala, regions included in the "social brain", while those preferring object content included the parahippocampal gyrus, retrosplenial cortex, and caudate, regions involved in object processing. These results suggest that semantic content affects higher-level linguistic processing and should be taken into account in future studies.