2021
DOI: 10.1098/rspb.2020.2419

Beat gestures influence which speech sounds you hear

Abstract: Beat gestures—spontaneously produced biphasic movements of the hand—are among the most frequently encountered co-speech gestures in human communication. They are closely temporally aligned to the prosodic characteristics of the speech signal, typically occurring on lexically stressed syllables. Despite their prevalence across speakers of the world's languages, how beat gestures impact spoken word recognition is unclear. Can these simple ‘flicks of the hand’ influence speech perception? Across a range of experi…


Cited by 37 publications (25 citation statements)
References 57 publications
“…Indeed, in a study by Perlman, Dale, and Lupyan (2015), it is shown how dynamic aspects of vocalization signaling systems become more efficient, similar to our current reduction in kinematic complexity. These findings, together with work showing the tight connection between speech and gesture (Bosker & Peeters, 2020;Pouw, de Jonge-Hoekstra, Harrison, Paxton, & Dixon, 2020;Pouw, Harrison, Esteve-Gibert, & Dixon, 2020), make it a natural next step to look at multimodal iterated learning experiments. Furthermore, our approach can inform work on communicative alignment in conversations (Rasenberg, Özyürek, & Dingemanse, 2020) or the ways in which people can repeat aspects of each other's communicative behavior.…”
Section: Future Directions
confidence: 92%
“…It is the auditory–visual–motor co-regularity that makes the visual or haptic perception of articulatory gestures possible in this classic McGurk effect [61]. Recently, a ‘manual gesture McGurk-effect’ has been discovered [62]. When asked to detect a particular lexical stress in a uniformly stressed speech sequence, participants who see a hand gesture's beat timed with a particular speech segment tend to hear a lexical stress for that segment [62].…”
Section: Body Level: Multimodal Signalling and Peripheral Bodily Constraints
confidence: 99%
“…Recently, a ‘manual gesture McGurk-effect’ has been discovered [62]. When asked to detect a particular lexical stress in a uniformly stressed speech sequence, participants who see a hand gesture's beat timed with a particular speech segment tend to hear a lexical stress for that segment [62]. We think it is possible that the gesture–speech–respiratory link as reviewed previously is actually important for understanding the manual McGurk effect as listeners attune to features of the visual–acoustic signal that are informative about such coordinated modes of production [63].…”
Section: Body Level: Multimodal Signalling and Peripheral Bodily Constraints
confidence: 99%
“…Finally, it is important to realize that lexical stress is not only an acoustic property of speech; stress is a multimodal phenomenon too. Although suprasegmental cues such as intonation and intensity are arguably less visually salient than some segmental features (e.g., consonantal place of articulation), humans are keenly sensitive to visual prosody (Bosker & Peeters, 2021). For instance, humans perform above chance on stress discrimination when presented with muted videos of a talking face (Jesse & McQueen, 2014;Scarborough et al, 2009).…”
Section: Prosody In Communication
confidence: 99%
“…Moreover, visual stress is not restricted to articulatory cues alone. Recently, Bosker and Peeters (2021) demonstrated that the temporal alignment of relatively simple beat gestures to speech influences lexical stress perception. That is, the same Dutch disyllabic auditory word could be perceived differently depending on whether a hand gesture was produced on the first or second syllable (e.g., distinguishing Dutch PLAto vs. plaTEAU).…”
Section: Prosody In Communication
confidence: 99%