2009
DOI: 10.1075/gest.9.3.02ali

Gesture–speech integration in narrative

Abstract: Speakers sometimes express information in gestures that they do not express in speech. In this research, we developed a system that could be used to assess the redundancy of gesture and speech in a narrative task. We then applied this system to examine whether children and adults produce non-redundant gesture–speech combinations at similar rates. The coding system was developed based on a sample of 30 children. A crucial feature of the system is that gesture meanings can be assessed based on form alone; thus, …

Cited by 44 publications (42 citation statements)
References 26 publications
“…It is this capacity that predicts their performance in inferential tasks (Barrouillet, Grosset, & Lecas, 2000). Moreover, they tend to make more gestures than adults even as their speech develops (e.g., Alibali, Evans, et al., 2009; Chu & Kita, 2008; Colletta et al., 2015). We accordingly investigated children of around the age of ten years.…”
Section: Insert Figure 1 About Here
mentioning
confidence: 99%
“…We classified speakers as spatial dominant, verbal dominant, or equally matched on the basis of the difference in their performance on a spatial visualization test and a verbal fluency test. We used the coding procedure developed by Alibali et al. (2009) to code speakers' gesture-speech redundancy as they narrated an animated cartoon. Spatial-dominant speakers produced a higher proportion of non-redundant gesture-speech combinations than other speakers.…”
mentioning
confidence: 99%
“…Even when children completely understand the event they are describing and know exactly which aspects of their spatial knowledge they wish to articulate, their vocabularies may not contain the words they need to describe the precise spatial or motor properties of the events they are thinking about. Indeed, Alibali et al. (2009) found that children aged 5 to 10 produced non-redundant gesture-speech combinations while narrating a cartoon at a rate more than twice that of adults. Further, the children studied by Alibali et al. seemed particularly likely to produce non-redundant gestures when they were having trouble formulating their ideas into speech.…”
mentioning
confidence: 99%
“…Pretest and posttest consisted of the children retelling what they had seen in four short (~41-50 s) animated cartoons about a small mouse and his friends (Westdeutscher Rundfunk Köln, http://www.wdrmaus.de, [2,34]), which were previously unfamiliar to the children. The video clips contained only background music, without any speech.…”
Section: Methods
mentioning
confidence: 99%