In this paper, we advance a comprehensive gesture labelling proposal which highlights the independence of the prosodic and semantic properties of different gesture types and at the same time challenges a simplistic definition of beat gestures as biphasic, rhythmic, non-meaningful gestures (e.g., [1][2]). Following McNeill's [3] original proposal on gesture dimensions, we argue first that all gesture types can associate with prosodic prominence: even though beat gestures typically display this rhythmic behavior, representational and pointing gestures do so as well. Second, with respect to meaning, while beat gestures represent neither referential nor metaphoric content, they can serve a range of meaningful pragmatic and discursive functions in speech, which deserve to be further investigated. From a practical point of view, we propose that all non-referential gestures be initially classified as forms of beat gestures with a set of associated properties related to gesture form, prosodic form, and pragmatic form. This gesture labelling proposal independently codes for (a) the form of gestures, (b) their properties of temporal association with prosodic prominence, and (c) their pragmatic meaning. We claim that this move allows for a more complete analysis of gestures in large-scale studies and opens the way for more comprehensive assessments of the interaction between gesture forms, prosodic forms, and semantic forms using labelled corpora.
Previous work has shown how native listeners benefit from observing iconic gestures during speech comprehension tasks involving both degraded and non-degraded speech. By contrast, the effects of gestures on non-native listener populations are less clear, and studies have mostly involved iconic gestures. The current study aims to complement these findings by testing the potential beneficial effects of beat gestures (non-referential gestures which are often used for information- and discourse-marking) on language recall and discourse comprehension using a narrative-drawing task carried out by native and non-native listeners. In a within-subject design, 51 French intermediate learners of English participated in a narrative-drawing task. Each participant was assigned 8 videos to watch, in which a native speaker describes the events of a short comic strip. Videos were presented in random order, in four conditions: Native listening with frequent, naturally-modeled beat gestures; Native listening without any gesture; Non-native listening with frequent, naturally-modeled beat gestures; and Non-native listening without any gesture. Participants watched each video twice and then immediately recreated the comic strip through their own drawings. Participants' drawings were then evaluated for discourse comprehension (via their ability to convey the main goals of the narrative through their drawings) and recall (via the number of gesturally-marked elements in the narration that were included in their drawings). Results showed that for native listeners, beat gestures had no significant effect on either recall or comprehension. For non-native listeners, however, beat gestures led to significantly lower comprehension and recall scores.
These results suggest that frequent, naturally-modeled beat gestures in longer discourses may increase cognitive load for language learners, resulting in negative effects on both memory and language understanding. These findings add to the growing body of literature suggesting that gesture benefits are not a "one-size-fits-all" solution, but rather may be contingent on factors such as language proficiency and gesture rate, particularly since beat gestures that are used repeatedly in discourse inherently lose their saliency as markers of important information.
The aim of this study is to assess whether brief training with rhythmic beat gestures improves L2 pronunciation in a reading-aloud task with high school students. In a between-subjects pretest-posttest design, a total of 59 high school students were randomly assigned to one of two conditions: a beat gesture group and a no-beat gesture group. In the beat gesture condition, students were asked to first read two short stories aloud without any gestural instruction (pretest) and then to move their hands while reading the following two texts (training). Students in the no-beat condition (control condition) were asked to read all four texts aloud (pretest and training) without any gestural instruction. Then, in order to assess the benefits of gesture, both groups were asked to read a fifth text aloud (posttest) which was more difficult (longer and more syntactically complex) than the ones they read in the pretest or the training. Results showed that speakers who were asked to produce beat gestures during the training obtained better pronunciation scores (specifically in accentedness, comprehensibility, and fluency) in the posttest than those who were not asked to produce any specific gesture during the training.
While recent studies have claimed that non-referential gestures (i.e., gestures that do not visually represent any semantic content in speech) are used to mark discourse-new and/or -accessible referents and focused information in adult speech, to our knowledge, no prior investigation has studied the relationship between information structure (IS) and gesture referentiality in children's narrative speech from a developmental perspective. A longitudinal database consisting of 332 narratives produced by 83 children at two different time points in development was coded for IS and gesture referentiality (i.e., referential and non-referential gestures). Results revealed that at both time points, both referential and non-referential gestures were produced more with information that moves discourse forward (i.e., focus) and predication (i.e., comment) than with topical or background information. Further, at 7–9 years of age, children tended to use more non-referential gestures than referential gestures to mark focus and comment constituents. In terms of marking the newness of discourse referents, non-referential gestures already seemed to play a key role at 5–6 years of age, whereas referential gestures showed no such pattern; this relationship was even stronger at 7–9 years of age. All in all, our findings offer supporting evidence that, in contrast with referential gestures, non-referential gestures play a key role in marking IS, and that this relationship solidifies at a period in development that coincides with a spurt in non-referential gesture production.
Purpose: This study aims to analyze the development of gesture–speech temporal alignment patterns in children's narrative speech from a longitudinal perspective and, specifically, the potential differences between different gesture types, namely, gestures that imagistically portray or refer to semantic content in speech (i.e., referential gestures) and those that lack semantic content (i.e., non-referential gestures). Method: This study uses an audiovisual corpus of narrative productions (n = 332) from 83 children (43 girls, 40 boys) who participated in a narrative retelling task at two time points in development (at 5–6 and 7–9 years of age). The 332 narratives were coded for both manual co-speech gesture types and prosody. Gestural annotations included gesture phasing (i.e., preparation, stroke, hold, and recovery) and gesture types (in terms of referentiality, i.e., referential and non-referential), whereas prosodic annotations included pitch-accented syllables. Results: Results revealed that by ages 5–6 years, children already temporally aligned the stroke of both referential and non-referential gestures with pitch-accented syllables, showing no significant differences between these two gesture types. Conclusions: The results of the present study contribute to the view that both referential and non-referential gestures are aligned with pitch accentuation, and therefore, this alignment is not exclusively a characteristic of non-referential gestures. Our results also add support to McNeill's phonological synchronization rule from a developmental perspective and indirectly back up recent theories about the biomechanics of gesture–speech alignment, suggesting that this is an inherent ability of oral communication.