Abstract: We investigate direct speech quotation in informal oral narratives by analyzing the contribution of bodily articulators (character viewpoint gestures, character facial expression, character intonation, and the meaningful use of gaze) in three quote environments, or quote sequences (single quotes, quoted monologues, and quoted dialogues), and in initial vs. non-initial position within those sequences. Our analysis draws on research on the linguistic and multimodal realization of quotation, where multiple articulators are often observed to be co-produced with single direct speech quotes (e.g., Thompson & Suzuki 2014), especially on the so-called left boundary of the quote (Sidnell 2006). We use logistic regression to model multimodal quote production across and within quote sequences, and find unique sets of multimodal articulators accompanying each quote sequence type. We do not, however, find unique sets of multimodal articulators that distinguish initial from non-initial utterances; utterance position is instead predicted by the type of quote and the presence of a quoting predicate. Our findings add to the growing body of research on multimodal quotation, and suggest that the multimodal production of quotation is more sensitive to the number of characters and utterances quoted than to the difference between introducing and maintaining a quoted character's perspective.
Speakers perform manual gestures in the physical space nearest them, called gesture space. We used a controlled elicitation task to explore whether speakers use gesture space consistently (assigning spaces to ideas and reusing those spaces for those ideas) and contrastively (assigning different spaces to different ideas when producing contrastive speech) when talking about abstract referents. Participants answered two questions designed to elicit contrastive, abstract discourse. For each manual gesture, we coded the gesturing hand, the location on the horizontal axis, and the referent in the corresponding speech; we also coded contrast in speech. Participants’ overall tendency to use the same hand (t(17) = 13.12, p = .001, 95% CI [.31, .43], d = 2.53) and the same location (t(17) = 7.47, p = .001, 95% CI [.27, .47], d = 1.69) when referring to an entity was higher than the expected frequency. When comparing pairs of gestures produced with contrastive speech to pairs produced with non-contrastive speech, we found a greater tendency to produce gestures with different hands for contrastive speech (t(17) = 4.19, p = .001, 95% CI [.27, .82], d = 1.42). We did not find the associations predicted by previous studies between the dominant side and positive concepts, or between left, center, and right space and past, present, and future, respectively. Taken together, our findings suggest that speakers do produce spatially consistent and contrastive gestures for abstract as well as concrete referents. They may be using spatial resources to assist with abstract thinking, and/or to help interlocutors track reference. Our findings also highlight the complexity of predicting gesture hand and location, which appear to be the outcome of many competing variables.
This review describes the primary strategies used to express changes in conceptual viewpoint (Parrill, 2012) in co-speech gesture and sign language. We describe the use of the face, eye gaze, body orientation, and hands to represent these differences in viewpoint, focusing particularly on McNeill’s (1992) division of iconic gestures into observer versus character viewpoint gestures, and on the situations in which they occur. We also draw a parallel between the strategies used in co-speech gesture and those used in different signed languages (see Cormier, Quinto-Pozos, Sevcikova, & Schembri, 2012), and suggest possibilities for further research in this area.
Events with a motor action component (e.g., handling an object) tend to evoke gestures from the point of view of a character (character viewpoint, or CVPT), while events with a path component (moving through space) tend to evoke gestures from the point of view of an observer (observer viewpoint, or OVPT). Events that combine both components (e.g., rowing a boat across a lake) seem to evoke both types of gesture, but it is unclear why narrators use one or the other. We carried out two manipulations to explore whether gestural viewpoint can be shifted. Participants read a series of stories and retold them in two conditions. In the image condition, story sentences were presented with images from either the actor’s perspective (actor version) or the observer’s perspective (observer version). In the linguistic condition, the same sentences were presented in either the second person (you…) or the third person (he/she…); the second person led participants to use the first person (I) in retelling. Gestures produced during retelling were coded as CVPT or OVPT. Participants produced significantly more CVPT gestures after seeing images from the point of view of an actor, but the linguistic manipulation did not affect viewpoint in gesture. Neither manipulation affected overall gesture rate or the co-occurring speech. We relate these findings to frameworks in which motor action and mental imagery are linked to viewpoint in gesture.