2018
DOI: 10.15398/jlm.v5i3.167
Aligning speech and co-speech gesture in a constraint-based grammar

Abstract: This paper concerns the form-meaning mapping of communicative actions consisting of speech and improvised co-speech gestures. Based on the findings of previous cognitive and computational approaches, we advance a new theory in which this form-meaning mapping is analysed in a constraint-based grammar. Motivated by observations in naturally occurring examples, we propose several construction rules, which use linguistic form, gesture form and their relative timing to constrain the derivation of a single speech-ge…
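The abstract describes construction rules that use relative timing between speech and gesture to constrain derivations, and the citation statements below note that such accounts pair a temporal constraint with a phonological one (nuclear stress). Purely as an illustrative sketch of that idea, not the paper's actual formalism (the names `Interval`, `Word`, `Gesture`, and `may_attach` are invented here), a combined temporal-overlap and nuclear-stress licensing check might look like:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    start: float  # seconds
    end: float

def overlaps(a: Interval, b: Interval) -> bool:
    # Two intervals overlap iff each starts before the other ends.
    return a.start < b.end and b.start < a.end

@dataclass
class Word:
    form: str
    span: Interval
    stressed: bool  # True if the word bears nuclear (sentence) stress

@dataclass
class Gesture:
    stroke: Interval  # the expressive (stroke) phase of the gesture

def may_attach(gesture: Gesture, word: Word) -> bool:
    """Toy licensing rule: a gesture may attach to a word only if its
    stroke temporally overlaps the word AND the word is nuclear-stressed."""
    return overlaps(gesture.stroke, word.span) and word.stressed
```

In this toy version the temporal and phonological conditions are simply conjoined; the accounts cited below differ precisely in how (and how strictly) they combine the two.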

Cited by 12 publications (14 citation statements)
References 21 publications
“…This research and work by Johnston (1998) inspired grammar-bound models of speech-gesture integration, such as HPSG approaches (see, e.g., Alahverdzhieva & Lascarides 2010, Lücking 2013), to employ nuclear stress for modeling speech-gesture integration. These accounts combine a temporal constraint with a phonological constraint differently.…”
Section: Multi-modal Meaning
confidence: 99%
“…A formal treatment of gestural negation and its grammatical role – in particular, its scope – has been provided by, e.g., Harrison (2010). More generally, recent years have witnessed the development of so-called multimodal grammars, which provide an integrated account of both the spoken and the gestural aspects of human utterances (Johnston et al., 1997; Lascarides and Stone, 2009; Poesio and Rieser, 2009; Alahverdzhieva and Lascarides, 2010; Fricke, 2013).…”
Section: Interactional Aspects Of Communication Already Accepted A
confidence: 99%
“…In particular, we focus on the handshapes that they use to represent the objects that they are locating in space, what Perniss et al. (2015) term “entity representation.” Focusing on static, rather than moving, objects is expected to facilitate greater precision in our comparison of the handshapes of sign-naïve adults and of signers. The depiction of moving objects runs the risk of gesturers choosing to illustrate the path of the movement and not necessarily the object itself (see similar arguments for gestural ambiguity in Alahverdzhieva and Lascarides, 2010). Our focus on static objects avoids this potential confound.…”
Section: Introduction
confidence: 99%