Proceedings of the 6th International Conference on Multimodal Interfaces 2004
DOI: 10.1145/1027933.1027952
Towards integrated microplanning of language and iconic gesture for multimodal output

Abstract: When talking about spatial domains, humans frequently accompany their explanations with iconic gestures to depict what they are referring to. For example, when giving directions, it is common to see people making gestures that indicate the shape of buildings, or outline a route to be taken by the listener, and these gestures are essential to the understanding of the directions. Based on results from an ongoing study on language and gesture in direction-giving, we propose a framework to analyze such gestural im…

Cited by 73 publications (63 citation statements)
References 21 publications
“…SmartKom [Wahlster et al, 2001] is an influential early example of multimodal fission for dialogue generation. By contrast, problem-solving models of multimodal generation, such as Cassell et al [2000], Kopp et al [2004], reason about the affordances and interdependencies of body and speech to creatively explore the space of possible multimodal utterances and synthesize utterances that link specific behaviors to specific functions opportunistically and flexibly.…”
Section: Grounding With Multimodal Communicative Action
confidence: 99%
“…A systematic extension that is based on the anatomical joints of arm and hands that bring about movement has been implemented in the FORM scheme [47]. The same basic approach has also been pursued by Kopp et al [39]. The kinematic (or "morphologic", as we call it) part of the SaGA scheme is closely related to the latter two schemes.…”
Section: Data and Data Annotation
confidence: 99%
“…The scheme of Calbris is only concerned with "straight-line gestures in space […]" [15, p. 104] but not with gestural movements of any kind as we are here. The present annotation scheme builds on the scheme used in Kopp et al [39] to capture two-handed gestures and the manifold configurations they can manifest.…”
Section: Data and Data Annotation
confidence: 99%
“…Section 2 introduces the experimental setting and the data coding methodology, Section 3 presents results from the corpus analysis. Based on these findings, we describe in Section 4 a computational modeling account that goes beyond previous systems, which either rely on generalized rule-based models that disregard idiosyncrasy in gesture use [6,18], or employ data-based methods that approximate single speakers but have difficulties with extracting systematicities of gesture use. These data-based approaches are typically (and successfully) employed to generate gesturing behavior which has no particular meaning-carrying function, e.g., discourse gestures [27] or beat gestures (Theune & Brandhorst, this volume).…”
Section: Introduction
confidence: 99%