Proceedings of the Workshop on Speech and Natural Language - HLT '89 1989
DOI: 10.3115/1075434.1075499

Natural language with integrated deictic and graphic gestures

Abstract: People frequently and effectively integrate deictic and graphic gestures with their natural language (NL) when conducting human-to-human dialogue. Similar multi-modal communication can facilitate human interaction with modern sophisticated information processing and decision-aiding computer systems. As part of the CUBRICON project, we are developing NL processing technology that incorporates deictic and graphic gestures with simultaneous coordinated NL for both user inputs and system-generated outputs. Such mu…
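The integration the abstract describes, a pointing gesture interpreted together with a spoken or typed referring expression, can be sketched roughly as follows. This is only an assumed illustration, not CUBRICON's implementation; the class, function, and example names (DisplayObject, resolve_deictic_reference, "Griffiss AFB") are hypothetical.

```python
# A minimal sketch (assumed, not CUBRICON's actual code) of resolving a deictic
# reference: the user says "this airbase" while pointing at the map, and the
# system picks the nearest visible object whose semantic type matches.

from dataclasses import dataclass

@dataclass
class DisplayObject:            # hypothetical record for an icon on the display
    name: str
    obj_type: str               # semantic type, e.g. "airbase"
    x: float                    # screen coordinates of the icon
    y: float

def resolve_deictic_reference(noun_phrase_type, point_x, point_y,
                              visible_objects, max_distance=50.0):
    """Return the nearest on-screen object of the requested type, or None."""
    candidates = [o for o in visible_objects if o.obj_type == noun_phrase_type]
    if not candidates:
        return None
    nearest = min(candidates,
                  key=lambda o: (o.x - point_x) ** 2 + (o.y - point_y) ** 2)
    distance = ((nearest.x - point_x) ** 2 + (nearest.y - point_y) ** 2) ** 0.5
    return nearest if distance <= max_distance else None

# Example: "What is the status of this airbase?" accompanied by a point gesture.
objects = [DisplayObject("Griffiss AFB", "airbase", 120.0, 80.0),
           DisplayObject("Route 7", "road", 125.0, 82.0)]
print(resolve_deictic_reference("airbase", 118.0, 79.0, objects))
```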

Cited by 37 publications (19 citation statements). References 21 publications.
“…For example, Figure 1 shows the CUBRICON system architecture [10]. CUBRICON enables a user to interact using spoken or typed natural language and gesture, displaying results using combinations of language, maps, and graphics.…”
Section: Examples Of Multimedia Information Access (mentioning)
confidence: 99%
“…Based on Grosz and Sidner's conversation theory [Grosz and Sidner, 1986], MIND establishes a refined discourse structure as conversation proceeds. This is different from other multimodal systems that maintain the conversation history by using a global focus space [Neal et al., 1998], segmenting a focus space based on intention [Burger and Marshall, 1993], or establishing a single dialogue stack to keep track of open discourse segments [Stent et al., 1999].…”
Section: Semantic Modelling Of Conversation Discourse (mentioning)
confidence: 99%
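The "single dialogue stack" of open discourse segments mentioned in this excerpt can be illustrated with a small sketch. The Python fragment below is an assumed illustration of the general idea, roughly in the spirit of Grosz and Sidner's focus stack; the class and method names (DialogueStack, DiscourseSegment, mention, resolve) are hypothetical and are not taken from MIND, CUBRICON, or the other cited systems.

```python
# A minimal sketch (assumed) of a dialogue stack that keeps track of open
# discourse segments. All names here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class DiscourseSegment:
    purpose: str                                  # intention that opened the segment
    salient_entities: list = field(default_factory=list)

class DialogueStack:
    def __init__(self):
        self._stack = []

    def push(self, purpose):
        """Open a new discourse segment, e.g. a clarification sub-dialogue."""
        self._stack.append(DiscourseSegment(purpose))

    def pop(self):
        """Close the topmost segment once its purpose is satisfied."""
        return self._stack.pop()

    def mention(self, entity):
        """Record an entity as salient in the currently open segment."""
        self._stack[-1].salient_entities.append(entity)

    def resolve(self, fits):
        """Search open segments top-down for a salient entity accepted by `fits`."""
        for segment in reversed(self._stack):
            for entity in reversed(segment.salient_entities):
                if fits(entity):
                    return entity
        return None

# Example: a clarification sub-dialogue opens a new segment, and a reference is
# resolved against an entity recorded in the enclosing segment.
stack = DialogueStack()
stack.push("plan mission")
stack.mention("Griffiss AFB")
stack.push("clarify target")
stack.mention("SAM site 3")
print(stack.resolve(lambda e: "AFB" in e))   # -> "Griffiss AFB"
stack.pop()
```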
“…For multimodal reference resolution, some early work keeps track of a focus space from the dialog (Grosz & Sidner, 1986) and a display model to capture all objects visible on the graphical display (Neal, Thielman, Dobes, Haller, & Shapiro, 1998). It then checks semantic constraints such as the type of the candidate objects being referenced and their properties for reference resolution.…”
Section: Related Work (mentioning)
confidence: 99%
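The resolution scheme this excerpt describes, drawing candidates from a dialogue focus and a display model and filtering them by semantic constraints such as type, can be sketched as follows. This is an assumed illustration only; the function name resolve_reference, the dictionary-based representation, and the example objects are hypothetical rather than the cited systems' actual data structures.

```python
# A minimal sketch (assumed) of reference resolution over a dialogue focus list
# (recently mentioned entities) and a display model (objects visible on screen),
# with candidates filtered by semantic constraints such as type and properties.

def resolve_reference(referent_type, required_properties, focus_list, display_model):
    """Return the first candidate matching the type and property constraints,
    preferring recently mentioned entities over merely visible ones."""
    for candidate in list(reversed(focus_list)) + list(display_model):
        if candidate.get("type") != referent_type:
            continue
        if all(candidate.get(k) == v for k, v in required_properties.items()):
            return candidate
    return None

# Example: resolving "the active airbase" against the dialogue focus and display.
focus_list = [{"type": "road", "name": "Route 7"}]
display_model = [{"type": "airbase", "name": "Griffiss AFB", "status": "active"}]
print(resolve_reference("airbase", {"status": "active"}, focus_list, display_model))
```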