2006
DOI: 10.1007/3-540-36678-4_17
Overlay: The Basic Operation for Discourse Processing

Cited by 19 publications (22 citation statements)
References 12 publications
“…A sweet spot between the two extremes is to constrain natural language in order to create a formal, user-friendly query language [5] or a controlled language for posing questions [6]. There are diverse examples of current dialogue systems, for instance SmartKom, a multimodal dialogue system that combines speech, gesture, and facial-expression input [7], as well as DELFOS, a dialogue manager that enables the integration of OWL ontologies as external knowledge resources for dialogue systems [8]. The combination of NLP and ontologies facilitates the development of novel dialogue systems that use ontologies as a core knowledge component for linguistic and non-linguistic knowledge representations.…”
Section: Related Work
confidence: 99%
“…Presentation of the answer. [Participants were] asked to rate questionnaire items with regard to the theoretical constructs described above. Consistent with prior research, we adopted 7-point Likert scales ranging from strongly disagree (1) to strongly agree (7).…”
Section: Utility of Ontology-based Dialogue Interaction
confidence: 99%
“…remove ambiguities and contradictory outputs and produce a final semantic interpretation of the multimedia content. Techniques presented in the literature for multimodal fusion, not necessarily for the purpose of knowledge-assisted analysis, include probabilistic approaches [22] and methods that treat information fusion as a structure fusion problem [23], [24]. Among the advantages of techniques belonging to the latter category, such as overlay, is that they rely on structures that represent the semantics of the information to be fused, to identify which portions of information are competing or contradictory.…”
Section: Problem Formulation
confidence: 99%
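The structure-fusion idea the excerpt above attributes to overlay can be sketched on plain nested dictionaries: the new input (the "cover") is laid over the dialogue context (the "background"); slots present only in the background are inherited, shared slots recurse, and conflicting leaves are resolved in favour of the new input. This is a minimal illustrative sketch under stated assumptions, not the typed feature-structure algorithm of the cited paper; the `route`/`time` slot names and the `None`-based leaf rule are assumptions made for the example.

```python
def overlay(cover, background):
    """Recursively lay a new structure (cover) over the context
    (background): keys found only in the background are inherited,
    keys present in both recurse, and on a leaf conflict the cover's
    (newer) value wins. Neither input is mutated."""
    if isinstance(cover, dict) and isinstance(background, dict):
        merged = dict(background)            # start from the context
        for key, value in cover.items():
            merged[key] = overlay(value, background.get(key))
        return merged
    # Leaf case: prefer the new information; fall back to the context.
    return cover if cover is not None else background


# Hypothetical dialogue state: a route query followed by a revision.
context = {"route": {"from": "Saarbruecken", "to": "Berlin"}, "time": "morning"}
update = {"route": {"to": "Hamburg"}}        # user changes only the destination
result = overlay(update, context)
# result == {"route": {"from": "Saarbruecken", "to": "Hamburg"}, "time": "morning"}
```

The point of the recursion is exactly the property the excerpt highlights: because the fusion walks the shared structure, it can tell which portions of the two inputs are genuinely competing (the `to` slot) and which are merely complementary (`from`, `time`).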
“…Currently, MIND applies an operation, called covering, to draw inferences from the conversation context [Chai, 2002a]. Although the mechanism of our covering operation is similar to the overlay operation described in [Alexandersson and Becker, 2001], not only can our covering infer the focus of attention (as overlay does), but it can also infer the intention. What makes this operation possible is our underlying consistent representation of intention and attention at both the discourse and the input levels.…”
Section: 2.1
confidence: 99%
“…There are also more sophisticated systems that combine multimodal inputs and outputs [Cassell et al., 1999], and those that work in a mobile environment [Johnston et al., 2002; Oviatt, 2000]. Recently, we have seen a new generation of systems that not only support multimodal user inputs, but can also engage users in an intelligent conversation [Alexandersson and Becker, 2001; Gustafson et al., 2000; Johnston et al., 2002]. To function effectively, each of these systems must be able to adequately interpret multimodal user inputs.…”
Section: Related Work
confidence: 99%