2014
DOI: 10.1609/aaai.v28i1.8934
Collaborative Models for Referring Expression Generation in Situated Dialogue

Abstract: In situated dialogue with artificial agents (e.g., robots), although a human and an agent are co-present, the agent's representation and the human's representation of the shared environment are significantly mismatched. Because of this misalignment, our previous work has shown that when the agent applies traditional approaches to generate referring expressions for describing target objects with minimum descriptions, the intended objects often cannot be correctly identified by the human. To address this problem…

Cited by 13 publications (6 citation statements) · References 13 publications
“…The learned weights can be applied to referential grounding and/or word model learning algorithms to improve the referential grounding performance. They can also be utilized by referring expression generation algorithms (e.g., Fang et al 2013;Fang, Doering, and Chai 2014) to facilitate referential communication between robots and humans. Our current evaluation is based on several simplifications, including the simulation of perceptual errors and the strong assumption that the correct grounding information can be provided to the robot through dialogue with a human.…”
Section: Discussion
confidence: 99%
“…Going beyond one-shot reference, Zarrieß and Schlangen (2016) generate incrementally produced installments to gradually guide the addressee to the intended referent. In Fang et al. (2014, 2015), both installments and deictic gestures are used to account for perceptual mismatches between humans and artificial agents in situated dialog. Mental states and perceptual capabilities of interlocutors play an important role in natural communication, but are rarely considered in REG (but see, e.g., Horacek 2005 for an exception).…”
Section: The REG Task
confidence: 99%
“…For linguistic interaction in shared visual environments, additional problems arise, such as perceptual mismatches between interlocutors (cf. Fang et al. 2013, 2014, 2015 for related work in situated dialog with artificial agents).…”
Section: Visual REG
confidence: 99%
“…These results have shown that collaborative models are more effective in mitigating perceptual differences between humans and robots in referential communication. More details about the approach and empirical results are described by Fang, Doering, and Chai (2014).…”
Section: Grounded Language Generation
confidence: 99%
“…To understand this new challenge, we have revisited the problem of REG in the context of a mismatched perceptual basis (Fang et al. 2013). We extended a well-known graph-based approach (Krahmer, van Erp, and Verleg 2003) that has proven effective in previous work, by incorporating perceptual uncertainties into its cost functions.…”
Section: Grounding Language to Perception
confidence: 99%
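The cost-based REG idea referenced in the last statement can be illustrated with a minimal sketch: attributes the agent perceives less reliably are assigned higher costs, and the generator greedily adds low-cost attributes until the target is distinguished from all distractors. All object names, attribute sets, and cost values below are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of cost-based referring expression generation, with
# perceptual uncertainty folded into attribute costs (higher cost =
# less reliably perceived attribute). Illustrative data, not the
# authors' actual model or cost function.

def generate_re(target, distractors, costs):
    """Greedily pick attributes of `target` that rule out all
    distractors, preferring low-cost (reliably perceived) attributes.
    Returns a partial description dict, or None if the target cannot
    be distinguished."""
    remaining = list(distractors)
    description = {}
    # Try the cheapest (most reliably perceived) attributes first.
    for attr, value in sorted(target.items(), key=lambda kv: costs[kv[0]]):
        if not remaining:
            break
        if any(d.get(attr) != value for d in remaining):
            # This attribute discriminates: keep it, drop ruled-out objects.
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
    return description if not remaining else None

# Hypothetical scene: the color model is noisy, so "color" is costly.
costs = {"type": 1.0, "size": 1.5, "color": 3.0}
target = {"type": "mug", "size": "large", "color": "red"}
distractors = [
    {"type": "mug", "size": "small", "color": "red"},
    {"type": "book", "size": "large", "color": "blue"},
]
print(generate_re(target, distractors, costs))
# → {'type': 'mug', 'size': 'large'}
```

Under this weighting the generator describes the target by type and size and avoids the unreliable color attribute, which is the intuition behind folding perceptual uncertainty into the cost function.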