2021
DOI: 10.48550/arxiv.2109.05794
Preprint

Building and Evaluating Open-Domain Dialogue Corpora with Clarifying Questions

Abstract: Enabling open-domain dialogue systems to ask clarifying questions when appropriate is an important direction for improving the quality of the system response. Namely, for cases when a user request is not specific enough for a conversation system to provide an answer right away, it is desirable to ask a clarifying question to increase the chances of retrieving a satisfying answer. To address the problem of 'asking clarifying questions in open-domain dialogues': (1) we collect and release a new dataset focused on…

Cited by 2 publications (3 citation statements)
References 37 publications (50 reference statements)
“…Recently, the community has focused on continuous learning through interactions, including systems that learn a new task from instructions [57], assess their uncertainty [89] and ask feedback from humans in case of uncertainty [3,4] or for correcting possible mistakes [30].…”
Section: Background and Related Work
confidence: 99%
“…Recent efforts have also focused on interactivity and continuous learning to enable agents to interact with users to resolve the knowledge gap between them for better accuracy and transparency. This includes systems that can learn new task from instructions [Li et al., 2020], assess their uncertainty [Yao et al., 2019], ask clarifying questions [Aliannejadi et al., 2020, 2021] and seek and leverage feedback from humans to correct mistakes [Elgohary et al., 2020].…”
Section: Competition Type
confidence: 99%
“…There is a long history of competitions focused on NLU/G tasks. Especially in recent years we have seen a large number of challenges dedicated to open-domain dialog systems [Hauff et al., 2021, Dalton et al., 2020, Spina et al., 2019, Chuklin et al., 2018], such as ConvAI [Burtsev and Logacheva, 2020], ConvAI2, ConvAI3: Clarifying Questions for Open-Domain Dialogue Systems (ClariQ) [Aliannejadi et al., 2020, 2021], as well as a series of competitions of the Alexa Prize. There are great efforts in the community to advance task-oriented dialogs by suggesting competitions, such as the Dialog System Technology Challenge (DSTC-8) [Kim et al., 2019]; benchmarks and experimental platforms, e.g., Convlab, which offers the annotated MultiWOZ dataset [Budzianowski et al., 2018] and associated pre-trained reference models.…”
Section: Novelty
confidence: 99%