Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval 2019
DOI: 10.1145/3331184.3331265
Asking Clarifying Questions in Open-Domain Information-Seeking Conversations

Abstract: Users often fail to formulate their complex information needs in a single query. As a consequence, they may need to scan multiple result pages or reformulate their queries, which may be a frustrating experience. Alternatively, systems can improve user satisfaction by proactively asking questions of the users to clarify their information needs. Asking clarifying questions is especially important in conversational systems since they can only return a limited number of (often only one) result(s). In this paper, we…

Cited by 221 publications (303 citation statements)
References 95 publications
“…However, there are known game elements that can be reused in many studies, for instance leader boards and points. Also, we plan to extend Omicron to enable studies on conversational search on mobile devices [4].…”
Section: Discussion
confidence: 99%
“…Different from typical ad-hoc search, here we assume that after the initial query (Q 0 ) posed by a user, a clarifying question (Q) is asked by the system, followed by a user's answer (A). We use the document corpus and single-round conversations provided by Qulac, the only large-scale conversational search dataset with clarifying questions we are aware of [1]. Qulac is built on top of the TREC Web Track 2009-2012 data and consists of 10K query-question-answer tuples for 198 TREC topics with 762 facets, with each Q&A corresponding to a facet.…”
Section: Methods
confidence: 99%
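The excerpt above describes Qulac's single-round structure: each record ties an initial query Q0, a system clarifying question Q, and a user answer A to one facet of a TREC topic. A minimal sketch of such a record, assuming illustrative field names (this is not the dataset's actual schema):

```python
from dataclasses import dataclass

@dataclass
class QulacTurn:
    """One single-round Qulac record, as described in the excerpt.

    Field names are hypothetical; Qulac itself covers 198 TREC Web
    Track 2009-2012 topics with 762 facets, ~10K tuples in total.
    """
    topic_id: int        # TREC topic the conversation belongs to
    facet_id: int        # the facet (information need) within the topic
    initial_query: str   # Q0: the user's initial query
    question: str        # Q: the system's clarifying question
    answer: str          # A: the user's answer, revealing the facet

# Illustrative record (values invented for the sketch):
turn = QulacTurn(
    topic_id=1,
    facet_id=2,
    initial_query="obama family tree",
    question="are you interested in the president's ancestors?",
    answer="yes, show me his ancestors",
)
```

Keeping Q0, Q, and A together in one record mirrors the single-round assumption: the model conditions on the full (Q0, Q, A) tuple rather than on a multi-turn history.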
“…We define four polarity classes: Positive (P), Negative (N), "I don't know" (idk), and Other (O). Motivated by Figure 1, we use a simple yet effective heuristic to annotate positive and negative polarity in an answer. More specifically, we tag an answer with P if it contains the term "yes", and with N if it contains the term "no".…”
Section: Answer Polarity and Informativeness
confidence: 99%
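The heuristic quoted above is simple enough to sketch directly. This is an illustrative implementation, not the citing paper's code; in particular, the precedence when an answer contains both "yes" and "no", and the exact idk matching, are assumptions the excerpt does not specify:

```python
import re

def answer_polarity(answer: str) -> str:
    """Tag an answer's polarity per the excerpt's heuristic:
    P if it contains the term "yes", N if it contains "no",
    idk for "I don't know", O (Other) otherwise.

    Assumptions: word-level matching (so "no," still counts as "no"
    but "know" does not), and "yes" takes precedence over "no".
    """
    text = answer.lower()
    tokens = re.findall(r"[a-z']+", text)  # strip punctuation into word tokens
    if "yes" in tokens:
        return "P"
    if "no" in tokens:
        return "N"
    if "i don't know" in text:
        return "idk"
    return "O"
```

Token-level matching is the main design point: a naive substring check would wrongly tag "I know what I want" as negative because "know" contains "no".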