Proceedings of the 28th ACM International Conference on Information and Knowledge Management 2019
DOI: 10.1145/3357384.3358016
Look before you Hop: Conversational Question Answering over Knowledge Graphs Using Judicious Context Expansion

Abstract: Fact-centric information needs are rarely one-shot; users typically ask follow-up questions to explore a topic. In such a conversational setting, the user's inputs are often incomplete, with entities or predicates left out, and ungrammatical phrases. This poses a huge challenge to question answering (QA) systems that typically rely on cues in full-fledged interrogative sentences. As a solution, we develop Convex: an unsupervised method that can answer incomplete questions over a knowledge graph (KG) by maintai…
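The abstract is truncated, but its core idea is answering incomplete follow-up questions by keeping a conversational context over the KG and expanding it selectively at each turn. The sketch below is only a toy illustration of that general idea, not CONVEX's actual algorithm or scoring; the graph, entities, and matching heuristic are all hypothetical.

```python
# Toy illustration (hypothetical data, NOT the CONVEX method):
# keep a context set of entities across turns, expand it one hop per turn,
# and keep only neighbors whose connecting predicate overlaps the question.

# Minimal KG as adjacency: entity -> list of (predicate, neighbor entity)
KG = {
    "Avengers: Endgame": [("director", "Anthony Russo"),
                          ("cast member", "Robert Downey Jr.")],
    "Anthony Russo": [("sibling", "Joe Russo")],
    "Robert Downey Jr.": [("spouse", "Susan Downey")],
    "Joe Russo": [],
    "Susan Downey": [],
}

def expand_context(context, question_words):
    """Grow the context subgraph by one hop, ranking neighbors by how well
    their predicate matches the (possibly incomplete) question words."""
    frontier = []
    for entity in context:
        for predicate, neighbor in KG.get(entity, []):
            overlap = len(set(predicate.split()) & question_words)
            frontier.append((overlap, neighbor))
    # "Judicious" expansion: keep only matching neighbors, not the whole hop
    frontier.sort(reverse=True)
    best = [e for score, e in frontier if score > 0] or [frontier[0][1]]
    return context | set(best), best

# Turn 1: "Who directed Avengers: Endgame?"
context = {"Avengers: Endgame"}
context, answers = expand_context(context, {"director"})
# Turn 2, incomplete follow-up: "his sibling?" -- no entity mentioned,
# but the answer is reachable from the maintained context
context, answers = expand_context(context, {"sibling"})
print(answers)
```

The point of the second turn is that the follow-up question never names "Anthony Russo"; the maintained context set supplies the missing entity, which is the conversational setting the abstract describes.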

Cited by 78 publications (115 citation statements)
References 32 publications (55 reference statements)
“…The evaluation of interactive systems requires real user sessions, as explained in §3. Using a prototype INTSUMM system, described in §5.1, we conducted several cycles of session collection which uncovered multiple user-related challenges, in line with previous work on user task design (Christmann et al., 2019; Roit et al., 2020; Zuccon et al., 2013). In particular, recruited users may make undue interactions due to insincere or experimental behavior, yielding noisy sessions that do not reflect realistic system use.…”
Section: Session Collection (mentioning; confidence: 98%)
“…3.1.1 Dataset. We constructed datasets for CQR based on conversational challenges in CSQA [9] and ConvQuestions [3]. CSQA is the largest dataset (200k dialogues, 1.6M turns) for conversational QA over KB (Wikidata with 12.8M entities).…”
Section: Experimentation 3.1 Setup (mentioning; confidence: 99%)
“…Considering originality of action-based CQR for general CQU, we present reconstructed datasets based on conversational challenges in Complex Sequential Question Answering (CSQA) [9] and ConvQuestions [3], typical multi-domain conversational QA, with actions summarized from query types and transformation patterns.…”
Section: Introduction (mentioning; confidence: 99%)
“…Recently, conversational Knowledge Base Question Answering (KBQA) has started to attract people's attention (Saha et al., 2018; Christmann et al., 2019; Guo et al., 2018; Shen et al., 2019). Motivated by real-world conversational applications, particularly personal assistants such as Apple Siri and Amazon Alexa, the task aims to answer questions over KBs in a conversational manner.…”
Section: Introduction (mentioning; confidence: 99%)