Proceedings of the 14th ACM International Conference on Web Search and Data Mining 2021
DOI: 10.1145/3437963.3441748
Question Rewriting for Conversational Question Answering

Cited by 90 publications (66 citation statements)
References 23 publications
“…QR effectively modifies all follow-up questions such that they can be correctly interpreted outside of the conversational context as well. This extension to the conversational QA task proved especially useful while allowing retrieval models to incorporate conversational context (Voskarides et al., 2020; Vakulenko et al., 2020; Lin et al., 2020).…”
Section: Dataset
confidence: 99%
“…Many recent works attempted to conquer this task with graph-based neural architectures. Talmor and Berant (2018) and Kumar et al. (2019) … (Elgohary et al., 2019; Vakulenko et al., 2020), and QA pipelines could also decompose the original complex question into multiple shorter questions to improve model performance (Khot et al., 2020).…”
Section: Related Work
confidence: 99%
“…1, four simplification operations are applied to obtain the conversational question (CQ4), which is context-dependent and superior to its original one (SQ4) in terms of naturalness and conveying. The reverse process, i.e., Conversational Question Rewriting (CQR) (Elgohary et al., 2019; Voskarides et al., 2020), which rewrites CQ4 into SQ4, has been widely explored in the literature (Vakulenko et al., 2020). Although the proposed methods for CQR can be easily adopted for CQS, they do not always generate satisfactory results as they are all trained to optimize a maximum likelihood estimation (MLE) objective, which gives equal attention to generating each question token.…”
Section: Ira Hayes Him
confidence: 99%
“…As with previous studies (Elgohary et al., 2019; Vakulenko et al., 2020; Lin et al., 2020a), we conduct experiments on the CANARD dataset (Elgohary et al., 2019). In addition, we evaluate the model performance on the CAsT dataset (Dalton et al., 2019), which is built for conversational search. Different from CANARD, its context only contains questions without corresponding answers.…”
Section: Datasets
confidence: 99%