Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval 2020
DOI: 10.1145/3397271.3401110

Open-Retrieval Conversational Question Answering

Abstract: Recent studies on Question Answering (QA) and Conversational QA (ConvQA) emphasize the role of retrieval: a system first retrieves evidence from a large collection and then extracts answers. This open-retrieval ConvQA setting typically assumes that each question is answerable by a single span of text within a particular passage (a span answer). The supervision signal is thus derived from whether or not the system can recover an exact match of this ground-truth answer span from the retrieved passages. This meth…


Cited by 91 publications (149 citation statements)
References 53 publications (164 reference statements)
“…It should be noted that, different from previous work that only leverages the first term in the reading score (i.e., Xiong et al., 2020; Qu et al., 2020), our added second term improved inference performance. This is because, during training, the span label of a document that does not contain an answer is set to (0, 0), and such negative documents are the majority.…”
Section: Framework Components (contrasting)
confidence: 55%
“…It should be noted that, different from previous work that only leverages the first term in the reading score (i.e., Qu et al., 2020), our added second term improved inference performance. This is because, during training, the span label of a document that does not contain an answer is set to (0, 0), and such negative documents are the majority.…”
Section: Framework Components (mentioning)
confidence: 71%
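The two-term reading score described in the quote above can be sketched as follows. This is a minimal illustration, not the cited papers' exact formulation: the function name and the use of the (0, 0) span's logits as a no-answer score are assumptions, motivated by the quote's note that negative documents are trained with span label (0, 0).

```python
def reading_score(start_logits, end_logits, span):
    """Score a candidate answer span (s, e) in one retrieved document.

    First term: the span score used by prior work.
    Second term (assumed form): subtract the "null span" score at
    position (0, 0). Documents without an answer are trained toward
    span label (0, 0), so high logits there signal no-answer evidence.
    """
    s, e = span
    span_score = start_logits[s] + end_logits[e]   # first term
    null_score = start_logits[0] + end_logits[0]   # no-answer evidence
    return span_score - null_score                 # two-term score

# Toy example: strong no-answer evidence at position 0 pulls the score down.
start = [4.0, 1.0, 0.5]
end = [3.5, 0.2, 2.0]
print(reading_score(start, end, (1, 2)))  # → -4.5
```

Under this sketch, a document whose best evidence is "no answer here" is ranked below one containing a confident span, which matches the quote's claim that the second term helps at inference time when negatives dominate.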
“…While there are some prior data engineering solutions to "model patching", including augmentation (Wei and Zou, 2019; Kaushik et al., 2019; Goel et al., 2021a), weak labeling (Chen et al., 2020), and synthetic data generation (Murty et al., 2020), due to the noise in WIKIPEDIA we repurpose BOOTLEGSPORT using weak labeling to modify training labels and correct for this noise. Our weak-labeling technique works as follows: any existing mention from strong-sport-cues that is labeled as a country is relabeled as a national sports team for Subpop.…”
Section: Repurposing With Weak Labeling (mentioning)
confidence: 99%
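The relabeling rule quoted above can be sketched as a simple weak-labeling pass. The record schema, the cue-set contents, and the function name below are illustrative assumptions; the actual BOOTLEGSPORT pipeline operates over WIKIPEDIA anchor data with its own format.

```python
# Hypothetical strong sport cues; the real strong-sport-cues set differs.
STRONG_SPORT_CUES = {"national team", "qualifier", "friendly match"}

def weak_relabel(mentions):
    """Apply the quoted rule: any mention carrying a strong sport cue
    that is labeled as a country is relabeled as a national sports team."""
    for m in mentions:
        if m["label"] == "country" and m["cue"] in STRONG_SPORT_CUES:
            m["label"] = "national sports team"
    return mentions

mentions = [
    {"text": "Brazil", "cue": "national team", "label": "country"},
    {"text": "Brazil", "cue": "coffee export", "label": "country"},
]
weak_relabel(mentions)
print([m["label"] for m in mentions])
# → ['national sports team', 'country']
```

The pass only rewrites labels where a cue fires, so ordinary country mentions are left untouched; this is what lets the weak labels correct sport-context noise without globally changing the training data.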