Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining 2019
DOI: 10.1145/3289600.3291012
Learning to Transform, Combine, and Reason in Open-Domain Question Answering

Abstract: Users seek direct answers to complex questions from large open-domain knowledge sources like the Web. Open-domain question answering has become a critical task to be solved for building systems that help address users' complex information needs. Most open-domain question answering systems use a search engine to retrieve a set of candidate documents, select one or a few of them as context, and then apply reading comprehension models to extract answers. Some questions, however, require taking a broader context i…
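The retrieve-then-read pipeline the abstract describes can be sketched in a few lines. This is a minimal, illustrative stand-in: the term-overlap retriever and the sentence-overlap "reader" are assumptions made for the sake of a runnable example, not the paper's actual search engine or reading-comprehension model.

```python
def tokenize(text):
    """Lowercase and strip trailing punctuation from each token."""
    return [t.lower().strip(".,?") for t in text.split()]

def retrieve(question, documents, k=2):
    """Rank documents by term overlap with the question (stand-in for a search engine)."""
    q = set(tokenize(question))
    scored = sorted(documents, key=lambda d: -len(q & set(tokenize(d))))
    return scored[:k]

def read(question, context):
    """Pick the context sentence with the most question-term overlap
    (stand-in for a reading-comprehension model)."""
    q = set(tokenize(question))
    sentences = [s.strip() for s in context.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q & set(tokenize(s))))

def answer(question, documents):
    # Step 1: retrieve candidate documents; step 2: read the merged context.
    context = " ".join(retrieve(question, documents))
    return read(question, context)

docs = [
    "Paris is the capital of France. It lies on the Seine.",
    "Berlin is the capital of Germany.",
    "The Eiffel Tower is in Paris.",
]
print(answer("What is the capital of France?", docs))  # -> Paris is the capital of France
```

The paper's point is that this per-document pipeline can miss answers that require combining evidence across documents, which motivates the transform-combine-reason architecture of the title.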

Cited by 33 publications (26 citation statements)
References 11 publications (2 reference statements)
“…In this work we propose an unsupervised algorithm for the selection of multi-hop justifications from unstructured knowledge bases (KBs). Unlike other supervised selection methods (Dehghani et al., 2019; Bao et al., 2016; Wang et al., 2018b,a; Tran and Niederée, 2018; Trivedi et al., 2019), our approach does not require any training data for justification selection. Unlike approaches that rely on structured KBs, which are expensive to create (Khashabi et al., 2016; Khot et al., 2017; Khashabi et al., 2018b; Cui et al., 2017; Bao et al., 2016), our method operates over KBs of only unstructured texts.…”
Section: Introduction
confidence: 99%
“…Nevertheless, we compared with, and outperformed, the state-of-the-art system DrQA [12], which can both select relevant documents and extract answers from them. Traditional fact-centric QA over text and multi-document reading comprehension are recently emerging as a joint topic referred to as open-domain question answering [16, 42].…”
Section: Related Work
confidence: 99%
“…Some of the above datasets provide additional meta-data; we do not use this additional information in our experiments. We observe that those low-ranked passages play a critical role in improving the accuracy, thus we retain all supporting passages as the inputs of our model, and compare against (Chen et al., 2017), R³ (Wang et al., 2018a), TraCRNet (Dehghani et al., 2019a), Shared-Norm (Clark and Gardner, 2018), and HAS-QA (Pang et al., 2019). Human performance is referenced from the dataset paper.…”
Section: Experiments, 4.1 Datasets
confidence: 99%
“…In Wang et al. (2018b), cross-passage answer verification is proposed, in which all the word embeddings in a passage are summed through an attention mechanism to represent an answer candidate, and then each answer candidate attends to other candidates to collect supportive information. In Dehghani et al. (2019a), multi-hop reasoning is implemented by a Universal Transformer (Dehghani et al., 2019b), which is mainly based on multi-head self-attention (Vaswani et al., 2017) and a transition function.…”
Section: Related Work
confidence: 99%
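The two mechanisms named in the statement above — attention-pooling word embeddings into a candidate vector, then letting candidates attend to one another — can be sketched with toy tensors. The dimensions, random vectors, and scoring functions below are illustrative assumptions, not the architectures of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pool_candidate(word_embs, query):
    """Attention-weighted sum of a passage's word embeddings,
    giving one vector per answer candidate."""
    weights = softmax(word_embs @ query)   # (n_words,)
    return weights @ word_embs             # (dim,)

def cross_verify(candidates):
    """Each candidate attends over the other candidates to collect
    supporting evidence (cross-passage verification sketch)."""
    verified = []
    for i, c in enumerate(candidates):
        others = np.stack([candidates[j] for j in range(len(candidates)) if j != i])
        att = softmax(others @ c)          # attention over the other candidates
        verified.append(c + att @ others)  # candidate plus collected evidence
    return np.stack(verified)

dim = 8
query = rng.normal(size=dim)
cands = [pool_candidate(rng.normal(size=(5, dim)), query) for _ in range(3)]
out = cross_verify(cands)
print(out.shape)  # (3, 8)
```

The multi-hop variant in Dehghani et al. (2019a) instead applies a shared self-attention-plus-transition step repeatedly (the Universal Transformer's recurrence over depth), so each "hop" refines the representations rather than a single cross-candidate pass.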