Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.200
KILT: a Benchmark for Knowledge Intensive Language Tasks

Abstract: Challenging problems such as open-domain question answering, fact checking, slot filling and entity linking require access to large, external knowledge sources. While some models […]

Cited by 157 publications (166 citation statements)
References 31 publications
“…Unlike using a single vector for each passage, DensePhrases represents each passage with multiple phrase vectors, and the score of a passage can be computed as the maximum score of the phrases within it. It has been shown to outperform traditional sparse retrieval methods such as TF-IDF and BM25 in a range of knowledge-intensive NLP tasks (Petroni et al., 2021), including open-domain question answering (QA) (Chen et al., 2017), entity linking, and knowledge-grounded dialogue (Dinan et al., 2019).…”
Section: Has Been Shown
confidence: 99%
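The max-over-phrases scoring described in the statement above can be sketched as follows. This is a minimal illustration, not code from the cited paper: the function names and toy vectors are hypothetical, and a real system would use a trained encoder and approximate nearest-neighbor search rather than exhaustive inner products.

```python
import numpy as np

def passage_score(query_vec, phrase_vecs):
    """Score a passage as the maximum query-phrase inner product.

    phrase_vecs has one row per phrase in the passage; the passage
    inherits the score of its best-matching phrase.
    """
    return float(np.max(phrase_vecs @ query_vec))

def rank_passages(query_vec, passages):
    """Return passage indices sorted by descending score, plus scores."""
    scores = [passage_score(query_vec, p) for p in passages]
    order = sorted(range(len(passages)), key=lambda i: -scores[i])
    return order, scores

# Toy example: 2-D vectors, two passages with 3 and 2 phrases each.
query = np.array([1.0, 0.0])
passages = [
    np.array([[0.2, 0.9], [0.8, 0.1], [0.1, 0.1]]),  # best phrase score: 0.8
    np.array([[0.5, 0.5], [0.3, 0.7]]),              # best phrase score: 0.5
]
order, scores = rank_passages(query, passages)
```

Because the max is taken per passage, a passage needs only one strongly matching phrase to rank highly, unlike single-vector passage representations that average evidence across the whole passage.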
“…Following this positive finding, we further explore whether phrase retrieval can be extended to retrieval of coarser granularities, or other NLP tasks. Through fine-tuning of the query encoder with document-level supervision, we are able to obtain competitive performance on entity linking (Hoffart et al., 2011) and knowledge-grounded dialogue retrieval (Dinan et al., 2019) in the KILT benchmark (Petroni et al., 2021).…”
Section: Has Been Shown
confidence: 99%
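"Document-level supervision" of a query encoder is commonly implemented as a contrastive loss with in-batch negatives: each query's gold document is the one at the same batch index, and all other documents in the batch serve as negatives. The sketch below is a generic, hypothetical illustration of that loss under this assumption, not the cited paper's exact procedure; in the setting described above, gradients from this loss would update only the query encoder while the phrase index stays frozen.

```python
import numpy as np

def inbatch_contrastive_loss(query_vecs, doc_vecs):
    """Mean negative log-likelihood of each query's gold document.

    query_vecs, doc_vecs: (B, d) arrays; row i of doc_vecs is the
    gold document for row i of query_vecs (in-batch negatives).
    """
    scores = query_vecs @ doc_vecs.T                      # (B, B) score matrix
    scores = scores - scores.max(axis=1, keepdims=True)   # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))
```

When each query scores its gold document far above the others, the loss approaches zero; with uninformative (all-equal) scores it equals log of the batch size.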