Complement Lexical Retrieval Model with Semantic Residual Embeddings (2021)
DOI: 10.1007/978-3-030-72113-8_10

Cited by 82 publications (70 citation statements). References 22 publications.
“…We implement approximate search to retrieve using a linear combination of two systems by re-ranking the n-best top-scoring candidates from each system. Prior and concurrent work has also used hybrid sparse-dense models (Guo et al., 2016a; Seo et al., 2019; Karpukhin et al., 2020; Ma et al., 2020; Gao et al., 2020). Our contribution is to assess the impact of sparse-dense hybrids as the document length grows.…”
Section: Sparse-dense Hybrids (Hybrid) (mentioning)
Confidence: 99%
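The linear combination this excerpt describes can be sketched in a few lines of Python. The version below is a minimal, illustrative sketch: the interpolation weight alpha, the default n_best, and the toy score dictionaries are assumptions made for the example, not values from the quoted work.

def hybrid_rerank(sparse_scores, dense_scores, alpha=0.5, n_best=100):
    """Re-rank the union of each system's n-best lists by interpolated score.

    sparse_scores, dense_scores: dicts mapping doc_id -> retrieval score.
    alpha: weight on the dense system (assumed, tunable on dev data).
    """
    top_sparse = sorted(sparse_scores, key=sparse_scores.get, reverse=True)[:n_best]
    top_dense = sorted(dense_scores, key=dense_scores.get, reverse=True)[:n_best]
    candidates = set(top_sparse) | set(top_dense)

    def score(doc_id):
        # Documents missing from one system's list default to 0.0 here;
        # real systems often need a floor value or score normalization,
        # since sparse (e.g. BM25) and dense scores live on different scales.
        s = sparse_scores.get(doc_id, 0.0)
        d = dense_scores.get(doc_id, 0.0)
        return (1 - alpha) * s + alpha * d

    return sorted(candidates, key=score, reverse=True)

# Example with toy scores:
ranked = hybrid_rerank({"d1": 12.3, "d2": 7.1}, {"d1": 0.62, "d3": 0.88})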
“…In the context of transformers, the general setup of ranking with dense representations involves learning transformer-based encoders that convert queries and texts into dense, fixed-size vectors. In the simplest approach, ranking becomes the problem of approximate nearest neighbor (ANN) search based on some simple metric such as cosine similarity (Xiong et al., 2020; Lu et al., 2020; Reimers and Gurevych, 2019; Gao et al., 2020b; Karpukhin et al., 2020; Qu et al., 2020; Hofstätter et al., 2020a; Lin et al., 2020b). However, recognizing that accurate ranking cannot be captured via simple metrics, researchers have explored using more complex machinery to compare dense representations (Humeau et al., 2020; Khattab and Zaharia, 2020).…”
Section: Learned Dense Representations (mentioning)
Confidence: 99%
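The "simple metric" setup in this excerpt reduces ranking to nearest-neighbor search over encoder outputs. A minimal sketch follows, using random vectors as stand-ins for encoder outputs; a real system would use a trained transformer encoder and an ANN library such as FAISS at scale, rather than the exact brute-force scan shown here.

import numpy as np

# Stand-ins for encoder outputs (the shapes and dtype are illustrative).
rng = np.random.default_rng(0)
doc_vecs = rng.normal(size=(10_000, 768)).astype(np.float32)  # encoded corpus
query_vec = rng.normal(size=768).astype(np.float32)           # encoded query

# Normalize so that a dot product equals cosine similarity.
doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True)
query_vec /= np.linalg.norm(query_vec)

scores = doc_vecs @ query_vec        # cosine similarity to every document
top_k = np.argsort(-scores)[:10]     # indices of the 10 highest-scoring documents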
“…A common choice for dense retrieval is to fine-tune a transformer network like BERT (Devlin et al., 2018) on a given training corpus with queries and relevant documents (Guo et al., 2020; Guu et al., 2020; Gao et al., 2020; Karpukhin et al., 2020; Luan et al., 2020). Recent work showed that combining dense approaches with sparse, lexical approaches can further boost the performance (Luan et al., 2020; Gao et al., 2020). While these approaches have been tested on various information retrieval and question answering datasets, their performance was only evaluated on fixed, rather small indexes.…”
Section: Related Work (mentioning)
Confidence: 99%
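As one concrete instantiation of the fine-tuning setup described above, the sketch below uses the sentence-transformers library to train a BERT bi-encoder on (query, relevant document) pairs with in-batch negatives. The library choice, checkpoint name, hyperparameters, and toy training pair are assumptions for illustration, not details from the quoted works.

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Illustrative setup: a plain BERT checkpoint wrapped with mean pooling.
model = SentenceTransformer("bert-base-uncased")

# One InputExample per (query, relevant document) pair from the training corpus.
train_examples = [
    InputExample(texts=["what is dense retrieval?",
                        "Dense retrieval encodes queries and documents as vectors."]),
    # ... more (query, relevant document) pairs
]
loader = DataLoader(train_examples, shuffle=True, batch_size=16)

# In-batch negatives: the other documents in each batch serve as non-relevant examples.
loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)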
“…by using cosine-similarity. Outperformance over sparse lexical approaches has been shown for various datasets (Gillick et al., 2018; Guo et al., 2020; Guu et al., 2020; Gao et al., 2020).…”
Section: Introduction (mentioning)
Confidence: 99%