Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2022
DOI: 10.1145/3511808.3557456
SpaDE: Improving Sparse Representations using a Dual Document Encoder for First-stage Retrieval

Cited by 11 publications (5 citation statements)
References 28 publications

Citation statements (ordered by relevance):
“…This analysis led to several findings about the components, including that we can remove the query expansion from a SOTA system, leading to a significant latency improvement without compromising the system's effectiveness. While this study covered the most prominent transformer-based LSR methods, several others could not be considered due to time and computing constraints (e.g., [2,4,11,25]). We plan to incorporate them into our implementation as future work.…”
Section: Discussion
Citation type: mentioning; confidence: 99%
“…In Table 2, we present a summary of LSR methods fit into our conceptual framework. We cover nearly all transformer-based LSR methods for text ranking in the literature, but omit several due to time and space limitations [2,4,11,25].…”
Section: Surveyed Learned Sparse Retrieval Methods
Citation type: mentioning; confidence: 99%
“…TILDEv2, suggested by Zhuang and Zuccon [75], applies a neural network to learn term-based representations of passages and expand the query. Similarly, SpaDE, proposed by Choi et al [76], uses a neural network to learn a sparse representation of a document and encode the document with two encoders. These models demonstrate the potential of neural networks in enhancing information retrieval by refining term weighting, expanding queries, or learning sparse representations.…”
Section: Neural Weighting Scheme
Citation type: mentioning; confidence: 99%
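
Both systems quoted above rest on the same learned-sparse-retrieval scoring step: query and document are each mapped to sparse weight vectors over the vocabulary, and relevance is their dot product, to which only the terms the two share can contribute. Below is a minimal, self-contained sketch of that step; the weights are invented for illustration, and a real system would produce them with a trained encoder.

```python
# Illustrative sketch (not any cited system's code): learned sparse retrieval
# represents queries and documents as sparse term-weight vectors and scores
# them with a dot product over the shared terms.

def sparse_dot(query_weights: dict[str, float],
               doc_weights: dict[str, float]) -> float:
    """Relevance = sum of query-weight * doc-weight over shared terms."""
    return sum(w * doc_weights[t]
               for t, w in query_weights.items() if t in doc_weights)

# Hypothetical weights; a trained encoder would produce these.
query = {"neural": 1.2, "retrieval": 0.9}
doc = {"neural": 0.8, "retrieval": 1.1, "ranking": 0.5}  # "ranking" from expansion
print(sparse_dot(query, doc))  # 1.2*0.8 + 0.9*1.1 = 1.95
```

Because only overlapping terms contribute, this score can be served from a classical inverted index, which is where the efficiency advantage of sparse methods comes from.
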
“…Tilde (Zhuang & Zuccon, 2021a) and Tildev2 (Zhuang & Zuccon, 2021b) proposed a deep query and document likelihood based model instead of a query encoder to improve the ranking efficiency. The SpaDE (Choi et al, 2022) model improves the ranking efficiency by using simplified query representations and a dual document encoder containing term weighting and term expansion components. Other approaches also tried to improve the ranking efficiency by compressing document representations (Cohen et al, 2022) and removing unnecessary word representations (COLBERTER) (Hofstätter et al, 2022).…”
Section: Related Work
Citation type: mentioning; confidence: 99%
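
The statement above gives the shape of SpaDE's document side: one component re-weights terms that already occur in the document, another adds expansion terms, and the merged output is scored against a simplified query representation. The sketch below illustrates that shape only, under stated assumptions: the component internals are random placeholders rather than the published architecture, and the max-merge and bag-of-words query are our simplifications.

```python
import torch

# Assumption-heavy sketch of a dual-document-encoder design as described in
# the quote: a term-weighting component scores terms present in the document,
# a term-expansion component injects terms that are not, and the two
# vocabulary-sized vectors are merged into one sparse representation.
# The internals below are random placeholders standing in for trained models.

VOCAB = 30522  # BERT WordPiece vocabulary size, used here for concreteness

def term_weighting(doc_term_ids: torch.Tensor) -> torch.Tensor:
    # Placeholder: give each term occurring in the document a positive weight.
    vec = torch.zeros(VOCAB)
    vec[doc_term_ids] = torch.rand(len(doc_term_ids)) + 0.5
    return vec

def term_expansion(doc_term_ids: torch.Tensor, k: int = 3) -> torch.Tensor:
    # Placeholder: inject k expansion terms chosen at random here; a trained
    # component would pick semantically related terms instead.
    vec = torch.zeros(VOCAB)
    vec[torch.randint(0, VOCAB, (k,))] = torch.rand(k) * 0.3
    return vec

def encode_document(doc_term_ids: torch.Tensor) -> torch.Tensor:
    # Merge the two components; element-wise max keeps the stronger signal.
    return torch.maximum(term_weighting(doc_term_ids),
                         term_expansion(doc_term_ids))

def score(query_term_ids: torch.Tensor, doc_vec: torch.Tensor) -> torch.Tensor:
    # "Simplified query representation": an unweighted bag of query term ids.
    query_vec = torch.zeros(VOCAB)
    query_vec[query_term_ids] = 1.0
    return query_vec @ doc_vec

doc_vec = encode_document(torch.tensor([2023, 2003, 7099]))
print(score(torch.tensor([2023, 2003]), doc_vec))
```
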
“…In this paper, we particularly focus on BERT based reranking approaches that jointly model query and document sequences using cross attention approach. Our objective is to explore the trade-off between effective transfer and efficient transfer rather than new architectural improvements as in (MacAvaney et al, 2019;Li et al, 2020;Hofstätter et al, 2020b;Choi et al, 2022;Fan et al, 2023;Leonhardt et al, 2023a). However, we believe our study could be extended to other kinds of reranking models such as dual encoder, hybrid models.…”
Section: Related Work
Citation type: mentioning; confidence: 99%
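
For contrast with the sparse first-stage models above, this quote concerns cross-attention rerankers, where query and document are concatenated into a single input so every query token can attend to every document token. A hedged sketch using an off-the-shelf encoder follows; the checkpoint name and the untrained single-logit head are illustrative assumptions, not the cited systems.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hedged sketch of cross-attention reranking: each (query, document) pair is
# encoded jointly so the attention layers mix the two sequences, and a head
# on the pooled output produces a relevance score. The checkpoint and the
# untrained single-logit head are illustrative, not the systems cited.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1)

def rerank(query: str, docs: list[str]) -> list[tuple[float, str]]:
    """Jointly score every (query, doc) pair, then sort by descending score."""
    inputs = tokenizer([query] * len(docs), docs,
                       padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        scores = model(**inputs).logits.squeeze(-1)
    return sorted(zip(scores.tolist(), docs), reverse=True)

print(rerank("what is sparse retrieval?",
             ["Sparse retrieval uses learned term weights.",
              "Cats are mammals."]))
```

The head here is randomly initialized, so the printed ordering is meaningless until it is fine-tuned on relevance labels; the point is only the joint encoding of each pair, which is what makes this family effective but expensive relative to the sparse models above.
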