2023
DOI: 10.1007/978-3-031-28238-6_2
Parameter-Efficient Sparse Retrievers and Rerankers Using Adapters

Cited by 2 publications (2 citation statements)
References 23 publications
“…Thus, our PEFA is also applicable to both pre-trained and fine-tuned ERMs, even ones initialized from black-box LLMs. Note that PEFA is orthogonal and complementary to most existing literature that aims to obtain better pre-trained or fine-tuned ERMs at the learning stage, including recent studies of the parameter-efficient fine-tuning of ERMs [28,37,44]. Finally, for ease of discussion, we assume embeddings obtained from ERMs are unit-norm (i.e., ℓ2-normalized), hence the inner product is equivalent to the cosine similarity.…”
Section: Problem Statement
Mentioning confidence: 99%
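As an aside on the unit-norm assumption in the statement above: once embeddings are ℓ2-normalized, the inner product and the cosine similarity coincide, since cos(q, d) = ⟨q, d⟩ / (‖q‖‖d‖) = ⟨q, d⟩ when ‖q‖ = ‖d‖ = 1. A minimal numpy sketch of this identity (the vector values are made up for illustration):

import numpy as np

q = np.array([0.3, -1.2, 0.5])   # hypothetical query embedding
d = np.array([0.1, 0.4, -0.9])   # hypothetical document embedding

q = q / np.linalg.norm(q)        # l2-normalize to unit length
d = d / np.linalg.norm(d)

inner = q @ d                                                   # inner product of unit vectors
cosine = (q @ d) / (np.linalg.norm(q) * np.linalg.norm(d))      # cosine similarity

assert np.isclose(inner, cosine)  # identical once both norms are 1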
“…To avoid the expensive full-parameter fine-tuning of ERMs for various downstream tasks, there are some preliminary studies on the parameter-efficient fine-tuning of ERMs [28,37,44]. Nevertheless, as pointed out by [37], naively applying existing parameter-efficient fine-tuning methods from the NLP literature, such as Adapter [18], prefix-tuning [33], and LoRA [19], often results in limited success for ERMs in retrieval applications.…”
Section: Parameter-Efficient Tuning of ERMs
Mentioning confidence: 99%
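For context, the adapter approach referenced in this statement (Adapter [18]) inserts a small bottleneck module into a frozen transformer and trains only the added parameters. A minimal PyTorch sketch of the general idea follows; the dimensions, placement, and class name are illustrative assumptions, not the configuration used in the cited paper:

import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project, non-linearity, up-project, with a residual connection."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only these adapter weights are trained; the backbone stays frozen.
        return x + self.up(self.act(self.down(x)))

# Usage sketch: apply the adapter to dummy hidden states of a frozen encoder.
hidden = torch.randn(2, 16, 768)            # (batch, seq_len, hidden_dim) dummy activations
adapter = BottleneckAdapter(hidden_dim=768)
out = adapter(hidden)                       # same shape as the input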