Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval 2024
DOI: 10.1145/3626772.3657861

Systematic Evaluation of Neural Retrieval Models on the Touché 2020 Argument Retrieval Subset of BEIR

Nandan Thakur,
Luiz Bonifacio,
Maik Fröbe
et al.

Abstract: The zero-shot effectiveness of neural retrieval models is often evaluated on the BEIR benchmark, a combination of different IR evaluation datasets. Interestingly, previous studies found that, particularly on the BEIR subset Touché 2020, an argument retrieval task, neural retrieval models are considerably less effective than BM25. Still, so far, no further investigation has been conducted on what makes argument retrieval so "special". To more deeply analyze the respective potential limits of neural retrieval mode…

Cited by 1 publication
References 66 publications