2019
DOI: 10.1007/978-3-030-36805-0_13

THUIR at the NTCIR-14 WWW-2 Task


Cited by 4 publications (3 citation statements)
References 31 publications
“…Learning-to-rank, a popular machine learning method for ranking, has been widely used in information retrieval and data mining [12,15]. In this section, we implement several learning-to-rank methods with diverse features.…”
Section: Learning To Rank (mentioning)
confidence: 99%
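The statement above refers to implementing several learning-to-rank methods over diverse features. As a rough illustration only, and not the cited authors' actual configuration, a LambdaMART-style ranker can be trained on hand-crafted query-document features with LightGBM; the features, labels, and query group sizes below are invented placeholders.

```python
# Hedged sketch of a learning-to-rank setup: LambdaMART-style ranking with LightGBM.
# Features, relevance labels, and query groups are toy values for illustration.
import numpy as np
import lightgbm as lgb

# Each row is a query-document pair described by three example features,
# e.g. a BM25 score, query-term coverage, and document length.
X_train = np.array([
    [12.3, 0.8, 250.0],
    [ 8.1, 0.5, 980.0],
    [ 3.4, 0.2, 120.0],
    [10.7, 0.9, 400.0],
    [ 2.2, 0.1, 300.0],
])
y_train = np.array([2, 1, 0, 2, 0])   # graded relevance labels
group = [3, 2]                        # first query has 3 candidates, second has 2

ranker = lgb.LGBMRanker(
    objective="lambdarank",
    metric="ndcg",
    n_estimators=100,
    learning_rate=0.05,
)
ranker.fit(X_train, y_train, group=group)

# Score new candidate documents for a query and sort them by predicted score.
X_test = np.array([[9.5, 0.7, 350.0], [1.8, 0.1, 90.0]])
scores = ranker.predict(X_test)
print(np.argsort(-scores))  # indices of candidates from most to least relevant
```

The same feature matrix could feed other learning-to-rank objectives (pointwise regression, pairwise RankNet-style losses) by swapping the model while keeping the per-query grouping.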
“…Year  Group  Runid               MAP    nDCG@10
  2019  TREC   BM25                0.237  0.517
               ucas_runid1 [13]    0.264  0.644
               TUW19-d3-re [35]    0.271  0.644
               idst_bert_r1 [100]  0…

  Model                      nDCG@10  Q@10    nERR@10
  BM25                       0.5748   0.5850  0.6757
  Technion-E-CO-NEW-1 [81]   0.6581   0.6815  0.7791
  KASYS-E-CO-NEW-1 [89]      0.6935   0.7123  0.7959
  PARADE-Max                 0.6337   0.6556  0.7395
  PARADE-Transformer         0.6897   0.7016  0.8090

structBERT [95], which strengthens on the task of sentence order prediction. All PARADE variants outperform ucas_runid1 and TUW19-d3-re in terms of nDCG@10, but cannot outperform idst_bert_r1.…”
Section: Year Group Runid (mentioning)
confidence: 99%
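The runs in the tables above are compared mainly by nDCG@10 (alongside Q@10 and nERR@10). For reference, here is a small sketch of how nDCG@10 is typically computed from a ranked list of graded relevance labels, using the common 2^rel - 1 gain and log2 rank discount; the label list is made up and does not correspond to any run above.

```python
# Hedged sketch of nDCG@10 from graded relevance labels of a ranked list.
import math

def dcg_at_k(labels, k=10):
    """Discounted cumulative gain with the 2^rel - 1 gain function."""
    return sum((2 ** rel - 1) / math.log2(i + 2) for i, rel in enumerate(labels[:k]))

def ndcg_at_k(labels, k=10):
    """DCG of the system ranking normalised by the DCG of the ideal ranking."""
    ideal_dcg = dcg_at_k(sorted(labels, reverse=True), k)
    return dcg_at_k(labels, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Relevance labels of the top-10 documents returned by a hypothetical run.
run_labels = [3, 2, 0, 1, 2, 0, 0, 1, 0, 0]
print(round(ndcg_at_k(run_labels, 10), 4))
```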
“…However, state-of-the-art retrieval systems are usually constructed based on neural models trained with large-scale annotated data. Hence, IR researchers propose to utilize pre-trained language models (PLM), i.e., large-scale neural models trained without supervised data for language understanding, to conduct effective retrieval [13,17,36,43]. Previous studies [7,10,16,42] have shown that PLM such as BERT [9] and RoBERTa [18] significantly outperform existing neural retrieval models on passage and document retrieval datasets like MS MARCO and TREC DL in both zero-shot and few-shot settings [8,25,35].…”
Section: Introduction (mentioning)
confidence: 99%
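The last statement describes retrieval built on pre-trained language models such as BERT and RoBERTa. A minimal cross-encoder re-ranking sketch with Hugging Face Transformers is shown below; the checkpoint name, query, and candidate documents are placeholders chosen for illustration and are not taken from the cited work.

```python
# Hedged sketch of PLM-based re-ranking: a cross-encoder scores each
# (query, document) pair jointly and candidates are re-sorted by score.
# "cross-encoder/ms-marco-MiniLM-L-6-v2" is an example checkpoint, assumed here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "cross-encoder/ms-marco-MiniLM-L-6-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

query = "ad-hoc web search evaluation"
candidates = [
    "The NTCIR WWW task studies ad-hoc retrieval over web documents.",
    "A recipe for baking sourdough bread at home.",
]

# Encode query and document together so the model can attend across both texts.
inputs = tokenizer([query] * len(candidates), candidates,
                   padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)

# Higher score means more relevant; sort candidates accordingly.
for doc, score in sorted(zip(candidates, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {doc}")
```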