2019
DOI: 10.48550/arxiv.1909.02768
Preprint
Pairwise Learning to Rank by Neural Networks Revisited: Reconstruction, Theoretical Analysis and Practical Performance

Cited by 2 publications (2 citation statements); References 0 publications.
“…While the authors in (Damke & Hüllermeier, 2021) propose the family of so-called RankGNNs, their competitors are graphs. (Rigutini et al, 2011) applies a neural network approach for preference learning, (Köppel et al, 2019) generalizes (Burges et al, 2005), but these methods require queries as input, which solve a different problem from ours. (Maurya et al, 2021) proposes the first GNN-based model to approximate betweenness and closeness centrality, facilitating locating influential nodes in the graphs in terms of information spread and connectivity.…”
Section: Related Work
confidence: 99%
“…Learning to Rank formulations for answer selection in QA systems is common practice, most frequently relying on pointwise ranking models (Severyn and Moschitti, 2015;Garg et al, 2019). Our use of discriminative re-ranking (Collins and Koo, 2005) with softmax loss is closer to learning a pairwise ranking by maximizing the multiclass margin between correct and incorrect answers (Joachims, 2002;Burges et al, 2005;Köppel et al, 2019). This is an important distinction from TREC-style answer selection as our ST-generated candidate responses have lower semantic, syntactic, and lexical variance, making pointwise methods less effective.…”
Section: Related Work
confidence: 99%
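The citing passages above contrast pointwise ranking with the pairwise approach (Burges et al., 2005; Köppel et al., 2019), which learns from ordered pairs of items rather than absolute relevance scores. A minimal sketch of the RankNet-style pairwise logistic loss follows; the scores here are hypothetical placeholders standing in for the output of a learned scoring function, not the implementation of any cited paper:

```python
import numpy as np

def pairwise_logistic_loss(s_preferred, s_other):
    """RankNet-style pairwise loss: log(1 + exp(-(s_i - s_j))).

    Small when the preferred item's score exceeds the other's,
    large when the order is violated.
    """
    return np.log1p(np.exp(-(s_preferred - s_other)))

# Hypothetical scores for a correct and an incorrect answer candidate.
s_correct, s_incorrect = 2.0, 0.5
loss = pairwise_logistic_loss(s_correct, s_incorrect)
```

Training then minimizes this loss over all labeled pairs, so the model only needs to order items within a query, which is why such methods "require queries as input" as the first citation statement notes.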