2020
DOI: 10.1609/aaai.v34i05.6323

Deep Attentive Ranking Networks for Learning to Order Sentences

Abstract: We present an attention-based ranking framework for learning to order sentences given a paragraph. Our framework is built on a bidirectional sentence encoder and a self-attention based transformer network to obtain an input order invariant representation of paragraphs. Moreover, it allows seamless training using a variety of ranking based loss functions, such as pointwise, pairwise, and listwise ranking. We apply our framework on two tasks: Sentence Ordering and Order Discrimination. Our framework outperforms …

Cited by 33 publications (37 citation statements)
References 17 publications
“…This allows the original input to then be matrix-multiplied by the row-stochastic permutation matrix, resulting in the desired reordering (Nishida & Nakayama, 2017; Emami & Ranka, 2018). It is a sign of the complexity of the field that no clear preferred formalization of its core challenge has emerged, with ranking (for example) still finding useful application in active research (Kumar et al, 2020).…”
Section: Progress in Permutation Learning
confidence: 99%
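The reordering mechanism this statement describes can be sketched with a toy example. The sentence representations and permutation below are made up for illustration; they are not from any cited implementation:

```python
import numpy as np

# Toy sentence representations for a 3-sentence paragraph (one row per sentence).
sentences = np.array([
    [0.1, 0.2],  # sentence 0
    [0.3, 0.4],  # sentence 1
    [0.5, 0.6],  # sentence 2
])

# A hard permutation matrix: row i is one-hot at the index of the input
# sentence that should appear at position i of the output.
P = np.array([
    [0.0, 0.0, 1.0],  # position 0 <- sentence 2
    [1.0, 0.0, 0.0],  # position 1 <- sentence 0
    [0.0, 1.0, 0.0],  # position 2 <- sentence 1
])

# Matrix-multiplying by P reorders the rows of the input.
reordered = P @ sentences

# A learned row-stochastic relaxation (rows sum to 1 but are not one-hot)
# yields a soft mixture of rows rather than an exact reordering.
soft_P = np.array([
    [0.1, 0.1, 0.8],
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
])
soft_reordered = soft_P @ sentences
```

The soft version is what makes the operation differentiable end to end; a hard permutation is typically recovered from it at inference time.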
“…Alternatively, if a ranking framework is applied, where a score is generated for each element and subsequently used to sort all elements into a new permutation, we gain access to well-documented listwise losses, such as those successfully employed in the ListNet or ListMLE (Kumar et al, 2020) frameworks. Many metrics that would lend themselves to ordering challenges do not have defined derivatives over their entire domain.…”
Section: Differentiable Loss Functions
confidence: 99%
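The listwise loss this statement names can be illustrated with a minimal numeric sketch of the ListMLE objective, i.e. the negative Plackett-Luce log-likelihood of the ground-truth order. The function name and the convention that higher scores mean earlier positions are assumptions of this sketch, not details of any cited implementation:

```python
import math

def listmle_loss(scores, true_order):
    """Negative Plackett-Luce log-likelihood of the ground-truth order.

    scores: one predicted score per sentence.
    true_order: sentence indices in their correct order.
    """
    # Scores arranged in ground-truth order.
    ordered = [scores[i] for i in true_order]
    loss = 0.0
    for step in range(len(ordered)):
        # Log-partition over the sentences not yet placed: the correct
        # next sentence competes with everything that remains.
        log_z = math.log(sum(math.exp(s) for s in ordered[step:]))
        loss -= ordered[step] - log_z
    return loss
```

Scores that already rank the true order first give a small loss, while scores that invert it are penalised, which is what makes the objective usable for training a scorer by gradient descent.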
“…(1) Ranking or Sorting frameworks: Pairwise Model; RankTxNet (Kumar et al, 2020); B-TSort (Prabhumoye et al, 2020).…”
Section: Baselines
confidence: 99%
“…In recent years, several approaches based on ranking or sorting frameworks have been developed to deal with this task. RankTxNet (Kumar et al, 2020) computes a score for each sentence and sorts these scores with ranking-based loss functions. Pairwise Model adopts a pairwise ranking algorithm to learn the relative order of each sentence pair.…”
Section: Introduction
confidence: 99%
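The pairwise alternative this statement mentions can be sketched with a generic logistic pairwise loss over all sentence pairs. This is illustrative only; the function name and the convention that an earlier sentence should receive a higher score are assumptions of the sketch, not the exact objective of any cited model:

```python
import math

def pairwise_logistic_loss(scores, true_order):
    """Mean logistic loss over all ordered sentence pairs.

    For every pair where true_order[a] should precede true_order[b],
    penalise the model unless the earlier sentence's score exceeds
    the later one's.
    """
    loss, pairs = 0.0, 0
    for a in range(len(true_order)):
        for b in range(a + 1, len(true_order)):
            earlier, later = true_order[a], true_order[b]
            margin = scores[earlier] - scores[later]
            # Logistic penalty: near zero for large positive margins,
            # growing when the pair is scored in the wrong order.
            loss += math.log(1.0 + math.exp(-margin))
            pairs += 1
    return loss / pairs
```

Compared with the listwise view, this decomposes the ordering problem into O(n²) independent binary comparisons, which is simpler to train but ignores interactions beyond each pair.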
“…Cui et al (2018) use a transformer network for the paragraph encoder to allow for reliable paragraph encoding. Prior work (Logeswaran et al, 2018; Cui et al, 2018; Kumar et al, 2020) has treated this task as a sequence prediction task where the order of the sentences is predicted as a sequence. The decoder is initialized by the document representation and it outputs the index of sentences in sequential order.…”
Section: Introduction
confidence: 99%