2017
DOI: 10.1007/s11390-017-1758-3
Recent Advances on Neural Headline Generation

Cited by 41 publications (15 citation statements); references 25 publications.
“…More recently, our approach has been successfully applied to summarization (Ayana et al., 2016). They optimize neural networks for headline generation with respect to ROUGE (Lin, 2004) and also achieve significant improvements, confirming the effectiveness and applicability of our approach.…”
Section: Related Work (supporting)
confidence: 66%
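The excerpt above refers to optimizing a headline-generation model directly for ROUGE via minimum risk training (MRT), as in Ayana et al. (2017). A minimal sketch of that objective, assuming a toy unigram-recall ROUGE-1 and illustrative helper names (`rouge1`, `mrt_risk` are not from the paper):

```python
# Hedged sketch of a minimum-risk-training (MRT) objective with a ROUGE-1
# reward: the expected negative reward over sampled candidate outputs,
# weighted by a renormalized model distribution.

import math

def rouge1(candidate, reference):
    """Toy unigram-recall ROUGE-1: clipped overlap count / reference length."""
    ref_counts = {}
    for tok in reference:
        ref_counts[tok] = ref_counts.get(tok, 0) + 1
    overlap = 0
    for tok in candidate:
        if ref_counts.get(tok, 0) > 0:
            ref_counts[tok] -= 1
            overlap += 1
    return overlap / len(reference)

def mrt_risk(candidates, log_probs, reference, alpha=1.0):
    """Expected negative reward under the renormalized candidate distribution.

    candidates: token lists sampled from the model
    log_probs:  model log-probabilities of those candidates
    alpha:      sharpness of the renormalized distribution (assumption)
    """
    weights = [math.exp(alpha * lp) for lp in log_probs]
    z = sum(weights)
    return sum(w / z * -rouge1(c, reference)
               for w, c in zip(weights, candidates))

ref = "police arrest suspect in bank robbery".split()
cands = ["police arrest robbery suspect".split(),
         "man seen near bank".split()]
risk = mrt_risk(cands, [-2.0, -2.5], ref)  # lower risk = higher expected ROUGE
```

Minimizing this risk with gradient descent pushes probability mass toward high-ROUGE candidates, which is the sense in which the model is "optimized with respect to ROUGE".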
“…Optimization methods for optimizing a model with respect to evaluation scores, such as reinforcement learning (Ranzato et al., 2015; Paulus et al., 2018; Chen and Bansal, 2018; Wu and Hu, 2018) and minimum risk training (Ayana et al., 2017), have been proposed for summarization models based on neural encoder-decoders. Our method is similar to that of Ayana et al. (2017) in terms of applying MRT to neural encoder-decoders. There are two differences between our method and Ayana et al.'s: (i) our method uses only the part of the summary generated by a model within the length constraint for calculating the ROUGE score and (ii) it penalizes summaries that exceed the length of the reference regardless of their ROUGE score.…”
Section: Related Work (mentioning)
confidence: 99%
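The two modifications described in the excerpt above can be sketched as a reward function: score only the prefix within the length budget, and assign a flat penalty to any summary longer than the reference. The function name, the toy ROUGE-1, and the penalty magnitude are illustrative assumptions, not details from the cited paper:

```python
# Sketch of the length-constrained reward variant described above.

def rouge1(candidate, reference):
    """Toy unigram-recall ROUGE-1: clipped overlap count / reference length."""
    ref_counts = {}
    for tok in reference:
        ref_counts[tok] = ref_counts.get(tok, 0) + 1
    overlap = 0
    for tok in candidate:
        if ref_counts.get(tok, 0) > 0:
            ref_counts[tok] -= 1
            overlap += 1
    return overlap / len(reference)

def length_constrained_reward(candidate, reference, budget):
    """(i) Only the prefix within `budget` tokens counts toward ROUGE;
    (ii) candidates longer than the reference get a flat penalty
    regardless of their ROUGE score."""
    if len(candidate) > len(reference):
        return -1.0  # penalty value is an assumption, not from the paper
    return rouge1(candidate[:budget], reference)

ref = "police arrest suspect".split()
ok = length_constrained_reward("police arrest".split(), ref, budget=3)
too_long = length_constrained_reward("police arrest a suspect today".split(),
                                     ref, budget=3)
```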
“…Recent success in deep learning, especially encoder-decoder models (Sutskever et al., 2014; Bahdanau et al., 2015), has dramatically improved the performance of various text-generation tasks, such as translation (Johnson et al., 2017), summarization (Ayana et al., 2017), question answering (Choi et al., 2017), and dialogue response generation (Dhingra et al., 2017). In these studies on neural text generation, it has been known that a model-ensemble method, which predicts output text by averaging multiple text-generation models at decoding time, is effective even for text-generation tasks, and many state-of-the-art results have been obtained with ensemble models.…”
Section: Introduction (mentioning)
confidence: 99%