Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/P19-1207

BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization

Abstract: The success of neural summarization models stems from the meticulous encodings of source articles. To overcome the impediments of limited and sometimes noisy training data, one promising direction is to make better use of the available training data by applying filters during summarization. In this paper, we propose a novel Bi-directional Selective Encoding with Template (BiSET) model, which leverages a template discovered from training data to softly select key information from each source article to guide its …
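
As a rough illustration of the bi-directional selective mechanism the abstract describes, the sketch below gates the article encoding with a pooled template representation and vice versa. This is a minimal sketch under assumed shapes and mean-pooling; the class and layer names (BiSelectiveGate, gate_a, gate_t) are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BiSelectiveGate(nn.Module):
    """Illustrative bi-directional selective gate: a pooled template
    representation filters the article states and vice versa.
    Shapes and layer choices are assumptions, not the paper's exact setup."""
    def __init__(self, hidden_size: int):
        super().__init__()
        # one gate per direction: article <- template and template <- article
        self.gate_a = nn.Linear(2 * hidden_size, hidden_size)
        self.gate_t = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, article: torch.Tensor, template: torch.Tensor):
        # article: (batch, src_len, hidden); template: (batch, tpl_len, hidden)
        tpl_summary = template.mean(dim=1, keepdim=True)   # (batch, 1, hidden)
        art_summary = article.mean(dim=1, keepdim=True)    # (batch, 1, hidden)
        # gate each article state with the template summary
        g_a = torch.sigmoid(self.gate_a(torch.cat(
            [article, tpl_summary.expand_as(article)], dim=-1)))
        # gate each template state with the article summary
        g_t = torch.sigmoid(self.gate_t(torch.cat(
            [template, art_summary.expand_as(template)], dim=-1)))
        return article * g_a, template * g_t
```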

Cited by 59 publications (48 citation statements).
References 20 publications (32 reference statements).
“…In recent years, sequence-to-sequence (seq2seq) [61] based neural networks have proven effective at generating fluent sentences. The seq2seq model was originally proposed for machine translation and later adapted to various natural language generation tasks, such as text summarization [10,18,19,22,25,41,48,69,71] and dialogue generation [6,17,20,21,40,50,64,81,85,86]. Rush et al. [53] applied the seq2seq mechanism with an attention model to the text summarization field.…”
Section: Text Generation Methods
confidence: 99%
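
For concreteness, here is a minimal attention-based seq2seq summarizer in PyTorch in the spirit of this line of work; the GRU/Bahdanau-style choices, sizes, and names are illustrative assumptions, not the exact model of any cited paper.

```python
import torch
import torch.nn as nn

class AttnSummarizer(nn.Module):
    """Minimal seq2seq with additive attention, sketching the kind of
    attention-based summarizer the statement refers to. Sizes are illustrative."""
    def __init__(self, vocab_size: int, emb: int = 128, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)
        self.decoder = nn.GRUCell(emb + 2 * hidden, hidden)
        self.attn = nn.Linear(2 * hidden + hidden, 1)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt):
        enc, _ = self.encoder(self.embed(src))                  # (B, S, 2H)
        state = enc.new_zeros(src.size(0), self.decoder.hidden_size)
        logits = []
        for t in range(tgt.size(1)):
            # score each encoder state against the current decoder state
            scores = self.attn(torch.cat(
                [enc, state.unsqueeze(1).expand(-1, enc.size(1), -1)], dim=-1))
            ctx = (torch.softmax(scores, dim=1) * enc).sum(dim=1)  # (B, 2H)
            state = self.decoder(
                torch.cat([self.embed(tgt[:, t]), ctx], dim=-1), state)
            logits.append(self.out(state))
        return torch.stack(logits, dim=1)                       # (B, T, V)
```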
“…allowed because of limitations in the space where the headline appears. The technology of automatic headline generation has the potential to contribute greatly to this domain, and the problems of news headline generation have motivated a wide range of studies (Wang et al., 2018; Chen et al., 2018; Kiyono et al., 2018; Cao et al., 2018; Wang et al., 2019). Table 1 shows sample headlines in three different lengths written by professional editors of a media company for the same news article: the length of the first headline, for digital media, is restricted to 10 characters, the second to 13 characters, and the third to 26 characters.…”
Section: トヨタ、エンジン車だけの車種ゼロへ 2025年ごろ (Toyota: engine-only models to reach zero by around 2025)
confidence: 99%
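
Since this statement turns on hard character limits (10, 13, and 26 characters), the toy sketch below shows one way such a budget could be enforced during greedy decoding. step_fn, the demo model, and the token strings are hypothetical placeholders, not any cited system's interface.

```python
def greedy_headline(step_fn, start_token, end_token, char_budget: int):
    """Toy length-constrained greedy decoder: append tokens only while the
    character budget (e.g. 10, 13, or 26 characters) is respected.
    step_fn(prefix) is a placeholder returning candidate next tokens
    sorted by model score; it stands in for any trained headline model."""
    headline, prefix = "", [start_token]
    while True:
        for token in step_fn(prefix):          # best-scoring candidates first
            if token == end_token:
                return headline
            if len(headline + token) <= char_budget:
                headline += token
                prefix.append(token)
                break
        else:
            return headline                    # no candidate fits the budget

# toy usage: a fake "model" that proposes one fixed token per step
demo_tokens = ["Toyota", " ends", " engine-only", " cars", "</s>"]
demo = lambda prefix: demo_tokens[len(prefix) - 1:]
print(greedy_headline(demo, "<s>", "</s>", char_budget=26))
# -> "Toyota ends engine-only" (23 characters, within the 26-character budget)
```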
“…Inspired by the results of "Retrieve, rerank and rewrite" [20] with soft templates, Wang et al. [35] proposed a new model called BiSET (Bi-directional Selective Encoding with Template for Abstractive Summarization) to enhance soft-template usage in text summarization. The work introduces: 1.…”
Section: Rush, Chopra and Weston
confidence: 99%
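
As a hedged sketch of the "retrieve" stage that soft-template summarizers such as BiSET build on, the snippet below pulls candidate templates with TF-IDF cosine similarity over the training set. The function name and the choice of TF-IDF are assumptions; the actual systems use their own IR-based retrieval, and the rerank and rewrite stages are learned models, so treat this purely as an illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_template(article, train_articles, train_summaries, top_k=5):
    """Toy 'retrieve' step of a retrieve-rerank-rewrite pipeline: find the
    training articles most similar to the input and return their summaries
    as candidate soft templates. Reranking/rewriting are left to a model."""
    vec = TfidfVectorizer().fit(train_articles + [article])
    sims = cosine_similarity(vec.transform([article]),
                             vec.transform(train_articles))[0]
    best = sims.argsort()[::-1][:top_k]        # indices of the closest articles
    return [train_summaries[i] for i in best]

# toy usage with a three-example "training set"
arts = ["stocks rally on tech gains", "storm hits coast", "tech shares climb"]
sums = ["stocks rally", "storm hits", "tech shares up"]
print(retrieve_template("tech stocks climb on gains", arts, sums, top_k=2))
```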