Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2018
DOI: 10.18653/v1/p18-1064
Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation

Abstract: An accurate abstractive summary of a document should contain all its salient information and should be logically entailed by the input document. We improve these important aspects of abstractive summarization via multi-task learning with the auxiliary tasks of question generation and entailment generation, where the former teaches the summarization model how to look for salient questioning-worthy details, and the latter teaches the model how to rewrite a summary which is a directed-logical subset of the input …
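The soft layer-specific sharing described in the abstract keeps separate parameters for the summarization model and each auxiliary model, but penalizes the distance between designated shared layers instead of hard-tying them. A minimal PyTorch sketch of such a soft-sharing penalty (the coefficient, the helper names, and the choice of shared layers are illustrative assumptions, not the paper's settings):

```python
import torch

def soft_sharing_penalty(shared_main, shared_aux, lam=1e-4):
    # Soft parameter sharing: keep corresponding parameters of the
    # summarization model and an auxiliary model (question generation or
    # entailment generation) close via a squared-L2 penalty, rather than
    # hard-tying them. `lam` is an illustrative regularization weight.
    return lam * sum(((p - q) ** 2).sum() for p, q in zip(shared_main, shared_aux))

# Illustrative multi-task objective: sum the task losses and add a
# soft-sharing penalty between the summarizer and each auxiliary model.
# loss = summ_loss + qg_loss + eg_loss \
#        + soft_sharing_penalty(list(summ_encoder.parameters()),
#                               list(qg_encoder.parameters())) \
#        + soft_sharing_penalty(list(summ_encoder.parameters()),
#                               list(eg_encoder.parameters()))
```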

Cited by 137 publications (106 citation statements). References 38 publications.
“…A common approach in abstractive summarization is to use attention and copying mechanisms (See et al., 2017; Tan et al., 2017; Cohan et al., 2018). Other approaches include using multi-task and multi-reward training (Paulus et al., 2017; Jiang and Bansal, 2018; Guo et al., 2018; Kryściński et al., 2018), and unsupervised training strategies (Chu and Liu, 2018; Schumann, 2018).…”
Section: Models (mentioning, confidence: 99%)
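The attention-and-copying approach cited in this excerpt (See et al., 2017) computes the final word distribution as a mixture of the decoder's vocabulary softmax and the attention distribution scattered over source-token ids, gated by a generation probability. A minimal PyTorch sketch, with illustrative tensor names and shapes:

```python
import torch
import torch.nn.functional as F

def final_distribution(vocab_logits, attn, src_ids, p_gen):
    # Pointer-generator mixture (See et al., 2017).
    # vocab_logits: (batch, vocab)   decoder output scores
    # attn:         (batch, src_len) attention over source tokens
    # src_ids:      (batch, src_len) vocabulary ids of source tokens
    # p_gen:        (batch, 1)       probability of generating vs. copying
    p_vocab = p_gen * F.softmax(vocab_logits, dim=-1)    # generation part
    copy = torch.zeros_like(p_vocab)
    copy.scatter_add_(1, src_ids, (1.0 - p_gen) * attn)  # copying part
    return p_vocab + copy
```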
“…The linguistic quality of a summary encompasses many different qualities, such as fluency, grammaticality, readability, formatting, naturalness, and coherence. Most recent work uses a single human judgment to capture all linguistic qualities of the summary (Hsu et al., 2018; Kryściński et al., 2018; Narayan et al., 2018b; Song et al., 2018; Guo et al., 2018); we group them under "Fluency" in Table 1, with the exception of "Clarity", which was evaluated in the DUC evaluation campaigns (Dang, 2005). The "Clarity" metric puts emphasis on easy identification of noun and pronoun phrases in the summary, which is a different dimension from "Fluency", as a summary may be fluent but difficult to understand due to poor clarity.…”
Section: Literature Review (mentioning, confidence: 99%)
“…Absolute vs. Relative Summary Ranking. In relative assessment of summarization, annotators are shown two or more summaries and are asked to rank them according to the dimension in question (Yang et al., 2017; Chen and Bansal, 2018; Narayan et al., 2018a; Guo et al., 2018; Krishna and Srinivasan, 2018). The relative assessment is often done using paired comparison (Thurstone, 1994) or best-worst scaling (Louviere and Woodworth, 1991; Louviere et al., 2015) to improve inter-annotator agreement.…”
Section: Literature Review (mentioning, confidence: 99%)
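Best-worst scaling, mentioned in this excerpt, scores each system by how often annotators pick it as the best of a shown tuple minus how often they pick it as the worst. A simplified sketch, under the assumption that every system appears in every comparison:

```python
from collections import Counter

def best_worst_scores(judgments):
    # judgments: list of (best_system, worst_system) pairs, one per
    # annotation of a tuple of summaries. Score = fraction of times a
    # system is chosen best minus fraction chosen worst (Louviere et al.,
    # 2015), assuming every system appears in every tuple shown.
    best, worst = Counter(), Counter()
    for b, w in judgments:
        best[b] += 1
        worst[w] += 1
    n = len(judgments)
    return {s: (best[s] - worst[s]) / n for s in set(best) | set(worst)}

print(best_worst_scores([("A", "B"), ("A", "C"), ("B", "C")]))
# A scores +2/3, B scores 0, C scores -2/3
```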
“…ROUGE F1 scores (R-1 / R-2 / R-L):

[18]                         39.60 / 16.20 / 35.30
Refresh [20]                 40.00 / 18.20 / 36.60
Rnes w/o coherence [28]      41.25 / 18.87 / 37.75
BanditSum [6]                41.50 / 18.70 / 37.60
Latent [29]                  41.05 / 18.77 / 37.54
rnn-ext+RL [1]               41.47 / 18.72 / 37.76
NeuSum [30]                  41.59 / 19.01 / 37.98

Abstractive
Pointer-Generator [23]       39.53 / 17.28 / 36.38
KIGN+Prediction-guide [15]   38.95 / 17.12 / 35.68
Multi-Task (EG+QG) [10]      39.81 / 17.64 / 36.54
RL+pg+cbdec [13]             40.66 / 17.87 / 37.06
Saliency+Entail. [21]        40.43 / 18.00 / 37.10
Inconsistency loss [12]      40.68 / 17.97 / 37.13
Bottom-up [9]                41.22 / 18.68 / 38.34
rnn-ext+abs+RL [1]           40.04 / 17.61 / 37.59

Mixed Extractive-Abstractive
EditNet                      41.42 / 19.03 / 38.36…”
Section: Dataset and Setup (mentioning, confidence: 99%)
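The three columns in the table above are ROUGE-1, ROUGE-2, and ROUGE-L F1. For a rough reproduction of such numbers, Google's rouge-score package can be used as below; note that papers in this area usually report scores from the original ROUGE-1.5.5 Perl toolkit, so values from this library are comparable but not identical:

```python
# pip install rouge-score
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(
    target="police killed the gunman after a standoff",  # reference summary
    prediction="the gunman was killed by police",        # system summary
)
print({name: round(s.fmeasure * 100, 2) for name, s in scores.items()})
```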