Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018
DOI: 10.18653/v1/n18-1153

Generating Topic-Oriented Summaries Using Neural Attention

Abstract: Summarizing a document requires identifying the important parts of the document with an objective of providing a quick overview to a reader. However, a long article can span several topics and a single summary cannot do justice to all the topics. Further, the interests of readers can vary and the notion of importance can change across them. Existing summarization algorithms generate a single summary and are not capable of generating multiple summaries tuned to the interests of the readers. In this paper, we pr…

Cited by 33 publications (35 citation statements)
References 11 publications
“…Absolute vs Relative Summary Ranking. In relative assessment of summarization, annotators are shown two or more summaries and are asked to rank them according to the dimension in question (Yang et al., 2017; Chen and Bansal, 2018; Narayan et al., 2018a; Guo et al., 2018; Krishna and Srinivasan, 2018). The relative assessment is often done using paired comparison (Thurstone, 1994) or best-worst scaling (Louviere and Woodworth, 1991; Louviere et al., 2015), to improve inter-annotator agreement.…”
Section: Literature Review
confidence: 99%
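The best-worst scaling protocol mentioned in that excerpt has a simple standard scoring rule: each annotator sees a small set of summaries and marks one as best and one as worst, and each system's score is the number of times it was chosen best minus the number of times it was chosen worst, normalized by how often it was shown. The sketch below illustrates that counting scheme; the judgment tuple format and system identifiers are assumptions for the example, not taken from any of the cited papers.

```python
from collections import defaultdict

def best_worst_scores(judgments):
    """Compute best-worst scaling scores.

    judgments: iterable of (best, worst, shown) tuples, where `shown` is the
    set of system ids presented in that trial and `best` / `worst` are the
    ids the annotator selected. Returns a dict mapping each system id to
    (#best - #worst) / #appearances, a value in [-1, 1].
    """
    best_counts = defaultdict(int)
    worst_counts = defaultdict(int)
    appearances = defaultdict(int)
    for best, worst, shown in judgments:
        best_counts[best] += 1
        worst_counts[worst] += 1
        for system in shown:
            appearances[system] += 1
    return {
        system: (best_counts[system] - worst_counts[system]) / appearances[system]
        for system in appearances
    }

# Toy usage: three trials comparing hypothetical systems A, B, C.
trials = [
    ("A", "C", {"A", "B", "C"}),
    ("A", "B", {"A", "B", "C"}),
    ("B", "C", {"A", "B", "C"}),
]
print(best_worst_scores(trials))  # approximately {'A': 0.67, 'B': 0.0, 'C': -0.67}
```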
“…Our models are trained on documents paired with aspect-specific summaries. A sizable data set does not exist, and we adopt a scalable, synthetic training setup (Choi, 2000; Krishna and Srinivasan, 2018). We leverage aspect labels (such as news or health) associated with each article in the CNN/Daily Mail dataset (Hermann et al., 2015), and construct synthetic multi-aspect documents by interleaving paragraphs of articles pertaining to different aspects, and pairing them with the original summary of one of the included articles.…”
Section: Introduction
confidence: 99%
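The synthetic construction described in that excerpt can be sketched directly: take articles labeled with different aspects, interleave their paragraphs into one multi-aspect document, and pair the result with the original summary of one of the included articles. The sketch below is illustrative only; the article/summary record structure and the round-robin interleaving order are assumptions, not the citing paper's exact preprocessing.

```python
import itertools

def make_multi_aspect_example(articles, target_index):
    """Build one synthetic training example.

    articles: list of dicts with keys 'aspect', 'paragraphs' (list of str),
    and 'summary' (str), each drawn from a different aspect.
    target_index: which article's summary becomes the target; its aspect is
    the aspect the model is asked to summarize.
    Returns (document, target_aspect, target_summary).
    """
    # Round-robin interleave paragraphs so aspects alternate through the document.
    columns = [article["paragraphs"] for article in articles]
    interleaved = [
        paragraph
        for group in itertools.zip_longest(*columns)
        for paragraph in group
        if paragraph is not None
    ]
    document = "\n\n".join(interleaved)
    target = articles[target_index]
    return document, target["aspect"], target["summary"]

# Toy usage with two hypothetical articles.
articles = [
    {"aspect": "health", "paragraphs": ["H1 ...", "H2 ..."], "summary": "Health summary."},
    {"aspect": "sport", "paragraphs": ["S1 ...", "S2 ...", "S3 ..."], "summary": "Sport summary."},
]
doc, aspect, summary = make_multi_aspect_example(articles, target_index=0)
```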
“…Although assuming one aspect per source article may seem crude, we demonstrate that our model trained on this data picks up subtle aspect changes within natural news articles. Importantly, our setup requires no supervision such as pre-trained topics (Krishna and Srinivasan, 2018) or aspect-segmentation of documents.…”
Section: Introduction
confidence: 99%
“…Tunable or controlled summary generation has picked up pace in recent times. Algorithms allow for controlling various dimensions of the output summary such as the length or entities (Fan et al., 2017) and topics (Krishna and Srinivasan, 2018). Since these approaches primarily rely on the diversity in the given dataset, extending these approaches for formality-tailored summarization would require a diverse summarization corpus that captures subtleties in various formal variants.…”
Section: Introduction
confidence: 99%
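For context, the kind of control described in that excerpt is commonly implemented by marking the desired attribute on the encoder input, for example prepending special tokens for a length bucket or a topic before the article text, in the spirit of Fan et al. (2017). The snippet below is a hedged illustration of that input-marking idea, not the exact interface of either cited system; the marker token names are made up.

```python
def add_control_tokens(article_text, topic=None, length_bucket=None):
    """Prefix the source text with control markers that a sequence-to-sequence
    summarizer can learn to condition on. Token names here are illustrative only.
    """
    markers = []
    if topic is not None:
        markers.append(f"<topic:{topic}>")
    if length_bucket is not None:
        markers.append(f"<len:{length_bucket}>")
    return " ".join(markers + [article_text])

# At training time, the markers are derived from the reference summary or article
# metadata; at test time, the user sets them to steer the generated summary.
source = add_control_tokens("The city council met on Tuesday ...", topic="politics", length_bucket=2)
```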