Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2018
DOI: 10.18653/v1/p18-1014
Extractive Summarization with SWAP-NET: Sentences and Words from Alternating Pointer Networks

Abstract: We present a new neural sequence-to-sequence model for extractive summarization called SWAP-NET (Sentences and Words from Alternating Pointer Networks). Extractive summaries, comprising a salient subset of input sentences, often also contain important key words. Guided by this principle, we design SWAP-NET to model the interaction of key words and salient sentences using a new two-level pointer network based architecture. SWAP-NET identifies both salient sentences and key words in an input document, and then c…
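The alternating-pointer idea in the abstract — a decoder that points into word-level encodings on some steps and sentence-level encodings on others — can be illustrated with a minimal sketch. This is a hypothetical toy, not the authors' implementation: in SWAP-NET the switch between levels is learned, whereas here it is a fixed alternation, and the encoder states are random placeholders.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_step(query, word_states, sent_states, level):
    """One decoding step of a two-level pointer sketch: attend over
    word or sentence encoder states depending on `level`, and return
    the argmax pointer plus the attention distribution."""
    states = word_states if level == "word" else sent_states
    scores = states @ query          # dot-product attention scores
    probs = softmax(scores)
    return int(probs.argmax()), probs

# Toy encoder states (placeholders; a real model would produce these
# with word- and sentence-level RNN/Transformer encoders).
rng = np.random.default_rng(0)
d = 8
word_states = rng.normal(size=(5, d))   # encodings of 5 words
sent_states = rng.normal(size=(3, d))   # encodings of 3 sentences
query = rng.normal(size=d)              # decoder state at this step

picks = []
for step in range(4):
    # Fixed word/sentence alternation stands in for the learned switch.
    level = "word" if step % 2 == 0 else "sentence"
    idx, probs = pointer_step(query, word_states, sent_states, level)
    picks.append((level, idx))
```

Each step yields a pointer into one of the two encoder sequences; combining the pointed-to key words and sentences to score the final extractive summary is the part of the model this sketch omits.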

Cited by 83 publications (57 citation statements)
References 19 publications
“…Traditional summarization methods usually depend on manual rules and expert knowledge, such as the expanding rules of noisy-channel models (Daumé III and Marcu 2002; Knight and Marcu 2002), objectives and constraints of Integer Linear Programming (ILP) models (Woodsend and Lapata 2012; Parveen, Ramsl, and Strube 2015; Bing et al. 2015), human-engineered features of some sequence classification methods (Shen et al. 2007), and so on. Deep learning models can learn continuous features automatically and have made substantial progress in multiple NLP areas.…”
Section: Related Work
confidence: 99%
“…(Narayan, Cohen, and Lapata 2018) conceptualizes extractive summarization as a sentence ranking task and optimizes the ROUGE evaluation metric through an RL objective. (Jadhav and Rajan 2018) models the interaction of keywords and salient sentences using a two-level pointer network and combines them to generate the extractive summary.…”
Section: Related Work
confidence: 99%
“…The exploration on document summarization can be broadly divided into extractive and abstractive summarization. The extractive methods (Nallapati et al., 2017; Jadhav and Rajan, 2018; Shi Article: poundland has been forced to pull decorative plastic easter eggs from their shelves over fears children may choke - because they look like cadbury mini eggs . trading standards officials in buckinghamshire and surrey raised the alarm over the chinese made decorations , as they were ' likely to contravene food imitation safety rules ' .…”
Section: Related Work
confidence: 99%