Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018
DOI: 10.18653/v1/n18-1155
Higher-Order Syntactic Attention Network for Longer Sentence Compression

Abstract: Sentence compression methods based on LSTM can generate fluent compressed sentences. However, the performance of these methods degrades significantly when compressing long sentences, since they do not explicitly handle syntactic features. To solve this problem, we propose a higher-order syntactic attention network (HiSAN) that can handle higher-order dependency features as an attention distribution on LSTM hidden states. Furthermore, to avoid the influence of incorrect parse results, we train HiSAN by maximi…
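The abstract describes an attention distribution over LSTM hidden states that is shaped by higher-order (parent, grandparent) dependency relations. The PyTorch sketch below is only an illustration of that general idea; the class name, the bilinear head scorer, and the way first- and second-order distributions are combined are assumptions for exposition, not the authors' architecture.

```python
# Illustrative sketch only: syntax-shaped attention over LSTM states.
# All module names and design choices here are assumptions, not HiSAN itself.
import torch
import torch.nn as nn


class SyntacticAttentionSketch(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.encoder = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        # Bilinear scorer: "how likely is token j the head of token i".
        self.head_scorer = nn.Bilinear(hidden_size, hidden_size, 1)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, seq_len, hidden_size)
        states, _ = self.encoder(embeddings)              # (B, T, H)
        B, T, H = states.shape
        # Score every (dependent i, candidate head j) pair.
        dep = states.unsqueeze(2).expand(B, T, T, H)
        head = states.unsqueeze(1).expand(B, T, T, H)
        scores = self.head_scorer(dep.reshape(-1, H),
                                  head.reshape(-1, H)).view(B, T, T)
        # First-order attention: distribution over possible heads per token.
        parent_attn = torch.softmax(scores, dim=-1)       # (B, T, T)
        # Second-order (grandparent) distribution obtained by chaining the
        # first-order one: P(grandparent) = P(parent) @ P(parent).
        grandparent_attn = parent_attn @ parent_attn
        # Syntax-informed context vectors for a downstream compression decoder.
        context = (parent_attn + grandparent_attn) @ states  # (B, T, H)
        return context


# Tiny usage example with random embeddings.
if __name__ == "__main__":
    model = SyntacticAttentionSketch(hidden_size=16)
    ctx = model(torch.randn(2, 5, 16))
    print(ctx.shape)  # torch.Size([2, 5, 16])
```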

Cited by 23 publications (33 citation statements). References 22 publications.
“…For our experiments, we use the large Google News text compression corpus by Filippova and Altun (2013), which contains 250k automatically extracted deletion-based compressions from aligned headlines and first sentences of news articles. Recent studies on text compression have extensively used this dataset (e.g., Zhao et al., 2018; Kamigaito et al., 2018). We carry out in-domain active learning experiments on the Google News compression corpus.…”
Section: Data
confidence: 99%
“…Neural sequence-to-sequence (Seq2Seq) models have shown remarkable success in many areas of natural language processing, and specifically in natural language generation tasks, including text compression (Rush et al., 2015; Filippova et al., 2015; Yu et al., 2018; Kamigaito et al., 2018). Despite their success, Seq2Seq models have a major drawback: they require huge parallel corpora with pairs of source and compressed text to be able to learn the parameters of the model.…”
Section: Introduction
confidence: 99%
“…Chopra et al. proposed a convolutional attention-based model [31] in which the attention helps to understand a source sentence by capturing its context. On the other hand, Kamigaito et al. incorporated higher-order syntactic attention into a sequence-to-sequence model [8]. The attention is computed along a chain of dependency relations in the dependency graph of a source sentence.…”
Section: Related Work
confidence: 99%
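The citation above describes attention computed along a chain of dependency relations. As a concrete illustration of what such a chain is, the small helper below walks parent and grandparent links in a head-index representation of a dependency parse. The function name, the head-index encoding, and the example sentence are hypothetical, used only to make the chain idea tangible.

```python
# Hypothetical helper (not from the cited paper): given head indices from a
# dependency parse, walk the chain of relations for each token so that
# higher-order features (parent, grandparent, ...) can feed an attention score.
from typing import List


def dependency_chain(heads: List[int], token: int, order: int = 2) -> List[int]:
    """Return up to `order` ancestors of `token`; heads[i] is the head of
    token i, with -1 marking the root."""
    chain = []
    current = token
    for _ in range(order):
        parent = heads[current]
        if parent == -1:          # reached the root of the parse
            break
        chain.append(parent)
        current = parent
    return chain


# Example parse for "the cat sat on the mat" (head index per token, root = -1):
heads = [1, 2, -1, 2, 5, 3]
print(dependency_chain(heads, token=0))  # [1, 2]: "the" -> "cat" -> "sat"
```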
“…There have been many studies on sentence compression, and they are divided into two approaches: a deletion-based approach and an abstractive approach. The deletion-based approach generates a target sentence by removing unimportant and unnecessary words from the source sentence [3][4][5][6][7][8]. That is, a target compressed sentence is a subsequence of a source sentence.…”
Section: Introduction
confidence: 99%
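The deletion-based formulation quoted above reduces compression to a per-token keep/drop decision, so the output is always a subsequence of the input. A minimal illustration (with made-up labels, not data from any of the cited works):

```python
# Deletion-based compression as binary keep/drop labeling over tokens.
tokens = ["The", "company", ",", "founded", "in", "1998", ",", "reported",
          "record", "profits", "today", "."]
keep   = [1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1]   # hypothetical gold labels

# The compressed sentence keeps only the tokens labeled 1, preserving order.
compressed = [t for t, k in zip(tokens, keep) if k]
print(" ".join(compressed))  # The company reported record profits .
```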