Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016
DOI: 10.18653/v1/n16-1179

Improving sentence compression by learning to predict gaze

Abstract: We show how eye-tracking corpora can be used to improve sentence compression models, presenting a novel multi-task learning algorithm based on multi-layer LSTMs. We obtain performance competitive with or better than state-of-the-art approaches.
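
The multi-task setup described in the abstract can be read as a cascaded sequence-labelling model: an auxiliary gaze-prediction loss supervises a lower bi-LSTM layer, while the main compression (keep/delete) labels supervise a higher one. Below is a minimal PyTorch sketch of that idea; it is not the authors' code, and the layer sizes, gaze binning, and all names are illustrative assumptions.

# Minimal sketch (PyTorch; not the authors' code) of the cascaded
# multi-task idea: a lower bi-LSTM layer is supervised by an auxiliary
# gaze-prediction loss, a higher layer by the compression (keep/delete)
# labels. Layer sizes, gaze binning, and names are assumptions.
import torch.nn as nn

class CascadedMTL(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden=128,
                 n_gaze_bins=5, n_compress_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # lower layer: shared encoder, supervised by the gaze task
        self.lstm_lo = nn.LSTM(emb_dim, hidden,
                               bidirectional=True, batch_first=True)
        self.gaze_head = nn.Linear(2 * hidden, n_gaze_bins)
        # upper layer: supervised by the main compression task
        self.lstm_hi = nn.LSTM(2 * hidden, hidden,
                               bidirectional=True, batch_first=True)
        self.compress_head = nn.Linear(2 * hidden, n_compress_labels)

    def forward(self, token_ids):          # (batch, seq)
        x = self.embed(token_ids)          # (batch, seq, emb_dim)
        lo, _ = self.lstm_lo(x)            # (batch, seq, 2*hidden)
        hi, _ = self.lstm_hi(lo)           # (batch, seq, 2*hidden)
        return self.gaze_head(lo), self.compress_head(hi)

Placing the auxiliary loss on a lower layer lets gaze prediction shape the shared representation without directly constraining the top layer reserved for the compression decision.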

Cited by 84 publications (104 citation statements: 1 supporting, 102 mentioning, 1 contrasting), spanning 2016–2023.
References 20 publications.

Citation statements, ordered by relevance:
“…Bollmann and Søgaard (2016) also use bi-LSTMs to frame spelling normalization as a character-based sequence labelling task, performing character alignment as a preprocessing step. Multi-task learning was shown to be effective for a variety of NLP tasks, such as POS tagging, chunking, named entity recognition (Collobert et al., 2011) or sentence compression (Klerke et al., 2016). It has also been used in encoder-decoder architectures, typically for machine translation (Dong et al., 2015; Luong et al., 2016), though so far not with attentional decoders.…”
Section: Related Work (mentioning)
confidence: 99%
“…As reported in recent papers (Klerke et al., 2016; Wang et al., 2017), the F1 scores of Tagger match or exceed those of the Seq2Seq-based methods. The compressed sentence of the first example in Table 3 output by Tagger is ungrammatical.…”
Section: Discussion (mentioning)
confidence: 55%
“…Our second baseline is a cascading, three-layered LSTM, as described by Klerke et al. (2016). See §3 for hyper-parameters.…”
Section: Evaluation Metrics (mentioning)
confidence: 99%
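
Because eye-tracking corpora and compression corpora are disjoint, a cascaded model like the baseline cited above is typically trained by alternating updates between the auxiliary and main tasks. The sketch below reuses the hypothetical CascadedMTL model from the earlier block; batch shapes, label encodings, and optimizer settings are illustrative assumptions, not taken from the paper.

# Hypothetical alternating-update training step for the two tasks.
import torch
import torch.nn.functional as F

model = CascadedMTL(vocab_size=10_000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# dummy batches: 8 sentences of 12 token ids each
tokens = torch.randint(0, 10_000, (8, 12))
gaze_labels = torch.randint(0, 5, (8, 12))  # binned gaze durations (assumed encoding)
comp_labels = torch.randint(0, 2, (8, 12))  # keep/delete tags

# auxiliary update: gaze loss reaches only the lower layer and embeddings
gaze_logits, _ = model(tokens)
loss_aux = F.cross_entropy(gaze_logits.flatten(0, 1), gaze_labels.flatten())
opt.zero_grad(); loss_aux.backward(); opt.step()

# main update: compression loss flows through both LSTM layers
_, comp_logits = model(tokens)
loss_main = F.cross_entropy(comp_logits.flatten(0, 1), comp_labels.flatten())
opt.zero_grad(); loss_main.backward(); opt.step()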