Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), 2019
DOI: 10.18653/v1/w19-4324
Assessing Incrementality in Sequence-to-Sequence Models

Abstract: Since their inception, encoder-decoder models have successfully been applied to a wide array of problems in computational linguistics. The most recent successes are predominantly due to the use of different variations of attention mechanisms, but their cognitive plausibility is questionable. In particular, because past representations can be revisited at any point in time, attention-centric methods seem to lack an incentive to build up incrementally more informative representations of incoming sentences. This …

Cited by 2 publications (1 citation statement); 18 references.
“…Hupkes et al (2018) use a diagnostic classifier to analyze the representations that are incrementally built by sequence-to-sequence models in disfluency detection and conclude that the semantic information is only kept encoded for a few steps after it appears in the dialogue and is soon forgotten afterwards. Ulmer et al (2019) propose three metrics to assess the incremental encoding abilities of LSTMs and compare them with the addition of attention mechanisms.…”
Section: Incremental Processing
Citation type: mentioning (confidence: 99%)
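To illustrate the diagnostic-classifier idea referenced in the citation statement, the sketch below trains a simple probe on the per-timestep hidden states of a recurrent encoder to test whether a property of an earlier token is still decodable a few steps later. It is a minimal, hedged illustration, not the setup of Hupkes et al (2018) or Ulmer et al (2019): the toy task, the even-token-id property, the probe offset, and the untrained LSTM encoder are all assumptions made here for brevity.

```python
# Minimal sketch of a diagnostic-classifier probe on recurrent hidden states.
# Assumptions (not from the cited papers): a randomly initialized LSTM encoder,
# random token sequences, and a toy probed property (token id parity).
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

torch.manual_seed(0)

VOCAB, DIM, HIDDEN, SEQ_LEN, N_SEQ, OFFSET = 20, 16, 32, 10, 500, 3

embed = nn.Embedding(VOCAB, DIM)
lstm = nn.LSTM(DIM, HIDDEN, batch_first=True)  # untrained; a real study would probe a trained seq2seq encoder

# Toy data: random token sequences. The probed property is whether the token
# seen OFFSET steps earlier had an even id.
tokens = torch.randint(0, VOCAB, (N_SEQ, SEQ_LEN))
with torch.no_grad():
    states, _ = lstm(embed(tokens))            # shape: (N_SEQ, SEQ_LEN, HIDDEN)

# Pair the hidden state at time t with the property of the token at t - OFFSET.
X = states[:, OFFSET:, :].reshape(-1, HIDDEN).numpy()
y = (tokens[:, :-OFFSET] % 2 == 0).reshape(-1).numpy()

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Accuracy well above chance suggests the property is still encoded OFFSET
# steps later; accuracy near 50% suggests it has been "forgotten".
print(f"probe accuracy at offset {OFFSET}: {probe.score(X_te, y_te):.3f}")
```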