Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1287
Representation of Constituents in Neural Language Models: Coordination Phrase as a Case Study

Abstract: Neural language models have achieved state-of-the-art performances on many NLP tasks, and have recently been shown to learn a number of hierarchically-sensitive syntactic dependencies between individual words. However, equally important for language processing is the ability to combine words into phrasal constituents and use constituent-level features to drive downstream expectations. Here we investigate neural models' ability to represent constituent-level features, using coordinated noun phrases as a case study.

Cited by 7 publications (1 citation statement). References 24 publications.
“…Bayesian models of word learning have shown successes in acquiring proper syntactic generalizations from minimal exposure (Tenenbaum and Xu, 2000; Wang et al., 2017); however, it is not clear how well neural network models would exhibit these rapid generalizations. Comparing between neural network architectures, recent work has shown that models enhanced with explicit structural supervision during training produce more human-like syntactic generalizations (Kuncoro et al., 2017, 2018; Wilcox et al., 2019), but it remains untested whether such supervision helps learn properties of tokens that occur rarely during training.…”
Section: Related Work
Confidence: 99%