2019
DOI: 10.1162/tacl_a_00277
Still a Pain in the Neck: Evaluating Text Representations on Lexical Composition

Abstract: Building meaningful phrase representations is challenging because phrase meanings are not simply the sum of their constituent meanings. Lexical composition can shift the meanings of the constituent words and introduce implicit information. We tested a broad range of textual representations for their capacity to address these issues. We found that, as expected, contextualized word representations perform better than static word embeddings, more so in detecting meaning shift than in recovering implicit information…
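To make the abstract's comparison concrete, here is a minimal meaning-shift probe in the spirit of the paper's evaluation. This is a sketch under assumed choices (bert-base-uncased, the example sentences, cosine similarity as the measure), not the authors' actual test suite: compare the contextualized representation of a constituent inside a phrase against its representation in a literal context. A low similarity suggests the model registers the shift.

```python
# Meaning-shift probe (sketch): does the contextual vector of "bee" in the
# non-compositional compound "spelling bee" diverge from a literal "bee"?
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def token_vector(sentence: str, word: str) -> torch.Tensor:
    # Last-layer hidden state for the first occurrence of `word`
    # (assumes `word` is a single WordPiece in the BERT vocabulary).
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    position = inputs.input_ids[0].tolist().index(
        tokenizer.convert_tokens_to_ids(word)
    )
    return hidden[position]

shifted = token_vector("she studied hard for the spelling bee", "bee")
literal = token_vector("a bee stung me on the arm", "bee")
print(torch.cosine_similarity(shifted, literal, dim=0).item())
```

By contrast, a static embedding assigns "bee" the same vector in both sentences, which is exactly the limitation the abstract describes.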

Cited by 53 publications (65 citation statements) | References 36 publications
“…The BERT model has been a particular focus of analysis work since its introduction. Previous work has focused on analyzing the attention mechanism (Vig and Belinkov, 2019; Clark et al., 2019), parameters (Radford et al., 2019), and embeddings (Shwartz and Dagan, 2019; Liu et al., 2019a). We build on this work with a particular, controlled focus on the evolution of phrasal representation in a variety of state-of-the-art transformers.…”
Section: Related Work (mentioning)
confidence: 99%
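The "evolution of phrasal representation" this statement studies is typically probed layer by layer. A sketch of that kind of setup, under assumed choices (bert-base-uncased, mean-pooling over the phrase span), not the citing work's published code:

```python
# Layer-wise phrase probe (sketch): pool the span of a phrase at every
# hidden layer to see how its representation changes through the network.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

inputs = tokenizer("he kicked the bucket yesterday", return_tensors="pt")
with torch.no_grad():
    layers = model(**inputs).hidden_states  # 13 tensors: embeddings + 12 layers

# Mean-pool "kicked the bucket" (positions 2-4 after [CLS]; each word here
# happens to be a single WordPiece, so offsets can be read off directly).
span = slice(2, 5)
phrase_by_layer = torch.stack([layer[0, span].mean(dim=0) for layer in layers])

# One simple summary: how far each layer's phrase vector has drifted from
# the static (layer-0) embedding of the same span.
drift = torch.cosine_similarity(phrase_by_layer, phrase_by_layer[0:1], dim=-1)
print(drift.tolist())
```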
“…Recently, Shwartz and Dagan (2019) found that while these representations excel at detecting non-compositional noun compounds, they perform much worse at revealing implicit information such as the relationship between the constituents. Moreover, looking into these models' predictions of substitute constituents shows that even when they recognize a constituent is not used in its literal sense (e.g.…”
Section: Discussion (mentioning)
confidence: 99%
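The "predictions of substitute constituents" mentioned in this statement can be reproduced in miniature with a masked language model: mask one constituent of a compound and inspect the model's top fillers. A sketch with an assumed example compound ("flea market" in a disambiguating context); the specific predictions will vary by model:

```python
# Substitute-constituent probe (sketch): mask a constituent and list the
# model's preferred replacements for it.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

text = f"they sell second-hand goods at the {tokenizer.mask_token} market"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and read off the five highest-scoring fillers.
mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero()[0].item()
top_ids = logits[0, mask_pos].topk(5).indices
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```

Whether literal fillers outrank the idiomatic constituent is the kind of signal the cited discussion draws on.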
“…In particular, the representations produced by BERT (Devlin et al., 2019) have been used to create high-performing models for many language understanding tasks, although their status as a linguistically sound model of meaning is debated (Mickus et al., 2020). Shwartz and Dagan (2019) test BERT on several cases of figurative language, but to the best of our knowledge BERT's ability to identify metonymy has not yet been addressed.…”
Section: Related Work (mentioning)
confidence: 99%