Proceedings of the 13th International Conference on Computational Semantics - Long Papers 2019
DOI: 10.18653/v1/w19-0425

Frame Identification as Categorization: Exemplars vs Prototypes in Embeddingland

Abstract: Categorization is a central capability of human cognition, and a number of theories have been developed to account for properties of categorization. Despite the fact that many semantic tasks involve categorization, theories of categorization do not play a major role in contemporary research in computational linguistics. This paper follows the idea that embedding-based models of semantics lend themselves well to being formulated in terms of classical categorization theories. The benefit is a group of models tha…

Cited by 11 publications (6 citation statements)
References 30 publications
“…With only a single entity per category, the prototype and exemplar models are equivalent; with multiple entities per category the exemplar model performs consistently slightly worse than the NameBased prototype model, but with the same qualitative pattern (performance significantly above NounBased for n4). This is in line with studies which indicate that for current representation learning approaches, the nonlinear decision boundaries of exemplar models may not provide advantages over prototype models (Sikos & Padó, 2019).…”
Section: Methods (supporting)
confidence: 90%
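To make the prototype/exemplar contrast in this excerpt concrete, here is a minimal sketch (not from the paper; the frame names and vectors are made up) of the two decision rules over embedding vectors: a prototype classifier compares a query to one centroid per category, while an exemplar classifier compares it to every stored instance.

```python
import numpy as np

def prototype_predict(query, instances_by_category):
    """Prototype rule: assign the query to the category with the nearest centroid."""
    prototypes = {c: np.mean(vs, axis=0) for c, vs in instances_by_category.items()}
    return min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))

def exemplar_predict(query, instances_by_category):
    """Exemplar rule: assign the query to the category of its nearest stored instance (1-NN)."""
    return min(
        instances_by_category,
        key=lambda c: min(np.linalg.norm(query - v) for v in instances_by_category[c]),
    )

# Made-up 2-d "embeddings" for two hypothetical frames.
rng = np.random.default_rng(0)
instances = {
    "Motion": rng.normal(loc=[0.0, 0.0], scale=0.3, size=(5, 2)),
    "Commerce_buy": rng.normal(loc=[2.0, 2.0], scale=0.3, size=(5, 2)),
}
query = np.array([0.4, 0.2])
print(prototype_predict(query, instances), exemplar_predict(query, instances))
```

With a single stored instance per category the two rules coincide, which matches the observation quoted above; differences can only arise once categories contain multiple instances.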
“…The latest generation of embedding architectures are the so-called transformers which can learn contextual dependencies in an unsupervised fashion and construct context-dependent meaning representations: tree will receive one embedding in the phrase the tree in the forest and another one in the phrase dependency tree. Not surprisingly, one of the best-known transformer models, BERT (Devlin et al., 2019), is the basis of state-of-the-art frame identification models for English (Sikos and Padó, 2019; Tan and Na, 2019).…”
Section: Modeling Multilingual Frame Identification (mentioning)
confidence: 99%
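The tree example in this excerpt can be reproduced with the Hugging Face transformers library; this tooling is an assumption made here for illustration, not a claim about how the cited systems are implemented. The sketch extracts the contextual vector of "tree" in two phrases and compares them.

```python
# Sketch: contextual embeddings for the same word in two contexts,
# using bert-base-uncased via the `transformers` library (illustrative choice).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_vector(sentence, word):
    """Return the contextual vector of `word` in `sentence` (assumes a single-subtoken word)."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]          # (seq_len, 768)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
    return hidden[tokens.index(word)]

v1 = word_vector("the tree in the forest", "tree")
v2 = word_vector("the dependency tree of the sentence", "tree")
cos = torch.nn.functional.cosine_similarity(v1, v2, dim=0)
print(f"cosine similarity between the two 'tree' vectors: {cos.item():.3f}")
```

Because the two occurrences of "tree" appear in different contexts, they receive different vectors, unlike in a static embedding model where the word type has a single fixed vector.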
“…Thus, the embeddings produced by these new frameworks are said to be contextualized, as opposed to the static vectors produced by the earlier frameworks, and they aim at modeling the specific sense assumed by the word in context (Wiedemann et al 2019). Interestingly, the distinction between traditional and contextualized embeddings has been recently discussed by drawing a parallel between the prototype and exemplar models of categorization in cognitive psychology (Sikos and Padó 2019).…”
Section: From Static Distributional Models To Contextualized Embeddings (mentioning)
confidence: 99%
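One minimal way to picture the parallel drawn in this excerpt, under the simplifying assumption that averaging stands in for a static type-level vector: contextual token vectors act like stored exemplars, and collapsing them into a single vector yields a prototype-like static representation. The numbers below are hypothetical.

```python
import numpy as np

# Hypothetical contextual ("exemplar-like") vectors for three occurrences of one word:
# two occurrences in a forest sense, one in a parse-tree sense.
token_vectors = np.array([
    [0.9, 0.1],
    [0.8, 0.3],
    [0.1, 0.9],
])

# A static ("prototype-like") type-level vector collapses all occurrences into one point,
# blurring the sense distinction that the individual contextual vectors keep apart.
static_vector = token_vectors.mean(axis=0)
print(static_vector)
```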