Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019)
DOI: 10.18653/v1/n19-1378
What do Entity-Centric Models Learn? Insights from Entity Linking in Multi-Party Dialogue

Abstract: Humans use language to refer to entities in the external world. Motivated by this, in recent years several models that incorporate a bias towards learning entity representations have been proposed. Such entity-centric models have shown empirical success, but we still know little about why. In this paper we analyze the behavior of two recently proposed entity-centric models in a referential task, Entity Linking in Multi-party Dialogue (SemEval 2018 Task 4). We show that these models outperform the state of the art…

Cited by 6 publications (11 citation statements). References 26 publications (31 reference statements).
“…For this reason a distributional representation of an entire coreference chain (as in Adel & Schütze 2014; Clark & Manning 2016) could potentially supplant the use of names in an entity‐based approach to category representation. While coreference resolution has been the focus of much research interest, it is an open question whether current models are good enough to build entity representations (Aina, Silberer, Sorodoc, Westera, & Boleda, 2019; Clark & Manning, 2015).…”
Section: Discussion (mentioning)
confidence: 99%
“…Several models (Henaff et al., 2019; Yang et al., 2017; Ji et al., 2017) were developed as an augmentation of RNN LMs to deal better with entities, with the implicit assumption that standard models do that poorly. Aina et al. (2019) achieved good results on an entity-linking task, but showed that the network was not acquiring entity representations.…”
Section: Related Work (mentioning)
confidence: 99%
“…Wiki2V, which is similarly context-based, is also informed by its links to other entities, which may provide additional information, as we see in the next sections. The RNN's poor performance was also observed by Aina et al. (2019), who saw low accuracy when probing an "entity-centric" RNN model for entity type information.…”
Section: Probing Experiments Analysis (mentioning)
confidence: 80%
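The probing setup referenced in the statement above amounts to training a small diagnostic classifier on frozen representations: if a simple probe cannot recover entity types from the hidden states, the states likely do not encode that information. Below is a minimal sketch in PyTorch; the random tensors are stand-ins for the entity-centric model's hidden states and gold entity-type labels, and the dimensions and training loop are illustrative assumptions, not the cited papers' exact setup.

```python
import torch
import torch.nn as nn

HIDDEN_DIM, NUM_TYPES, NUM_EXAMPLES = 128, 7, 1000

# Stand-ins: in a real probe these would be frozen hidden states extracted
# from the entity-centric model, paired with gold entity-type labels.
hidden_states = torch.randn(NUM_EXAMPLES, HIDDEN_DIM)
type_labels = torch.randint(0, NUM_TYPES, (NUM_EXAMPLES,))

# The probe itself: a single linear layer, so high accuracy can only come
# from information already present in the representations.
probe = nn.Linear(HIDDEN_DIM, NUM_TYPES)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(probe(hidden_states), type_labels)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    accuracy = (probe(hidden_states).argmax(dim=-1) == type_labels).float().mean()
print(f"probe accuracy: {accuracy.item():.2%}")  # near chance = little type info encoded
```

Because only the linear layer is trained, probe accuracy near chance (as reported for the entity-centric RNN) suggests the representations carry little entity type information.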
“…These techniques have similarly been applied to entity embeddings, though usually to limited extents. Entity type prediction has been among the most common tasks explored when proposing a new entity embedding method, in part because fine-grained entity type prediction is a common standalone task itself (Ling and Weld, 2012; Gupta et al., 2017; Yaghoobzadeh and Schütze, 2017; Aina et al., 2019; Chen et al., 2020). Recently, BERT-inspired techniques have been used to probe entity knowledge stored in pretrained language models through Cloze-style tasks, in which part of a fact about an entity is obscured and the model predicts the missing word(s) (Petroni et al., 2019; Pörner et al., 2019).…”
Section: Probing Tasks (mentioning)
confidence: 99%
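The Cloze-style probing described in the last statement can be sketched with the Hugging Face transformers fill-mask pipeline. This is a minimal illustration, assuming bert-base-uncased as the masked language model (the cited works probe various pretrained models); the example fact mirrors the style of the LAMA probe of Petroni et al. (2019).

```python
from transformers import pipeline

# Illustrative model choice; the cited works probe various pretrained LMs.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Obscure part of a fact about an entity and let the model predict it.
for prediction in fill_mask("Dante was born in [MASK]."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```

If the model ranks the correct filler highly, the fact is taken to be stored in the pretrained parameters; aggregating accuracy over many such templated facts yields the probe's overall score.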