Proceedings of the 28th International Conference on Computational Linguistics 2020
DOI: 10.18653/v1/2020.coling-main.7

MZET: Memory Augmented Zero-Shot Fine-grained Named Entity Typing

Abstract: Named entity typing (NET) is a classification task that assigns given semantic types to an entity mention in its context. However, as entity type inventories grow in size and granularity, few previous studies address newly emerged entity types. In this paper, we propose MZET, a novel memory augmented FNET (Fine-grained NET) model, to tackle unseen types in a zero-shot manner. MZET incorporates character-level, word-level, and context-level information to learn the entity mention representation…
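The abstract's three-level mention representation can be made concrete with a small sketch. This is illustrative only, not the authors' released code: all module names, dimensions, and pooling choices below are assumptions about one common way to fuse character-, word-, and context-level signals.

```python
import torch
import torch.nn as nn

class MentionEncoder(nn.Module):
    """Fuses character-, word-, and context-level signals into one vector.

    Hypothetical sketch: MZET's actual architecture may differ.
    """
    def __init__(self, char_vocab=128, char_dim=32, word_dim=300, hidden=100):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab, char_dim)
        # Character level: a small CNN over the mention's characters.
        self.char_cnn = nn.Conv1d(char_dim, hidden, kernel_size=3, padding=1)
        # Context level: a BiLSTM over pre-embedded sentence tokens.
        self.ctx_lstm = nn.LSTM(word_dim, hidden, batch_first=True,
                                bidirectional=True)
        self.proj = nn.Linear(hidden + word_dim + 2 * hidden, hidden)

    def forward(self, char_ids, mention_vecs, context_vecs):
        # char_ids: (B, C) int ids; mention_vecs: (B, M, word_dim);
        # context_vecs: (B, T, word_dim) pre-trained word embeddings.
        c = self.char_emb(char_ids).transpose(1, 2)           # (B, char_dim, C)
        char_rep = torch.relu(self.char_cnn(c)).amax(dim=2)   # max-pool over chars
        word_rep = mention_vecs.mean(dim=1)                   # average mention words
        ctx_out, _ = self.ctx_lstm(context_vecs)
        ctx_rep = ctx_out.mean(dim=1)                         # pooled context
        return self.proj(torch.cat([char_rep, word_rep, ctx_rep], dim=-1))

# Usage: a batch of 4 mentions, 12 chars, 3 mention words, 20 context tokens.
enc = MentionEncoder()
out = enc(torch.randint(0, 128, (4, 12)),
          torch.randn(4, 3, 300), torch.randn(4, 20, 300))  # -> (4, 100)
```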

Cited by 21 publications (18 citation statements); References 18 publications.
“…(1) Proto-HLE (Ma et al., 2016), which introduces prototype-driven hierarchical label embedding for ZFET; (2) ZOE (Zhou et al., 2018), which infers the types of a given mention according to its type-compatible Wikipedia entries; (3) DZET (Obeidat et al., 2019), which derives type representations from Wikipedia pages and leverages a context-description matching approach for type inference; (4) NZFET* (Ren et al., 2020), which employs entity-type attention to make the model focus on information relevant to the entity type; (5) MZET* (Zhang et al., 2020b), which adopts a memory network to connect the seen and unseen types.…”
Section: Comparison Models
Confidence: 99%
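Statement (5) names MZET's central mechanism: a memory network that links seen and unseen types. A minimal sketch of the general idea, assuming seen-type embeddings act as memory slots; the function names and shapes below are hypothetical, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def unseen_type_via_memory(unseen_label_emb, seen_type_memory):
    """Represent an unseen type as an attention-weighted mix of seen types.

    unseen_label_emb: (d,) embedding of the new label's name/description.
    seen_type_memory: (num_seen, d) embeddings of types seen in training.
    Hypothetical sketch of the memory idea, not MZET's exact equations.
    """
    attn = F.softmax(seen_type_memory @ unseen_label_emb, dim=0)  # (num_seen,)
    return attn @ seen_type_memory  # (d,) transferred type representation

def type_scores(mention_vec, type_embs):
    """Cosine-score a mention against every (seen or unseen) type embedding."""
    m = mention_vec / mention_vec.norm()
    t = type_embs / type_embs.norm(dim=1, keepdim=True)
    return t @ m  # (num_types,)
```

Because the unseen type ends up in the same space as the mention, the argmax over `type_scores` can rank labels that never appeared in training, which is the shared recipe behind all five systems above.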
“…In many scenarios, the type hierarchy is continuously evolving, which requires newly emerged types to be incorporated into FET systems. As a result, zero-shot FET (ZFET) has been introduced to handle new types that are unseen during the training stage (Ma et al., 2016; Ren et al., 2020; Zhang et al., 2020b).…”
Section: Introduction
Confidence: 99%
“…The ultra fine-grained label set also leads to a data bottleneck and the long-tail problem. In recent years, some approaches have tried to tackle this problem by introducing zero/few-shot learning methods (Ma et al., 2016; Zhou et al., 2018; Yuan and Downey, 2018; Obeidat et al., 2019; Zhang et al., 2020b), using data augmentation with denoising strategies (Ren et al., 2016b; Onoe and Durrett, 2019; Zhang et al., 2020a; Ali et al., 2020), or utilizing external knowledge (Corro et al., 2015; Dai et al., 2019).…”
Section: Related Work
Confidence: 99%