Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018
DOI: 10.18653/v1/n18-1002

Neural Fine-Grained Entity Type Classification with Hierarchy-Aware Loss

Abstract: The task of Fine-grained Entity Type Classification (FETC) consists of assigning types from a hierarchy to entity mentions in text. Existing methods rely on distant supervision and are thus susceptible to noisy labels that can be out-of-context or overly specific for the training sentence. Previous methods that attempt to address these issues do so with heuristics or with the help of hand-crafted features. Instead, we propose an end-to-end solution with a neural network model that uses a variant of cross-entrop…
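The truncated sentence refers to the paper's two loss modifications: a cross-entropy variant for out-of-context labels and hierarchical loss normalization for overly specific ones. Below is a minimal NumPy sketch of the hierarchical idea: the gold type's probability mass is spread over its ancestor path, so predicting a correct-but-coarser type is penalized less than predicting an unrelated type. The toy hierarchy, the geometric weighting with beta, and all names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical toy hierarchy: type -> parent (None for roots).
PARENT = {
    "/person": None,
    "/person/artist": "/person",
    "/person/artist/singer": "/person/artist",
    "/organization": None,
}
TYPES = list(PARENT)
IDX = {t: i for i, t in enumerate(TYPES)}

def ancestors(t):
    """Yield t followed by its ancestors up to the root."""
    while t is not None:
        yield t
        t = PARENT[t]

def hierarchical_target(gold, beta=0.4):
    """Soft target spreading mass over the gold type's ancestor path.

    An ancestor k levels above the gold type gets weight beta**k, so a
    correct-but-coarser prediction costs less than an unrelated one.
    The geometric weighting is an assumption for illustration.
    """
    target = np.zeros(len(TYPES))
    for k, t in enumerate(ancestors(gold)):
        target[IDX[t]] = beta ** k
    return target / target.sum()

def soft_cross_entropy(logits, target):
    """Softmax cross-entropy against a soft target distribution."""
    m = logits.max()
    logp = logits - m - np.log(np.exp(logits - m).sum())
    return -(target * logp).sum()

# A coarser-but-correct prediction (/person/artist) is penalized less
# than an unrelated one (/organization) for a /person/artist/singer mention.
tgt = hierarchical_target("/person/artist/singer")
coarse = np.array([2.0, 3.0, 0.5, 0.1])
wrong = np.array([0.1, 0.5, 0.5, 3.0])
print(soft_cross_entropy(coarse, tgt), soft_cross_entropy(wrong, tgt))
```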

Cited by 61 publications (80 citation statements); references 17 publications. Citing publications span 2018–2023.
“…Ren et al [10] designed a novel partial-label loss to further reduce the label noise. Moreover, Xu et al [22] introduced a method of normalization of hierarchical loss to reduce specific types of noise. A recent study [23] introduced a penalty term in the optimization process to effectively diminish the side effect of the label noise and confirmation bias.…”
Section: A. Distant Supervision-Based Methods
confidence: 99%
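For context, the partial-label idea credited to Ren et al. above can be sketched as follows: with distantly supervised candidate labels, the loss attends only to the candidate the model currently scores highest, so out-of-context candidates stop contributing gradient. This is a hedged NumPy toy sketch; the function name and the exact selection rule are assumptions, not the published formulation.

```python
import numpy as np

def partial_label_loss(logits, candidate_ids):
    """Cross-entropy against the most probable candidate type only.

    Under distant supervision a mention carries several candidate
    labels, of which typically one fits the sentence; following only
    the candidate the model currently scores highest lets training
    ignore the out-of-context candidates. A simplified sketch of the
    partial-label idea, not any specific published method.
    """
    m = logits.max()
    logp = logits - m - np.log(np.exp(logits - m).sum())
    return -max(logp[i] for i in candidate_ids)

# Usage: candidates {type 0, type 1} from a noisy KB match; the loss
# follows whichever of the two the model currently prefers.
logits = np.array([1.0, 2.5, -0.3, 0.2])
print(partial_label_loss(logits, candidate_ids=[0, 1]))
```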
“…Existing work on FGET focuses on performing context-sensitive typing (Gillick et al., 2014; Corro et al., 2015), learning from noisy training data (Abhishek et al., 2017; Ren et al., 2016; Xu and Barbosa, 2018), and exploiting the type hierarchies to improve the learning and inference (Yogatama et al., 2015; Murty et al., 2018). More recent studies support even finer granularity (Choi et al., 2018; Murty et al., 2018).…”
Section: Related Work
confidence: 99%
“…Shimaoka et al. (2017) encode the hierarchy through a sparse matrix. Xu and Barbosa (2018) model the relations through a hierarchy-aware loss function. Ma et al. (2016) and Abhishek et al. (2017) learn embeddings for labels and feature representations into a joint space in order to facilitate information sharing among them.…”
Section: Related Work
confidence: 99%
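The sparse-matrix encoding of the hierarchy mentioned in the first sentence above can be illustrated with a binary ancestor matrix: each type's scoring vector becomes the sum of base embeddings along its path to the root, so parameters are shared between a type and its ancestors. A minimal NumPy sketch under assumed names; the actual construction in Shimaoka et al. (2017) may differ.

```python
import numpy as np

# Hypothetical flat indexing of a tiny type hierarchy.
TYPES = ["/person", "/person/artist", "/person/artist/singer", "/organization"]
PARENT = {0: None, 1: 0, 2: 1, 3: None}  # child index -> parent index

def ancestor_matrix(parent):
    """Binary matrix S with S[t, a] = 1 iff a is t itself or an ancestor of t.

    Multiplying per-type base embeddings by S ties each type's scoring
    vector to its ancestors', sharing parameters along the hierarchy.
    """
    n = len(parent)
    S = np.zeros((n, n))
    for t in range(n):
        a = t
        while a is not None:
            S[t, a] = 1.0
            a = parent[a]
    return S

S = ancestor_matrix(PARENT)            # sparse in realistic hierarchies
rng = np.random.default_rng(0)
V = rng.normal(size=(len(TYPES), 16))  # base embedding per type
W = S @ V                              # hierarchy-tied scoring vectors
print(S.astype(int))
```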