Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1034

A Boundary-aware Neural Model for Nested Named Entity Recognition

Abstract: In natural language processing, it is common that many entities contain other entities inside them. Most existing works on named entity recognition (NER) only deal with flat entities but ignore nested ones. We propose a boundary-aware neural model for nested NER which leverages entity boundaries to predict entity categorical labels. Our model can locate entities precisely by detecting boundaries using sequence labeling models. Based on the detected boundaries, our model utilizes the boundary-relevant regions t…
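The abstract's boundary-then-classify idea can be sketched in a few lines: a sequence-labeling model (not shown here) first flags candidate entity start and end tokens, and every (start, end) pair within a length cutoff becomes a region to classify. The function name, the boolean-flag representation, and the `max_len` cutoff below are illustrative assumptions, not details taken from the paper.

```python
def candidate_regions(start_flags, end_flags, max_len=8):
    """Pair each predicted start boundary with each subsequent predicted
    end boundary within max_len tokens. The flags stand in for the output
    of a (hypothetical) sequence-labeling boundary detector; each returned
    (start, end) span is a candidate region for entity classification."""
    regions = []
    for i, is_start in enumerate(start_flags):
        if not is_start:
            continue
        for j in range(i, min(i + max_len, len(end_flags))):
            if end_flags[j]:
                regions.append((i, j))
    return regions

# Tokens: "the epidermal growth factor receptor gene"
#           0      1       2      3       4      5
starts = [False, True, False, False, False, False]
ends   = [False, False, False, False, True, True]
print(candidate_regions(starts, ends))
# → [(1, 4), (1, 5)]
```

Note how two candidate spans share the same start boundary: this is exactly what lets boundary-based region enumeration recover nested entities that a flat tagger would collapse into one.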

Cited by 113 publications (126 citation statements). References 24 publications.
“…Each layer has a CRF output generating predictions for the current level of nesting, and its hidden states are passed to the next layer until no new entities are detected. Zheng et al. [25] trained an LSTM-based multitask model to jointly detect the boundaries of named entities and classify them. Recent research has started to utilize the transformer [26] architecture.…”
Section: Neural Methods
confidence: 99%
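The stacked-layer scheme quoted above (one CRF output per nesting level, stopping when a layer detects nothing new) reduces to a simple control loop. The sketch below captures only that loop; `decode_layer` is a stand-in for a real LSTM+CRF layer, and the toy layer, entity tuples, and `max_depth` cap are assumptions for illustration.

```python
def layered_decode(tokens, decode_layer, max_depth=4):
    """Run one nesting level per layer, mirroring the stacked-CRF control
    flow described in the citation: each call decodes one level and
    returns an updated state for the next layer; stacking stops once a
    layer yields no entities that were not already found."""
    found, seen = [], set()
    state = tokens
    for _ in range(max_depth):
        entities, state = decode_layer(state)
        new = [e for e in entities if e not in seen]
        if not new:  # no new entities at this level: stop stacking
            break
        seen.update(new)
        found.extend(new)
    return found

def make_toy_layer():
    """Toy stand-in that emits innermost entities first, then the
    enclosing one, then nothing (triggering the stop condition)."""
    levels = iter([[("protein", 1, 4)], [("DNA", 1, 5)], []])
    def layer(state):
        return next(levels, []), state
    return layer

print(layered_decode(["tok"] * 6, make_toy_layer()))
# → [('protein', 1, 4), ('DNA', 1, 5)]
```

The early-exit check is the key design point: it lets the model handle arbitrary nesting depth up to the cap without fixing the number of levels in advance.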
“…More details can be found in the PolEval subsection.

Method                     P     R     F1
[11]                       74.5  66.0  70.0
Finkel and Manning [12]    75.4  65.9  70.3
Lu and Roth [13]           74.2  66.7  70.3
Muis and Lu [14]           75.4  66.8  70.8
Wang and Lu [15]           76.2  67.5  71.6
Neural methods
Xu et al. [17]             71.2  64.3  67.6
Katiyar and Cardie [16]    79.8  68.2  73.6
Ju et al. [24]             78.5  71.3  74.7
Wang et al. [22]           78.0  70.2  73.9
Wang and Lu [15]           77.0  73.3  75.1
Sohrab and Miwa [18]       93.2  64.0  77.1
Marinho et al. [23]        74.0  72.0  73.0
Lin et al. [19]            75.8  73.9  74.8
Zheng et al. [25]          75.9  73.6  74.7
Sun et al. [27]            77.4  74.9  76.2
Shibuya and Hovy [28]      78    …     …

To make a fair comparison with other publications, we preprocessed the corpus following the guidance of Finkel and Manning [12]. Their procedure, which involved splitting the dataset and reducing the number of entity types, was reused by most of the studies included in our comparison.…”
Section: A. Hyperparameter Selection
confidence: 99%