2020
DOI: 10.1162/tacl_a_00334
Nested Named Entity Recognition via Second-best Sequence Learning and Decoding

Abstract: When an entity name contains other names within it, the identification of all combinations of names can become difficult and expensive. We propose a new method to recognize not only outermost named entities but also inner nested ones. We design an objective function for training a neural model that treats the tag sequence for nested entities as the second best path within the span of their parent entity. In addition, we provide the decoding method for inference that extracts entities iteratively from outermost…
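The abstract describes recovering a nested entity's tag sequence as the second-best path within its parent entity's span. A minimal sketch of the underlying idea — k-best Viterbi decoding (k = 2) over a linear-chain score model — is given below. The function name, score matrices, and values are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def k_best_viterbi(emissions, transitions, k=2):
    """Return the k highest-scoring label paths for a linear-chain model.

    emissions:   (T, L) array, per-position label scores
    transitions: (L, L) array, transitions[i, j] = score of label i -> label j
    """
    T, L = emissions.shape
    # scores[t][l]: up to k (score, backpointer) pairs for paths ending in
    # label l at position t; backpointer = (prev_label, rank_at_prev_label)
    scores = [[[] for _ in range(L)] for _ in range(T)]
    for l in range(L):
        scores[0][l] = [(emissions[0, l], None)]
    for t in range(1, T):
        for l in range(L):
            cands = []
            for p in range(L):
                for r, (s, _) in enumerate(scores[t - 1][p]):
                    cands.append((s + transitions[p, l] + emissions[t, l], (p, r)))
            cands.sort(key=lambda x: -x[0])
            scores[t][l] = cands[:k]  # keep only the k best per state
    # collect final candidates across all end labels, then backtrack
    finals = []
    for l in range(L):
        for r, (s, _) in enumerate(scores[T - 1][l]):
            finals.append((s, (l, r)))
    finals.sort(key=lambda x: -x[0])
    paths = []
    for s, (l, r) in finals[:k]:
        path, t = [], T - 1
        while t >= 0:
            path.append(l)
            _, bp = scores[t][l][r]
            if bp is None:
                break
            l, r = bp
            t -= 1
        paths.append((s, path[::-1]))
    return paths
```

In the paper's setting, the best path inside a detected entity's span gives the outer entity, and the second-best path is decoded recursively within that span to propose inner entities.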

Cited by 90 publications (82 citation statements)
References 32 publications
“…Although the second-best path searching algorithm is proposed as the main contribution of Shibuya and Hovy (2020), we claim that forcing the target path at the next level to be the second-best path at the current level is not optimal. As in the innermost-first encoding example above, the best path at level 3 is B-ROLE, I-ROLE, E-ROLE, O, O.…”
Section: Influence of the Best Path
confidence: 98%
“…Therefore the second-best path is more likely to be one of those paths that share as many labels as possible with the best path, e.g., B-ROLE, I-ROLE, E-ROLE, O, S-ORG, rather than the actual target label sequence at level 4, i.e., B-PER, I-PER, I-PER, I-PER, E-PER, which does not overlap with the best path at all. In addition, Shibuya and Hovy (2020) reuse the same potential function at all higher levels. This indicates that, for instance, at level 3 and time step 1, their model encourages the dot product of the hidden state and the label embedding, h1 · v_B-ROLE, to be larger than h1 · v_B-PER, while at level 4, the remaining influence of the best path reversely forces h1 · v_B-PER to be larger than h1 · v_B-ROLE.…”
Section: Influence of the Best Path
confidence: 99%
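The quote above argues that a shared potential function imposes contradictory orderings on the same dot products at different nesting levels. A tiny numeric illustration, with hypothetical 2-dimensional hidden state and label embeddings (all values invented for illustration):

```python
import numpy as np

# Hypothetical shared hidden state at time step 1 and two label embeddings.
h1 = np.array([1.0, 0.5])
v_b_role = np.array([0.8, 0.2])
v_b_per = np.array([0.3, 0.9])

score_role = h1 @ v_b_role  # dot product h1 . v_B-ROLE
score_per = h1 @ v_b_per    # dot product h1 . v_B-PER

# With one shared h1 and shared label embeddings, the ordering of these two
# scores is fixed. If level 3 training pushes score_role > score_per, the
# level-4 target at the same time step asks for the opposite inequality, so
# the two objectives cannot both be satisfied with reused potentials.
print(score_role, score_per)
```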
“…Fisher and Vlachos (2019) introduced a BERT-based model that first merges tokens and/or entities into entities, and then assigns labels to these entities. Shibuya and Hovy (2019) provided an inference model that extracts entities iteratively from outermost ones to inner ones. Straková et al. (2019) viewed nested NER as a sequence-to-sequence generation problem, in which the input sequence is a list of tokens and the target sequence is a list of labels.…”
Section: Nested Named Entity Recognition
confidence: 99%
“…• DYGIE: Luan et al. (2019) introduces a general framework that shares span representations using dynamically constructed span graphs.

Model                                  P      R      F1
(Wang and Lu, 2018)                    76.8   72.3   74.5
ARN (Lin et al., 2019a)                76.2   73.6   74.9
Path-BERT (Shibuya and Hovy, 2019)     82.98  82.42  82.70
Merge-BERT (Fisher and Vlachos, 2019)

We use micro-averaged precision, recall and F1 scores for evaluation.…”
Section: Baselines
confidence: 99%