Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1527

Neural Architectures for Nested NER through Linearization

Abstract: We propose two neural network architectures for nested named entity recognition (NER), a setting in which named entities may overlap and also be labeled with more than one label. We encode the nested labels using a linearized scheme. In our first proposed approach, the nested labels are modeled as multilabels corresponding to the Cartesian product of the nested labels in a standard LSTM-CRF architecture. In the second one, the nested NER is viewed as a sequence-to-sequence problem, in which the input sequence …
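As a rough illustration of the first architecture described in the abstract, the snippet below builds per-token multilabels by joining the BIO tags of every nested entity that covers a token, which yields the kind of Cartesian-product label space a standard LSTM-CRF can predict. This is a minimal sketch under my own assumptions, not the authors' reference implementation; the helper name encode_multilabels, the "|" separator, and the example spans are hypothetical.

# Hypothetical sketch: join the BIO tags of all nested entities covering each
# token into one multilabel, approximating a Cartesian-product label space
# for a standard LSTM-CRF tagger. Not the paper's exact encoding.
def encode_multilabels(tokens, entities):
    """tokens: list of word forms; entities: (start, end, label) spans, end exclusive."""
    per_token = [[] for _ in tokens]
    # Outer (longer) spans first, so inner tags are appended after outer ones.
    for start, end, label in sorted(entities, key=lambda e: (e[0], e[0] - e[1])):
        for i in range(start, end):
            per_token[i].append(("B-" if i == start else "I-") + label)
    # Join the nested tags into a single multilabel; "O" if no entity covers the token.
    return ["|".join(tags) if tags else "O" for tags in per_token]

tokens = ["The", "University", "of", "Washington", "campus"]
entities = [(1, 4, "ORG"), (3, 4, "LOC")]  # nested: LOC inside ORG
print(encode_multilabels(tokens, entities))
# ['O', 'B-ORG', 'I-ORG', 'I-ORG|B-LOC', 'O']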

Cited by 205 publications (141 citation statements). References 21 publications.
“…Lin et al [34] proposed a sequence-to-nugget architecture that uses a head-driven phrase structure for nested NE recognition. In Table 7, BERT is used in Xia et al [55], Fisher et al [56], Shibuya et al [57], Straková et al [59] and Jue et al [84]. Compared with them, our model achieves state-of-the-art performance in the task of nested NE recognition.…”
Section: Comparing With Other Methods
Citation type: mentioning (confidence: 99%)
“…Lin et al [34] proposed a sequence-to-nugget architecture that uses a head-driven phrase structure for nested NE recognition. Recently, pretrained language models have shown valuable potential to improve performance [55]-[59], [84].…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…Their method used a multitask mechanism to improve the recall of clinical named-entity recognition. Straková [24] proposed two neural network architectures for nested NER, and ELMo contextual embeddings were used to enrich their architectures. Dogan et al [25] proposed a framework that incorporated deep learning models of ELMo with Wikidata to address the lack of datasets for the task of fine-grained NER.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…We use a novel approach [37] for nested named entity recognition (NER) to capture the nested entities in the Czech Named Entity Corpus. The nested entities are encoded in a sequence, and the problem of nested NER is then viewed as a sequence-to-sequence (seq2seq) problem, in which the input sequence consists of the input tokens (forms) and the output sequence consists of the linearized entity labels.…”
Section: Named Entity Recognition
Citation type: mentioning (confidence: 99%)
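The excerpt above summarizes the second architecture: the nested labels are flattened into one output sequence that a seq2seq decoder emits alongside the input tokens. A minimal sketch of such a flattening follows; the end-of-word marker "<eow>", the function name linearize_labels, and the example data are my own assumptions, not necessarily the paper's exact scheme.

# Hypothetical sketch: flatten nested BIO labels into a single output sequence
# for a seq2seq model, emitting all labels of a token followed by an
# end-of-word marker. "<eow>" is a placeholder name, not the paper's notation.
def linearize_labels(tokens, entities, eow="<eow>"):
    per_token = [[] for _ in tokens]
    # Outer (longer) spans first, so inner tags follow outer ones for each token.
    for start, end, label in sorted(entities, key=lambda e: (e[0], e[0] - e[1])):
        for i in range(start, end):
            per_token[i].append(("B-" if i == start else "I-") + label)
    out = []
    for tags in per_token:
        out.extend(tags if tags else ["O"])
        out.append(eow)  # the decoder moves on to the next input token here
    return out

tokens = ["The", "University", "of", "Washington", "campus"]
entities = [(1, 4, "ORG"), (3, 4, "LOC")]
print(linearize_labels(tokens, entities))
# ['O', '<eow>', 'B-ORG', '<eow>', 'I-ORG', '<eow>', 'I-ORG', 'B-LOC', '<eow>', 'O', '<eow>']

In this reading, the token forms are the encoder input and the flat label sequence is the decoding target; at prediction time the labels emitted between consecutive end-of-word markers are reassembled into possibly nested spans.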