2019
DOI: 10.1609/aaai.v33i01.3301817

A Neural Multi-Task Learning Framework to Jointly Model Medical Named Entity Recognition and Normalization

Abstract: State-of-the-art studies have demonstrated the superiority of joint modeling over pipeline implementation for medical named entity recognition and normalization due to the mutual benefits between the two processes. To exploit these benefits in a more sophisticated way, we propose a novel deep neural multi-task learning framework with explicit feedback strategies to jointly model recognition and normalization. On one hand, our method benefits from the general representations of both tasks provided by multi-task…
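As a rough illustration of the joint setup the abstract describes, the sketch below pairs a shared encoder with two token-level heads, one for recognition tags and one for normalization (concept) IDs. This is a minimal, assumption-laden sketch, not the authors' architecture: the layer sizes, vocabularies, and the omission of the paper's explicit feedback strategies are all simplifications.

```python
# Minimal sketch (not the authors' code) of a multi-task model that shares an
# encoder between medical NER and entity normalization. Layer sizes and the
# tag/concept vocabularies are assumptions; the paper's explicit feedback
# strategies are omitted for brevity.
import torch
import torch.nn as nn

class JointNERNormalizer(nn.Module):
    def __init__(self, vocab_size, num_ner_tags, num_concept_ids,
                 emb_dim=100, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared BiLSTM encoder: provides general representations for both tasks.
        self.encoder = nn.LSTM(emb_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        # Task-specific heads: BIO tagging for recognition,
        # per-token concept-ID tagging for normalization.
        self.ner_head = nn.Linear(hidden_dim, num_ner_tags)
        self.norm_head = nn.Linear(hidden_dim, num_concept_ids)

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        return self.ner_head(states), self.norm_head(states)

model = JointNERNormalizer(vocab_size=30000, num_ner_tags=5, num_concept_ids=12000)
tokens = torch.randint(0, 30000, (2, 16))  # dummy batch of 2 sentences, 16 tokens
ner_logits, norm_logits = model(tokens)
# Joint training would sum the two token-level cross-entropy losses, so gradients
# from each task shape the shared encoder.
```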

Cited by 105 publications (92 citation statements). References 5 publications.
“…[Table excerpt: pre-training corpora/embeddings used by each cited system] Lee et al. [20]: BioBERT v1.1, self-trained BERT on PubMed; Sachan et al. [47]: pre-trained embeddings + self-trained on PubMed; Wang et al. [46]: pre-trained [16]; Xu et al. [49]: word embeddings [50] self-trained on PubMed and PMC; Hong and Lee [51]: pre-trained word embeddings (PubMed) [16, 52]; Beltagy et al. [19]: SciBERT, self-trained BERT on biomedical full texts from Semantic Scholar; Zhao et al. [39]: pre-trained [16, 53-55]; Lou et al. [48]: n/a…”
Section: Citations
mentioning
confidence: 99%
“…Even more ambitious is the task of linking (or grounding) textual mentions and semantic types to unique identifiers of a given terminology or ontology (such as SNOMED-CT, ICD, or the Human Disease Ontology, https://www.ebi.ac.uk/ols/ontologies/doid), an issue we will not elaborate on in this survey, cf. e.g., [39]. [Footnote 7:] Concrete numbers in the column "Number of Mentions," indicating the number of named entity mentions (possibly split into training, development, and test set, if provided), may slightly differ for the same corpus because of data cleansing (e.g., removal of duplicates), different pre-processing (e.g., tokenization), and other version issues.…”
mentioning
confidence: 99%
“…The subsequent layers are inspired by the work of Zhao et al (2019), who propose a multi-task learning framework to jointly tackle span detection (NER) and normalization (NEN). A key step to make NER and NEN compatible was to model NEN as a sequence-labeling problem, where IDs are predicted for each token just like span tags in NER (cf.…”
Section: BiLSTM-based System
mentioning
confidence: 99%
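The per-token ID tagging idea quoted above can be made concrete with a small, purely illustrative example (not taken from either paper): gold mention spans and concept IDs are unrolled into two parallel tag sequences, so the same sequence-labeling machinery serves both NER and NEN. The tokens, entity types, and concept IDs below are hypothetical placeholders.

```python
# Hypothetical illustration of casting normalization as sequence labeling:
# each token receives a concept ID just as it receives a BIO span tag,
# so NER and NEN share one tagging formulation.
tokens = ["Patient", "denies", "chest", "pain", "or", "dyspnea", "."]
# Gold annotations: (start, end, entity type, concept ID); IDs are illustrative.
mentions = [(2, 4, "Symptom", "C0008031"), (5, 6, "Symptom", "C0013404")]

ner_tags = ["O"] * len(tokens)
nen_tags = ["O"] * len(tokens)
for start, end, etype, cui in mentions:
    for i in range(start, end):
        prefix = "B-" if i == start else "I-"
        ner_tags[i] = prefix + etype   # span tag for recognition
        nen_tags[i] = prefix + cui     # concept-ID tag for normalization

print(list(zip(tokens, ner_tags, nen_tags)))
# ..., ('chest', 'B-Symptom', 'B-C0008031'), ('pain', 'I-Symptom', 'I-C0008031'), ...
```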
“…In this work, we take the task a step further from existing monolingual research in a single domain [2,3,6,12,13,20,22] by exploring multilingual transfer between EHRs and UGTs in different languages. Our goal is not to outperform state of the art models on each dataset separately, but to ask whether we can transfer knowledge from a high-resource language, such as English, to a lowresource one, e.g., Russian, for NER of biomedical entities.…”
Section: Introduction
mentioning
confidence: 99%