Proceedings of the 5th Workshop on BioNLP Open Shared Tasks 2019
DOI: 10.18653/v1/d19-5721

An ensemble CNN method for biomedical entity normalization

Abstract: Different representations of the same concept can often be seen in scientific reports and publications. Entity normalization (or entity linking) is the task of matching these different representations to their standard concepts. In this paper, we present a two-step ensemble CNN method that normalizes microbiology-related entities in free text to concepts in standard dictionaries. The method is capable of linking entities when only a small microbiology-related biomedical corpus is available for training, and ac…
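The abstract names a two-step ensemble CNN method but is truncated here, so the sketch below only illustrates the ensemble idea in general terms: several independently trained CNN classifiers each predict a concept identifier for a mention, and the ensemble returns the majority vote. The `StubCNNClassifier` class and the toy `OBT:` identifiers are assumptions for illustration, not the authors' code or data.

```python
from collections import Counter

class StubCNNClassifier:
    """Stand-in for one trained CNN mention classifier (illustrative only)."""
    def __init__(self, fixed_prediction):
        self.fixed_prediction = fixed_prediction

    def predict(self, mention_tokens):
        # A real model would map token embeddings to a concept id;
        # the stub returns a fixed answer so the voting logic can be shown.
        return self.fixed_prediction

def ensemble_normalize(mention_tokens, classifiers):
    """Majority vote over the concept ids predicted by the individual models."""
    votes = [clf.predict(mention_tokens) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

models = [StubCNNClassifier("OBT:000111"),
          StubCNNClassifier("OBT:000111"),
          StubCNNClassifier("OBT:000222")]
print(ensemble_normalize(["human", "gut"], models))  # -> OBT:000111
```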

Cited by 17 publications (18 citation statements)
References 25 publications

“…In fact, in specialized domains, shallow convolutional neural networks (CNNs) have been used with some success in normalization tasks with small training data. Their purpose is to calculate an intermediate representation of an expression from the embeddings of their tokens [29], or to detect specific tokens (or contiguous sequences of tokens) that could trigger a specific class [30]. There still seems to be considerable room for improvement.…”
Section: Machine-learning and Vector-based Methods
Citation type: mentioning (confidence: 99%)
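The passage above describes shallow CNNs that build an intermediate representation of a mention from its token embeddings [29]. The sketch below shows one minimal way such an encoder could look in PyTorch, with normalization cast as nearest-concept retrieval; the dimensions, the random toy token ids, and the cosine-similarity ranking step are illustrative assumptions, not the cited systems.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowCNNEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=50, num_filters=100, kernel_size=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # A single convolutional layer over token embeddings ("shallow" CNN).
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, token_ids):                     # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)     # (batch, emb_dim, seq_len)
        x = F.relu(self.conv(x))                      # (batch, num_filters, seq_len)
        return F.max_pool1d(x, x.size(2)).squeeze(2)  # (batch, num_filters)

# Normalization as retrieval: encode the mention and every dictionary concept
# name, then rank concepts by cosine similarity to the mention vector.
encoder = ShallowCNNEncoder(vocab_size=1000)
mention = torch.randint(1, 1000, (1, 6))      # one tokenized mention (toy ids)
concepts = torch.randint(1, 1000, (500, 6))   # 500 tokenized concept names (toy ids)
scores = F.cosine_similarity(encoder(mention), encoder(concepts))
predicted_concept = scores.argmax().item()
```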
“…In contrast, vector-based machine learning methods achieve better recall. Thus, a common sieve strategy is to first use a method with a high accuracy, preserve the predictions and pass the mentions without prediction (or the mentions with predictions estimated as uncertain) onto another method [29,31]. A limitation of combining methods in this pipelined way is that the second method will not get the opportunity to make a prediction for every entity and prediction errors from the first method are propagated.…”
Section: Sieve-based and Ensemble Approaches
Citation type: mentioning (confidence: 99%)
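The sieve strategy described in this passage can be made concrete with a small sketch: a high-precision exact dictionary lookup runs first, and only mentions it leaves unresolved are handed to a more permissive fallback. Here a simple string-similarity matcher stands in for the higher-recall vector-based second step; the toy dictionary, the fallback choice, and the 0.8 threshold are illustrative assumptions rather than the approach of [29] or [31].

```python
from difflib import SequenceMatcher

DICTIONARY = {                       # toy concept dictionary: name -> concept id
    "escherichia coli": "NCBITaxon:562",
    "human gut": "OBT:000111",
}

def exact_sieve(mention):
    """High-precision step: return a concept only on an exact (normalized) match."""
    return DICTIONARY.get(mention.lower().strip())

def fuzzy_sieve(mention, threshold=0.8):
    """Fallback step (higher recall): closest dictionary name by string similarity."""
    best_name, best_score = None, 0.0
    for name in DICTIONARY:
        score = SequenceMatcher(None, mention.lower(), name).ratio()
        if score > best_score:
            best_name, best_score = name, score
    return DICTIONARY[best_name] if best_score >= threshold else None

def normalize(mention):
    # Predictions from the first sieve are kept; unresolved mentions fall through.
    return exact_sieve(mention) or fuzzy_sieve(mention)

print(normalize("Escherichia coli"))    # resolved by the exact sieve
print(normalize("Escherichia colli"))   # misspelling: only the fallback resolves it
```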
“…Recently, researchers have investigated disease name normalization using deep learning [26]. Deng et al. [27] presented a two-step ensemble CNN method that normalizes microbiology-related entities, and achieved reasonable performance in the BioNLP-19 task Bacteria Biotope. Karadeniz and Özgür [28] proposed an unsupervised method for entity linking tasks using word embeddings and a syntactic parser.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)