Proceedings of the 2nd Clinical Natural Language Processing Workshop 2019
DOI: 10.18653/v1/w19-1912
Medical Entity Linking using Triplet Network

Abstract: Entity linking (or normalization) is an essential task in text mining that maps entity mentions in medical text to standard entities in a given Knowledge Base (KB). This task is of great importance in the medical domain, and it can also be used for merging different medical and clinical ontologies. In this paper, we focus on the problem of disease linking, or normalization, which is executed in two phases: candidate generation and candidate scoring. We present an approach to rank the…

Cited by 39 publications (18 citation statements); references 12 publications.
“…To alleviate the problems of classification-based approaches, researchers apply learning to rank in concept normalization, a two-step framework including a non-trained candidate generator and a supervised candidate ranker that takes both mention and candidate concept as input. Previous candidate rankers have used point-wise learning to rank (Li et al, 2017), pair-wise learning to rank (Leaman et al, 2013;Liu and Xu, 2017;Nguyen et al, 2018;Mondal et al, 2019), and list-wise learning to rank (Murty et al, 2018;Ji et al, 2020;Xu et al, 2020). These learning to rank approaches also have drawbacks.…”
Section: Related Work
confidence: 99%
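The two-step learning-to-rank framework described in the statement above can be sketched in a few lines: a non-trained candidate generator retrieves plausible KB concepts cheaply, and a ranker then re-scores (mention, candidate) pairs. This is a toy illustration, not the cited systems' implementation; the KB entries, CUIs, and the character-bigram Jaccard scorer are all invented for the example (a real ranker would be a supervised model).

```python
# Toy sketch of the two-step concept-normalization pipeline:
# step 1 = untrained candidate generation, step 2 = candidate ranking.
# KB contents and the Jaccard scorer are illustrative assumptions.

def bigrams(s):
    s = s.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def jaccard(a, b):
    A, B = bigrams(a), bigrams(b)
    return len(A & B) / len(A | B) if A | B else 0.0

KB = {
    "C0011849": "diabetes mellitus",
    "C0020538": "hypertensive disease",
    "C0003873": "rheumatoid arthritis",
}

def generate_candidates(mention, k=2):
    """Step 1: cheap, non-trained retrieval of the top-k KB concepts."""
    scored = sorted(KB.items(), key=lambda kv: jaccard(mention, kv[1]),
                    reverse=True)
    return scored[:k]

def rank_candidates(mention, candidates):
    """Step 2: a supervised ranker would re-score (mention, concept) pairs;
    here the Jaccard score stands in for that model."""
    return max(candidates, key=lambda kv: jaccard(mention, kv[1]))

cui, name = rank_candidates("diabetes", generate_candidates("diabetes"))
print(cui)  # → C0011849
```

The split matters because the generator must be fast over the whole KB, while the ranker can afford an expensive model over only a handful of candidates.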
“…Online hard triplet mining allows such a vector space model to generate triplets of (mention, true concept, false concept) within a mini-batch, leading to efficient training and fast convergence (Schroff et al, 2015). In contrast with previous vector space models where mention and candidate concepts are mapped to vectors via TF-IDF (Leaman et al, 2013), TreeLSTMs (Liu and Xu, 2017), CNNs (Nguyen et al, 2018;Mondal et al, 2019) or ELMO (Schumacher et al, 2020), we generate vector representations with BERT (Devlin et al, 2019), since it can encode both surface and semantic information (Ma et al, 2019).…”
Section: Related Work
confidence: 99%
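The online ("batch-hard") triplet mining mentioned above can be shown in a minimal NumPy sketch: within a mini-batch, each anchor is paired with its hardest positive (farthest same-label embedding) and hardest negative (closest different-label embedding), and the triplet margin loss is averaged over anchors. The embeddings, labels, and margin below are toy values for illustration, not taken from the paper.

```python
# Minimal batch-hard triplet mining (Schroff et al., 2015 style) in NumPy.
# Toy 2-D embeddings; in the cited work the embeddings come from BERT.
import numpy as np

def batch_hard_triplet_loss(emb, labels, margin=0.2):
    # Pairwise squared Euclidean distances within the mini-batch.
    diff = emb[:, None, :] - emb[None, :, :]
    dist = (diff ** 2).sum(-1)
    same = labels[:, None] == labels[None, :]
    n = len(labels)
    # Hardest positive: farthest same-label embedding (excluding self).
    pos = dist.copy()
    pos[~same | np.eye(n, dtype=bool)] = -np.inf
    hardest_pos = pos.max(axis=1)
    # Hardest negative: closest different-label embedding.
    neg = dist.copy()
    neg[same] = np.inf
    hardest_neg = neg.min(axis=1)
    # Standard triplet margin loss, averaged over anchors.
    return np.maximum(hardest_pos - hardest_neg + margin, 0.0).mean()

emb = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
labels = np.array([0, 0, 1, 1])
loss = batch_hard_triplet_loss(emb, labels)
print(loss)  # → 0.0, since the two clusters are already well separated
```

Mining triplets inside the mini-batch avoids precomputing all O(n³) triplets over the dataset, which is what makes training efficient and convergence fast.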
“…During token replacement, we need the entire entity to be replaced, but the MLM model (token-level replacement) fails to generate a correct synonym of the entity that fits the context. So, we need a BioNER + entity linker (Martins et al., 2019; Mondal et al., 2019) to link the entity to an ontology for generating correct synonyms.…”
Section: Introduction
confidence: 99%
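The augmentation idea in that statement, replacing a whole entity mention via an ontology lookup rather than token-by-token MLM replacement, can be sketched as follows. The ontology entries and synonym lists here are invented for illustration; a real pipeline would first run a BioNER model and an entity linker to find the mention span and its concept.

```python
# Sketch of entity-level synonym replacement: link the full mention to a
# (hypothetical) ontology entry and substitute a listed synonym, so the
# entire entity is replaced coherently instead of one token at a time.

ONTOLOGY = {
    "myocardial infarction": ["heart attack", "MI"],
    "hypertension": ["high blood pressure"],
}

def entity_synonym_replace(sentence, mention):
    """Replace the full entity mention with an ontology synonym, if linked."""
    synonyms = ONTOLOGY.get(mention.lower())
    if not synonyms:
        return sentence  # unlinked mentions are left unchanged
    return sentence.replace(mention, synonyms[0])

out = entity_synonym_replace(
    "Patient admitted with myocardial infarction.", "myocardial infarction")
print(out)  # → Patient admitted with heart attack.
```

Because the replacement unit is the linked concept rather than a single subword token, the substituted text stays a valid entity name instead of a context-plausible but wrong token.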