Proceedings of the 20th Workshop on Biomedical Language Processing 2021
DOI: 10.18653/v1/2021.bionlp-1.2
Triplet-Trained Vector Space and Sieve-Based Search Improve Biomedical Concept Normalization

Abstract: Concept normalization, the task of linking textual mentions of concepts to concepts in an ontology, is critical for mining and analyzing biomedical texts. We propose a vector-space model for concept normalization, where mentions and concepts are encoded via transformer networks that are trained via a triplet objective with online hard triplet mining. The transformer networks refine existing pre-trained models, and the online triplet mining makes training efficient even with hundreds of thousands of concepts by…
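The abstract's central technique, a triplet objective with online hard triplet mining, can be sketched in a few lines of PyTorch. The sketch below is a minimal illustration, not the authors' released code: the batch-hard mining strategy, the Euclidean distance metric, the margin value, and the function name are all assumptions chosen for clarity.

```python
# Minimal sketch (not the authors' code) of batch-hard online triplet mining:
# every embedding in the batch serves as an anchor, paired with its hardest
# in-batch positive and hardest in-batch negative.
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(embeddings: torch.Tensor,
                            labels: torch.Tensor,
                            margin: float = 1.0) -> torch.Tensor:
    # Pairwise Euclidean distances between all embeddings in the batch.
    dists = torch.cdist(embeddings, embeddings, p=2)

    # Positive pairs share a concept label; exclude each anchor itself.
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask = same & ~eye
    neg_mask = ~same

    # Hardest positive: the farthest same-label embedding.
    # (Assumes each anchor has at least one in-batch positive.)
    hardest_pos = dists.masked_fill(~pos_mask, 0.0).max(dim=1).values
    # Hardest negative: the closest different-label embedding.
    hardest_neg = dists.masked_fill(~neg_mask, float("inf")).min(dim=1).values

    # Triplet hinge: positives should sit at least `margin` closer
    # to the anchor than negatives.
    return F.relu(hardest_pos - hardest_neg + margin).mean()

# Toy usage: four embeddings covering two concept labels.
emb = torch.randn(4, 8)
lab = torch.tensor([0, 0, 1, 1])
print(batch_hard_triplet_loss(emb, lab))
```

In the paper's setting the labels would be ontology concept identifiers, so mining triplets online within each batch pulls a mention's embedding toward other surface forms of the same concept and pushes it away from near-miss concepts, without ever enumerating triplets over the full ontology.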

Cited by 5 publications (1 citation statement)
References 38 publications (70 reference statements)
“…The authors used a BiLSTM-CRF tagger for NER and adopted a graph-based model to rank concept candidates for each entity mention. Many other recent works have also pursued hybrid approaches (59) and edit patterns (60), analyzed the problem of ambiguity (61), explored transformer networks trained via a triplet objective (62) and multi-task frameworks (63), and experimented with large-scale datasets (64).…”
Section: Related Work
Citation type: mentioning (confidence: 99%)