2022
DOI: 10.1016/j.jbi.2021.103983

CODER: Knowledge-infused cross-lingual medical term embedding for term normalization

Cited by 61 publications (58 citation statements)
References 34 publications

“…However, in the task to measure the degree of contextual relatedness and similarity between biomedical and clinical terms, they showed decreased performance (Table 2). Furthermore, according to our findings on BERT models, CODER (Yuan et al, 2022) has better performance, probably because it encodes most of the words without splitting them into their sub-words, as shown in Table 4.…”
Section: Discussion
confidence: 73%
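
The sub-word splitting point is easy to check directly with a tokenizer. The snippet below is a minimal sketch assuming the Hugging Face `transformers` package and the general-domain `bert-base-uncased` checkpoint (chosen only for illustration; it is not one of the models compared in the excerpt): terms missing from the vocabulary are broken into word pieces.

```python
# Minimal sketch: inspect how a tokenizer splits medical terms into sub-words.
# Requires the Hugging Face `transformers` package; "bert-base-uncased" is an
# illustrative general-domain checkpoint, not a model evaluated in the excerpt.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

for term in ["reactive arthritis", "hyperlipidemia", "myocardial infarction"]:
    pieces = tokenizer.tokenize(term)
    # A term absent from the vocabulary is split into several word pieces
    # (prefixed with "##"), which can dilute its embedding.
    print(f"{term!r} -> {pieces}")
```
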
“…MedTCS outperformed BERT-based models by a significant margin in terms of correlation scores on the UMNSRS-Similarity dataset (Table 2). Moreover, in Table 3, we have compared our best achieved results with recently reported scores on the UMNSRS datasets (Mao and Fung, 2020; Singh and Jin, 2020; Yuan et al, 2022). MedTCS achieved significantly better coverage and correlation scores.…”
Section: Intrinsic Evaluation
confidence: 92%
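
For context, correlation scores on datasets such as UMNSRS-Similarity are typically obtained by comparing model cosine similarities against human ratings. The sketch below shows that evaluation loop with a placeholder encoder and made-up term pairs and ratings (none of it is actual UMNSRS data).

```python
# Minimal sketch of an intrinsic similarity evaluation: Spearman correlation
# between model cosine similarities and human ratings. `embed` is a placeholder
# encoder; the pairs and ratings are illustrative, not UMNSRS items.
import numpy as np
from scipy.stats import spearmanr

def embed(term: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(term)) % (2**32))  # stand-in encoder
    return rng.normal(size=768)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

pairs = [("renal failure", "kidney failure"),
         ("myocardial infarction", "heart attack"),
         ("asthma", "bone fracture"),
         ("diabetes", "headache")]
human_ratings = [9.2, 8.8, 1.5, 1.1]  # illustrative values only

model_similarities = [cosine(embed(a), embed(b)) for a, b in pairs]
rho, _ = spearmanr(model_similarities, human_ratings)
print(f"Spearman correlation: {rho:.3f}")
```
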
“…We want to leverage synonym knowledge from the KB to enhance the model's performance. Injecting synonym knowledge into encoder-only models can be done by contrastive learning (Liu et al, 2021a; Yuan et al, 2022). However, such a paradigm cannot directly apply to encoder-decoder architectures, as entities are not represented by dense embeddings.…”
Section: KB-guided Pre-training
confidence: 99%
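
As a rough illustration of the contrastive paradigm the excerpt refers to, the sketch below pulls together the embeddings of two names that share a concept using an InfoNCE-style loss with in-batch negatives. It is a generic sketch of the idea, not the actual CODER training recipe; the function name and hyperparameters are assumptions.

```python
# Minimal sketch of synonym-driven contrastive learning: embeddings of two
# names of the same concept (e.g. two UMLS synonyms of one CUI) form a
# positive pair; the other names in the batch act as negatives.
# This is a generic InfoNCE-style sketch, not CODER's exact objective.
import torch
import torch.nn.functional as F

def synonym_contrastive_loss(anchor: torch.Tensor,
                             positive: torch.Tensor,
                             temperature: float = 0.05) -> torch.Tensor:
    """anchor, positive: [batch, dim] embeddings; row i of each is a synonym pair."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.T / temperature   # [batch, batch] similarity matrix
    targets = torch.arange(anchor.size(0))       # matching synonym is on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random tensors standing in for encoder outputs.
loss = synonym_contrastive_loss(torch.randn(8, 768), torch.randn(8, 768))
print(float(loss))
```
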
“…Recent methods in biomedical EL mainly used neural networks to encode mentions and each concept name into the same dense space, then linked mentions to corresponding concepts depending on embedding similarities (Sung et al, 2020; Lai et al, 2021; Bhowmik et al, 2021; Ujiie et al, 2021; Agarwal et al, 2021). Synonym knowledge has been injected into these similarity-based methods by contrastive learning (Liu et al, 2021a; Yuan et al, 2022). For example, in UMLS, concept C0085435 has the synonyms Reiter syndrome, Reactive arthritis and ReA, which help models learn different names of a concept entity.…”
Section: Introduction
confidence: 99%
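
The similarity-based linking described above can be sketched in a few lines: encode the mention and every concept name into one space and return the concept whose best-matching synonym is closest. The encoder below is a random placeholder, and the tiny dictionary reuses only the C0085435 example from the excerpt.

```python
# Minimal sketch of embedding-similarity entity linking: the mention and all
# concept names live in the same vector space; the predicted concept is the
# one whose closest synonym has the highest cosine similarity. `embed` is a
# random placeholder for a trained encoder.
import numpy as np

def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text.lower())) % (2**32))
    v = rng.normal(size=768)
    return v / np.linalg.norm(v)

# Toy concept dictionary (the C0085435 synonyms come from the excerpt above).
concepts = {
    "C0085435": ["Reiter syndrome", "Reactive arthritis", "ReA"],
}

def link(mention: str) -> str:
    m = embed(mention)
    scores = {cui: max(float(m @ embed(name)) for name in names)
              for cui, names in concepts.items()}
    return max(scores, key=scores.get)

print(link("reactive arthritis"))  # -> "C0085435" (trivially, with one concept)
```
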
“…The entity linking task has labeled entities in texts, while ICD coding only provides texts. Synonyms have also been used in biomedical entity linking (Sung et al, 2020; Yuan et al, 2022). BioSYN (Sung et al, 2020) uses marginalization to sum the probabilities of all synonyms as the similarity between a term and a concept.…”
Section: Memory Complexity
confidence: 99%
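
The marginalization idea the excerpt attributes to BioSYN can be paraphrased as summing, rather than maximizing, the probability mass over a concept's synonyms. The sketch below shows that aggregation with toy scores and toy concept identifiers; it paraphrases the described mechanism and is not BioSYN's actual implementation.

```python
# Minimal sketch of marginalizing over synonyms: softmax-normalize the
# mention-vs-name scores, then sum the probabilities of the names belonging
# to each concept; the concept score is that marginal probability.
# Scores and concept IDs below are toy values for illustration.
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

name_scores = np.array([4.2, 3.9, 0.5, 1.1])            # mention vs. each dictionary name
name_to_concept = ["C0085435", "C0085435", "CUI_X", "CUI_X"]

probs = softmax(name_scores)
concept_prob: dict[str, float] = {}
for p, cui in zip(probs, name_to_concept):
    concept_prob[cui] = concept_prob.get(cui, 0.0) + float(p)  # sum over synonyms

best = max(concept_prob, key=concept_prob.get)
print(best, concept_prob)
```
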