2020
DOI: 10.1186/s12911-020-1045-z

Distributed representation and one-hot representation fusion with gated network for clinical semantic textual similarity

Abstract: Background Semantic textual similarity (STS) is a fundamental natural language processing (NLP) task that can be widely used in many NLP applications such as Question Answering (QA), Information Retrieval (IR), etc. It is a typical regression problem, and almost all STS systems use either distributed representation or one-hot representation to model sentence pairs. Methods In this paper, we propose a novel framework based on a gated network to fuse distributed representation and one-hot representation of sente…
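The abstract is truncated, but the gating idea it describes can be sketched in a few lines. This is a minimal, hypothetical illustration — the gate equation, parameter names (`W_g`, `b_g`), and dimensions are assumptions, not the paper's actual architecture: a learned sigmoid gate decides, element by element, how much of the distributed representation versus the (projected) one-hot representation to keep.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(dist_vec, onehot_vec, W_g, b_g):
    """Hypothetical gated fusion: g = sigmoid(W_g @ [d; o] + b_g),
    fused = g * d + (1 - g) * o. W_g and b_g would be learned in training."""
    concat = np.concatenate([dist_vec, onehot_vec])
    g = sigmoid(W_g @ concat + b_g)          # element-wise gate in (0, 1)
    return g * dist_vec + (1.0 - g) * onehot_vec

# Toy example: 4-dimensional representations, random (untrained) gate weights.
rng = np.random.default_rng(0)
d = rng.normal(size=4)                       # distributed (embedding-based) vector
o = rng.normal(size=4)                       # one-hot features, projected to 4-d
W_g = rng.normal(size=(4, 8))
b_g = np.zeros(4)
fused = gated_fusion(d, o, W_g, b_g)
assert fused.shape == (4,)
```

Because the gate is a sigmoid, each fused coordinate is a convex combination of the two input coordinates, so the fusion can interpolate smoothly between the two representations.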

Cited by 25 publications (13 citation statements)
References 18 publications
“…We mainly focused on clinical concept extraction, which is a word-level NLP task. Recent studies from the general English domain have explored BERT for sentence-level NLP tasks and reported promising results for semantic textual similarity, 68 clinical records classification, 69 relation extraction, 70,71 and question-answering. 72 Further studies should examine the transformer-based models for sentence-level and document-level NLP tasks.…”
Section: Discussion
confidence: 99%
“…Tang et al [30] have demonstrated that combining representations derived from different models is an efficient strategy in clinical STS. We explored similar strategies to combine sentence-level distributed representations, including vector concatenation, average pooling, max pooling, and convolution.…”
Section: Thank You For Choosing the Name MD Care Team For Your Heal…
confidence: 99%
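The combination strategies named in the statement above — vector concatenation, average pooling, and max pooling over sentence-level representations — are standard operations and can be sketched directly. The two input vectors here are hypothetical stand-ins for representations produced by two different encoders:

```python
import numpy as np

# Two sentence-level representations of the same sentence (hypothetical 4-d
# vectors, e.g. from two different encoders).
r1 = np.array([0.2, -1.0, 0.5, 3.0])
r2 = np.array([1.0,  0.0, 0.5, 1.0])

stacked = np.stack([r1, r2])            # shape (2, 4)
concat  = np.concatenate([r1, r2])      # (8,)  vector concatenation
avg     = stacked.mean(axis=0)          # (4,)  average pooling
mx      = stacked.max(axis=0)           # (4,)  max pooling

assert concat.shape == (8,)
assert np.allclose(avg, [0.6, -0.5, 0.5, 2.0])
assert np.allclose(mx,  [1.0,  0.0, 0.5, 3.0])
```

Concatenation preserves both representations at the cost of doubling the dimensionality; average and max pooling keep the dimensionality fixed, trading off how much information from each encoder survives.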
“…STS can be applied to assess the quality of the clinical notes and reduce redundancy to support downstream NLP tasks [28]. However, up until now, only a few studies [29][30][31] have explored STS in the clinical domain due to the limited data resources for developing and benchmarking clinical STS tasks. Recently, a team at the Mayo Clinic developed a clinical STS dataset, MedSTS [32], which consists of more than 1000 annotated sentence pairs extracted from clinical notes.…”
Section: Introduction
confidence: 99%
“…Textual similarity calculation [1] is a key technology for efficient information screening and matching in the field of text processing. Previous work [2][3][4][5][6][7][8] has proposed some methods for textual similarity calculation, for example, traditional text similarity calculation methods [2], word similarity calculation [3], vector space model [4], and latent Dirichlet allocation model [5]. At present, with the development of deep learning and neural networks, methods based on neural networks have become popular, for example, word vector embedding method [6,7] and one-hot representation [8].…”
Section: Introduction
confidence: 99%
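The contrast drawn in the statement above — one-hot representation versus word-vector embedding — is the core distinction the cited paper builds on, and a toy sketch makes it concrete. The three-word vocabulary and random embeddings below are illustrative assumptions only; real systems learn dense embeddings (e.g. word2vec or GloVe) over a full vocabulary:

```python
import numpy as np

vocab = ["pain", "chest", "fever"]      # toy clinical vocabulary (hypothetical)

# One-hot: each word is a sparse indicator vector over the vocabulary.
def one_hot(word):
    v = np.zeros(len(vocab))
    v[vocab.index(word)] = 1.0
    return v

# Distributed: each word maps to a dense low-dimensional vector
# (random here; learned from data in practice).
rng = np.random.default_rng(0)
embedding = {w: rng.normal(size=3) for w in vocab}

assert one_hot("chest").tolist() == [0.0, 1.0, 0.0]
assert embedding["chest"].shape == (3,)
```

One-hot vectors are exact but sparse and treat all words as equally dissimilar; distributed vectors are dense and can place related words near each other, which is why the fused approach discussed in this paper tries to keep the strengths of both.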