2021
DOI: 10.1016/j.cmpbup.2021.100042
BERT based clinical knowledge extraction for biomedical knowledge graph construction and analysis

Cited by 43 publications (22 citation statements)
References 56 publications (49 reference statements)
“…Harnoune et al [24] presented an end-to-end strategy for information extraction and analysis from biomedical clinical notes using the Bidirectional Encoder Representations from Transformers (BERT) model with a Conditional Random Field (CRF) layer. They also constructed a named entity recognition model capable of recognizing entities such as drug, strength, duration, frequency, adverse drug reactions, reason for taking the medication, route of administration, and form.…”
Section: Literature Survey (mentioning)
confidence: 99%
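The BERT-plus-CRF pipeline described in that statement can be sketched minimally: BERT produces a per-token emission score for each BIO tag, and the CRF layer decodes the most likely tag sequence with a Viterbi pass, using transition scores to forbid invalid sequences such as `I-` directly after `O`. The scores below are toy values chosen for illustration, not values or an API from the paper.

```python
# Viterbi decoding over toy emission/transition scores, illustrating the
# role of the CRF layer on top of BERT token emissions (pure stdlib).
TAGS = ["O", "B-DRUG", "I-DRUG"]

def viterbi(emissions, transitions, tags):
    """emissions: one {tag: score} dict per token; transitions: {(prev, cur): score}."""
    scores = {t: emissions[0][t] for t in tags}
    backptrs = []
    for em in emissions[1:]:
        new_scores, bp = {}, {}
        for cur in tags:
            # Pick the previous tag that maximizes path score + transition.
            best_prev = max(tags, key=lambda p: scores[p] + transitions.get((p, cur), 0.0))
            bp[cur] = best_prev
            new_scores[cur] = (scores[best_prev]
                               + transitions.get((best_prev, cur), 0.0)
                               + em[cur])
        scores, _ = new_scores, backptrs.append(bp)
    # Backtrack from the highest-scoring final tag.
    path = [max(tags, key=lambda t: scores[t])]
    for bp in reversed(backptrs):
        path.append(bp[path[-1]])
    return list(reversed(path))

# Toy emissions for the tokens: "gave aspirin 100 mg"
EMISSIONS = [
    {"O": 2.0, "B-DRUG": 0.1, "I-DRUG": 0.1},  # gave
    {"O": 0.2, "B-DRUG": 2.5, "I-DRUG": 0.3},  # aspirin
    {"O": 1.0, "B-DRUG": 0.2, "I-DRUG": 1.2},  # 100
    {"O": 0.8, "B-DRUG": 0.1, "I-DRUG": 1.1},  # mg
]
TRANSITIONS = {("O", "I-DRUG"): -5.0}  # CRF constraint: I- cannot follow O

decoded = viterbi(EMISSIONS, TRANSITIONS, TAGS)
# -> ['O', 'B-DRUG', 'I-DRUG', 'I-DRUG']
```

In a real BERT+CRF model the emissions come from the transformer's token-classification head and the transition scores are learned jointly; only the decoding step is shown here.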
“…Security must be maintained for clinical data [20], as sensitive information demands stronger privacy [21], and data quality must be improved to make unstructured data accessible [22]. [23] requires efficient and accurate data extraction, and in [24] there is a need to consider security and authority over clinical text data. Hence, it is understood that existing techniques struggle to improve the quality of clinical text data: accessibility of unstructured data is not provided, and it is difficult to maintain data security and authority.…”
Section: Literature Survey (mentioning)
confidence: 99%
“…At the end of 2018, Google built one such model, named BERT, which outperforms nearly all existing deep learning models on several NLP tasks [23][24][25]. BERT has recently obtained state-of-the-art results on a wide variety of NLP tasks, such as extracting clinical information for breast cancer [26] and analyzing biomedical clinical data [27]. Fan et al [17] proposed using the BERT model for detecting and extracting adverse drug events (ADEs) from open online data, since it allowed more drug side effects to be identified for clinicians.…”
Section: Deep Learning In the Medical Domain (mentioning)
confidence: 99%
“…The use of text summarization in the medical field has been reported in the literature [30]. Medical document summarization has received significant research attention in fields such as bioinformatics [31], imaging informatics [32], clinical informatics [26,27], and public health informatics [33][34][35]. Gayathri and Jaisankar [33] developed and applied a semantic dynamic summarization technique to extract important sentences from medical documents.…”
Section: Text Summarization In the Medical Domain (mentioning)
confidence: 99%
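Extractive summarization of the kind surveyed above selects important sentences rather than generating new text. A minimal frequency-based sketch is shown below; it is a generic illustration of the extractive idea, not the semantic technique of Gayathri and Jaisankar, and the stopword list is an assumption.

```python
# Minimal extractive summarizer: score each sentence by the average
# document-level frequency of its content words, keep the top-k.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "was", "with", "for"}

def summarize(text, k=1):
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sent):
        toks = [w for w in re.findall(r"[a-z]+", sent.lower()) if w not in STOPWORDS]
        return sum(freq[w] for w in toks) / (len(toks) or 1)

    ranked = sorted(sentences, key=score, reverse=True)[:k]
    # Preserve the original sentence order of the selection.
    return [s for s in sentences if s in ranked]

note = ("The patient was admitted with chest pain. "
        "The patient received aspirin for chest pain. "
        "Family history was unremarkable.")
summary = summarize(note, k=1)
# -> ['The patient was admitted with chest pain.']
```

Real medical summarizers replace the raw frequency score with semantic sentence representations (e.g. BERT embeddings), but the select-and-reorder skeleton is the same.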
“…Their findings, along with others [15][16][17][18][19][20][21][22], have shown that BERT-based models outperform other techniques for extracting clinical UMLS concepts. Based on their findings and other work [23][24][25] showing that optimum performance is obtained for NER and MEL when context is taken into account, we decided to experiment with a BERT-based contextualized embedding approach. To compare the performance of NeighBERT with the latest high-performing BERT-based techniques, we experimented with the following approaches: BioBERT [20], UmlsBERT [17], and BlueBERT [22].…”
Section: Clinical Concept and Relation Extraction (mentioning)
confidence: 99%
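Whichever BERT variant is used for clinical concept extraction (BioBERT, UmlsBERT, BlueBERT), the model emits one BIO tag per token, and a post-processing step collapses those tags into typed entity spans. The helper below is a common illustrative sketch of that step, not code from any of the cited systems; the example tag set is an assumption.

```python
# Collapse a BIO tag sequence into (entity_type, text) spans, the usual
# post-processing step after BERT-based NER token classification.
def bio_to_spans(tokens, tags):
    spans, cur_type, cur_toks = [], None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if cur_type:  # close the span in progress
                spans.append((cur_type, " ".join(cur_toks)))
            cur_type, cur_toks = tag[2:], [tok]
        elif tag.startswith("I-") and cur_type == tag[2:]:
            cur_toks.append(tok)  # continue the current span
        else:  # "O" tag or an I- tag with a mismatched type
            if cur_type:
                spans.append((cur_type, " ".join(cur_toks)))
            cur_type, cur_toks = None, []
    if cur_type:
        spans.append((cur_type, " ".join(cur_toks)))
    return spans

spans = bio_to_spans(
    ["took", "aspirin", "twice", "daily"],
    ["O", "B-DRUG", "B-FREQUENCY", "I-FREQUENCY"],
)
# -> [('DRUG', 'aspirin'), ('FREQUENCY', 'twice daily')]
```

Entity spans extracted this way are the nodes that knowledge-graph construction pipelines, like the one in the paper under review, then link into relations.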