Proceedings of the 20th Workshop on Biomedical Language Processing 2021
DOI: 10.18653/v1/2021.bionlp-1.16

BioELECTRA:Pretrained Biomedical text Encoder using Discriminators

Cited by 53 publications (33 citation statements)
References 34 publications
“…advise, int, effect and mechanism) of the DDI, ChemProt [10] annotated 5 categories of the chemical–protein interaction and DrugProt [40], an extension of ChemProt, annotated 13 categories. Recently, ChemProt and DDI13 are widely used in evaluating the abilities of biomedical pretrained language models [46–49] on RE tasks.…”
Section: Overviews of NER/NEL/RE Datasets (mentioning)
confidence: 99%
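The quote above is descriptive, but for concreteness the following minimal sketch lays out the label sets it mentions as plain data structures. Only the four DDI relation types come from the quoted text; the ChemProt and DrugProt entries are placeholder counts standing in for the 5 and 13 categories it reports, not the official label names.

```python
# Illustrative label sets for the RE datasets named in the quote above.
# Only the DDI relation types appear in the quoted text; the ChemProt and
# DrugProt entries are placeholder counts, not the official label names.
DDI13_RELATIONS = ["advise", "int", "effect", "mechanism"]
CHEMPROT_NUM_CLASSES = 5    # chemical-protein interaction categories
DRUGPROT_NUM_CLASSES = 13   # DrugProt extends ChemProt's annotation scheme

print(len(DDI13_RELATIONS), CHEMPROT_NUM_CLASSES, DRUGPROT_NUM_CLASSES)
```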
“…The resulting F1 score, micro-averaged over the four relation classes, is 81.46%, which is comparable to what other language-model-based approaches [18,20,54] achieve for determining relations between entities extracted from this dataset, the state of the art being 84.05% [20].…”
Section: Estimation of the Relation Extraction Efficiency (mentioning)
confidence: 57%
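The metric quoted above (micro-averaged F1 over four relation classes) can be made concrete with the short sketch below. It uses scikit-learn's f1_score; the class names and toy predictions are invented for illustration (the four DDI relation names from the first quote are reused here as an assumption), so only the averaging scheme reflects the quoted evaluation.

```python
# Minimal sketch of a micro-averaged F1 over four relation classes.
# Labels and predictions are invented for illustration; they do not
# come from the cited experiments.
from sklearn.metrics import f1_score

labels = ["advise", "int", "effect", "mechanism"]  # assumed relation classes

y_true = ["effect", "advise", "mechanism", "int", "effect", "advise"]
y_pred = ["effect", "advise", "effect", "int", "effect", "mechanism"]

# Micro-averaging pools true/false positives and false negatives across
# all classes before computing precision, recall and F1.
micro_f1 = f1_score(y_true, y_pred, labels=labels, average="micro")
print(f"micro-averaged F1: {micro_f1:.4f}")
```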
“…If the pretraining corpus is more similar to the text of the downstream task, the model tends to achieve better results. We therefore surveyed the related literature and experimental reports and selected the following pretrained models: BERTweet (21), DeBERTa (22), BioBERT (23) and BioELECTRA (24). Table 1 shows the pretraining resources and corpus size of these models.…”
Section: Methods (mentioning)
confidence: 99%
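As a concrete illustration of the model-selection step described in the quote above, the sketch below loads one of the listed encoders with the Hugging Face transformers library. The checkpoint identifier is an assumption (a commonly used public BioELECTRA discriminator); the cited study does not state which hub checkpoints it used.

```python
# Minimal sketch: loading a candidate pretrained encoder for comparison.
# The checkpoint name is an assumption (a publicly available BioELECTRA
# discriminator); the cited study does not list exact hub identifiers.
from transformers import AutoModel, AutoTokenizer

checkpoint = "kamalkraj/bioelectra-base-discriminator-pubmed"  # assumed ID

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

# Encode a short biomedical sentence and inspect the contextual embeddings.
inputs = tokenizer("Aspirin inhibits platelet aggregation.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, tokens, hidden_size)
```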