2022
DOI: 10.1007/978-3-031-09342-5_38

RuMedBench: A Russian Medical Language Understanding Benchmark

Cited by 7 publications (2 citation statements)
References 18 publications
“…Today, a large number of deep-learning models are used for processing medical texts. Most of these models work with English-language text [14,15]. However, some models have been pre-trained on Russian.…”
Section: Applying Pre-trained Models (mentioning)
confidence: 99%
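
The quoted passage notes that some models have been pre-trained on Russian. As a minimal sketch, assuming the Hugging Face Transformers library and the publicly available DeepPavlov/rubert-base-cased checkpoint (chosen purely as an illustrative Russian pre-trained model, not one prescribed by the benchmark), such a model could be loaded and adapted for a downstream medical text task roughly like this:

```python
# Minimal sketch: load a Russian pre-trained encoder and attach a classification head.
# Model name and num_labels are illustrative assumptions, not taken from the paper.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "DeepPavlov/rubert-base-cased"  # one example of a Russian pre-trained model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# num_labels=2 is a placeholder for a binary medical text classification task
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

text = "Пациент жалуется на головную боль и повышенную температуру."  # "The patient complains of a headache and elevated temperature."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
outputs = model(**inputs)
print(outputs.logits.shape)  # (1, 2); the head is randomly initialized until fine-tuned
```

The classification head here is untrained; in practice it would be fine-tuned on a downstream task such as those collected in RuMedBench.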
“…Currently, there are several annotated corpora for the extraction of diseases, drugs, and adverse drug reactions from social media and clinical records in Russian (Tutubalina et al. 2021; Nesterov et al. 2022). A recent work on a Russian medical language understanding benchmark (Blinov et al. 2022) includes the RuDReC corpus (Tutubalina et al. 2021) for named entity recognition (NER). However, these corpora do not cover scientific texts and include only flat (non-nesting) entity mentions.…”
Section: Introduction (mentioning)
confidence: 99%
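
The remark about flat (non-nesting) entity mentions can be illustrated with a small, self-contained sketch; the sentence, labels, and spans below are invented for illustration and are not drawn from RuDReC or the other cited corpora:

```python
# Illustrative only: tokens, labels, and spans are made up, not taken from RuDReC.
tokens = ["Принимал", "ацетилсалициловую", "кислоту", "от", "головной", "боли"]
# "Took acetylsalicylic acid for a headache."

# Flat BIO tagging: exactly one label per token, so overlapping or nested
# mentions cannot be expressed; this is the scheme used for flat-entity corpora.
bio_tags = ["O", "B-Drugname", "I-Drugname", "O", "B-Disease", "I-Disease"]

# Span-based annotation: (start, end, label) over token indices (end exclusive).
# Spans may overlap, which is what nested-entity corpora would require.
spans = [
    (1, 3, "Drugname"),  # "ацетилсалициловую кислоту"
    (4, 6, "Disease"),   # "головной боли"
]

for token, tag in zip(tokens, bio_tags):
    print(f"{token}\t{tag}")
```

A flat scheme suffices for the corpora mentioned above; representing nested mentions requires the span-based form.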