2022
DOI: 10.1007/978-3-031-15168-2_3
Continuous Prompt Tuning for Russian: How to Learn Prompts Efficiently with RuGPT3?

Cited by 3 publications (3 citation statements)
References 9 publications
“…This idea was further developed in (Liu et al., 2021), where the authors introduce the concept of deep prompt tuning, which involves adding prompts in different layers as prefix tokens. In (Konodyuk and Tikhonova, 2022), the authors studied the applicability of the prompt-tuning method for the Russian language: they showed that it could be a good alternative to model training techniques.…”
Section: Related Work
Citation type: mentioning
confidence: 99%
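To make the deep prompt tuning idea concrete, here is a minimal sketch in which trainable prefix key/value vectors are injected into every transformer layer through past_key_values while the language model itself stays frozen. The checkpoint name, prefix length, and initialization scale are illustrative assumptions, and the legacy tuple format for past_key_values is assumed to be accepted; this is a sketch of the technique, not the cited papers' exact implementation.

```python
# Sketch of deep prompt tuning: trainable per-layer prefix key/value vectors
# (P-Tuning v2 style) fed to a frozen GPT-2-like model via past_key_values.
# The checkpoint name and hyperparameters below are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sberbank-ai/rugpt3small_based_on_gpt2"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
for p in model.parameters():
    p.requires_grad = False  # freeze the backbone; only the prefix is trained

cfg = model.config
prefix_len = 10
head_dim = cfg.hidden_size // cfg.num_attention_heads
# One trainable (key, value) pair of prefix vectors per layer and head.
prefix = nn.Parameter(torch.randn(
    cfg.num_hidden_layers, 2, cfg.num_attention_heads,
    prefix_len, head_dim) * 0.02)

def prefix_past(batch_size):
    # Broadcast the shared prefix over the batch and convert it into the
    # legacy tuple-of-(key, value) format expected as past_key_values.
    p = prefix.unsqueeze(1).expand(-1, batch_size, -1, -1, -1, -1)
    return tuple((layer[:, 0], layer[:, 1]) for layer in p)

inputs = tokenizer("Пример входного текста", return_tensors="pt")
past = prefix_past(inputs["input_ids"].shape[0])
# The attention mask must also cover the virtual prefix positions.
mask = torch.cat([torch.ones(1, prefix_len), inputs["attention_mask"]], dim=-1)
out = model(input_ids=inputs["input_ids"], past_key_values=past,
            attention_mask=mask, labels=inputs["input_ids"])
out.loss.backward()  # gradients flow only into the prefix parameters
```

Only the prefix tensor is updated during training, so the number of trainable parameters is a small fraction of the full model, which matches the efficiency argument made in the quoted statements.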
“…As a result, we obtained a relatively modest but high-quality dataset. In addition, it should be noted that we took into account the current dataset size and selected suitable methods, such as prompt-tuning and LoRA (see section 5), which can be successfully applied to such amounts of data (Konodyuk and Tikhonova, 2022).…”
Section: Sentence (English)
Citation type: mentioning
confidence: 99%
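Since this statement singles out LoRA (alongside prompt-tuning) as a method suited to small datasets, a minimal sketch of a LoRA adapter may help: a frozen linear layer is augmented with a trainable low-rank update. The rank, scaling factor, and layer sizes below are illustrative assumptions, not values from the cited work.

```python
# Sketch of a LoRA adapter: a frozen linear layer plus a trainable
# low-rank update, used for parameter-efficient tuning on small datasets.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pretrained weights frozen
        # Low-rank factors A and B; B starts at zero so the adapted layer
        # initially behaves exactly like the pretrained one.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # y = Wx + b + scale * B(Ax)
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(768, 768))
y = layer(torch.randn(2, 768))  # only lora_a / lora_b receive gradients
```

Because B is initialized to zero, training starts from the pretrained behavior and only gradually departs from it, which is part of what makes the method stable on modest amounts of data.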
“…RuPrompts. This baseline is based on the ruPrompts library for fast language model tuning via automatic prompt search. The Continuous Prompt Tuning method (Konodyuk and Tikhonova, 2021) consists of training embeddings corresponding to the prompts. Such an approach is cheaper than classic fine-tuning of large language models.…”
Section: Baselines
Citation type: mentioning
confidence: 99%
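The Continuous Prompt Tuning method described here trains only a small set of prompt embeddings prepended to the input of a frozen model. The sketch below illustrates that idea in plain PyTorch with Hugging Face transformers; it is not the ruPrompts API, and the checkpoint name and prompt length are assumptions.

```python
# Sketch of shallow continuous prompt tuning: trainable prompt embeddings
# are prepended to the token embeddings of a frozen language model.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sberbank-ai/rugpt3small_based_on_gpt2"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
for p in model.parameters():
    p.requires_grad = False  # only the prompt embeddings are trained

n_prompt = 20
embed = model.get_input_embeddings()
prompt_emb = nn.Parameter(torch.randn(n_prompt, embed.embedding_dim) * 0.02)

def forward_with_prompt(input_ids, labels):
    tok_emb = embed(input_ids)                            # (B, T, D)
    batch = input_ids.shape[0]
    soft = prompt_emb.unsqueeze(0).expand(batch, -1, -1)  # (B, P, D)
    inputs_embeds = torch.cat([soft, tok_emb], dim=1)
    # Mask out the prompt positions (-100) when computing the LM loss.
    pad = torch.full((batch, n_prompt), -100, dtype=labels.dtype)
    return model(inputs_embeds=inputs_embeds,
                 labels=torch.cat([pad, labels], dim=1))

batch = tokenizer("Пример обучающего текста", return_tensors="pt")
loss = forward_with_prompt(batch["input_ids"], batch["input_ids"]).loss
loss.backward()  # gradients reach only prompt_emb
```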