Proceedings of the 30th IEEE/ACM International Conference on Program Comprehension 2022
DOI: 10.1145/3524610.3527886
On the effectiveness of pretrained models for API learning

Cited by 6 publications (1 citation statement)
References 36 publications
“…The models vary in size and training schemes and their success rate in performing code generation tasks. Several works leverage pre-trained models to map NL queries to sequences of API elements or other intermediate representations, which can be translated into code, though this may not be necessary with models that have been pre-trained on code (Hadi et al., 2022; Shin and Van Durme, 2022). They have also been used for in-place data transformations (Narayan et al., 2022).…”
Section: Program Synthesis (mentioning)
confidence: 99%
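The NL-query-to-API-sequence mapping the citing work describes is typically framed as sequence-to-sequence generation. Below is a minimal sketch of that setup, assuming a seq2seq checkpoint such as Salesforce/codet5-base fine-tuned on (query, API sequence) pairs; the checkpoint choice, the fine-tuning step, and the decoding parameters are illustrative assumptions, not details taken from the cited papers.

```python
# Sketch: map a natural-language query to a sequence of API elements
# with a pretrained seq2seq model. Assumes the model has been
# fine-tuned on (NL query, API sequence) pairs beforehand.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "Salesforce/codet5-base"  # assumed checkpoint; any seq2seq code model works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def query_to_api_sequence(query: str) -> str:
    """Generate a sequence of API elements for a natural-language query."""
    inputs = tokenizer(query, return_tensors="pt", truncation=True)
    output_ids = model.generate(
        **inputs,
        max_new_tokens=64,  # API sequences are short relative to full programs
        num_beams=5,        # beam search is common for structured outputs
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(query_to_api_sequence("read all lines from a text file"))
# A fine-tuned model might emit something like:
# "FileReader.new BufferedReader.new BufferedReader.readLine"
```

The generated API sequence can then be expanded into compilable code by a template or a second decoding stage, which is the intermediate-representation pipeline the quoted passage contrasts with direct code generation from code-pretrained models.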