Proceedings of the 44th International Conference on Software Engineering 2022
DOI: 10.1145/3510003.3510159

Cited by 23 publications (18 citation statements)
References 33 publications
“…Finally, we fine-tune a pre-trained model, CodeT5 [38], to learn the potential API completion patterns. Since most of the previous studies [13,16,32,40,50] mainly modeled API recommendation as a recommendation task, we first consider MRR (Mean Reciprocal Rank) [31] and MAP (Mean Average Precision) [34] as model performance evaluation measures. Moreover, we also model this problem as an automatic completion task in this study.…”
Section: Case1, Case2
Mentioning confidence: 99%
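The excerpt names MRR and MAP as its evaluation measures. A minimal sketch of how these two ranking metrics are typically computed is shown below; the recommendation lists and ground-truth sets are illustrative placeholders, not data from the cited study.

```python
# Sketch of the two ranking metrics named in the excerpt: MRR and MAP.
# The queries and relevant-item sets below are hypothetical examples.

def reciprocal_rank(ranked, relevant):
    """1/rank of the first relevant item in the list, or 0.0 if none appears."""
    for i, item in enumerate(ranked, start=1):
        if item in relevant:
            return 1.0 / i
    return 0.0

def average_precision(ranked, relevant):
    """Average of precision@k over the positions k that hold a relevant item."""
    hits, precisions = 0, []
    for i, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(relevant) if relevant else 0.0

# Hypothetical API-recommendation output for two queries:
# (ranked recommendations, set of ground-truth APIs).
queries = [
    (["File.read", "Path.open", "os.stat"], {"Path.open"}),
    (["list.sort", "sorted", "heapq.nlargest"], {"sorted", "list.sort"}),
]

mrr = sum(reciprocal_rank(r, g) for r, g in queries) / len(queries)
map_score = sum(average_precision(r, g) for r, g in queries) / len(queries)
print(f"MRR={mrr:.3f}  MAP={map_score:.3f}")  # MRR=0.750  MAP=0.750
```

MRR rewards putting any correct API near the top of the list, while MAP also accounts for how all of the correct APIs are ordered, which is why recommendation studies often report both.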
“…Results. In our study, we first consider the classical API recommendation approaches (i.e., BIKER [13], RACK [32], and CLEAR [40]) as the baselines. Moreover, since we are the first to model this problem as an automatic completion task, we also consider pre-trained models (i.e., CodeBERT [8], UniXcoder [11], and PLBART [2]) as the baselines.…”
Section: Case1, Case2
Mentioning confidence: 99%
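For readers reproducing such baselines, the pre-trained models named in the excerpt are all available on the Hugging Face hub. The sketch below loads them with the `transformers` library; the checkpoint IDs are the commonly used public hub names and are an assumption on my part, not the cited study's exact configuration or training setup.

```python
# Sketch: loading the pre-trained baseline models named in the excerpt.
# Checkpoint IDs are the public Hugging Face hub names (assumed, not
# taken from the cited paper's artifact).
from transformers import AutoModel, AutoTokenizer

checkpoints = {
    "CodeBERT": "microsoft/codebert-base",
    "UniXcoder": "microsoft/unixcoder-base",
    "PLBART": "uclanlp/plbart-base",
}

for name, ckpt in checkpoints.items():
    tokenizer = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModel.from_pretrained(ckpt)
    print(f"{name}: {model.config.model_type}, "
          f"vocab size {tokenizer.vocab_size}")
```

Each model would still need task-specific fine-tuning on API-completion data before a fair comparison against the approach described in the first excerpt.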