2022 · Preprint
DOI: 10.48550/arxiv.2203.09095

CodeReviewer: Pre-Training for Automating Code Review Activities

Abstract: Code review is an essential part of the software development lifecycle, since it aims to guarantee code quality. Modern code review activities require developers to view, understand, and even run programs in order to assess logic, functionality, latency, style, and other factors. As a result, developers spend a great deal of time reviewing their peers' code, so there is significant demand for automating the code review process. In this research, we focus on utilizing pre-training…
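To make the task concrete, here is a minimal sketch of review-comment generation with a pre-trained encoder-decoder model. It assumes the publicly released microsoft/codereviewer checkpoint and a toy diff; both are illustrative assumptions, not details taken from this record.

```python
# Minimal sketch: generate a review comment for a code change with a
# pre-trained encoder-decoder model. The checkpoint name and the diff
# below are assumptions for illustration.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/codereviewer")
model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/codereviewer")

# A unified diff hunk, the typical input for review-comment generation.
diff = (
    "@@ -1,3 +1,3 @@\n"
    " def add(a, b):\n"
    "-    return a - b\n"
    "+    return a + b\n"
)

inputs = tokenizer(diff, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```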

Cited by 3 publications (19 citation statements). References 25 publications (57 reference statements).
“…Numerous NMT techniques (especially Transformer-based models) have been increasingly adopted for automated code-based tasks. In addition, NMT models like T5 [37] have demonstrated the ability to effectively learn code representation from unlabeled data to conduct a wide range of downstream tasks given supervised discriminative fine-tuning on specific tasks (e.g., code completion [25,46], code search [17,40], code summarization [46], code review [24,41,42], and API recommendation [9]).…”
Section: Automated Code Generation
confidence: 99%
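As a hedged illustration of the "supervised discriminative fine-tuning" step this quotation describes, the sketch below adapts a T5-style checkpoint to a single downstream example (code summarization). The Salesforce/codet5-base checkpoint, the toy example pair, and the hyperparameters are assumptions standing in for a real dataset and training setup.

```python
# Sketch: one supervised fine-tuning step for a pre-trained T5-style
# code model on a (code, summary) pair. All specifics here are
# illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One example pair standing in for a fine-tuning dataset.
code = "def add(a, b):\n    return a + b"
summary = "Return the sum of two numbers."

batch = tokenizer(code, return_tensors="pt", truncation=True)
labels = tokenizer(summary, return_tensors="pt", truncation=True).input_ids

loss = model(**batch, labels=labels).loss  # standard seq2seq cross-entropy
loss.backward()
optimizer.step()
```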
“…These existing approaches can be categorized into three types: encoder-based models such as CodeBERT [15], decoder-based models such as CodeGPT [26], and encoder-decoder-based models like CodeT5 [50]. Since prior works [7,24,53] have proven that encoder-based models and decoder-based models are not good at generation tasks, we only consider encoder-decoder-based code generation models in this study. There are many encoder-decoder-based models proposed for code generation tasks.…”
Section: Experimental Design
confidence: 99%
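The three architecture families this quotation contrasts can be seen directly in how the models are loaded. The sketch below uses the commonly published Hugging Face checkpoint IDs for CodeBERT, CodeGPT, and CodeT5; treat those IDs as assumptions rather than details from this record.

```python
# Sketch: the three model families contrasted in the quotation above.
from transformers import AutoModel, AutoModelForCausalLM, AutoModelForSeq2SeqLM

# Encoder-only: produces code representations; no generation head.
codebert = AutoModel.from_pretrained("microsoft/codebert-base")

# Decoder-only: left-to-right generation, as in GPT-style completion.
codegpt = AutoModelForCausalLM.from_pretrained("microsoft/CodeGPT-small-py")

# Encoder-decoder: conditions on an input sequence and generates an
# output sequence -- the family the cited study restricts itself to.
codet5 = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-base")
```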