Proceedings of the 7th International Workshop on Cooperative and Human Aspects of Software Engineering 2014
DOI: 10.1145/2593702.2593705

Improving code review effectiveness through reviewer recommendations

Abstract: Effectively performing code review increases software quality and reduces the occurrence of defects. However, it requires reviewers with experience and a deep understanding of the system's code. Manually selecting such reviewers can be a costly and time-consuming task. To reduce this cost, we propose a reviewer recommendation algorithm based on file path similarity, called the FPS algorithm. In case studies on three OSS projects, the FPS algorithm was accurate up to 77.97%, which significantly outperformed the pr…
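The core idea behind the FPS algorithm is that developers who previously reviewed files under similar paths are good candidates for a new change. The sketch below illustrates this style of path-similarity scoring in Python; the similarity measure, data layout, and function names are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of path-similarity-based reviewer scoring.
# The scoring function and review-history format are illustrative
# assumptions, not the exact FPS formulation from the paper.
from collections import defaultdict

def path_similarity(path_a: str, path_b: str) -> float:
    """Fraction of leading path components shared by the two file paths."""
    a, b = path_a.split("/"), path_b.split("/")
    common = 0
    for x, y in zip(a, b):
        if x != y:
            break
        common += 1
    return common / max(len(a), len(b))

def recommend_reviewers(new_change_files, past_reviews, top_n=3):
    """Rank candidate reviewers by accumulated path similarity.

    past_reviews: iterable of (reviewer, reviewed_file_path) pairs drawn
    from the project's review history (hypothetical format).
    """
    scores = defaultdict(float)
    for reviewer, reviewed_path in past_reviews:
        for changed_path in new_change_files:
            scores[reviewer] += path_similarity(changed_path, reviewed_path)
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [reviewer for reviewer, _ in ranked[:top_n]]
```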

Cited by 52 publications (36 citation statements); references 13 publications.

Citation statements:

“…The common idea behind these approaches is to automatically identify potential reviewers who are the most suitable for a given change. The main proxy for suitability estimation is expertise (or familiarity) of candidates with code under review, which is estimated through analysis of artifacts of developers' prior work, such as histories of code changes and review participation [4], [12], [56].…”
Section: Reviewer Recommendation
confidence: 99%
“…Some techniques are based on scoring of candidates, either based on changes history at line level [12] or on analysis of historical reviewers for files with similar paths [56]. Another approach is machine learning on change features [57].…”
Section: Reviewer Recommendation
confidence: 99%
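In contrast to path-based scoring, the line-level approach cited above ranks candidates by their history of changes to the affected lines. The following sketch shows one simple way such a score could be computed; the data format and the use of last-modifier information (e.g. derived from `git blame`) are assumptions, not the cited technique's exact definition.

```python
# Illustrative sketch of line-level expertise scoring (in the spirit of
# blame-based approaches); the data format is a hypothetical simplification.
from collections import Counter

def score_by_line_history(changed_lines, line_authors):
    """Count, per developer, how many of the changed lines they last modified.

    changed_lines: iterable of (file_path, line_number) tuples in the new change
    line_authors:  dict mapping (file_path, line_number) -> developer who last
                   changed that line (e.g. extracted from version history)
    """
    scores = Counter()
    for key in changed_lines:
        author = line_authors.get(key)
        if author is not None:
            scores[author] += 1
    return scores.most_common()
```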
“…isCollect(ActualReviewers_r, TopN) has a value of 1 if TopN includes at least one reviewer involved in ActualReviewers_r; otherwise, it has a value of 0. According to [6,10,33], we chose the value of N to be 1, 3, 5, and 10.…”
Section: Evaluation Metrics
confidence: 99%
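The metric quoted above is a standard top-N hit measure: a recommendation counts as correct if at least one actual reviewer appears in the top-N list, and accuracy is the average of this indicator over all reviews. A minimal sketch, assuming a hypothetical (change, actual_reviewers) evaluation format:

```python
# Sketch of the top-N hit metric described above: a recommendation is a
# hit if at least one actual reviewer appears in the top-N list.
def is_collect(actual_reviewers, top_n_recommendation):
    """Return 1 if any actual reviewer is among the recommended top-N, else 0."""
    return int(bool(set(actual_reviewers) & set(top_n_recommendation)))

def top_n_accuracy(reviews, recommender, n=5):
    """Average the hit indicator over all reviews in the evaluation set.

    reviews: iterable of (change, actual_reviewers) pairs (hypothetical format);
    recommender(change, n) returns the top-N recommended reviewers for a change.
    """
    hits = [is_collect(actual, recommender(change, n)) for change, actual in reviews]
    return sum(hits) / len(hits) if hits else 0.0
```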
“…The exact number of activities in each day will be shown in the description box at the top right when the cursor is hovering over the graph 3(a). ReDA also shows the date for Android release version 5 via a dashed line to show how these activities are before/after the release date. For example, as shown in Fig.…”
Section: B. Activity Statistic
confidence: 99%
“…Extracting knowledge from these datasets has produced promising research with the goal of improving the software quality and software development process. Recently, many studies have used code review datasets to understand and improve both review effort [2]-[5] and review quality [6], [7]. However, a raw code review dataset is generally imperfect since the data collection process in each support tool (Gerrit: https://code.google.com/p/gerrit/, Review Board: https://www.reviewboard.org/, Rietveld: https://code.google.com/p/rietveld/) varies in methodology, accuracy, and degree of automation [8].…”
Section: Introduction
confidence: 99%