2021
DOI: 10.1109/tse.2019.2952614
PatchNet: Hierarchical Deep Learning-Based Stable Patch Identification for the Linux Kernel

Abstract: Linux kernel stable versions serve the needs of users who value stability of the kernel over new features. The quality of such stable versions depends on the initiative of kernel developers and maintainers to propagate bug-fixing patches to the stable versions. Thus, it is desirable to consider to what extent this process can be automated. A previous approach relies on words from commit messages and a small set of manually constructed code features. This approach, however, shows only moderate accuracy. In this…

Cited by 27 publications (17 citation statements)
References 53 publications
“…Two techniques using deep neural networks, PatchNet [22] and DeepJIT [21], are most similar to our work. However, as discussed earlier, our work differs from theirs in various ways.…”
Section: Related Work (supporting)
confidence: 53%
“…State-of-the-art Approach. The state-of-the-art approach is PatchNet [22], which represents the removed (added) code as a three dimensional matrix. The dimensions of the matrix are the number of hunks, the number of lines in each hunk, and the number of words in each line.…”
Section: 2.2 (mentioning)
confidence: 99%
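The three-dimensional code representation attributed to PatchNet above (hunks × lines × words) can be sketched as follows. This is a minimal illustration, not PatchNet's actual implementation: the dimension limits (`MAX_HUNKS`, `MAX_LINES`, `MAX_WORDS`), the padding index, and the toy vocabulary are all assumptions made for the example.

```python
# Sketch: pad the removed (or added) code of a patch into a
# three-dimensional layout of shape (hunks, lines, words),
# mirroring the matrix dimensions described in the quoted passage.
# All sizes and the vocabulary below are illustrative assumptions.

MAX_HUNKS, MAX_LINES, MAX_WORDS = 2, 3, 4
PAD = 0  # index used for padding and unknown words

def build_code_matrix(hunks, vocab):
    """hunks: list of hunks, each hunk a list of lines,
    each line a list of word tokens.
    Returns a nested list of word indices with fixed shape
    (MAX_HUNKS, MAX_LINES, MAX_WORDS), truncating or padding
    each dimension as needed."""
    matrix = [[[PAD] * MAX_WORDS for _ in range(MAX_LINES)]
              for _ in range(MAX_HUNKS)]
    for h, hunk in enumerate(hunks[:MAX_HUNKS]):
        for l, line in enumerate(hunk[:MAX_LINES]):
            for w, word in enumerate(line[:MAX_WORDS]):
                matrix[h][l][w] = vocab.get(word, PAD)
    return matrix

# Toy example: one hunk with two removed lines.
vocab = {"if": 1, "err": 2, "return": 3, "-EINVAL": 4}
removed = [[["if", "err"], ["return", "-EINVAL"]]]
m = build_code_matrix(removed, vocab)
```

A fixed shape like this lets every patch be fed to a neural network as a uniformly sized tensor, with shorter hunks, lines, and words simply padded out.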
“…We plan to add additional heuristics that look at other data sources, such as actual change patterns, to create an improved labelling model. We also plan to define models that look at data beyond the commit message, such as change metrics or actual source code changes [24]. A further way to improve performance will also be to scale up the end model, by training a larger capacity model on a larger dataset.…”
Section: Discussion (mentioning)
confidence: 99%