2018 IEEE International Conference on Software Maintenance and Evolution (ICSME)
DOI: 10.1109/icsme.2018.00017
Improving Code: The (Mis)Perception of Quality Metrics

Cited by 52 publications (37 citation statements)
References 37 publications
“…Particularly interesting is the case of readability: despite being mentioned multiple times as a relevant feature by developers in RQ1, the statistical results are not aligned. This finding is likely due to the poor ability of current readability metrics, as well as proxy indicators of this aspect such as complexity metrics, to capture the actual understandability of source code [69,70]. In other words, our findings support the claim that novel metrics should be devised to better capture both structural and conceptual aspects of test code.…”
Section: B. Analysis of the Results (mentioning)
Confidence: 62%
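The excerpt above argues that structural proxy metrics fail to capture understandability. A minimal sketch illustrates why: McCabe-style cyclomatic complexity counts branching constructs only, so two snippets with identical control flow but very different naming quality score the same. This is an illustrative simplification (real tools such as radon count additional node types), and the example snippets are hypothetical.

```python
import ast

# AST node types counted as branch points in this simplified metric.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """1 + number of branching constructs (a rough McCabe sketch)."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

# Same structure, different readability: the metric cannot tell them apart.
readable = """
def is_adult(age):
    if age >= 18:
        return True
    return False
"""

obscure = """
def f(a):
    if a >= 18:
        return True
    return False
"""

print(cyclomatic_complexity(readable))  # 2
print(cyclomatic_complexity(obscure))   # 2
```

Both functions score 2, even though only one communicates intent — the mismatch between structural metrics and perceived understandability that the quoted study points to.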
“…Poor code quality can negatively impact the effectiveness of the testing process and the quality and reliability of the created IoT system. However, to maintain viewpoint consistency and because this topic has been sufficiently discussed in the literature [6,18,32,11,20], we decided not to include specific code quality metrics in the presented overview. The only exception is the high-level metric Quality of Code suggested by Chen et al [13], which we considered to fit well with the high-level framework used here.…”
Section: Discussion (mentioning)
Confidence: 99%
“…Besides coupling and code complexity, Pantiuchina et al. discuss other code quality metrics, cohesion and code readability in particular. Formulas to compute cohesion are also provided in [32]. The lack of code cohesion and coupling indicators is examined by Chaparro et al., and detailed formulas to quantify these properties are provided in their study [11].…”
Section: Related Work (mentioning)
Confidence: 99%
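The excerpt above mentions formulas for computing cohesion. As one concrete illustration, here is a sketch of LCOM1 (Chidamber and Kemerer's Lack of Cohesion of Methods), a classic cohesion formula of the kind such works discuss — not necessarily the exact formula in [32] or [11]. The input maps each method of a class to the set of instance attributes it uses; the class and attribute names are hypothetical.

```python
from itertools import combinations

def lcom1(attr_usage: dict[str, set[str]]) -> int:
    """LCOM1: (# method pairs sharing no attribute) - (# pairs sharing
    at least one), floored at zero. Higher values suggest lower cohesion."""
    p = q = 0  # p: disjoint pairs, q: pairs sharing an attribute
    for (_, a), (_, b) in combinations(attr_usage.items(), 2):
        if a & b:
            q += 1
        else:
            p += 1
    return max(p - q, 0)

# Hypothetical class: two methods share 'balance'; 'audit' touches only 'log'.
usage = {
    "deposit":  {"balance"},
    "withdraw": {"balance"},
    "audit":    {"log"},
}
print(lcom1(usage))  # 1: two disjoint pairs, one sharing pair
```

A fully cohesive class (every method pair sharing some attribute) scores 0 under this formula; variants in the literature differ mainly in how they normalize or graph-partition the method–attribute relation.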
“…Pantiuchina et al. [38] aim at bridging this gap by empirically investigating whether quality metrics are able to capture code quality improvement as perceived by developers. While previous studies [4], [14], [41] surveyed developers to investigate whether metrics align with their perception of code quality, Pantiuchina et al. instead mine commits in which developers clearly state in the commit message their aim of improving one of four quality attributes: cohesion, coupling, code readability, and code complexity.…”
Section: Code Quality Metrics in Practice (mentioning)
Confidence: 99%
“…However, recently, several models for the detection of source code readability have come under question regarding the extent of their usefulness in practice. Research by Pantiuchina et al [38] has shown that more often than not, in practice, state-of-the-art code quality models are unable to capture quality improvements in the source code. In other words, in the context of incremental changes made to a pre-existing file, models are unable to capture improvements in the source code's cohesion, complexity, coupling, and readability.…”
Section: Introduction (mentioning)
Confidence: 99%